Landscapes - People - Global change

New paper: social media methods for SIA

Synthesis figure in the new Current Sociology paper showing sample workflows within a range of possibilities.

This week a new open access paper came out in a special issue (monograph) of Current Sociology on Social Impact Assessment. The special issue was led by Guadalupe Ortiz and Antonio Aledo, and their introductory essay is worth a read, as is Frank Vanclay’s epilogue, which reflects on 50 years of SIA and asks “is it still fit for purpose?”. Our offering, Social media and social impact assessment: Evolving methods in a shifting context, reflects on a decade of research using mostly Instagram to understand the social impacts of developments such as hydroelectricity, wind energy and coastal dyke realignment. The figure above demonstrates the current state of the art in terms of workflows, and shows how several of our studies have navigated those options. The paper also discusses the practical and ethical challenges of using social media datasets, and calls for government support in securing ongoing access for public-good research, a case also recently made by Ethan Zuckerman in Prospect Magazine. Most of the work synthesized in this paper has been published elsewhere, except the brilliant work that Mehrnoosh Mohammadi did on developing a collage approach to communicating common features in social media images while addressing both copyright and privacy concerns (see below). This is a method we advocated back in 2017 and it is wonderful to see it in action.

A collage by Mehrnoosh Mohammadi of 16 photos captured in NS vineyards and posted on Instagram, showing seasonal change from left to right.

Congratulations, Gardenio

A screensnap from Gardenio da Silva’s online defense today, June 30, 2021.

Congratulations to Gardenio da Silva, who defended his MES thesis this morning on Social impact assessment (SIA) practice for hydroelectricity in Canada: a review of methods and monitoring. Wonderful to have IA expert Meinhard Doelle examining the thesis from Sweden, John Parkins ringing in early from Alberta (in the midst of this heat wave) in a committee capacity, and colleague Andrew Medeiros managing it all as chair. It was a wonderful conversation about the practice of SIA, using hydro dams as a case, in a challenging context. Gardenio’s work leveraged secondary datasets, including SIA documents and longitudinal media coverage. Both papers within the thesis are at an advanced stage of publication, which makes the process a bit easier, but there was still a lot to engage on. Great to see so many MES students defending comfortably within the allocated two years.

ESRI Canada ‘App of the Month’

McNally’s Ferry – erstwhile town and transportation infrastructure on the Saint John River, pre-Mactaquac Dam and today.

Congratulations to MREM alum Larissa Holman, on the news that our Before the Mactaquac Dam storymap was selected as ESRI Canada’s App of the Month for October (French version here). Larissa worked with me back in 2015, supported by Energy Transitions (Parkins PI) SSHRC funding. Larissa is now working with Ottawa Riverkeepers, and reports that her job:

… is a nice mix of keeping on top of projects, investigation work when someone reports pollution or odd activity on the river, working with some really wonderful and knowledgeable volunteers and the occasional canoe trip or boat ride out on the river.

A great alum story for a lovely fall day.

When to call a social scientist (or how to fool one)

In science, when human behavior enters the equation, things go nonlinear. That’s why Physics is easy and Sociology is hard. (Neil deGrasse Tyson, Twitter, 5 Feb 2016)

It is heartening to see increasing support for interdisciplinary applied research from funding bodies. Some countries (like Canada) still largely divide funding programs by discipline, requiring researchers to carve out feasible standalone disciplinary subprojects within more interdisciplinary projects and subject them individually to the rigours of granting bodies. By contrast, places such as the European Union welcome large, integrative and synthetic research projects. It seems clear, however, that such opportunities do not necessarily increase the likelihood of interdisciplinary team research. In fact, they sometimes seem to encourage members of more disciplinary teams to extend into unfamiliar domains to meet granting requirements. It is human nature to want to work with people similar to us, whom we understand, and with whom we share language, methods and a sense of what constitutes good evidence. Specifically, based on what I have been asked to review by numerous journals over the past year, it is common for teams of biophysical scientists to engage in social science research in a way that would be unheard of in reverse. In many cases the first authors are students, themselves ‘converted’ from biophysical research to take on the social angle, poorly mentored by a team of biophysical scientists.

I am very sympathetic to the drive to reach outside familiar domains in research. My own career is not linear, and my set of interests and methods is broad. I have many times felt the terror of the dilettante at the conferences of various disciplines into which I ‘dipped’ (before I learned to stop going to disciplinary conferences). My first degree was Geography, so I am a natural ‘borrower’. I find my natural home at applied conferences and in problem-based journals, where researchers and readers alike are more concerned with answering an important question than with the paradigm in which the answer was found. Unlike many, I am enthusiastic about the creative mixing of methods and theory as appropriate to solve problems, but I believe there is a blindness and an impotence to social science that is done in the image of biophysical science, without building on (or even awareness of) an extant rich body of understanding about how people think, feel and behave.

Purity, a great (and relevant) webcomic by xkcd.

Red flags

There are five common flaws that I see in social science papers led by biophysical teams, though of course they are also committed more broadly. Together, they are indicators of a positivistic mindset that has been set to a post-positivistic task – quantitative social science – without adequate recognition of the ways that people differ from biota, and of the fact that many scholars are already working in that space and have made substantial headway.

  1. Focus on sample size above instrument design. It is critical in any research using statistics to acquire a large enough sample of the desired population that inference can be made. Occasionally, however, it is clear that the design of the research instrument and its application have been sacrificed in the pursuit of a large sample. The sample is assumed to be the ultimate mark of quality, and used to generate blinding amounts of statistics, perhaps in the hope that the logic of the task that generated them is not interrogated. Sometimes, the pure distracting power of such academic ‘flashbang’ means editors publish the work, assuming that the presence of such tables indicates the work is rigorous. When I protested to one editor, I was told that the use of complex statistics, so long as the tools are applied with technical correctness, renders the work valid even if the insight is minimal because of poor instrument or research design.
  2. Use of convenience samples. A common sacrifice in the quest for a large n is the nature of the sample. In ecological work it may take a long time to find the species of interest, but once you have done so, the only limit to finding enough to sample is time. As long as individuals meet the criteria you can take what measurements or observations are needed to suit the study. By contrast, one of the great challenges of social science is how to find your population – define them and determine their prevalence for sampling – and find a way to gather information ethically from a robust number or diversity of them. You can’t force people to participate, unless your study depends entirely upon observation in public places. You can send surveys and reminders, you can go door-to-door, you can set up desks in high-traffic areas, but people are busy and can still say no. Social scientists focus on justifying survey effort and the validity of the sample achieved, and thus the insight, but would not simply ask different people in order to fill a deficit. A biophysical researcher, by contrast, may assume a person is a person, regardless of context, and turn to a convenience sample (e.g. tourists instead of residents) even when to do so renders the question they are asking utterly nonsensical. The salience of the question, the respondents’ ‘stake’ in the subject and the outcome of the research, is critical for generating meaningful responses.
  3. Ignoring context. Context also matters in how the data are collected from the chosen sample. When questing for a large sample size, it is common to use multiple interviewers. Rarely, however, do biophysical researchers doing such work account for (or even seem to recognize) the ways that interpersonal dynamics may bias the resulting answers. This is not surprising, as the gender and age of someone doing biotic sampling does not generally affect the measurements taken. The gender and age of different interviewers will create biases within subsets of the data, however, as research participants respond differently to one interviewer than they might have to another. Moreover, research participants who are interviewed alone may respond differently to those who are interviewed with their partner and/or their children at their elbow, listening to what they say. These biases must be recognized and discussed when working with people.
  4. Gaming Cronbach’s Alpha. Another red flag is the misuse of a common social science metric to generate indices (often called ‘scales’) based on responses to related questions. Cronbach’s Alpha was developed to help social scientists assess whether responses to a set of questions were consistent enough across the sample for them to be collapsed into a single measure. That is, is each person’s set of responses internally consistent, even if the responses range widely across the sample? An acceptable Alpha suggests reliability, but not necessarily validity, i.e. that the index measures what it is intended to. Many researchers ‘game’ this metric (not just biophysical converts), testing various subsets of their questions to identify the ‘best’ score, and simply dropping the questions that are being answered differently. Biophysical scientists seem particularly prone to trusting the statistics over the respondents. The danger comes in the blind acceptance that the questions left standing – those that give the best alpha – are a genuine measure of the phenomenon that was previously represented by a larger set of questions. The remaining questions must be interrogated to generate a meaningful index name that reflects the new conceptual coverage, and some attempt made to understand why other questions were not answered similarly. There may be a logical set of unidimensional subconcepts embedded within the question set that could be converted into their own indices. Moreover, a set of questions that more comprehensively covers the phenomenon may still be better than a subset, even if the alpha is lower than it could be. Such statistics are meant to be an aid to, not a replacement for, sociological thinking.
  5. Lack of engagement with social science literature. The final red flag is a lack of engagement with existing social science research, assuming that there is nothing that exists to build on, and this is very characteristic of biophysical researchers undertaking qualitative or quantitative methods. This lack of literature review is evident in the design of research, for instance not using established scales, concepts, theories or typologies from related work in survey design, leading to weak instruments. This is also evident from discussion sections that ignore existing social science research on the same or related topics, for instance discussing whether survey responses were correct in relation to the biophysical phenomena that the questions cover, instead of how the responses relate to what we know about what guides human behaviour.

Of course social scientists do this stuff sometimes, too. But we should know better.
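To make the fourth red flag concrete, here is a minimal sketch of how Cronbach’s Alpha is computed, and of the ‘gaming’ move of dropping an inconsistently answered question to nudge the score upward. The formula is standard, but the Likert-style responses are invented toy data, not drawn from any study discussed here.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
# Toy data only -- four questions (items), five respondents.

def cronbach_alpha(items):
    """items: list of equal-length lists, one list of responses per question."""
    k = len(items)                      # number of questions
    n = len(items[0])                   # number of respondents

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Rows = questions, columns = respondents (invented Likert responses).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 4, 4, 3],
    [3, 4, 3, 4, 3],   # answered out of step with the rest of the set
]

print(round(cronbach_alpha(items), 3))       # alpha for the full question set
print(round(cronbach_alpha(items[:3]), 3))   # 'gamed' alpha after dropping the odd item
```

Dropping the fourth question does raise alpha here, but nothing in the statistic says the surviving three questions still measure the concept the full set was designed to cover; that judgment is the sociological thinking the metric cannot replace.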




Aiden and Michel’s (2013) book reveals how big data can help us understand how culture has changed.

I pulled this book, Uncharted (2013), by Erez Aiden and Jean-Baptiste Michel, out of a bargain bin at Chapters a few weeks ago, and it is another example of serendipity. These Harvard PhDs collaborated with Google’s book digitization project to develop the Google Ngram tool. They liken their project to a tool to a microscope or telescope, which were tools that brought new dimensions to view for scientists. Their culture-scope is able to track uses of terms or phrases over time within Google Books’ enormous and growing database of digitized literature. They coined the term ‘culturomics‘, which is too awkward to stick, but the value is clear. Watch the holistic idea of ‘landscape’ overtake the aesthetically driven ‘scenery’ around the turn of the last century (below). Lots of food for thought in a world of Big Data.
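The core of the Ngram idea – a term’s relative frequency per year across a dated corpus – can be sketched in a few lines. The two-year mini-corpus below is invented for illustration and is obviously nothing like Google Books’ holdings or the real Ngram pipeline.

```python
# Toy 'culture-scope': relative frequency of a word per year in a tiny,
# invented corpus (year -> a snippet of hypothetical published text).
from collections import Counter

corpus = {
    1880: "the scenery was sublime and the scenery drew painters",
    1950: "the landscape holds farms people and memory of landscape change",
}

def relative_frequency(term, year):
    """Occurrences of term in that year's text, divided by total words."""
    words = corpus[year].split()
    return Counter(words)[term] / len(words)

for year in sorted(corpus):
    print(year, relative_frequency("scenery", year), relative_frequency("landscape", year))
```

Plotted over many years and billions of words, exactly this kind of ratio is what produces the ‘landscape’ versus ‘scenery’ crossover in the figure below.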

Google Ngram View of landscape versus scenery in English text corpus, 1800 to 2000.

© 2024 Kate Sherren
