
Ask Canada – a new RSC College Interdisciplinary Research Group

When nominated to the Royal Society of Canada College of New Scholars, Artists and Scientists, nominees are asked to say how they will contribute to the College. During my nomination I pitched an idea for a new Interdisciplinary Research Group (IRG) on quantitative social science research policy and data infrastructure (including survey and social media methods). With the support of a few fellow members (Mark Stoddart, Karen Foster and Elizabeth Dubois), that IRG has just been provisionally approved. Now we are looking for additional members, and writing a funding application so we can staff some empirical work to support it. Please get in touch with me if you are an RSC College member who is interested in hearing more. A brief description follows:

Ask Canada: Toward robust data infrastructure and policy settings for quantitative social science in Canada

The landscape of quantitative social science research methods in Canada is a chaotic mix of practices and proxies, both poorly understood at a system level and poorly equipped to support replicable research. This includes inadequate social science data infrastructure, inattention to the policy settings that support empirical methods, and (bar a few fee-for-service labs) typically small, atomized academic teams that face significant transaction costs in using what infrastructure exists. Declining survey response rates drive many to work with polling firms or online modes of distribution that may compromise data quality and the generalizability of insight. The UNESCO Recommendation on Open Science highlights the need to “[invest] in open science infrastructures and services”, but there is an ongoing tension globally—recognized by SDG 16 in relation to strong institutions—between high-quality, inclusive and representative data to inform decision-making and adequate protection of privacy. Open government initiatives are improving in line with global moves toward default-open practices, but Canada has thus far focused on data repositories and access to government data and research, rather than on supporting academic-led research. In fact, academic research is explicitly prohibited from accessing some resources (e.g. electoral rolls) that in comparable jurisdictions are available to social scientists upon application. A piecemeal set of alliances and networks exists in Canada to support various aspects of social science, but it leaves gaps in the support of quantitative social methods. This IRG will undertake literature review, expert methods, workshops and/or public surveys. By its end we will synthesize the state of quantitative social science in Canada, widely mobilize knowledge about best practices and trade-offs, and make recommendations for social science data infrastructure and public policy to foster quantitative social science research in the public interest.

When to call a social scientist (or how to fool one)

In science, when human behavior enters the equation, things go nonlinear. That’s why Physics is easy and Sociology is hard. (Neil deGrasse Tyson, Twitter, 5 Feb 2016)

It is heartening to see increasing support for interdisciplinary applied research from funding bodies. Some countries (like Canada) still largely divide funding programs by discipline, requiring researchers to carve feasible standalone disciplinary subprojects out of more interdisciplinary projects and subject them individually to the rigours of granting bodies. By contrast, places such as the European Union welcome large, integrative and synthetic research projects. It seems clear, however, that such opportunities do not necessarily increase the likelihood of interdisciplinary team research. In fact, they sometimes seem to encourage members of more disciplinary teams to extend into unfamiliar domains to meet granting requirements. It is human nature to want to work with people similar to us, whom we understand and with whom we share language, methods and a sense of what constitutes good evidence. Specifically, based on what I have been asked to review for numerous journals over the past year, it is common for teams of biophysical scientists to engage in social science research in a way that would be unheard of in reverse. In many cases the first authors are students, themselves ‘converted’ from biophysical research to take on the social angle, poorly mentored by a team of biophysical scientists.

I am very sympathetic to the drive to reach outside familiar domains in research. My own career is not linear, and my set of interests and methods is broad. I have many times felt the terror of the dilettante at the conferences of various disciplines into which I ‘dipped’ (before I learned to stop going to disciplinary conferences). My first degree was Geography, so I am a natural ‘borrower’. I find my natural home at applied conferences and in problem-based journals, where researchers and readers alike are more concerned with answering an important question than with the paradigm within which the answer was found. Unlike many, I am enthusiastic about the creative mixing of methods and theory as appropriate to solve problems, but I believe that there is a blindness and an impotence to social science that is done in the image of biophysical science, without building on (or even awareness of) an extant rich body of understanding about how people think, feel and behave.

Purity, a great (and relevant) webcomic by xkcd.

Red flags

There are five common flaws that I see in social science papers led by biophysical teams, though of course they are also committed more broadly. Together, they are indicators of a positivistic mindset that has been set to a post-positivistic task – quantitative social science – without adequate recognition of the ways that people differ from biota, and of the fact that many scholars are already working in that space and have made substantial headway.

  1. Focus on sample size above instrument design. It is critical in any research using statistics to acquire a large enough sample of the desired population that inference can be made. Occasionally, however, it is clear that the design of the research instrument and its application have been sacrificed to the pursuit of a large sample. Sample size is assumed to be the ultimate mark of quality, and is used to generate blinding amounts of statistics, perhaps in the hope that the logic of the task that generated them is not interrogated. Sometimes, the sheer distracting power of such academic ‘flashbang’ means editors publish the work, assuming that the presence of such tables indicates the work is rigorous. Protesting to one editor, I was told that the use of complex statistics, so long as the tools are applied with technical correctness, renders the work valid even if the insight is minimal because of poor instrument or research design.
  2. Use of convenience samples. A common sacrifice in the quest for a large n is the nature of the sample. In ecological work it may take a long time to find the species of interest, but once you have done so, the only limit to finding enough to sample is time. As long as individuals meet the criteria you can take what measurements or observations are needed to suit the study. By contrast, one of the great challenges of social science is how to find your population – define them and determine their prevalence for sampling – and find a way to gather information ethically from a robust number or diversity of them. You can’t force people to participate, unless your study depends entirely upon observation in public places. You can send surveys and reminders, you can go door-to-door, you can set up desks in high-traffic areas, but people are busy and can still say no. Social scientists focus on justifying survey effort and the validity of the sample achieved, and thus the insight, but would not simply ask different people in order to fill a deficit. A biophysical researcher, by contrast, may assume a person is a person, regardless of context, and turn to a convenience sample (e.g. tourists instead of residents) even when to do so renders the question they are asking utterly nonsensical. The salience of the question, the respondents’ ‘stake’ in the subject and the outcome of the research, is critical for generating meaningful responses.
  3. Ignoring context. Context also matters in how data are collected from the chosen sample. When questing for a large sample size, it is common to use multiple interviewers. Rarely, however, do biophysical researchers doing such work account for (or even seem to recognize) the ways that interpersonal dynamics may bias the resulting answers. This is not surprising, as the gender and age of someone taking biotic samples does not generally affect the measurements taken. The gender and age of different interviewers will, however, create biases within subsets of the data, as research participants respond differently to one interviewer than they might to another. Moreover, research participants who are interviewed alone may respond differently from those who are interviewed with their partner and/or their children at their elbow, listening to what they say. These biases must be recognized and discussed when working with people.
  4. Gaming Cronbach’s Alpha. Another red flag is the misuse of a common social science metric to generate indices (often called ‘scales’) based on responses to related questions. Cronbach’s Alpha was developed to help social scientists assess whether responses to a set of questions were consistent enough across the sample for them to be collapsed into a single measure. That is, is each person’s set of responses internally consistent, even if the responses range widely across the sample? An acceptable Alpha suggests reliability, but not necessarily validity, i.e. that the index measures what it is intended to. Many researchers ‘game’ this metric (not just biophysical converts), testing various subsets of their questions to identify the ‘best’ score, and simply dropping the questions that are being answered differently (see the sketch after this list for how that temptation arises). Biophysical scientists seem particularly prone to trusting the statistics over the respondents. The danger lies in blind acceptance that the questions left standing – those that give the best alpha – are a genuine measure of the phenomenon that was previously represented by a larger set of questions. The remaining questions must be interrogated to generate a meaningful index name that reflects the new conceptual coverage, and some attempt must be made to understand why other questions were not answered similarly. There may be a logical set of unidimensional subconcepts embedded within the question set that could be converted into their own indices. Moreover, a set of questions that more comprehensively covers the phenomenon may still be better than a subset, even if the alpha is lower than it could be. Such statistics are meant to be an aid to, not a replacement for, sociological thinking.
  5. Lack of engagement with social science literature. The final red flag is a lack of engagement with existing social science research – assuming that there is nothing to build on – and this is very characteristic of biophysical researchers undertaking qualitative or quantitative methods. This lack of literature review is evident in the design of research, for instance not using established scales, concepts, theories or typologies from related work in survey design, leading to weak instruments. It is also evident in discussion sections that ignore existing social science research on the same or related topics, for instance discussing whether survey responses were correct in relation to the biophysical phenomena that the questions cover, instead of how the responses relate to what we know about what guides human behaviour.
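To make point 4 concrete, here is a minimal sketch (in Python, assuming numpy is available) of how Cronbach’s alpha is computed – alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the summed scale) – and of how ‘alpha if item deleted’ numbers invite dropping whichever question behaves differently. The data and the behaviour of the items are entirely invented for illustration, not drawn from any real survey: four items track a common attitude, and a fifth is driven by something else.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each question
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated Likert-style data (rows = respondents, columns = questions).
    rng = np.random.default_rng(42)
    attitude = rng.normal(size=200)                  # the latent concept of interest
    consistent_items = [
        np.clip(np.round(3 + attitude + rng.normal(scale=0.7, size=200)), 1, 5)
        for _ in range(4)
    ]
    odd_item = rng.integers(1, 6, size=200).astype(float)  # answers driven by something else
    items = np.column_stack(consistent_items + [odd_item])

    print(f"alpha, all 5 items: {cronbach_alpha(items):.2f}")
    for j in range(items.shape[1]):                  # 'alpha if item deleted'
        reduced = np.delete(items, j, axis=1)
        print(f"alpha without item {j + 1}: {cronbach_alpha(reduced):.2f}")

Dropping item 5 will indeed push alpha up, but nothing in that number tells you whether the four surviving questions still cover the concept you set out to measure; that judgement has to come from interrogating the questions themselves.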

Of course social scientists do this stuff sometimes, too. But we should know better.