I’m not on Facebook. Never have been. Or Twitter. Or Instagram. Certainly not Snapchat or any of those newfangled things. But as a social scientist I have increasingly found useful the data that other people make public in such settings. Some reasons are pragmatic. The public has become exhausted by surveys and is too busy to participate in interviews and workshops, at the same time that environmentally minded graduate students have become less likely to hold drivers’ licences and are thus less able to head out into the field to run them. Human research ethics boards are generally unconcerned with data that people voluntarily place in the public domain, allowing quick pilot work using social media across a range of topics and publics. If you take user agreements and settings literally and assume that those data have been volunteered, it is quite easy to be ethical: aggregate the data and cite the source as you would any other. Finally, I believe there is very real understanding to be gained by using such data as proxies for human values, preferences, behaviours, and yearnings. My qualitative methods course finished up this week with presentations, and the insight the students had gained in just a month took my breath away, on topics as diverse as sexually transmitted infections, conceptualizations of sustainable food, and human disturbance of migratory shorebirds, all thanks to posts on Reddit and Instagram.
Then comes the recent horrifying news about Facebook and its business model: unscrupulously selling access to large volumes of personal data to even less scrupulous companies like Cambridge Analytica. So what do I do now, besides a quick (and perhaps smug) wipe of the brow in relief that I aided in neither Trump nor Brexit? The furor suggests that many people, maybe even some of the same ones who so clearly cherish unknown followers, are unaware that their data are available to people like me. They may not see my intentions as any different from those behind the infamous personality test that fed Cambridge Analytica, for instance if I advertise a scholarly survey via Facebook to target a very specific group not otherwise easy to reach. Moreover, how implicated might I feel if I paid Facebook for that access, knowing now what kind of algorithms drive its cleverness? Perhaps the lesson for researchers is the same as the lesson for social media users generally, a somewhat Methodist moral: if something is effortless, there may be something wrong with it. Yet I will mourn the loss of access to social riches that will inevitably follow this news.