The Pirate Bay, as much as it tries to decentralize, depends on attention concentrated on its home page. My latest article, We Like Copies, But Don’t Let the Others Fool You, explores the implications of this tension for the group and for hacktivism. The group struggles to realize the political potential of decentralized networks while facing the demands of contemporary politics.
I created a video of the history of The Pirate Bay through its front page to show its production of what J. Macgregor Wise calls an attention assemblage. The home page’s design stays remarkably stable despite police raids, trials, new protocols, and changes in leadership; instead of redesigning itself, the site alters the page’s content to mark these events and mobilize its audience. I thought I would share the video with Culture Digitally since I can’t embed it in my article.
Richard Rogers inspired me to create this video. His movie Google and the Politics of Tabs offers a history of Google using snapshots from the Internet Archive. The Digital Methods Initiative released a script for others to create videos using Internet Archive snapshots. I created this video using 833 images, one collected from each of the Archive’s monthly snapshots.
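For readers curious about the mechanics, the general approach behind such snapshot videos (query the Internet Archive for captures of a URL, keep one capture per month, and render each replay page as a frame) can be sketched against the Archive's public CDX API. This is an illustrative sketch, not the DMI script itself; the helper names are my own.

```python
# Sketch: collect one Wayback Machine capture per month for a URL.
# The CDX endpoint and its parameters are the Internet Archive's
# public API; pick_monthly() is an illustrative helper.
from urllib.parse import urlencode

CDX = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url, start="1996", end="2014"):
    """Build a CDX query collapsed to one capture per month
    (collapse=timestamp:6 keys on the first 6 digits, i.e. YYYYMM)."""
    params = {"url": url, "from": start, "to": end,
              "collapse": "timestamp:6", "output": "json",
              "fl": "timestamp,original"}
    return CDX + "?" + urlencode(params)

def pick_monthly(timestamps):
    """Given capture timestamps (YYYYMMDDhhmmss), keep the first per month."""
    seen, monthly = set(), []
    for ts in sorted(timestamps):
        month = ts[:6]
        if month not in seen:
            seen.add(month)
            monthly.append(ts)
    return monthly

def snapshot_url(ts, url):
    """Replay URL for a given capture timestamp."""
    return f"https://web.archive.org/web/{ts}/{url}"
```

Each monthly timestamp yields a replay URL whose screenshot becomes one frame; several hundred frames at a modest frame rate produces a film of a few minutes.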
-Contributed by Fenwick McKelvey, Assistant Professor in the Department of Communication Studies at Concordia University-
Mirror — Adam Fish, Lancaster University
The mirror is one of the most trafficked metaphors in Western thought. In Ancient Greek mythology, Narcissus dies transfixed by his reflection in a spring. According to early sociology, we are a “looking glass self”: our identities are formed when we mirror how we think others see us (Cooley 1902). In Philosophy and the Mirror of Nature, Richard Rorty (1979) shatters the Enlightenment goal that through scientific inquiry the mind could mirror nature, harboring replicas in mental formula. Thus, from antiquity onward, the mirror metaphor has been used to describe everything from vanity, to subject formation, to consensual reality. Today, information companies and information activists alike call data duplication mirroring but often fail to acknowledge how the symbolism of this term may impact its use. Mirrors are more complex entities than simple facsimiles. Mirrors echo the intricacies of data practice. Below I endeavor to explain how, for information activists and information firms, mirroring is an exploit of networks and computers to remain visible by capturing “eyeballs.”
Mirrors are metaphors for what they reflect. In Through the Looking-Glass, Lewis Carroll (1871) has Alice journey through a mirror and into a parallel and parable-rich universe of reversals. In Oscar Wilde’s The Picture of Dorian Gray (1891), the mirroring portrait ages but the protagonist does not. Hillel Schwartz (1996) traces this history and our obsession with twins, replicas, duplicates, decoys, counterfeits, portraits, mannequins, clones, replays, photocopies, and forgeries. The mirror metaphor continues into the digital age. The United Kingdom’s Channel Four television series Black Mirror is a drama that comments on a dystopic future of increasing connectivity. Charlie Brooker’s programme sees our mobile and laptop screens as black mirrors into which we stare, like Narcissus, and which reflect back our self-destructive ways. Co-founder of file-sharing site The Pirate Bay, Peter Sunde, believes that copying is genetically coded, saying: “People learn by copying others. All the knowledge we have today, and all success is based on this simple fact – we are copies.” As a locus for the confluence of metaphysics and materiality, mirrors are a way to see how the practical and the metaphoric are co-constituted in database worlds.
This short entry has both the philosophical goal of discussing mirrors as a metaphor and the practical objective of showing how mirroring is a practice of activism as well as of cloud computing. Below I describe how mirroring in computing is a way of keeping a copy of some or all of a particular content at another site, typically in order to protect and improve its accessibility. Mirroring is a way of working with a multiplication of data. For activists, mirroring is a method to achieve and preserve visibility on networked communication systems. Mirror multiplicities provide opportunities for cloud companies seeking to capture and sell personal information. Because mirrors are geographically dispersed and constituted by different trajectories, speaking of the replication of origins does not do justice to their complexity. Instead, I choose to identify mirroring as a form of praxis, a way of being and thinking in the world. In this sometimes confusing hall of mirrors, the practical and the metaphoric, the actual and the virtual, co-create each other in acts of reflection.
Mirrors as Multiples
Computing companies would have us believe that mirroring is the non-rivalrous multiplication of data made possible by packet-switching, storage, and binary technologies. It is achievable because of the copy-and-paste functionality of computers, data, and networks. Cloud computing requires the mirroring or replication of databases for global access and security. Microsoft, which provides a number of cloud computing services, defines “database mirroring” as the maintenance of “two copies of a single database that must reside on different server instances.” Mirroring is not just a tool for hegemonic Fortune 500 information companies: Wikileaks also “mirrors” its content. Faced with the legal shutdown of the private servers housing their incendiary cables, Wikileaks and its supporters mirror content in jurisdictions outside of American reach. Today, sites in at least eleven European nations offer Wikileaks mirrors. The largest peer-to-peer file-sharing service in the world, The Pirate Bay, mirrors its links in national jurisdictions where its practices have yet to be deemed illegal (eighteen countries presently block root access to The Pirate Bay). Mirroring, thus, is a practice for both hegemonic and counterhegemonic actors. But it would be inaccurate to claim that these mirrors are exact replicas.
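Microsoft's definition, two copies of a single database kept in step on different server instances, can be reduced to a toy model: a principal applies writes and ships its operation log to a mirror, which replays it. The sketch below is a minimal in-memory illustration of that idea, not any vendor's implementation.

```python
# Toy sketch of database mirroring: a principal ships its write log
# to a mirror, which replays it to stay a (slightly lagged) copy.
# Illustrative only; real systems add acknowledgement, failover, etc.

class Principal:
    def __init__(self):
        self.data = {}
        self.log = []          # ordered record of every write

    def write(self, key, value):
        self.data[key] = value
        self.log.append(("set", key, value))

class Mirror:
    def __init__(self):
        self.data = {}
        self.applied = 0       # how far into the log we have replayed

    def sync(self, log):
        """Replay any log entries not yet applied."""
        for op, key, value in log[self.applied:]:
            if op == "set":
                self.data[key] = value
        self.applied = len(log)

principal, mirror = Principal(), Mirror()
principal.write("cable_001", "contents")
mirror.sync(principal.log)    # mirror now holds a copy
principal.write("cable_002", "more contents")
# until the next sync, the mirror lags the principal:
# the copy is displaced in time as well as space.
```

The lag between write and sync is one concrete sense in which, as argued below, mirrors are never exact copies: each replica is displaced in time as well as place.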
Microsoft’s is a naïve realist notion that mirrors are precise copies, merely displaced within or across databases. A slightly more complex social constructivist perspective sees mirrors as symbolic representations. In constructivism, mirrors would be conceived not as duplicates but rather as iconic yet accurate depictions. Physicist Karen Barad (2003) challenges both “naïve realist” and constructivist interpretations of mirrors, offering a third construal. Echoing Rorty, she says “…the representationalist belief in the power of words to mirror preexisting phenomena is the metaphysical substrate that supports social constructivist, as well as traditional realist, beliefs” (Barad 2003: 802). In this way, mirrors are neither realist copies nor constructed depictions. Rather, mirrors are a data multiplication that maps a contestation over visibility.
To offer robust, secure, and non-delayed access to content it is necessary to store multiples. In cloud computing, content formation is a regenerative process of recomposition from geographically dispersed databases. Numerous scholars identify how database-derived depictions of diseases, criminals, and biological processes are visualized not as singular entities but rather as complex beings constituted by numerous coded transactions (Mol 2003, Ruppert 2013, Mackenzie and McNally 2013). In these diverse cases, the multiple is neither a fragmented nor a necessarily contradictory singularity but rather a fluid “field of multiple conjoined actions that cumulatively enact new entities” (Ruppert 2013). The “performative excesses” of visualized multiples “undo or unmake identities as much as they make them” (Mackenzie and McNally 2013). Structured by databases and unmoored from beginnings, mirrors are multiples with numerous applications for hegemonic and counterhegemonic actors alike.
Mirroring as Activist Visibility
Mirrors transform seeing and what is seen. Through physical vanity mirrors, medieval Europeans “came to reflect on, know and judge themselves and others through becoming aware of how they appeared” (Coleman 2013: 5, Melchior-Bonnet 2001). Using lenses and mirrors to transform his studio into a camera obscura, the 17th-century Dutch painter Johannes Vermeer painted not the depth of field and the textures seen by the unmediated human eye but the world as framed by a camera (Steadman 2002). There is power in controlling new regimes of technologically assisted seeing. Historically, writing and printing systems extended the prioritization of the ocular and through it the power given to those who could read, write, print, and cast mortal judgment based on textual studies (Ong 1977, McLuhan 1964). “Scopic regimes” developed in Western science and law to control the power of being able to make something visible and legible (Jay 1992). Likewise, visual technologies assemble the real, the natural, and the moral for Western technoscientific systems (Haraway 1997). Through “seeing like a state,” nations objectify and thereby control colonial bodies (Scott 1999). The will to visibility is also profoundly gendered, with cinema historically being produced for the male gaze (Mulvey 1975). Visibility “lies at the intersection of the two domains of aesthetics (relations of perception) and politics (relations of power)” (Brighenti 2007: 324). The counterhegemonic mirroring practices of Wikileaks, The Pirate Bay, and, below, Anonymous are intimately linked to the ability to be seen.
Mirroring is central to the power to make visible (or invisible) in the networked society. For instance, consider how Anonymous, made famous by hacks, leaks, and performative politics, secures visibility for its political videos by mirroring them across YouTube. The content made visible by these video mirrors solicits viewers to model themselves after politically active bodies. The process by which political films hail viewers to copy revolutionary subjects is called “political mimesis” (Gaines 1994). And yet, while mirrors represent politicized bodies, they cannot be reduced to mere representations. Here, mirrors do not reveal origins but rather locate contestation (Fish forthcoming). The friction revealed by Anonymous video mirrors is over censorship, as the Church of Scientology and other opponents of Anonymous attempt to force YouTube to take down Anonymous videos critical of Scientology. Anonymous video mirrors mark a counterhegemonic will-to-visibility.
Thus it is true, as Foucault said, that “visibility is a trap,” but not for everyone (1977: 200). For both media corporations and activists, visibility is a tool for empowerment. The power to make someone visible or invisible has traditionally been reserved for entrenched elites. Scandals recorded on cameras and distributed online now interrupt the lives of political and economic elites who used to be able to tightly control their self-presentation (Goffman 1956, Thompson 2005). Reality television provides visibility to some and through it often stigmatizes social classes through televised spectacle (Tyler 2011, Couldry 2010). As a read/write medium capable of delivering text, image, and moving pictures, the internet exacerbates ocular-centrism as well as the dangers and possibilities of visibility. Hacks, leaks, and video mirrors are forms of visual counter-power. The power to see and not be seen, from the eye training of literacy, to the male gaze in cinema, to cultures of self-presentation and reality television, to the visibility optimization industries of fashion and advertising, to video mirroring, constitutes regimes of power and counter-power in contemporary networked society.
The counterhegemonic mirroring of Wikileaks, Anonymous, and The Pirate Bay is an example of an interventionary “misuse” of pre-existing capitalist information infrastructure that diversifies and magnifies the visualization of radical voice (Soderberg 2010). Despite using for-profit social media platforms and thereby being captured within circuits of techno-capitalism (Dean 2010), grassroots political visibility can be a practice-based form of access and voice that resists erasure (Couldry 2010). Mirroring is one among many promising but nonetheless uneven forms of technological resistance used both for and against the for-profit capture of information resources.
Capturing the Mirror
One reading sees human prehistory as the progress of information creation and control (Gleick 2011). Throughout human evolution, the size and complexity of the neocortex, language, and group dynamics increased together (Dunbar 1993). The storage of information in symbolic systems and durable substances of rock, wood, and fiber (and later digital databases) further amplified the complexity of the brain, language, and society (Ong 1977, McLuhan 1964). Mirroring is a later manifestation of the prehistoric practice of data creation, control, and manipulation. But while corporately owned databases are continuations of prehistoric information storage, they also structure data in a particular way for a particular purpose. The structures and risks associated with the potentials of database mirrors are, I would argue, political economic in nature. In terms of the virtual, Deleuze discussed the “double-movement of liberation and capture” (1972). While mirrors offer opportunities for the liberation of activist visibility, they also provide data corporations opportunities to capture social capital. Chow says captivation “is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses” (Chow 2012: 48, in Berry 2014). In this way, reflexively produced material and affectual data is captured within an informational economy.
The business proposition of cloud companies is that their mirroring is an affordable way of securing retrievable data. The compromise is that mirroring at once liberates and captures the very images and information it displaces, diffracts, and makes autonomous. We are hailed to be responsible with our data by backing it up, to take our lives seriously by constantly drafting autobiographical public digital artifacts, and to work on the move by having our important documents accessible in the cloud. Yet all of this plays into the surveillance and capture of our virtual lives. While the capture of mirrored data for surplus production should be clear, mirroring is also an action within counterhegemonic information activism. In this way, mirroring is not neutral but rather a tool for both liberation and capture, for activist visibility and visibility-as-a-trap.
Mirrors describe copies that are saved in different places. But mirrors are not exact copies. Disambiguated in time and space, the mirror qualitatively differs from that which it mirrors. Rather than mirrors being exact replicas or even reasonable approximations, it is instructive to consider mirrors not as products but rather as processes. Mirrors are complex, in-flux multiples constituted by numerous forces that achieve a degree of autonomy from their origins. In this way, mirroring, or the practice of making mirrors, is a praxis, neither realistic nor representational depictions, but a way of being, believing, and moving in the world. As such, mirrors map two practices that are reactions to a contestation. For activists, mirroring marks a will to remain visible in a world of censorship. Mirrors also map the conflicts around how data is captured and capitalized on by cloud companies. A way to synthesize the politics, political economy, and praxis of mirroring is to consider how mirrors are multiples, autonomous from the things they ostensibly replicate. Ancient and contemporary theories of mirrors are tools used towards the synthesis of the metaphysical and the material of database worlds.
Barad, Karen. 2003. “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter.” Signs: Journal of Women in Culture and Society 28(3): 801-831.
Berry, D.M. 2014. On Capture. http://stunlaw.blogspot.co.uk/2014/04/on-capture.html
Brighenti, Andrea Mubi. 2007. “Visibility: A Category for the Social Sciences.” Current Sociology 55(3): 323-342.
Chow, Rey. 2012. Entanglements, or Transmedial Thinking about Capture. Durham: Duke University Press.
Cooley, Charles H. 1902. Human Nature and the Social Order. New York: Scribner’s.
Couldry, Nick. 2010. Why Voice Matters. London: Sage Press.
Dean, Jodi. 2010. Blog Theory. Polity Press.
Deleuze, Gilles. 1972. “How Do We Recognize Structuralism?” Trans. Melissa McMahon and Charles J. Stivale. In Desert Islands and Other Texts, 1953-1974, ed. David Lapoujade, 170-192. New York: Semiotext(e), 2004.
Dunbar, R. I. M. 1993. Coevolution of neocortical size, group size and language in humans. Behavioral and Brain Sciences 16 (4): 681-735.
Foucault, M. 1977. Discipline and Punish. Pantheon Books.
Fish, Adam. Forthcoming. Mirroring: Anonymous Videos and Political Mimesis.
Gaines, Jane M. 1994. “Political Mimesis.” In Collecting Visible Evidence. Eds. Jane M. Gaines and Michael Renov. Minneapolis: University of Minnesota Press.
Gleick, James. 2011. The Information: A History, a Theory, a Flood. New York: Pantheon Books.
Goffman, Erving. 1956. The Presentation of Self in Everyday Life. Edinburgh: University of Edinburgh Press.
Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan© Meets_OncoMouse™: Feminism and Technoscience, New York: Routledge.
Jay, Martin. 1992. Scopic Regimes of Modernity in Vision and Visuality: Discussions in Contemporary Culture 2, Hal Foster, ed. New Press.
Mackenzie, Adrian, and Ruth McNally. 2013. “Methods of the Multiple: How Large-Scale Scientific Data-Mining Pursues Identity and Differences.” Theory, Culture & Society 30(4): 72-91.
McLuhan, Marshall. 1964. Understanding Media. McGraw-Hill.
Mulvey, Laura. 1975. “Visual Pleasure and Narrative Cinema.” Screen 16(3): 6-18.
Ong, W. J. 1977. Interfaces of the word: Studies in the evolution of consciousness and culture. Ithaca, NY: Cornell University Press
Rorty, Richard. 1979. Philosophy and the Mirror of Nature. Princeton: Princeton University Press.
Ruppert, Evelyn. 2013. Not Just Another Database: The Transactions that Enact Young Offenders. Computational Culture, pp. 1-13.
Schwartz, Hillel. 1996. The Culture of the Copy: Striking Likenesses, Unreasonable Facsimiles. New York: Zone Books.
Soderberg, Johan. 2010. “Misuser Inventions and the Invention of the Misuser: Hackers, Crackers, Filesharers.” Science as Culture 19(2): 151-179.
Tyler, Imogen. 2011. “Pramfaced Girls: The Class Politics of ‘Maternal TV.’” In Reality Television and Class, eds. Helen Wood and Beverley Skeggs, 210-224. Basingstoke: Palgrave Macmillan.
-Contributed by Adam Fish, Sociology Department at Lancaster University-
Community — Rosemary Avance, University of Pennsylvania
The digital era poses new possibilities and challenges to our understanding of the nature and constitution of community. Hardly a techno buzzword, the term “community” has historic uses ranging from a general denotation of social organization, district, or state; to the holding of important things in common; to the existential togetherness and unity found in moments of communitas. Our English-language “community” originates from the Latin root communis, “common, public, general, shared by all or many,” which evolved into the 14th century Old French comunité meaning “commonness, everybody”. Originally the noun was affective, referencing a quality of fellowship, before it ever referred to an aggregation of souls. Traditionally the term has encompassed our neighborhoods, our religious centers, and our nation-states– historically, geographic and temporal birthrights, subjectivities unchosen by the individual. Today, we speak of a global community, made possible by communications technologies, and our geographically-specific notions of community are disrupted by the possibilities of the digital, where disembodied and socially distant beings create what they– and we, as scholars– also call community. But are the features and affordances of digital community distinct from those we associate with embodied clanship and kinship?
With shared etymological roots and many shared assumptions, the term “community” is of central importance to the field of Communication. Social scientific taxonomies have long placed the elusive notion of community at the apex of human association, as a utopian model of connection and cohesion, a place where human wills unite for the good of the group. We long to commune, as John Peters argues, yet our inability to ever truly connect with another soul keeps us grounded in sympathies and persistent in attempts. Perhaps this elusiveness is where our collective disciplinary preoccupation with the notion of community arises.
On- or offline, community is at best an idealized, imaginary structure, and that idealization obfuscates exploitation. Michel Foucault reminds us that pure community is at base a mechanism of control over social relations, a policing of our interactions. Benedict Anderson reaffirms, too, the imaginary nature of community, which we conceive of as a “deep, horizontal comradeship” in such a way that power differentials are jointly pretended away.
Victor Turner, of course, adopts the source of the word in his theories of liminality and communitas, arguing after van Gennep that ritual rites of passage move an individual from a state of social indeterminacy to a state of communal oneness and homogeneity. The outcome of an individual’s reincorporation into a group is a burdening, as the individual takes on obligation and responsibility toward defined others. This is the formation, the very root, of community– an ethical orientation outside oneself and toward others. Thus the community emerges as the social compact charged with policing the system. The implication of community, then, is citizenship-belonging. Community is an ideal, the result of individuals accepting and serving their obligations and responsibilities vis-à-vis the collective.
What do we make of Internet-based communities, united over shared interests from the mundane to the elysian, evading easy classification due to wide ranging differences in participation, influence, and affective connection? Moral panics accompany all new media technologies, and the pronounced fear associated with global connectivity via the Internet, with no little irony, reflects the fear of disconnection. In Bowling Alone, Robert Putnam notoriously gives voice to this fear, suggesting that declines in community commitment, manifest in low civic and political engagement, declining religious participation, increasing distrust, decreasing social ties, and waning altruism are at least in part attributable to technology and mass media, as entertainment and news are tailored to the individual and consumed alone. Putnam paints a bleak image of Americans in dark houses, television lights flickering and TV dinners mindlessly consumed.
Digital community seems to offer a panacea to both the problems of community as a mechanism of control and the fear of disconnection in a new media age. Indeed, Fred Turner shows that digital community has roots in countercultural movements, as “virtual community… translated a countercultural vision of the proper relationship between technology and sociability into a resource for imagining and managing life in the network economy”. Digital community arrives, then, as a solution to the “problem” of modernity: disembodied cyberspace somehow at once flattens and broadens our notions of self.
So what do we mean by digital community? Scholars differentiate between “virtual” and “digital” communities, both features of Internet culture: the former denotes a quasi-geographical location (e.g., a particular URL), whereas digital communities are ephemeral, united around a shared interest or identity rather than a particular virtual location. Thus virtual gaming communities, for instance, may be located at a particular website, while digital gaming communities are dispersed across social platforms and virtual spaces, united around a shared interest.
While past conceptions of community were generally outside one’s agential selection– you are born and die in your town, your religion is the faith of your parents– today’s diverse digital landscape means self-selection into communities of interest and affinity. But digital community does not entirely escape the deterministic, as availability still marks a very real digital divide between those with access to the technology and those without. Not only that, but the affordances of various platforms, both in intended and possible (read: disruptive) use, all inform what might be seen as a digital community’s blueprint. Online community formation relies on software architecture that pre-dates the community itself, so that communities evolve and adapt not in spite of but because of the affordances of the technological platform. These include format, space constraints, visuals, fixity vs. mutability, privacy vs. surveillance, peer feedback, report features/TOS, and modality (cellular, tablet, desktop)– all features which inform what is possible in a given virtual community. Digital communities can evade some but not all of the fixity of these structural constraints, reaching across a variety of platforms and forums on both the light and dark web.
Both types of online community networks are dynamic and self-organizing. Many social networking sites like Facebook and Myspace are pure “intentional communities” wherein self-selection into the platform and mutual “friending” secure one’s place. Highly fragmented, niche communities redistribute power in both intangible and tangible ways — think only of the economic impact of peer-to-peer communities on the music industry, where file sharing challenges traditional conceptions of property rights and even our collective moral code. Indeed, content sharing is the basis of online community– from photos, to text, to files and links– and users themselves decide their own level of engagement in these participatory cultures. Within the communities themselves, the flattening dynamic of Internet culture, where everyone can have a platform and a voice, obfuscates the very real social hierarchies which are supported by social processes and norms– all of which evolve from platform affordances.
Some scholars and observers remain reluctant to accept Facebook, Twitter, blogs, or forums as true examples of community. They see these spaces as primarily narcissistic expressions of what Manuel Castells calls the “culture of individualism”, emphasizing consumerism, networked individualism, and autonomy, rather than the “culture of communalism”, rooted in history and geography. Ironically, perhaps, much of today’s “countercultural” vision involves little to no connectivity: by refusing to participate in the exploitation and grand social experiment that is Facebook, for instance, one opts out of the forms of life available there.
Yet users, if we take them at their word, say that online community provides a space to be “real”– or somehow more authentic– in ways that embodied community might sanction. An overabundance of narrative visibility and social support on the Internet allows users to foster difference in ways that limited offline social networks simply cannot sustain. That is to say, in today’s world, it is not uncommon for youth to self-identify as queer and first “come out” in digital spaces or, to draw on my own ethnographic work, for Mormons to foster heterodox (e.g. liberal) identities in closed Facebook groups before what they too mark as a “coming out” to their conservative “real-world” family and friends. We might do well to remember the origin of our term “community”, which referenced a quality of fellowship before it ever referred to an aggregation of souls. It seems our term has come full circle, as disembodied souls unite in fellowship mediated by the digital.
1. Peters, John Durham. (1999). Speaking into the air: A history of the idea of communication. Chicago: U. of Chicago Press.
2. Foucault, Michel. (1977). Discipline and punish: The birth of the prison. New York: Random House.
3. Anderson, Benedict. (1983). Imagined communities: Reflections on the origin and spread of nationalism. London: Verso.
4. Turner, Victor. (1969). “Liminality and communitas.” in The Ritual Process: Structure and Anti-Structure, pp. 94-. New York: Aldine.
5. Putnam, Robert. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.
6. Turner, Fred. (2005, July). “Where the counterculture met the new economy: The WELL and the origins of virtual community.” Technology and Culture: 46, 491.
6. C.f. boyd, danah. (2006, 4 December). “Friends, friendsters, and myspace top 8: Writing community into being on social network sites.” First Monday 11(12). Available at http://firstmonday.org/article/view/1418/1336
8. See Hughes, Jerald & Karl Reiner Lang. (2003). “If I had a song: The culture of digital community networks and its impact on the music industry.” International Journal on Media Management 5(3):180-189.
9. “Everyone”, that is, with access, equipment, technological savvy, and, presumably, an audience.
10. Castells, Manuel. (2007). “Communication, power, and counter-power in the network society.” International Journal of Communication 1:238-266.
11. Gray, Mary L. (2009, July). “Negotiating identities/queering desires: Coming out online and the remediation of the coming-out story.” Journal of Computer-Mediated Communication 14(4):1162-1189.
12. Here I’m drawing on years of ethnographic work among Mormons on the Internet, with details forthcoming in my dissertation “Constructing Religion in the Digital Age: The Internet and Modern Mormon Identities”; for more on Mormon deconversion and online narratives see Avance, Rosemary. (2013). “Seeing the light: Mormon conversion and deconversion narratives in off- and online worlds.” Journal of Media and Religion 12(1):16-24.
-Contributed by Rosemary Avance, University of Pennsylvania-
My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: we are now, officially, doing human subjects research.
I’m frustrated that the state of public intellectualism allows us, individually, to jump into the conversation about the recently published Facebook “Emotions” Study. What we—from technology builders and interface designers to data scientists and ethnographers working in industry and at universities alike—really (really) need right now is to sit down together and talk. Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have, from data sharing and users’ rights to the drop in public funding for basic research itself. We need a dialogue—a thoughtful, compassionate conversation among those who are or will be training the next generation of researchers studying social media. And, like all matters of ethics, this discussion will become a personal one as we reflect on our doubts, disagreements, missteps, and misgivings. But the stakes are high. Why should the Public trust social media researchers and the platforms that make social media a thing? It is our collective job to earn and maintain the Public’s trust so that future research and social media builders have a fighting chance to learn and create more down the line. Science, in particular, is an investment in questions that precede and will live beyond the horizon of individual careers.
As more and more of us crisscross disciplines and work together to study or build better social media, we are pressed to rethink our basic methods and the ethical obligations pinned to them. Indeed “ethical dilemmas” are often signs that our methodological techniques are stretched too thin and failing us. When is something a “naturalistic experiment” if the data are always undergoing A/B tweaks? How do we determine consent if we are studying an environment that is at once controllable, like a lab, but deeply social, like a backyard BBQ? When do we need to consider someone’s information “private” if we have no way to know, for sure, what they want us to do with what we can see them doing? When, if ever, is it ok to play with someone’s data if there’s no evident harm but we have no way to clearly test the long-term impact on a nebulous number of end users?
There is nothing obvious about how to design and execute ethical research that examines people’s individual or social lives. The reality is, when it comes to studying human interaction or behavior (for profit or scientific glory), it is no more (or less) complicated whether we’re interviewing someone in their living room, watching them in a lab, testing them at the screen, or examining the content they post online. There is no clearer sign of this than the range of reactions to the news (impeccably curated here by James Grimmelmann) that for one week, back in January 2012, researchers manipulated (in the scientific sense) what 689,003 Facebook users read in their individual News Feed. Facebook’s researchers fed some users a diet containing fewer posts of “happy” and positive words than their usual News Feed; other users received a smaller-than-average allotment of posts laden with sad words. Cornell-based researchers came in after the experiment was over to help sift through and crunch the massive data set. Here’s what the team found: By the experiment’s last day (which, coincidentally, landed on the day of the SOPA online protests! Whoops), it turned out that a negligible—but statistically detectable—number of people produced fewer positive posts and more negative ones if their Feed included fewer positive news posts from friends; when the researchers scaled back the number of posts with negative cues from friends, people posted fewer negative and more positive posts. This interesting, even if small, finding was published in the June 2014 issue of the Proceedings of the National Academy of Sciences (PNAS). That’s how Science works—one small finding at a time.
At issue: the lead author, Facebook Data Scientist, Adam Kramer, never told users in the study that their News Feeds were part of this experiment, either before or after that week in January. And Cornell University’s researchers examining the secondary data set (fancy lingo for the digital records of more than half a million people’s interactions with each other) weren’t, technically, on the hook for explaining that to subjects either. Mind you, it’s often acceptable in human subjects research to conduct experiments without prior consent, as long as everyone discussing the case agrees that the experiment does not impose greater risk to the person than they might experience in a typical day. But even in those cases, at some point the research subjects are told (“debriefed”) about their participation in the study and given the option to withdraw data collected about them from the study. Researchers also have a chance to study the impact of the stimulus they introduced into the system. So, the question of the hour is: Do we cross a line when testing a product also asks a scientifically relevant question? If researchers or systems designers are “just” testing a product on end users (aka humans) and another group has access to all that luscious data, whose ethics apply? When does “testing” end and “real research” begin in the complicated world of “The Internet?”
Canonical Science teaches us that the greater the distance between researchers and our subjects (often framed as objectivity), the easier it is for us to keep trouble at arm’s length. Having carried out what we call “human subjects research” for much of my scholarly life—all of it under the close scrutiny of Institutional Review Boards (IRBs)—I feel professionally qualified to say, “researching people ain’t easy.” And, you know what makes it even harder? We are only about 10 years into this thing we call “social media”—which can morph into a telephone, newspaper, reality TV show, or school chalkboard, depending on who’s wielding it and when we’re watching them in action. Online, we are just as likely to be passionately interacting with each other, skimming prose, or casually channel-surfing, depending on our individual context. Unfortunately, it’s hard for anyone studying the digital signs of humans interacting online to know what people mean for us to see—unless we ask them. We don’t have the methods (yet) to robustly study social media as sites of always-on, dynamic human interaction. So, to date, we’ve treated the Internet as a massive stack of flat, text files to scrape and mine. We have not had a reason to collectively question this common, methodological practice as long as we maintained users’ privacy. But is individual privacy really the issue?
My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: We are now, officially, doing human subjects research. Here’s some background to orient us and the people who pay our research bills (and salaries) to this new reality.
Genealogy of Human Subjects Research Oversight in the United States
In 1966, the New England Journal of Medicine published an article by Harvard research physician, Henry Beecher, chronicling 22 ethically questionable scientific studies conducted between 1945 and 1965 (Rothman, 2003: 70-84). Dr. Beecher’s review wasn’t exposing fringe science on the margins. Federally and industry-funded experiments conducted by luminaries of biomedicine accounted for most of the work cited in his review. Even if today we feel like it’s a no brainer to call ethical foul on the studies Beecher cited, keep in mind that it took DECADES for people to reach consensus on what not to do. Take, for example, Beecher’s mention of Dr. Saul Krugman. From 1958 to 1964, Krugman injected children with live hepatitis virus at Willowbrook State School on New York’s Staten Island, a publicly-funded institution for children with intellectual disabilities. The Office of the Surgeon General, U.S. Armed Forces Epidemiological Board, and New York State Department of Mental Hygiene funded and approved his research. Krugman directed staff to put the feces of infected children into milkshakes later fed to newly admitted children, to track the spread of the disease. Krugman pressed poor families to include their children in what he called “treatments” to secure their admission to Willowbrook, the only option for poor families with children suffering from intellectual disabilities. After infecting the children, Krugman experimented with their antibodies to develop what would later become the vaccines for the disease. Krugman was never called out for the lack of consent, or for failing to provide care to the children he infected with the virus, who were left at risk of dying from liver disease. Indeed, he received the prestigious Lasker Prize for Medicine for developing the Hepatitis A and B vaccines and, in 1972, became the President of the American Pediatric Society. Pretty shocking. But, at the time, and for decades after that, Willowbrook did not register as unequivocally unethical.
My point here is not to draw one-to-one comparisons of Willowbrook and the Facebook Emotions study. They are not even close to comparable. I bring up Willowbrook to point out that no matter how ethically egregious something might seem in hindsight, often such studies do not appear so at the time, especially when weighed against the good they might seem to offer in the moment. Those living in the present are never in the best position to judge what will or will not seem “obviously wrong.”
News accounts of risky experiments carried out without prior or clear consent, often targeting marginalized communities with little power, catalyzed political will for federal regulations for biomedical and behavioral researchers’ experiments (Rothman, 2003: 183-184). Everyone agreed: there’s a conflict of interest when individual researchers are given unfettered license to decide if their research (and their reputations) is more valuable to Science than an individual’s right to opt out of research, no matter how cool and important the findings might be. The balance between the greater good and individual risk of research involving human subjects must be adjudicated by a separate review committee, made up of peers and community members, with nothing to be gained by approving or denying a researcher’s proposed project.
The Belmont Report
The National Research Act of 1974 created the Commission for the Protection of Human Subjects of Biomedical and Behavioral Research . Five years later, the Commission released The Belmont Report: The Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The Belmont Report codified the call for “respect for persons, beneficence, and justice” (The Belmont Report, 1979). More concretely, it spelled out what newly mandated university and publicly funded agency-based IRBs should expect their researchers to do to safeguard subjects’ informed consent, address the risks and benefits their participation might accrue, and more fairly distribute science’s “burdens and benefits” (The Belmont Report, 1979). The Belmont Report now guides how we define human subjects research and the attendant ethical obligations of those who engage in it.
Put simply, the Belmont Report put a Common Rule in place to manage ethics through a procedure focused on rooting out bad apples before something egregious happens or is uncovered, after the fact. But it did not—and we have not—positioned ethics as an on-going, complicated discussion among researchers actively engaging fellow researchers and the human subjects we study. And we’ve only now recognized that human subjects research is core to technology companies’ product development and, by extension, bottom lines. However, there is an element of the Belmont Report that we could use to rethink guidance for technology companies, data scientists, and social media researchers alike: the lines drawn in the Belmont Report between “practice and research.”
The fine line between practice and research
The Belmont Report drew a clear line demarcating the “boundaries between biomedical and behavioral research and the accepted and routine practice of medicine”—the difference between research and therapeutic intervention (The Belmont Report 1979). This mandate, which was in fact the Report’s first order of business, indexes the Commission’s most pressing anxiety: how to rein in biomedicine’s professional tendencies to experiment in therapeutic contexts. The history of biomedical breakthroughs—from Walter Reed’s discovery of the causes of yellow fever to Jonas Salk’s polio vaccines—attests to the profession’s culture of experimentation (Halpern 2004: 41-96). However, this professional image of the renegade (mad) scientist pioneering medical advances was increasingly at odds with the need, pressing by the 1970s, for a more restrained and cautious scientific community driven first by an accountability to the public and only second by a desire for discovery.
In redrawing the boundaries between research and practice, the Belmont Report positioned ethics as a wedge between competing interests. If a practitioner simply wanted to tweak a technique to see if it could improve an individual subject’s experience, the experiment did not meet the threshold of “real scientific inquiry” and could be excused from more formal procedures of consent, debriefing, and peer review. Why? Practitioners already have guiding codes of ethics (“do no harm”) and, as importantly, ongoing relationships built on communication and trust with the people in their care (at least, in theory). The assumption was that practitioners and “their” subjects could hold each other mutually accountable.
But, once a researcher tests something out for testing’s sake or to work on, more broadly, a scientific puzzle, they are in the realm of research and must consider a new set of questions: Cui bono, who benefits? Will the risk or harm to an individual outweigh the benefits for the greater good? What if that researcher profits from the greater good? The truth is, in most cases, the researcher will benefit, whether they make money or not, because they will gain credibility and status through the experience of their research. Can we say the same for the individual contributing their experiences to our experiments? If not, that’s, typically, an ethical dilemma.
Constructing ethical practice in a social media world
Social media platforms and the technology companies that produce our shared social playgrounds blur the boundaries between practice and research. They (we?) have to, in many cases, to improve the products that companies provide users. That’s no easy thing if you’re in the business of providing a social experience through your technology! But that does not exempt companies, any more than it exempts researchers, from extending respect, beneficence, and justice to individuals sharing their daily interactions with us. So we need to, collectively, rethink when “testing a feature” transitions from improving customer experience to more than minimally impacting someone’s social life.
Ethical stances on methodological practices are inextricably linked to how we conceptualize our objects of study. Issues of consent hinge on whether researchers believe they are studying texts or people’s private interactions. Who needs to be solicited for consent also depends on whether researchers feel they are engaged in a single site study or dealing with an infrastructure that crosses multiple boundaries. What ethical obligations, then, should I adhere to as I read people’s posts—particularly on commercial venues such as Facebook that are often considered “public domain”—even when they may involve participants who share personal details about their lives from the walled garden of their privacy settings? Are these obligations different from those I should heed with individuals not directly involved in my research? How can I use this information and in what settings? Does consent to use information from interviews with participants include the information they publicly post about themselves online? These questions are not easily grouped as solely methods issues or strictly ethical concerns.
For me, the most pragmatic ethical practice follows from the reality that I will work with many of the people I meet through my fieldwork for years to come. And, importantly, if I burn bridges in my work, I am, literally, shutting out researchers who might want to follow in my footsteps. I can give us all a bad reputation that lasts a human subject’s lifetime. I, therefore, treat online materials as the voices of the people with whom I work. In the case of materials I would like to cite, I email the authors, tell them about my research, and ask if I may include their web pages in my analyses. I tread lightly and carefully.
The Facebook Emotions study could have included a follow-up email to all those in the study, sharing the cool results with participants and offering them a link to the happy and sad moments that they missed in their News Feed while the experiment was underway (tip of the hat to Tarleton Gillespie for those ideas). And, with more than half a million people participating, I’m sure a few hundred thousand would have opted in to Science and let Facebook keep the results.
We do not always have the benefit of personal relationships, built over time with research participants to guide our practices. And, unfortunately, our personal identities or affinities with research participants do not safeguard us from making unethical decisions in our research. We have only just started (like, last week) to think through what might be comparable practices for data scientists or technology designers, who often never directly talk with the people they study. That means that clear, ethical frameworks will be even more vital as we build new toolkits to study social media as sites of human interaction and social life.
Considering that more and more of social media research links universities and industry-based labs, we must coordinate our methodologies and ethics no matter who pays us to do our research. None of us should be relieved from duty when it comes to making sure all facets of our collaborations are conducted with an explicit, ethical plan of action. There are, arguably, no secondary data sets in this new world.
The Belmont Report was put in place to ensure that we have conversations with the Public, among ourselves, and with our institutions about the risks of the scientific enterprise. It’s there to help us come to some agreement as to how to address those risks and create contingency plans. While IRBs as classification systems can and have provided researchers with reflexive and sometimes necessary intervention, bureaucratic mechanisms and their notions of proper science are not the only or even the best source of good ethics for our work—ongoing and reflexive conversations among researchers and practitioners sharing their work with invested peers and participants are.
Whether from the comfort of a computer or in the thick of a community gathering, studying what people do in their everyday lives is challenging. The seeming objectivity of a lab setting or the God’s eye view of a web scraping script may seem to avoid biases and desires that could, otherwise, interfere with the social situations playing out in front of us that we want to observe. But, no matter how removed we are, our presence as researchers does not evaporate when we come into contact with human interaction. One of the values of sustained, ethnographic engagement with people as we research their lives: it keeps researchers constantly accountable not only to our own scientific (and self) interests but also to the people we encounter in any observation, experiment, or engagement.
Some of my peers argue that bothering people with requests for consent or efforts to debrief them will either “contaminate the data” or “seem creepy” after the fact. They argue that it’s less intrusive and more scientifically powerful to just study “the data” from a distance or adjust the interface design on the fly. I get it. It is not easy to talk with people about what they’re doing online. Keep in mind that by the end of USENET’s long life as the center of the Internet’s social world, many moderated newsgroups blocked two kinds of lurkers: journalists. And researchers. In the long run, keeping a distance can leave the general public more suspicious of companies’, designers’, and researchers’ intentions. People may also be less likely to talk to us down the road when we want to get a richer sense of what they’re doing online. Let’s move away from this legalistic, officious discussion of consent and frame this debate as a matter of trust.
None of us would accept someone surreptitiously recording our conversations with others to learn what we’re thinking or feeling just because “it’s easier” or it’s not clear that we are interested in sharing them if asked outright. We would all want to understand what someone wants to know about us and why they want to study what we’re doing—what do they hope to learn and why does it matter? Those are completely reasonable questions. All of us have a right to be asked if we want to share our lives with strangers (even researchers or technology companies studying the world or providing a service) so that we have a chance to say, “nah, not right now, I’m going through a bad break up.” What would it look like for all of us—from LOLcat enthusiasts and hardcore gamers, to researchers and tech companies—to (re)build trust and move toward a collective enterprise of explicitly opting-in to understand this rich, social world that we call “The Internet?”
Scientists and technology companies scrutinizing data bubbling up from the tweets, posts, driving patterns, or check-ins of people are coming to realize that we are also studying moments of humans interacting with each other. These moments call for respect, trust, mutuality. By default. Every time we even think we see social interactions online. Is working from this premise too much to ask of researchers or the companies and universities that employ us? I don’t think so.
Addendum (added June 13, 2014)
I realized after posting my thoughts on how to think about social media as a site of human interaction (and all the ethical and methodological implications of doing so) that I forgot to leave links to what are, bar none, the best resources on the planet for policy makers, researchers, and the general public thinking through all this stuff.
Run, don’t walk, to download copies of the following must-reads:
Charles Ess and the AOIR Ethics Committee (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Approved by the Association of Internet Researchers, November 27, 2002. Available at: http://aoir.org/reports/ethics.pdf
Annette Markham and Elizabeth Buchanan (2012). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee (version 2.0). Approved by the Association of Internet Researchers, December 2012. Available at: http://aoir.org/reports/ethics2.pdf
The United States Department of Health, Education and Welfare (HEW) was a cabinet-level, U.S. governmental department from 1953 to 1979. In 1979, HEW was reorganized into two separate cabinet-level departments: the Department of Education and the Department of Health and Human Services (HHS). HHS is in charge of all research integrity and compliance including research involving human subjects.
I wanted to thank my fellow MSR Ethics Advisory Board members, MSR New England Lab, and the Social Media Collective, as well as the following people for their thoughts on drafts of this essay: danah boyd, Henry Cohn, Kate Crawford, Tarleton Gillespie, James Grimmelmann, Jeff Hancock, Jaron Lanier, Tressie McMillan Cottom, Kate Miltner, Christian Sandvig, Kat Tiidenberg, Duncan Watts, and Kate Zyskowski
Bowker, Geoffrey C., and Susan Leigh Star
1999 Sorting Things Out: Classification and Its Consequences, Inside Technology. Cambridge, Mass.: MIT Press.
Brenneis, Don
1994 Discourse and Discipline at the National Research Council: A Bureaucratic Bildungsroman. Cultural Anthropology 9(1): 23-36.
2006 Partial Measures. American Ethnologist 33(4): 538-40.
Epstein, Steven
2007 Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.
Gieryn, Thomas F.
1983 Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48(6): 781-95.
Halpern, Sydney A.
2004 Lesser Harms: The Morality of Risk in Medical Research. Chicago: University of Chicago Press.
Lederman, Rena
2006 The Perils of Working at Home: IRB “Mission Creep” as Context and Content for an Ethnography of Disciplinary Knowledges. American Ethnologist 33(4): 482-91.
Rothman, David J.
2003 Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making. 2nd pbk. ed, Social Institutions and Social Change. New York: Aldine de Gruyter.
Schrag, Zachary M.
2010 Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009. Baltimore: Johns Hopkins University Press.
Stark, Laura
2012 Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: University of Chicago Press.
Strathern, Marilyn, ed.
2000 Audit Cultures: Anthropological Studies in Accountability, Ethics, and the Academy. London and New York: Routledge.
United States. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.
1978 Report and Recommendations: Institutional Review Boards. Washington, DC: U.S. Department of Health, Education, and Welfare.
This essay has been cross-posted from Ethnography Matters.
-Contributed by Mary Gray, Microsoft Research New England / Associate Professor of Communication and Culture with affiliations in American Studies, Anthropology, and the Gender Studies Department at Indiana University-
Many of us who study new media, whether we do so experimentally or qualitatively, our data big or small, are tracking the unfolding debate about the Facebook “emotional contagion” study, published recently in the Proceedings of the National Academy of Sciences. The research, by Kramer, Guillory, and Hancock, argued that small shifts in the emotions of those around us can shift our own moods, even online. To prove this experimentally, they made alterations in the News Feeds of 310,000 Facebook users, excluding a handful of status updates from friends that had either happy words or sad words, and measuring what those users subsequently posted for its emotional content. A matching number of users had posts left out of their News Feeds, but randomly selected, in order to serve as control groups. The lead author is a data scientist at Facebook, while the others have academic appointments at UCSF and Cornell University.
I have been a bit reluctant to speak about this, as (full disclosure) I am both a colleague and friend of one of the co-authors of this study; Cornell is my home institution. And, I’m currently a visiting scholar at Microsoft Research, though I don’t conduct data science and am not on specific research projects for the Microsoft Corporation. So I’m going to leave the debates about ethics and methods in other, capable hands. (Press coverage: Forbes 1, 2, 3; Atlantic 1, 2, 3, 4, Chronicle of Higher Ed 1; Slate 1; NY Times 1, 2; WSJ 1, 2, 3; Guardian 1, 2, 3. Academic comments: Grimmelmann, Tufekci, Crawford, boyd, Peterson, Selinger and Hartzog, Solove, Lanier, Vertesi.) I will say that social science has moved into uncharted waters in the last decade, from the embrace of computational social scientific techniques, to the use of social media as experimental data stations, to new kinds of collaborations between university researchers and the information technology industry. It’s not surprising to me that we find it necessary to raise concerns about how that research should work, and look for clearer ethical guidelines when social media users are also “human subjects.” In many ways I think this piece of research happened to fall into a bigger moment of reckoning about computational social science that has been coming for a long time — and we have a responsibility to take up these questions at this moment.
But a key issue, both in the research and in the reaction to it, is about Facebook and how it algorithmically curates our social connections, sometimes in the name of research and innovation, but also in the regular provision of Facebook’s service. And that I do have an opinion about. The researchers depended on the fact that Facebook already curates your News Feed, in myriad ways. When you log onto Facebook, the posts you’re immediately shown at the top of the News Feed are not every post from your friends in reverse chronological order. Of course Facebook has the technical ability to do this, and it would in many ways be simpler. But their worry is that users will be inundated with relatively uninteresting (but recent) posts, will not scroll down far enough to find the few among them that are engaging, and will eventually quit the service. So they’ve tailored their “EdgeRank” algorithm to consider, for each status update from each friend you might receive, not only when it was posted (more recent is better) but other factors, including how regularly you interact with that user (e.g. liking or commenting on their posts), how popular they are on the service and among your mutual friends, and so forth. A post with a high rating will show up, a post with a lower rating will not.
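The curation logic described above can be illustrated with a toy ranking function. Everything here—the field names, the weights, the decay curve—is a hypothetical sketch for illustration, not Facebook’s actual EdgeRank formula, which has never been published in detail:

```python
import time
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    posted_at: float          # Unix timestamp of the status update
    author_affinity: float    # how often the viewer interacts with this friend (0-1)
    author_popularity: float  # how much engagement the author draws overall (0-1)

def feed_score(post: Post, now: float) -> float:
    """Toy EdgeRank-style score: newer posts, closer friends, and more
    popular authors rank higher. The weights are invented for illustration."""
    age_hours = (now - post.posted_at) / 3600
    recency = 1.0 / (1.0 + age_hours)  # decays as the post ages
    return 0.5 * recency + 0.3 * post.author_affinity + 0.2 * post.author_popularity

def curate(posts: list[Post], now: float, top_n: int = 10) -> list[Post]:
    """Return the top_n posts by score -- a curated Feed, not a
    reverse-chronological list of everything friends posted."""
    return sorted(posts, key=lambda p: feed_score(p, now), reverse=True)[:top_n]
```

The point of the sketch is the cutoff in the last line: any post scoring below the threshold simply never appears, which is why a feed built this way is a construction rather than a transcript.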
So, for the purposes of this study, it was easy to also factor in a numerical count of happy or sad emotion words in the posts as well, and use that as an experimental variable. The fact that this algorithm does what it does also provided legal justification for the research: that Facebook curates all users’ data is already part of the site’s Terms of Service and its Data Use Policy, so it is within their rights to make whatever adjustments they want. And the Institutional Review Board at Cornell did not see a reason to even consider this as a human subjects issue: all that the Cornell researchers got was the statistical data produced from this manipulation, manipulations that are a normal part of the inner workings of Facebook.
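The experimental variable itself—a count of emotion words per post—is mechanically simple. The study used the LIWC word-count dictionaries; the tiny word lists below are stand-ins I invented for illustration:

```python
# Toy LIWC-style scoring: count positive and negative emotion words in a post.
# These word lists are tiny invented stand-ins; the actual study used the
# proprietary LIWC2007 dictionaries, which contain thousands of entries.
POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

def emotion_counts(text: str) -> tuple[int, int]:
    """Return (positive_count, negative_count) for a post's text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return pos, neg

def contains_positive(text: str) -> bool:
    """A post 'counts' as positive if it has at least one positive word --
    the kind of binary flag a feed-ranking experiment could filter on."""
    return emotion_counts(text)[0] > 0
```

A score like this can slot straight into an existing ranking pipeline as one more input, which is what made the experiment, from an engineering standpoint, such a small step from ordinary A/B testing.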
Defenders of the research (1, 2, 3), including Facebook, have pointed to this as a reason to dismiss what they see as an overreaction. This takes a couple of forms, not entirely consistent with each other: Facebook curates users’ News Feed anyway, it’s within their right to do so. Facebook curates users’ News Feed anyway, probably already on factors such as emotion. Facebook curates users’ News Feed anyway, and needs to understand how to do so by engaging in all sorts of A/B testing, which this was an example of. Facebook curates users’ News Feed anyway, get over it. All of these imply that it’s simply naive to think of this research as a “manipulation” of an otherwise untouched list; your News Feed is a construction, built from some of the posts direct to you, according to any number of constantly shifting algorithmic criteria. This was just one more construction. Those who are upset about this research are, according to its defenders, just ignorant of the realities of Facebook and its algorithm.
More and more of our culture is curated algorithmically; Facebook is a prime example, though certainly not the only one. But it’s easy for those of us who pay a lot of attention to how social media platforms work, engineers and observers alike, to forget how unfamiliar that is. I think, among the population of Facebook users — more than a billion people — there’s a huge range of awareness about these algorithms and their influence. And I don’t just mean that there are some poor saps who still think that Facebook delivers every post. In fact, there certainly are many, many Facebook users who still don’t know they’re receiving a curated subset of their friends’ posts, despite the fact that this has been true, and “known,” for some time. But it’s more than that. Many users know that they get some subset of their friends’ posts, but don’t understand the criteria at work. Many know, but do not think about it much as they use Facebook in any particular moment. Many know, and think they understand the criteria, but are mistaken. Just because we live with Facebook’s algorithm doesn’t mean we fully understand it. And even for those who know that Facebook curates our News Feeds algorithmically, it’s difficult as a culture to get beyond some very old and deeply sedimented ways to think about how information gets to us.
The public reaction to this research is proof of these persistent beliefs — a collective groan from our society as it adjusts to a culture that is algorithmically organized. Because social media, and Facebook most of all, truly violates a century-old distinction we know very well, between what were two, distinct kinds of information services. On the one hand, we had “trusted interpersonal information conduits” — the telephone companies, the post office. Users gave them information aimed for others and the service was entrusted to deliver that information. We expected them not to curate or even monitor that content; in fact, we made it illegal for them to do so. We expected that our communication would be delivered, for a fee, and we understood the service as the commodity, not the information it conveyed. On the other hand, we had “media content producers” — radio, film, magazines, newspapers, television, video games — where the entertainment they made for us felt like the commodity we paid for (sometimes with money, sometimes with our attention to ads), and it was designed to be as gripping as possible. We knew that producers made careful selections based on appealing to us as audiences, and deliberately played on our emotions as part of their design. We were not surprised that a sitcom was designed to be funny, even that the network might conduct focus group research to decide which ending was funnier (A/B testing?). But we would be surprised, outraged, to find out that the post office delivered only some of the letters addressed to us, in order to give us the most emotionally engaging mail experience.
And Facebook is complicit in this confusion, as it often presents itself as a trusted information conduit, and has been oblique about the way it curates our content into its commodity. If Facebook promised “the BEST of what your friends have to say,” then we might have to acknowledge that its selection process is and should be designed, tested, improved. That’s where this research seems problematic to some, because it is submerged in the mechanical workings of the News Feed, a system that still seems to promise to merely deliver what your friends are saying and doing. The gaming of that delivery, be it for “making the best service” or for “research,” is still a tactic that takes cover under its promise of mere delivery. Facebook has helped create the gap between expectation and reality that it has now fallen into.
That to me is what bothers people, about this research and about a lot of what Facebook does. I don’t think it is merely naive users not understanding that Facebook tweaks its algorithm, or that people are just souring on Facebook as a service. I think it’s an increasing, and increasingly apparent, ambivalence about what it is, and its divergence from what we think it is. Despite the cries of those most familiar with these systems’ workings, it takes a while, years, for a culture to adjust itself to the subtle workings of a new information system, and to stop expecting of it what traditional systems provided.
For each form of media, we as a public can raise concerns about its influence. For the telephone system, it was about whether they were providing service fairly and universally: a conduit’s promise is that all users will have the opportunity to connect, and as a nation we forced the telephone system to ensure universal service, even when it wasn’t profitable. Their preferred design was acceptable only until it ran up against a competing concern: public access. For media content, we have little concern about being “emotionally manipulated” by a sitcom or a tear-jerker drama. But we do worry about that kind of emotional manipulation in news, like the fear mongering of cable news pundits. Here again, their preferred design is acceptable until it runs up against a competing concern: a journalistic obligation to the public interest. So what is the competing interest here? What kind of interventions are acceptable in an algorithmically curated platform, and what competing concern do they run up against?
Is it naive to continue to want Facebook to be a trusted information conduit? Is it too late? Maybe so. Though I think there is still a different obligation when you’re delivering the communication of others — an obligation Facebook has increasingly forgone. Some of the discussion of this research suggests that the competing concern here is science: that the ethics are different because this manipulation was presented as scientific discovery, a knowledge project for which we have different standards and obligations. But, frankly, that’s a troublingly narrow view. Just because this algorithmic manipulation came to light because it was published as science doesn’t mean that the science was the problem. The responsibility may extend well beyond, to Facebook’s fundamental practices.
Is there any room for a public interest concern, like for journalism? Some have argued that Facebook and other social media now function as a kind of quasi-public sphere. They not only serve our desire to interact with others socially, they are also important venues for public engagement and debate. The research on emotional contagion was conducted during the week of January 11-18, 2012. What was going on then, not just in the emotional lives of these users, but in the world around them? There was ongoing violence and protest in Syria. The Costa Concordia cruise ship ran aground in the Mediterranean. The U.S. Republican party was in the midst of its nomination process: Jon Huntsman dropped out of the race this week, and Rick Perry the day after. January 18th was the SOPA protest blackout day, something that was hotly (emotionally?) debated during the preceding week. Social media platforms like Facebook and Twitter were in many ways the primary venues for activism and broader discussion of this particular issue. Whether or not the posts that were excluded by this research pertained to any of these topics, there’s a bigger question at hand: does Facebook have an obligation to be fair-minded, or impartial, or representative, or exhaustive, in its selection of posts that address public concerns?
The answers to these questions, I believe, are not clear. And this goes well beyond one research study; it is a much broader question about Facebook’s responsibility. But the intense response to this research, on the part of press, academics, and Facebook users, should speak to these questions. Maybe we latch onto specific incidents like a research intervention, maybe we grab onto scary bogeymen like the NSA, maybe we get hooked on critical angles on the problem like the debate about “free labor,” maybe we lash out only when the opportunity is provided, like when Facebook tries to use our posts as advertising. But together, I think these represent a deeper discomfort about an information environment where the content is ours but the selection is theirs.
-Contributed by Tarleton Gillespie, Cornell University Department of Communication-