Culture Digitally // Examining Contemporary Cultural Production

  • With the generous support of the National Science Foundation we have developed Culture Digitally. The blog is meant to be a gathering point for scholars and others who study cultural production and information technologies. Welcome and please join our conversation.

     

    • Mirror [draft] [#digitalkeywords] Jul 25, 2014

      “The business proposition of cloud companies is that their mirroring is an affordable way of securing retrievable data. The compromise is that mirroring liberates and at once captures the very images and information it displaces, diffracts, and makes autonomous… While the capture of mirrored data for surplus production should be clear, mirroring is also an action within counterhegemonic information activism. In this way, mirroring is not neutral but rather a tool for both liberation and capture, for activist visibility and visibility-as-a-trap.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Mirror — Adam Fish, Lancaster University

The mirror is one of the most trafficked metaphors in Western thought. In Ancient Greek mythology, Narcissus dies transfixed by his reflection in a spring. According to early sociology, we are a “looking glass self”: our identities are formed when we mirror how we think others see us (Cooley 1902). In Philosophy and the Mirror of Nature, Richard Rorty (1979) shatters the Enlightenment goal that through scientific inquiry the mind could mirror nature, harboring replicas in mental formulae. Thus, from antiquity onward, the mirror metaphor has been used to describe everything from vanity, to subject formation, to consensual reality. Today, information companies and information activists alike call data duplication mirroring but often fail to acknowledge how the symbolism of this term may shape its use. Mirrors are more complex entities than simple facsimiles; they echo the intricacies of data practice. Below I endeavor to explain how, for information activists and information firms alike, mirroring is an exploit of networks and computers to remain visible by capturing “eyeballs.”

Mirrors are metaphors for what they reflect. In Through the Looking-Glass, Lewis Carroll (1871) has Alice journey through a mirror and into a parallel and parable-rich universe of reversals. In Oscar Wilde’s The Picture of Dorian Gray (1891), the mirroring portrait ages but the protagonist does not. Hillel Schwartz (1996) traces this history and our obsession with twins, replicas, duplicates, decoys, counterfeits, portraits, mannequins, clones, replays, photocopies, and forgeries. The mirror metaphor continues into the digital age. The United Kingdom’s Channel Four television series Black Mirror is a drama that comments on a dystopic future of increasing connectivity. Charlie Brooker’s programme sees our mobile and laptop screens as black mirrors into which we stare, as Narcissus did, and which reflect back our self-destructive ways. Peter Sunde, co-founder of the file-sharing site The Pirate Bay, believes that copying is genetically coded, saying: “People learn by copying others. All the knowledge we have today, and all success is based on this simple fact – we are copies.” As a locus for the confluence of metaphysics and materiality, mirrors are a way to see how the practical and the metaphoric are co-constituted in database worlds.

This short entry has both the philosophical goal of discussing mirrors as a metaphor and a practical objective of showing how mirroring is a practice of activism as well as of cloud computing. Below I describe how mirroring in computing is a way of keeping a copy of some or all of a particular body of content at another site, typically in order to protect it and improve its accessibility. Mirroring is a way of working with a multiplication of data. For activists, mirroring is a method to achieve and preserve visibility on networked communication systems. Mirror multiplicities also provide opportunities for cloud companies seeking to capture and sell personal information. Because mirrors are geographically dispersed and constituted by different trajectories, speaking of the replication of origins does not do justice to their complexity. Instead, I choose to identify mirroring as a form of praxis, a way of being and thinking in the world. In this sometimes confusing hall of mirrors, the practical and the metaphoric, the actual and the virtual, co-create each other in acts of reflection.

      Mirrors as Multiples

Computing companies would have us believe that mirroring is the non-rivalrous multiplication of data made possible by packet-switching, storage, and binary technologies. It is achievable because of the copy-and-paste functionality of computers, data, and networks. Cloud computing requires the mirroring or replication of databases for global access and security. Microsoft, which provides a number of cloud computing services, defines “database mirroring” as the maintenance of “two copies of a single database that must reside on different server instances.” Mirroring is not just a tool for Fortune 500 hegemonic information companies: Wikileaks also “mirrors” its content. It and its supporters mirror content in jurisdictions outside of American reach when faced with the legal shutdown of private servers housing its incendiary cables. Today, sites in at least eleven European nations offer the Wikileaks mirror. The largest peer-to-peer file-sharing service in the world, The Pirate Bay, mirrors its links in national jurisdictions where its practices have yet to be deemed illegal (eighteen countries presently block root access to The Pirate Bay). Mirroring, thus, is a practice for both hegemonic and counterhegemonic actors. But it would be inaccurate to claim that these mirrors are exact replicas.
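To make the logic of mirroring concrete, here is a minimal sketch, in Python, of the fallback behavior that gives mirrors their resilience. It is an illustration only, with hypothetical URLs, and makes no claim to reflect Microsoft’s database mirroring or Wikileaks’ and The Pirate Bay’s actual infrastructure: a reader simply tries each copy in turn, so seizing or blocking any single server does not erase the content.

```python
# A minimal sketch of mirror fallback: try each copy until one answers.
# The URLs are hypothetical placeholders, not real mirror addresses.

from urllib.request import urlopen


MIRRORS = [
    "https://example.org/cables.html",          # primary host (hypothetical)
    "https://mirror1.example.net/cables.html",  # copy in another jurisdiction
    "https://mirror2.example.ch/cables.html",   # yet another copy
]


def fetch_from_mirrors(urls, timeout=5):
    """Return the content of the first mirror that responds.

    If the primary host is shut down or blocked, the request moves on
    to the next copy; the content stays retrievable as long as any one
    mirror survives.
    """
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as response:
                return response.read()
        except OSError:
            continue  # this mirror is down or blocked; try the next one
    raise RuntimeError("no mirror reachable")


if __name__ == "__main__":
    page = fetch_from_mirrors(MIRRORS)
    print(len(page), "bytes retrieved")
```

The same principle, multiplied across databases rather than web pages, is what cloud providers sell as redundancy and what activists practice as resistance to takedowns.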

Microsoft’s is a naïve realist notion that mirrors are precise copies, merely displaced within or across databases. A slightly more complex social constructivist perspective sees mirrors as symbolic representations. In constructivism, mirrors would not be conceived as duplicates but rather as iconic yet accurate depictions. Physicist Karen Barad (2003) challenges both “naïve realist” and constructivist interpretations of mirrors, offering a third construal. Echoing Rorty, she says “…the representationalist belief in the power of words to mirror preexisting phenomena is the metaphysical substrate that supports social constructivist, as well as traditional realist, beliefs” (Barad 2003: 802). In this way, mirrors are neither realist copies nor constructed depictions. Rather, mirrors are a data multiplication that maps a contestation over visibility.

To offer robust, secure, and undelayed access to content it is necessary to store multiples. In cloud computing, content formation is a regenerative process of recomposition from geographically dispersed databases. Numerous scholars identify how database-derived depictions of diseases, criminals, and biological processes are visualized not as singular entities but rather as complex beings constituted by numerous coded transactions (Mol 2003, Ruppert 2013, Mackenzie and McNally 2013). In these diverse cases, the multiple is neither a fragmented nor a necessarily contradictory singularity but rather a fluid “field of multiple conjoined actions that cumulatively enact new entities” (Ruppert 2013). The “performative excesses” of visualized multiples “undo or unmake identities as much as they make them” (Mackenzie and McNally 2013). Structured by databases and unmoored from beginnings, mirrors are multiples with numerous applications for hegemonic and counterhegemonic actors alike.

      Mirroring as Activist Visibility

Mirrors transform seeing and what is seen. Through physical vanity mirrors, medieval Europeans “came to reflect on, know and judge themselves and others through becoming aware of how they appeared” (Coleman 2013: 5, Melchior-Bonnet 2001). Using lenses and mirrors to transform his studio into a camera obscura, the 17th-century Dutch painter Johannes Vermeer painted not the depth of field and the textures seen by the unmediated human eye but the world as framed by a camera (Steadman 2002). There is power in controlling new regimes of technologically assisted seeing. Historically, writing and printing systems extended the prioritization of the ocular and, through it, the power given to those who could read, write, print, and cast mortal judgment based on textual studies (Ong 1977, McLuhan 1964). “Scopic regimes” developed in Western science and law to control the power of being able to make something visible and legible (Jay 1992). Likewise, visual technologies assemble the real, the natural, and the moral for Western technoscientific systems (Haraway 1997). Through “seeing like a state,” nations objectify and thereby control colonial bodies (Scott 1999). The will to visibility is also profoundly gendered, with cinema historically being produced for the male gaze (Mulvey 1975). Visibility “lies at the intersection of the two domains of aesthetics (relations of perception) and politics (relations of power)” (Brighenti 2007: 324). The counterhegemonic mirroring practices of Wikileaks, The Pirate Bay, and, below, Anonymous are intimately linked to the ability to be seen.

Mirroring is central to the power to make visible (or invisible) in the networked society. For instance, consider how Anonymous, made famous by hacks, leaks, and performative politics, secures visibility for its political videos by mirroring them across YouTube. The content made visible by these video mirrors solicits viewers to model themselves after politically active bodies. The process by which political films hail viewers to copy revolutionary subjects is called “political mimesis” (Gaines 1994). And yet, while mirrors represent politicized bodies, they cannot be reduced to mere representations. Here, mirrors do not reveal origins but rather locate contestation (Fish forthcoming). The friction revealed by Anonymous video mirrors is over censorship, as the Church of Scientology and other opponents of Anonymous attempt to force YouTube to take down Anonymous videos critical of Scientology. Anonymous video mirrors mark a counterhegemonic will-to-visibility.

Thus it is true, as Foucault said, that “visibility is a trap,” but not for everyone (1977: 200). For both media corporations and activists, visibility is a tool for empowerment. The power to make someone visible or invisible has traditionally been reserved for entrenched elites. Scandals recorded on cameras and distributed online now interrupt the lives of political and economic elites who used to be able to tightly control their self-presentation (Goffman 1956, Thompson 2005). Reality television provides visibility to some and through it often stigmatizes social classes through televised spectacle (Tyler 2011, Couldry 2010). As a read/write medium capable of delivering text, image, and moving pictures, the internet exacerbates ocular-centrism as well as the dangers and possibilities of visibility. Hacks, leaks, and video mirrors are forms of visual counter-power. The power to see and not be seen, running from the eye training of literacy, to the male gaze in cinema, to cultures of self-presentation and reality television, to the visibility optimization industries of fashion and advertising, to video mirroring, constitutes regimes of power and counter-power in contemporary networked society.

The counterhegemonic mirroring practices of Wikileaks, Anonymous, and The Pirate Bay are examples of an interventionary “misuse” of pre-existing capitalist information infrastructure that diversifies and magnifies the visualization of radical voice (Soderberg 2010). Despite using for-profit social media platforms and thereby being captured within circuits of techno-capitalism (Dean 2010), grassroots political visibility can be a practice-based form of access and voice that resists erasure (Couldry 2010). Mirroring is one among many promising but nonetheless uneven forms of technological resistance used both for and against the for-profit capture of information resources.

      Capturing the Mirror

One reading sees human prehistory as the progress of information creation and control (Gleick 2011). Throughout human evolution, the size and complexity of the neocortex, language, and group dynamics increased together (Dunbar 1993). The storage of information in symbolic systems and in durable substances of rock, wood, and fiber, and later in digital databases, further amplified the complexity of the brain, language, and society (Ong 1977, McLuhan 1964). Mirroring is a later manifestation of the prehistoric practice of data creation, control, and manipulation. But while corporately owned databases are continuations of prehistoric information storage, they also structure data in a particular way for a particular purpose. The structures and risks associated with the potentials of database mirrors, I would argue, are political economic in nature. In terms of the virtual, Deleuze discussed the “double-movement of liberation and capture” (1972). While mirrors offer opportunities for the liberation of activist visibility, they also provide data corporations opportunities to capture social capital. Chow says captivation “is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses” (Chow 2012: 48 in Berry 2014). In this way, reflexively produced material and affectual data is captured within an informational economy.

      The business proposition of cloud companies is that their mirroring is an affordable way of securing retrievable data. The compromise is that mirroring liberates and at once captures the very images and information it displaces, diffracts, and makes autonomous. We are hailed to be responsible with our data by backing it up, to take our lives seriously through constantly drafting autobiographical public digital artifacts, and to work on the move by having our important documents accessible in the cloud. Yet all of this plays into the surveillance and capture of our virtual lives. While the capture of mirrored data for surplus production should be clear, mirroring is also an action within counterhegemonic information activism. In this way, mirroring is not neutral but rather a tool for both liberation and capture, for activist visibility and visibility-as-a-trap.

      Conclusion

Mirrors describe copies that are saved in different places. But mirrors are not exact copies. Displaced in time and space, the mirror qualitatively differs from that which it mirrors. Rather than treating mirrors as exact replicas or even reasonable approximations, it is instructive to consider mirrors not as products but as processes. Mirrors are complex, in-flux multiples constituted by numerous forces that achieve a degree of autonomy from their origins. In this way mirroring, the practice of making mirrors, is a praxis: neither a realist copy nor a representational depiction, but a way of being, believing, and moving in the world. As such, mirrors map two practices that are reactions to a contestation. For activists, mirroring marks a will to remain visible in a world of censorship. Mirrors also map the conflicts around how data is captured and capitalized on by cloud companies. A way to synthesize the politics, political economy, and praxis of mirroring is to consider how mirrors are multiples, autonomous from the things they ostensibly replicate. Ancient and contemporary theories of mirrors are tools for synthesizing the metaphysical and the material of database worlds.


       

      Works Cited

      Barad, Karen. 2003. Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter, Signs: Journal of Women in Culture and Society, 28:3, 801-831.

      Berry, D.M. 2014. On Capture. http://stunlaw.blogspot.co.uk/2014/04/on-capture.html

      Brighenti, AM 2007 “Visibility: a category for the social sciences”. Current Sociology, 55(3): 323-342.

Chow, R. 2012. Entanglements, or Transmedial Thinking about Capture. Durham: Duke University Press.

      Cooley, Charles H. 1902. Human Nature and the Social Order. New York: Scribner’s.

      Couldry, Nick. 2010. Why Voice Matters. London: Sage Press.

      Dean, Jodi. 2010. Blog Theory. Polity Press.

Deleuze, Gilles. 1972. “How Do We Recognize Structuralism?” Trans. Melissa McMahon and Charles J. Stivale. In Desert Islands and Other Texts, 1953-1974. Ed. David Lapoujade. New York: Semiotext(e), 2004. 170-192.

      Dunbar, R. I. M. 1993. Coevolution of neocortical size, group size and language in humans. Behavioral and Brain Sciences 16 (4): 681-735.

      Foucault, M. 1977. Discipline and Punish. Pantheon Books.

      Fish, Adam. Forthcoming. Mirroring: Anonymous Videos and Political Mimesis.

      Gaines, Jane M. 1994. “Political Mimesis.” In Collecting Visible Evidence. Eds. Jane M. Gaines and Michael Renov. Minneapolis: University of Minnesota Press.

Gleick, James. 2011. The Information: A History, a Theory, a Flood. New York: Pantheon Books.

Goffman, Erving. 1956. The Presentation of Self in Everyday Life. Edinburgh: University of Edinburgh Social Sciences Research Centre.

      Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan© Meets_OncoMouse™: Feminism and Technoscience, New York: Routledge.

      Jay, Martin. 1992. Scopic Regimes of Modernity in Vision and Visuality: Discussions in Contemporary Culture 2, Hal Foster, ed. New Press.

Mackenzie, Adrian and Ruth McNally. 2013. “Methods of the Multiple: How Large-Scale Scientific Data-Mining Pursues Identity and Differences.” Theory, Culture & Society 30(4): 72-91.

      McLuhan, Marshall. 1964. Understanding Media. McGraw-Hill.

Mulvey, Laura. 1975. “Visual Pleasure and Narrative Cinema.” Screen 16(3): 6-18.

      Ong, W. J. 1977. Interfaces of the word: Studies in the evolution of consciousness and culture. Ithaca, NY: Cornell University Press

Rorty, Richard. 1979. Philosophy and the Mirror of Nature. Princeton: Princeton University Press.

      Ruppert, Evelyn. 2013. Not Just Another Database: The Transactions that Enact Young Offenders. Computational Culture, pp. 1-13.

      Schwartz, Hillel. 1996. The culture of the copy: striking likenesses, unreasonable facsimiles. Zone Books.

Soderberg, Johan. 2010. “Misuser Inventions and the Invention of the Misuser: Hackers, Crackers, Filesharers.” Science as Culture 19(2): 151-179.

Tyler, I. 2011. “Pramfaced Girls: The Class Politics of ‘Maternal TV’.” In Reality Television and Class, eds. H. Wood and B. Skeggs. Basingstoke: Palgrave Macmillan, pp. 210-224.

-Contributed by Adam Fish, Sociology Department at Lancaster University-


      Community [draft] [#digitalkeywords] Jul 11, 2014

      “Today, we speak of a global community, made possible by communications technologies, and our geographically-specific notions of community are disrupted by the possibilities of the digital, where disembodied and socially distant beings create what they – and we, as scholars – also call community. But are the features and affordances of digital community distinct from those we associate with embodied clanship and kinship?”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

Community — Rosemary Avance, University of Pennsylvania

      The digital era poses new possibilities and challenges to our understanding of the nature and constitution of community. Hardly a techno buzzword, the term “community” has historic uses ranging from a general denotation of social organization, district, or state; to the holding of important things in common; to the existential togetherness and unity found in moments of communitas. Our English-language “community” originates from the Latin root communis, “common, public, general, shared by all or many,” which evolved into the 14th century Old French comunité meaning “commonness, everybody”. Originally the noun was affective, referencing a quality of fellowship, before it ever referred to an aggregation of souls. Traditionally the term has encompassed our neighborhoods, our religious centers, and our nation-states– historically, geographic and temporal birthrights, subjectivities unchosen by the individual. Today, we speak of a global community, made possible by communications technologies, and our geographically-specific notions of community are disrupted by the possibilities of the digital, where disembodied and socially distant beings create what they– and we, as scholars– also call community. But are the features and affordances of digital community distinct from those we associate with embodied clanship and kinship?

      With shared etymological roots and many shared assumptions, the term “community” is of central importance to the field of Communication. Social scientific taxonomies have long placed the elusive notion of community at the apex of human association, as a utopian model of connection and cohesion, a place where human wills unite for the good of the group. We long to commune, as John Peters argues[1], yet our inability to ever truly connect with another soul keeps us grounded in sympathies and persistent in attempts. Perhaps this elusiveness is where our collective disciplinary preoccupation with the notion of community arises.

      On- or offline, community is at best an idealized, imaginary structure, and that idealization obfuscates exploitation. Michel Foucault reminds us that pure community is at base a mechanism of control over social relations, a policing of our interactions.[2] Benedict Anderson reaffirms, too, the imaginary nature of community, which we conceive of as a “deep, horizontal comradeship”[3] in such a way that power differentials are jointly pretended away.

      Victor Turner[4], of course, adopts the source of the word in his theories of liminality and communitas, arguing after van Gennep that ritual rites of passage move an individual from a state of social indeterminacy to a state of communal oneness and homogeneity. The outcome of an individual’s reincorporation into a group is a burdening, as the individual takes on obligation and responsibility toward defined others. This is the formation, the very root, of community– an ethical orientation outside oneself and toward others. Thus the community emerges as the social compact charged with policing the system. The implication of community, then, is citizenship-belonging. Community is an ideal, the result of individuals accepting and serving their obligations and responsibilities vis-à-vis the collective.

      What do we make of Internet-based communities, united over shared interests from the mundane to the elysian, evading easy classification due to wide ranging differences in participation, influence, and affective connection? Moral panics accompany all new media technologies, and the pronounced fear associated with global connectivity via the Internet, with no little irony, reflects the fear of disconnection. In Bowling Alone[5], Robert Putnam notoriously gives voice to this fear, suggesting that declines in community commitment, manifest in low civic and political engagement, declining religious participation, increasing distrust, decreasing social ties, and waning altruism are at least in part attributable to technology and mass media, as entertainment and news are tailored to the individual and consumed alone. Putnam paints a bleak image of Americans in dark houses, television lights flickering and TV dinners mindlessly consumed.

Digital community seems to offer a panacea to both the problem of community as a mechanism of control and the fear of disconnection in a new media age. Indeed, Fred Turner shows that digital community has roots in countercultural movements, as “virtual community… translated a countercultural vision of the proper relationship between technology and sociability into a resource for imagining and managing life in the network economy”[6]. Digital community arrives, then, as a solution to the “problem” of modernity: disembodied cyberspace somehow at once flattens and broadens our notions of self.

So what do we mean by digital community? Though both are features of Internet culture, scholars differentiate between “virtual” and “digital” communities: the former denotes a quasi-geographical location (e.g., a particular URL), whereas digital communities are ephemeral, united around a shared interest or identity rather than a particular virtual location. Thus virtual gaming communities, for instance, may be located at a particular website, while digital gaming communities are dispersed across social platforms and virtual spaces, united around a shared interest.

      While past conceptions of community were generally outside one’s agential selection– you are born and die in your town, your religion is the faith of your parents– today’s diverse digital landscape means self-selection into communities of interest and affinity. But digital community does not entirely escape the deterministic, as availability still marks a very real digital divide between those with access to the technology and those without. Not only that, but the affordances of various platforms, both in intended and possible (read: disruptive) use, all inform what might be seen as a digital community’s blueprint. Online community formation relies on this peer-to-peer software architecture that pre-dates the community itself, so that communities evolve and adapt not in spite of but because of the affordances of the technological platform. These include format, space constraints, visuals, fixity vs. mutability, privacy vs. surveillance, peer feedback, report features/TOS, modality (cellular, tablet, desktop) — all features which inform what is possible in a given virtual community. Digital communities can evade some but not all of the fixity of these structural constraints, reaching across a variety of platforms and forums on both the light and dark web.

      Both types of online community networks are dynamic and self-organizing. Many social networking sites like Facebook and Myspace are pure “intentional communities” wherein self-selection into the platform and mutual “friending” secure one’s place[7]. Highly fragmented, niche communities redistribute power in both intangible and tangible ways — think only of the economic impact of peer-to-peer communities on the music industry, where file sharing challenges traditional conceptions of property rights and even our collective moral code[8]. Indeed, content sharing is the basis of online community– from photos, to text, to files and links– and users themselves decide their own level of engagement in these participatory cultures. Within the communities themselves, the flattening dynamic of Internet culture, where everyone[9] can have a platform and a voice, obfuscates the very real social hierarchies which are supported by social processes and norms– all of which evolve from platform affordances.

Some scholars and observers still express a reluctance to accept Facebook, Twitter, blogs, or forums as true examples of community. They see these spaces as primarily narcissistic expressions of what Manuel Castells calls the “culture of individualism”, emphasizing consumerism, networked individualism, and autonomy, rather than the “culture of communalism”, rooted in history and geography[10]. Ironically, perhaps, much of today’s “countercultural” vision involves little to no connectivity: by refusing to participate in the exploitation and grand social experiment that is Facebook, for instance, one might opt out of the forms of life available there.

Yet users, if we take them at their word, say that online community provides a space to be “real”, or somehow more authentic, in ways that embodied community might sanction. An overabundance of narrative visibility and social support on the Internet allows users to foster difference in ways that limited offline social networks simply cannot sustain. That is to say, in today’s world, it is not uncommon for youth to self-identify as queer and first “come out” in digital spaces[11] or, to draw on my own ethnographic work, for Mormons to foster heterodox (e.g. liberal) identities in closed Facebook groups before what they too mark as a “coming out” to their conservative “real-world” family and friends[12]. We might do well to remember the origin of our term “community”, which referenced a quality of fellowship before it ever referred to an aggregation of souls. It seems our term has come full circle, as disembodied souls unite in fellowship mediated by the digital.


      Notes

      1. Peters, John Durham. (1999). Speaking into the air: A history of the idea of communication. Chicago: U. of Chicago Press.

      2. Foucault, Michel. (1977). Discipline and punish: The birth of the prison. New York: Random House.

      3. Anderson, Benedict. (1983). Imagined communities: Reflections on the origin and spread of nationalism. London: Verso.

      4. Turner, Victor. (1969). “Liminality and communitas.” in The Ritual Process: Structure and Anti-Structure, pp. 94-. New York: Aldine.

      5. Putnam, Robert. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.

      6. Turner, Fred. (2005, July). “Where the counterculture met the new economy: The WELL and the origins of virtual community.” Technology and Culture: 46, 491.

      7. C.f. boyd, danah. (2006,4 December). “Friends, friendsters, and myspace top 8: Writing community into being on social network sites.” First Monday 11(12). Available at http://firstmonday.org/article/view/1418/1336

      8. See Hughes, Jerald & Karl Reiner Lang. (2003). “If I had a song: The culture of digital community networks and its impact on the music industry.” International Journal on Media Management 5(3):180-189.

      9. “Everyone”, that is, with access, equipment, technological savvy, and, presumably, an audience.

      10. Castells, Manuel. (2007). “Communication, power, and counter-power in the network society.” International Journal of Communication 1:238-266.

      11. Gray, Mary L. (2009, July). “Negotiating identities/queering desires: Coming out online and the remediation of the coming-out story.” Journal of Computer-Mediated Communication 14(4):1162-1189.

      12. Here I’m drawing on years of ethnographic work among Mormons on the Internet, with details forthcoming in my dissertation “Constructing Religion in the Digital Age: The Internet and Modern Mormon Identities”; for more on Mormon deconversion and online narratives see Avance, Rosemary. (2013). “Seeing the light: Mormon conversion and deconversion narratives in off- and online worlds.” Journal of Media and Religion 12(1):16-24.

-Contributed by Rosemary Avance, University of Pennsylvania-


      When Science, Customer Service, and Human Subjects Research Collide. Now What? Jul 9, 2014

      My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: we are now, officially, doing human subjects research.

      I’m frustrated that the state of public intellectualism allows us, individually, to jump into the conversation about the recently published Facebook “Emotions” Study [1]. What we—from technology builders and interface designers to data scientists and ethnographers working in industry and at universities alike—really (really) need right now is to sit down together and talk. Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have, from data sharing and users’ rights to the drop in public funding for basic research itself. We need a dialogue—a thoughtful, compassionate conversation among those who are or will be training the next generation of researchers studying social media. And, like all matters of ethics, this discussion will become a personal one as we reflect on our doubts, disagreements, missteps, and misgivings. But the stakes are high. Why should the Public trust social media researchers and the platforms that make social media a thing? It is our collective job to earn and maintain the Public’s trust so that future research and social media builders have a fighting chance to learn and create more down the line. Science, in particular, is an investment in questions that precede and will live beyond the horizon of individual careers.

      As more and more of us crisscross disciplines and work together to study or build better social media, we are pressed to rethink our basic methods and the ethical obligations pinned to them. Indeed “ethical dilemmas” are often signs that our methodological techniques are stretched too thin and failing us. When is something a “naturalistic experiment” if the data are always undergoing A/B tweaks? How do we determine consent if we are studying an environment that is at once controllable, like a lab, but deeply social, like a backyard BBQ? When do we need to consider someone’s information “private” if we have no way to know, for sure, what they want us to do with what we can see them doing? When, if ever, is it ok to play with someone’s data if there’s no evident harm but we have no way to clearly test the long-term impact on a nebulous number of end users?

There is nothing obvious about how to design and execute ethical research that examines people’s individual or social lives. The reality is, when it comes to studying human interaction or behavior (for profit or scientific glory), it is no more (or less) complicated whether we’re interviewing someone in their living room, watching them in a lab, testing them at the screen, or examining the content they post online. There is no clearer sign of this than the range of reactions to the news (impeccably curated here by James Grimmelmann) that for one week, back in January 2012, researchers manipulated (in the scientific sense) what 689,003 Facebook users read in their individual News Feed. Facebook’s researchers fed some users a diet containing fewer posts of “happy” and positive words than their usual News Feed; other users received a smaller-than-average allotment of posts ladled with sad words. Cornell-based researchers came in after the experiment was over to help sift through and crunch the massive data set. Here’s what the team found: by the experiment’s last day (which, coincidentally, landed on the day of the SOPA online protests! Whoops), it turned out that a negligible—but statistically detectable—number of people produced fewer positive posts and more negative ones if their Feed included fewer positive news posts from friends; when the researchers scaled back the number of posts with negative cues from friends, people posted fewer negative and more positive posts. This interesting, even if small, finding was published in the June 2014 issue of the Proceedings of the National Academy of Sciences (PNAS). That’s how Science works—one small finding at a time.
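For readers unfamiliar with how “emotional content” gets measured in studies like this, here is a rough illustration, in Python, of the word-counting idea: the published paper reports using LIWC-style word lists to score posts as positive or negative, but the tiny lexicons, function names, and example posts below are invented for illustration and are not the researchers’ code or data.

```python
# Illustrative only: score posts by counting positive and negative words,
# in the spirit of LIWC-style word counts. The word lists are made up.

POSITIVE = {"happy", "love", "great", "wonderful", "excited"}
NEGATIVE = {"sad", "angry", "terrible", "lonely", "awful"}


def emotion_counts(post):
    """Return (positive_word_count, negative_word_count) for one post."""
    words = post.lower().split()
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return pos, neg


def percent_emotional(posts):
    """Share of all words that are positive or negative across a user's
    posts: the kind of aggregate the experiment compared across conditions."""
    total = sum(len(p.split()) for p in posts) or 1
    pos = sum(emotion_counts(p)[0] for p in posts)
    neg = sum(emotion_counts(p)[1] for p in posts)
    return 100 * pos / total, 100 * neg / total


# Hypothetical posts written after a week of a reduced-positivity Feed
print(percent_emotional(["feeling sad and lonely today", "what a terrible week"]))
```

Nothing in a sketch like this captures sarcasm, context, or images, which is one reason word-count measures of emotion are themselves contested.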

      At issue: the lead author, Facebook Data Scientist, Adam Kramer, never told users in the study that their News Feeds were part of this experiment, either before or after that week in January. And Cornell University’s researchers examining the secondary data set (fancy lingo for the digital records of more than half a million people’s interactions with each other) weren’t, technically, on the hook for explaining that to subjects either. Mind you, it’s often acceptable in human subjects research to conduct experiments without prior consent, as long as everyone discussing the case agrees that the experiment does not impose greater risk to the person than they might experience in a typical day. But even in those cases, at some point the research subjects are told (“debriefed”) about their participation in the study and given the option to withdraw data collected about them from the study. Researchers also have a chance to study the impact of the stimulus they introduced into the system. So, the question of the hour is: Do we cross a line when testing a product also asks a scientifically relevant question? If researchers or systems designers are “just” testing a product on end users (aka humans) and another group has access to all that luscious data, whose ethics apply? When does “testing” end and “real research” begin in the complicated world of “The Internet?”

      Canonical Science teaches us that the greater the distance between researchers and our subjects (often framed as objectivity), the easier it is for us to keep trouble at arm’s length. Having carried out what we call “human subjects research” for much of my scholarly life—all of it under the close scrutiny of Institutional Review Boards (IRBs)—I feel professionally qualified to say, “researching people ain’t easy.” And, you know what makes it even harder? We are only about 10 years into this thing we call “social media”—which can morph into a telephone, newspaper, reality TV show, or school chalkboard, depending on who’s wielding it and when we’re watching them in action. Online, we are just as likely to be passionately interacting with each other, skimming prose, or casually channel-surfing, depending on our individual context. Unfortunately, it’s hard for anyone studying the digital signs of humans interacting online to know what people mean for us to see—unless we ask them. We don’t have the methods (yet) to robustly study social media as sites of always-on, dynamic human interaction. So, to date, we’ve treated the Internet as a massive stack of flat, text files to scrape and mine. We have not had a reason to collectively question this common, methodological practice as long as we maintained users’ privacy. But is individual privacy really the issue?

      My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: We are now, officially, doing human subjects research. Here’s some background to orient us and the people who pay our research bills (and salaries) to this new reality.

      Genealogy of Human Subjects Research Oversight in the United States

      In 1966, the New England Journal of Medicine published an article by Harvard research physician, Henry Beecher, chronicling 22 ethically questionable scientific studies conducted between 1945 and 1965 (Rothman, 2003: 70-84). Dr. Beecher’s review wasn’t exposing fringe science on the margins. Federally and industry-funded experiments conducted by luminaries of biomedicine accounted for most of the work cited in his review. Even if today we feel like it’s a no brainer to call ethical foul on the studies Beecher cited, keep in mind that it took DECADES for people to reach consensus on what not to do. Take, for example, Beecher’s mention of Dr. Saul Krugman. From 1958-1964, Dr. Saul Krugman injected children with live hepatitis virus at Willowbrook State School on New York’s Staten Island, a publicly-funded institution for children with intellectual disabilities. The Office of the Surgeon General, U.S. Armed Forces Epidemiological Board, and New York State Department of Mental Hygiene funded and approved his research. Krugman directed staff to put the feces of infected children into milkshakes later fed to newly admitted children, to track the spread of the disease. Krugman pressed poor families to include their children in what he called “treatments” to secure their admission to Willowbrook, the only option for poor families with children suffering from mental disabilities. After infecting the children, Krugman experimented with their antibodies to develop what would later become the vaccines for the disease. Krugman was never called out for the lack of consent or failure to provide for the children he infected with the virus, now at risk of dying from liver disease. Indeed, he received the prestigious Lasker Prize for Medicine for developing the Hepatitis A and B vaccines and, in 1972, became the President of the American Pediatric Society. Pretty shocking. But, at the time, and for decades after that, Willowbrook did not register as unequivocally unethical. My point here is not to draw one to one comparisons of Willowbrook and the Facebook Emotions study. They are not even close to comparable. I bring up Willowbrook to point out that no matter how ethically egregious something might seem in hindsight, often such studies do not appear so at the time, especially when weighed against the good they might seem to offer in the moment. Those living in the present are never in the best position to judge what will or will not seem “obviously wrong.”

      News accounts of risky experiments carried out without prior or clear consent, often targeting marginalized communities with little power, catalyzed political will for federal regulations for biomedical and behavioral researchers’ experiments (Rothman, 2003: 183-184). Everyone agreed: there’s a conflict of interest when individual researchers are given unfettered license to decide if their research (and their reputations) are more valuable to Science than an individual’s rights to opt out of research, no matter how cool and important the findings might be. The balance between the greater good and individual risk of research involving human subjects must be adjudicated by a separate review committee, made up of peers and community members, with nothing to be gained by approving or denying a researcher’s proposed project.

      The Belmont Report

The National Research Act of 1974 created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [2]. Five years later, the Commission released The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The Belmont Report codified the call for “respect for persons, beneficence, and justice” (The Belmont Report, 1979). More concretely, it spelled out what newly mandated university and publicly funded agency-based IRBs should expect their researchers to do to safeguard subjects’ informed consent, address the risks and benefits their participation might accrue, and more fairly distribute science’s “burdens and benefits” (The Belmont Report, 1979). The Belmont Report now guides how we define human subjects research and the attendant ethical obligations of those who engage in it.

      Put simply, the Belmont Report put a Common Rule in place to manage ethics through a procedure focused on rooting out bad apples before something egregious happens or is uncovered, after the fact. But it did not—and we have not—positioned ethics as an on-going, complicated discussion among researchers actively engaging fellow researchers and the human subjects we study. And we’ve only now recognized that human subjects research is core to technology companies’ product development and, by extension, bottom lines. However, there is an element of the Belmont Report that we could use to rethink guidance for technology companies, data scientists, and social media researchers alike: the lines drawn in the Belmont Report between “practice and research.”

      The fine line between practice and research

The Belmont Report drew a clear line demarcating the “boundaries between biomedical and behavioral research and the accepted and routine practice of medicine”—the difference between research and therapeutic intervention (The Belmont Report 1979). This mandate, which was in fact the Report’s first order of business, indexes the Commission’s most pressing anxiety: how to rein in biomedicine’s professional tendencies to experiment in therapeutic contexts. The history of biomedical breakthroughs—from Walter Reed’s discovery of the causes of yellow fever to Jonas Salk’s polio vaccines—attests to the profession’s culture of experimentation (Halpern 2004: 41-96). However, this professional image of the renegade (mad) scientist pioneering medical advances was increasingly at odds with the need, pressing by the 1970s, for a more restrained and cautious scientific community driven first by an accountability to the public and only second by a desire for discovery.

      In redrawing the boundaries between research and practice, the Belmont Report positioned ethics as a wedge between competing interests. If a practitioner simply wanted to tweak a technique to see if it could improve an individual subjects’ experience, the experiment did not meet the threshold of “real scientific inquiry” and could be excused from more formal procedures of consent, debriefing, and peer review. Why? Practitioners already have guiding codes of ethics (“do no harm”) and, as importantly, ongoing relationships built on communication and trust with the people in their care (at least, in theory). The assumption was that practitioners and “their” subjects could hold each other mutually accountable.

      But, once a researcher tests something out for testing’s sake or to work on, more broadly, a scientific puzzle, they are in the realm of research and must consider a new set of questions: Cui bono, who benefits? Will the risk or harm to an individual outweigh the benefits for the greater good? What if that researcher profits from the greater good? The truth is, in most cases, the researcher will benefit, whether they make money or not, because they will gain credibility and status through the experience of their research. Can we say the same for the individual contributing their experiences to our experiments? If not, that’s, typically, an ethical dilemma.

      Constructing ethical practice in a social media world

      Social media platforms and the technology companies that produce our shared social playgrounds blur the boundaries between practice and research. They (we?) have to, in many cases, to improve the products that companies provide users. That’s no easy thing if you’re in the business of providing a social experience through your technology! But that does not exempt companies, any more than it exempts researchers, from extending respect, beneficence, and justice to individuals sharing their daily interactions with us. So we need to, collectively, rethink when “testing a feature” transitions from improving customer experience to more than minimally impacting someone’s social life.

      Ethical stances on methodological practices are inextricably linked to how we conceptualize our objects of study. Issues of consent hinge on whether researchers believe they are studying texts or people’s private interactions. Who needs to be solicited for consent also depends on whether researchers feel they are engaged in a single site study or dealing with an infrastructure that crosses multiple boundaries. What ethical obligations, then, should I adhere to as I read people’s posts—particularly on commercial venues such as Facebook that are often considered “public domain”—even when they may involve participants who share personal details about their lives from the walled garden of their privacy settings? Are these obligations different from those I should heed with individuals not directly involved in my research? How can I use this information and in what settings? Does consent to use information from interviews with participants include the information they publicly post about themselves online? These questions are not easily grouped as solely methods issues or strictly ethical concerns.

      For me, the most pragmatic ethical practice follows from the reality that I will work with many of the people I meet through my fieldwork for years to come. And, importantly, if I burn bridges in my work, I am, literally, shutting out researchers who might want to follow in my footsteps. I can give us all a bad reputation that lasts a human subject’s lifetime. I, therefore, treat online materials as the voices of the people with whom I work. In the case of materials I would like to cite, I email the authors, tell them about my research, and ask if I may include their web pages in my analyses. I tread lightly and carefully.

      The Facebook Emotions study could have included a follow up email to all those in the study, sharing the cool results with participants and offering them a link to the happy and sad moments that they missed in their News Feed while the experiment was underway (tip of the hat to Tarleton Gillespie for those ideas). And, with more than half a million people participating, I’m sure a few hundred thousand would have opted-in to Science and to let Facebook keep the results.

      We do not always have the benefit of personal relationships, built over time with research participants to guide our practices. And, unfortunately, our personal identities or affinities with research participants do not safeguard us from making unethical decisions in our research. We have only just started (like, last week) to think through what might be comparable practices for data scientists or technology designers, who often never directly talk with the people they study. That means that clear, ethical frameworks will be even more vital as we build new toolkits to study social media as sites of human interaction and social life.

      Conclusion

      Considering that more and more of social media research links universities and industry-based labs, we must coordinate our methodologies and ethics no matter who pays us to do our research. None of us should be relieved from duty when it comes to making sure all facets of our collaborations are conducted with an explicit, ethical plan of action. There are, arguably, no secondary data sets in this new world.

      The Belmont Report was put in place to ensure that we have conversations with the Public, among ourselves, and with our institutions about the risks of the scientific enterprise. It’s there to help us come to some agreement as to how to address those risks and create contingency plans. While IRBs as classification systems can and have provided researchers with reflexive and sometimes necessary intervention, bureaucratic mechanisms and their notions of proper science are not the only or even the best source of good ethics for our work—ongoing and reflexive conversations among researchers and practitioners sharing their work with invested peers and participants are.

      Whether from the comfort of a computer or in the thick of a community gathering, studying what people do in their everyday lives is challenging. The seeming objectivity of a lab setting or the God’s eye view of a web scraping script may seem to avoid biases and desires that could, otherwise, interfere with the social situations playing out in front of us that we want to observe. But, no matter how removed we are, our presence as researchers does not evaporate when we come into contact with human interaction. One of the values of sustained, ethnographic engagement with people as we research their lives: it keeps researchers constantly accountable not only to our own scientific (and self) interests but also to the people we encounter in any observation, experiment, or engagement.

Some of my peers argue that bothering people with requests for consent or efforts to debrief them will either “contaminate the data” or “seem creepy” after the fact. They argue that it’s less intrusive and more scientifically powerful to just study “the data” from a distance or adjust the interface design on the fly. I get it. It is not easy to talk with people about what they’re doing online. Keep in mind that by the end of USENET’s long life as the center of the Internet’s social world, many moderated newsgroups blocked two kinds of lurkers: journalists. And researchers. In the long run, keeping a distance can leave the general public more suspicious of companies’, designers’, and researchers’ intentions. People may also be less likely to talk to us down the road when we want to get a richer sense of what they’re doing online. Let’s move away from this legalistic, officious discussion of consent and frame this debate as a matter of trust.

      None of us would accept someone surreptitiously recording our conversations with others to learn what we’re thinking or feeling just because “it’s easier” or it’s not clear that we are interested in sharing them if asked outright. We would all want to understand what someone wants to know about us and why they want to study what we’re doing—what do they hope to learn and why does it matter? Those are completely reasonable questions. All of us have a right to be asked if we want to share our lives with strangers (even researchers or technology companies studying the world or providing a service) so that we have a chance to say, “nah, not right now, I’m going through a bad break up.” What would it look like for all of us—from LOLcat enthusiasts and hardcore gamers, to researchers and tech companies—to (re)build trust and move toward a collective enterprise of explicitly opting-in to understand this rich, social world that we call “The Internet?”

      Scientists and technology companies scrutinizing data bubbling up from the tweets, posts, driving patterns, or check-ins of people are coming to realize that we are also studying moments of humans interacting with each other. These moments call for respect, trust, mutuality. By default. Every time we even think we see social interactions online. Is working from this premise too much to ask of researchers or the companies and universities that employ us? I don’t think so.

       

      Addendum (added June 13, 2014)

      I realized after posting my thoughts on how to think about social media as a site of human interaction (and all the ethical and methodological implications of doing so) that I forgot to leave links to what are, bar none, the best resources on the planet for policy makers, researchers, and the general public thinking through all this stuff.

      Run, don’t walk, to download copies of the following must-reads:

      Charles Ess and the AOIR Ethics Committee (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Approved by the Association of Internet Researchers, November 27, 2002. Available at: http://aoir.org/reports/ethics.pdf

      Annette Markham and Elizabeth Buchanan (2012). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee (version 2.0). Approved by the Association of Internet Researchers, December 2012. Available at: http://aoir.org/reports/ethics2.pdf

       


      Notes/Bibliography/Additional Reading

[1] The United States Department of Health, Education and Welfare (HEW) was a cabinet-level, U.S. governmental department from 1953 to 1979. In 1979, HEW was reorganized into two separate cabinet-level departments: the Department of Education and the Department of Health and Human Services (HHS). HHS is in charge of all research integrity and compliance, including research involving human subjects.

[2] I want to thank my fellow MSR Ethics Advisory Board members, the MSR New England Lab, and the Social Media Collective, as well as the following people for their thoughts on drafts of this essay: danah boyd, Henry Cohn, Kate Crawford, Tarleton Gillespie, James Grimmelmann, Jeff Hancock, Jaron Lanier, Tressie McMillan Cottom, Kate Miltner, Christian Sandvig, Kat Tiidenberg, Duncan Watts, and Kate Zyskowski.

       

      Bowker, Geoffrey C., and Susan Leigh Star

      1999 Sorting Things Out: Classification and Its Consequences, Inside Technology. Cambridge, Mass.: MIT Press.

      Brenneis, Donald

      2006 Partial Measures. American Ethnologist 33(4): 538-40.

      Brenneis, Donald

      1994 Discourse and Discipline at the National Research Council: A Bureaucratic Bildungsroman. Cultural Anthropology 9(1): 23-36.

      Epstein, Steven

2007 Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.

      Gieryn, Thomas F.

1983 Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48(6): 781-95.

      Halpern, Sydney A.

      2004 Lesser Harms: The Morality of Risk in Medical Research. Chicago: University of Chicago Press.

      Lederman, Rena

2006 The Perils of Working at Home: IRB “Mission Creep” as Context and Content for an Ethnography of Disciplinary Knowledges. American Ethnologist 33(4): 482-91.

      Rothman, David J.

      2003 Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making. 2nd pbk. ed, Social Institutions and Social Change. New York: Aldine de Gruyter.

      Schrag, Zachary M.

2010 Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009. Baltimore: Johns Hopkins University Press.

      Stark, Laura

2012 Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: University of Chicago Press.

      Strathern, Marilyn

2000 Audit Cultures: Anthropological Studies in Accountability, Ethics, and the Academy. London and New York: Routledge.

      United States. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

1978 Report and Recommendations: Institutional Review Boards. Washington, DC: U.S. Department of Health, Education, and Welfare.

      This essay has been cross-posted from Ethnography Matters.

      -Contributed by ,  Microsoft Research New England / Associate Professor of Communication and Culture with affiliations in American Studies, Anthropology, and the Gender Studies Department at Indiana University-


      Facebook’s algorithm — why our assumptions are wrong, and our concerns are right Jul 4, 2014

Many of us who study new media, whether we do so experimentally or qualitatively, our data big or small, are tracking the unfolding debate about the Facebook “emotional contagion” study, published recently in the Proceedings of the National Academy of Sciences. The research, by Kramer, Guillory, and Hancock, argued that small shifts in the emotions of those around us can shift our own moods, even online. To prove this experimentally, they made alterations in the News Feeds of 310,000 Facebook users, excluding a handful of status updates from friends that had either happy words or sad words, and measuring what those users subsequently posted for its emotional content. A matching number of users had posts left out of their News Feeds, but randomly selected, in order to serve as control groups. The lead author is a data scientist at Facebook, while the others have academic appointments at UCSF and Cornell University.

      I have been a bit reluctant to speak about this, as (full disclosure) I am both a colleague and friend of one of the co-authors of this study; Cornell is my home institution. And, I’m currently a visiting scholar at Microsoft Research, though I don’t conduct data science and am not on specific research projects for the Microsoft Corporation. So I’m going to leave the debates about ethics and methods in other, capable hands. (Press coverage: Forbes 1, 2, 3; Atlantic 1, 2, 3, 4, Chronicle of Higher Ed 1; Slate 1; NY Times 1, 2; WSJ 1, 2, 3; Guardian 1, 2, 3. Academic comments: Grimmelmann, Tufekci, Crawford, boyd, Peterson, Selinger and Hartzog, Solove, Lanier, Vertesi.) I will say that social science has moved into uncharted waters in the last decade, from the embrace of computational social scientific techniques, to the use of social media as experimental data stations, to new kinds of collaborations between university researchers and the information technology industry. It’s not surprising to me that we find it necessary to raise concerns about how that research should work, and look for clearer ethical guidelines when social media users are also “human subjects.” In many ways I think this piece of research happened to fall into a bigger moment of reckoning about computational social science that has been coming for a long time — and we have a responsibility to take up these questions at this moment.

But a key issue, both in the research and in the reaction to it, is about Facebook and how it algorithmically curates our social connections, sometimes in the name of research and innovation, but also in the regular provision of Facebook’s service. And that I do have an opinion about. The researchers depended on the fact that Facebook already curates your News Feed, in myriad ways. When you log onto Facebook, the posts you’re immediately shown at the top of the News Feed are not every post from your friends in reverse chronological order. Of course Facebook has the technical ability to do this, and it would in many ways be simpler. But their worry is that users will be inundated with relatively uninteresting (but recent) posts, will not scroll down far enough to find the few among them that are engaging, and will eventually quit the service. So they’ve tailored their “EdgeRank” algorithm to consider, for each status update from each friend you might receive, not only when it was posted (more recent is better) but also other factors, including how regularly you interact with that user (e.g. liking or commenting on their posts), how popular they are on the service and among your mutual friends, and so forth. A post with a high rating will show up, a post with a lower rating will not.
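
To make the shape of that kind of scoring concrete, here is a minimal, hypothetical sketch in Python. The factor names, weights, and threshold are my own illustrative assumptions, not Facebook’s actual EdgeRank formula, which is proprietary and far more elaborate.

```python
from dataclasses import dataclass
import time

@dataclass
class Post:
    author_affinity: float    # how often the viewer interacts with this friend (0..1) -- assumed feature
    author_popularity: float  # how much engagement the author tends to attract (0..1) -- assumed feature
    created_at: float         # UNIX timestamp of the post

def feed_score(post: Post, now: float,
               w_affinity: float = 0.5, w_popularity: float = 0.3, w_recency: float = 0.2) -> float:
    """Illustrative ranking score: newer posts from closer, more popular friends score higher.
    The weights are made-up tuning parameters, not real ones."""
    hours_old = max((now - post.created_at) / 3600.0, 0.0)
    recency = 1.0 / (1.0 + hours_old)  # decays as the post ages
    return (w_affinity * post.author_affinity
            + w_popularity * post.author_popularity
            + w_recency * recency)

def curate(posts: list[Post], threshold: float = 0.4) -> list[Post]:
    """Posts scoring above the threshold are shown, highest first; the rest are quietly withheld."""
    now = time.time()
    scored = [(feed_score(p, now), p) for p in posts]
    return [p for s, p in sorted(scored, key=lambda sp: sp[0], reverse=True) if s >= threshold]
```

The point of the sketch is only that each post either clears the bar or it does not; everything contentious lives in how the weights and the threshold are chosen.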

      So, for the purposes of this study, it was easy to also factor in a numerical count of happy or sad emotion words in the posts as well, and use that as an experimental variable. The fact that this algorithm does what it does also provided legal justification for the research: that Facebook curates all users’ data is already part of the site’s Terms of Service and its Data Use Policy, so it is within their rights to make whatever adjustments they want. And the Institutional Review Board at Cornell did not see a reason to even consider this as a human subjects issue: all that the Cornell researchers got was the statistical data produced from this manipulation, manipulations that are a normal part of the inner workings of Facebook.
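
For readers curious how a count of happy or sad words becomes an experimental variable, here is a toy sketch; the word lists are placeholders I have invented, not the lexicon the study actually used.

```python
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}   # placeholder lexicon, not the study's
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}      # placeholder lexicon, not the study's

def emotion_counts(post_text: str) -> tuple[int, int]:
    """Count positive and negative emotion words in one status update."""
    words = [w.strip(".,!?") for w in post_text.lower().split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return pos, neg

# In an experiment of this kind, the counts become one more input to the
# withholding decision: e.g., for users in a "reduced positivity" condition,
# some posts with pos > 0 are dropped from the ranked feed.
pos, neg = emotion_counts("What a wonderful, happy day!")
print(pos, neg)  # 2 0
```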

Defenders of the research (1, 2, 3), including Facebook, have pointed to this as a reason to dismiss what they see as an overreaction. This takes a couple of forms, not entirely consistent with each other: Facebook curates users’ News Feed anyway, it’s within their right to do so. Facebook curates users’ News Feed anyway, probably already on factors such as emotion. Facebook curates users’ News Feed anyway, and needs to understand how to do so by engaging in all sorts of A/B testing, which this was an example of. Facebook curates users’ News Feed anyway, get over it. All of these imply that it’s simply naive to think of this research as a “manipulation” of an otherwise untouched list; your News Feed is a construction, built from some of the posts directed to you, according to any number of constantly shifting algorithmic criteria. This was just one more construction. Those who are upset about this research are, according to its defenders, just ignorant of the realities of Facebook and its algorithm.

      More and more of our culture is curated algorithmically; Facebook is a prime example, though certainly not the only one. But it’s easy for those of us who pay a lot of attention to how social media platforms work, engineers and observers alike, to forget how unfamiliar that is. I think, among the population of Facebook users — more than a billion people — there’s a huge range of awareness about these algorithms and their influence. And I don’t just mean that there are some poor saps who still think that Facebook delivers every post. In fact, there certainly are many, many Facebook users who still don’t know they’re receiving a curated subset of their friends’ posts, despite the fact that this has been true, and “known,” for some time. But it’s more than that. Many users know that they get some subset of their friends’ posts, but don’t understand the criteria at work. Many know, but do not think about it much as they use Facebook in any particular moment. Many know, and think they understand the criteria, but are mistaken. Just because we live with Facebook’s algorithm doesn’t mean we fully understand it. And even for those who know that Facebook curates our News Feeds algorithmically, it’s difficult as a culture to get beyond some very old and deeply sedimented ways to think about how information gets to us.

The public reaction to this research is proof of these persistent beliefs — a collective groan from our society as it adjusts to a culture that is algorithmically organized. Because social media, and Facebook most of all, truly violates a century-old distinction we know very well, between two distinct kinds of information services. On the one hand, we had “trusted interpersonal information conduits” — the telephone companies, the post office. Users gave them information aimed for others and the service was entrusted to deliver that information. We expected them not to curate or even monitor that content; in fact, we made it illegal to do otherwise. We expected that our communication would be delivered, for a fee, and we understood the service as the commodity, not the information it conveyed. On the other hand, we had “media content producers” — radio, film, magazines, newspapers, television, video games — where the entertainment they made for us felt like the commodity we paid for (sometimes with money, sometimes with our attention to ads), and it was designed to be as gripping as possible. We knew that producers made careful selections based on appealing to us as audiences, and deliberately played on our emotions as part of their design. We were not surprised that a sitcom was designed to be funny, even that the network might conduct focus group research to decide which ending was funnier (A/B testing?). But we would be surprised, outraged, to find out that the post office delivered only some of the letters addressed to us, in order to give us the most emotionally engaging mail experience.

        
Now we find ourselves dealing with a third category. Facebook promises to connect person to person, entrusted with our messages to be delivered to a prescribed audience (now it’s sometimes one person, sometimes a friend list, sometimes all Facebook users who might want to find it). But then, as a part of its service, it provides the News Feed, which appears to be a running list of those posts but is increasingly a constructed subset, carefully crafted to be an engaging flow of material. The information coming in is entrusted interpersonal communication, but it then becomes the raw material for an emotionally engaging commodity, the News Feed. All comes in, but only some comes out. It is this quiet curation that is so new, that makes Facebook different from anything before. (And it makes any research that changes the algorithmic factors in order to withhold posts quite different from other kinds of research we know Facebook to have done, including the A/B testing of the site’s design, the study of Facebook activity to understand the dynamics of social ties, or the selective addition of political information to understand the effect on voter turnout – but it would include their effort to study the power of social ties by manipulating users’ feeds.)

      And Facebook is complicit in this confusion, as they often present themselves as a trusted information conduit, and have been oblique about the way they curate our content into their commodity. If Facebook promised “the BEST of what your friends have to say,” then we might have to acknowledge that their selection process is and should be designed, tested, improved. That’s where this research seems problematic to some, because it is submerged in the mechanical workings of the News Feed, a system that still seems to promise to merely deliver what your friends are saying and doing. The gaming of that delivery, be it for “making the best service” or for “research,” is still a tactic that takes cover under its promise of mere delivery. Facebook has helped create the gap between expectation and reality that it has currently fallen into.

That to me is what bothers people, about this research and about a lot of what Facebook does. I don’t think it is merely naive users not understanding that Facebook tweaks its algorithm, or that people are just souring on Facebook as a service. I think it’s an increasing, and increasingly apparent, ambivalence about what it is, and its divergence from what we think it is. Despite the cries of those most familiar with their workings, it takes a while, years, for a culture to adjust itself to the subtle workings of a new information system, and to stop expecting of it what traditional systems provided.

      For each form of media, we as a public can raise concerns about its influence. For the telephone system, it was about whether they were providing service fairly and universally: a conduit’s promise is that all users will have the opportunity to connect, and as a nation we forced the telephone system to ensure universal service, even when it wasn’t profitable. Their preferred design was acceptable only until it ran up against a competing concern: public access. For media content, we have little concern about being “emotionally manipulated” by a sitcom or a tear-jerker drama. But we do worry about that kind of emotional manipulation in news, like the fear mongering of cable news pundits. Here again, their preferred design is acceptable until it runs up against a competing concern: a journalistic obligation to the public interest. So what is the competing interest here? What kind of interventions are acceptable in an algorithmically curated platform, and what competing concern do they run up against?

      Is it naive to continue to want Facebook to be a trusted information conduit? Is it too late? Maybe so. Though I think there is still a different obligation when you’re delivering the communication of others — an obligation Facebook has increasingly foregone. Some of the discussion of this research suggests that the competing concern here is science: that the ethics are different because this manipulation was presented as scientific discovery, a knowledge project for which we have different standards and obligations. But, frankly, that’s a troublingly narrow view. Just because this algorithmic manipulation came to light because it was published as science doesn’t mean that it was the science that was the problem. The responsibility may extend well beyond, to Facebook’s fundamental practices.

Is there any room for a public interest concern, like for journalism? Some have argued that Facebook and other social media are now a kind of quasi-public sphere. They not only serve our desire to interact with others socially, they are also important venues for public engagement and debate. The research on emotional contagion was conducted during the week of January 11-18, 2012. What was going on then, not just in the emotional lives of these users, but in the world around them? There was ongoing violence and protest in Syria. The Costa Concordia cruise ship ran aground in the Mediterranean. The U.S. Republican party was in the midst of its nomination process: Jon Huntsman dropped out of the race that week, and Rick Perry the day after. January 18th was the SOPA protest blackout day, something that was hotly (emotionally?) debated during the preceding week. Social media platforms like Facebook and Twitter were in many ways the primary venues for activism and broader discussion of this particular issue. Whether or not the posts that were excluded by this research pertained to any of these topics, there’s a bigger question at hand: does Facebook have an obligation to be fair-minded, or impartial, or representative, or exhaustive, in its selection of posts that address public concerns?

The answers to these questions, I believe, are not clear. And this goes well beyond one research study; it is a much broader question about Facebook’s responsibility. But the intense response to this research, on the part of press, academics, and Facebook users, should speak to these questions. Maybe we latch onto specific incidents like a research intervention, maybe we grab onto scary bogeymen like the NSA, maybe we get hooked on critical angles on the problem like the debate about “free labor,” maybe we lash out only when the opportunity is provided like when Facebook tries to use our posts as advertising. But together, I think these represent a deeper discomfort about an information environment where the content is ours but the selection is theirs.

      -Contributed by ,  Cornell University Department of Communication-


      Algorithm [draft] [#digitalkeywords] Jun 25, 2014

      “What we are really concerned with when we invoke the “algorithmic” here is not the algorithm per se but the insertion of procedure into human knowledge and social experience. What makes something algorithmic is that it is produced by or related to an information system that is committed (functionally and ideologically) to the computational generation of knowledge or decisions.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Algorithm — Tarleton Gillespie, Cornell University

      In Keywords, Raymond Williams urges us to think about how our use of a term has changed over time. But the concern with many of these “digital keywords” is the simultaneous and competing uses of a term by different communities, particularly those inside and outside of technical professions, who seem often to share common words but speak different languages. Williams points to this concern too: “When we come to say ‘we just don’t speak the same language’ we mean something more general: that we have different immediate values or different kinds of valuation, or that we are aware, often intangibly, of different formations and distributions of energy and interest.” (11)

For “algorithm,” there is a sense that the technical communities, the social scientists, and the broader public are using the word in different ways. For software engineers, algorithms are often quite simple things; for the broader public they name something unattainably complex. For social scientists there is danger in the way “algorithm” lures us away from the technical meaning, offering an inscrutable artifact that nevertheless has some elusive and explanatory power (Barocas et al, 3). We find ourselves more ready to proclaim the impact of algorithms than to say what they are. I’m not insisting that critique requires settling on a singular meaning, or that technical meanings necessarily trump others. But we do need to be cognizant of the multiple meanings of “algorithm” as well as the type of discursive work it does in our own scholarship.

      algorithm as a technical solution to a technical problem

In the scholarly effort to pinpoint the values that are enacted, or even embedded, in computational technology, it may in fact not be the “algorithms” that we need be most concerned about — if what we meant by algorithm were restricted to software engineers’ use of the term. For their makers, “algorithm” refers specifically to the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome. MacCormick (2012), in an attempt to explain algorithms to a general audience, calls them “tricks,” (5) by which he means “tricks of the trade” more than tricks in the magical sense — or perhaps like magic, but as a magician understands it. An algorithm is a recipe composed in programmable steps; most of the “values” that concern us lie elsewhere in the technical systems and the work that produces them.

      For its designers, the “algorithm” comes after the generation of a “model,” i.e. the formalization of the problem and the goal in computational terms. So, the task of giving a user the most relevant search results for their queries might be operationalized into a model for efficiently calculating the combined values of pre-weighted objects in the index database, in order to improve the percentage likelihood that the user clicks on one of the first five results.[1] This is where the complex social activity and the values held about it are translated into a functional interaction of variables, indicators, and outcomes. Measurable relationships are posited as existing between some of these elements; a strategic target is selected, as a proxy for some broader social goal; a threshold is determined as an indication of success, at least for this iteration.

The “algorithm” that might follow, then, is merely the steps for aggregating those assigned values efficiently, or delivering the results rapidly, or identifying the strongest relationships according to some operationalized notion of “strong.” All is in the service of the model’s understanding of the data and what it represents, and in service of the model’s goal and how it has been formalized. There may be many algorithms that would reach the same result inside a given model, just like bubble sorts and shell sorts both put lists of words into alphabetical order. Engineers choose between them based on values such as how quickly they return the result, the load they impose on the system’s available memory, perhaps their computational elegance. The embedded values that make a sociological difference are probably more about the problem being solved, the way it has been modeled, the goal chosen, and the way that goal has been operationalized (Rieder).
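
To see that point in code: the two sorts named above produce the same alphabetized list and differ only in how much work they do to get there. A minimal sketch:

```python
def bubble_sort(words: list[str]) -> list[str]:
    """Simple but slow (O(n^2)): repeatedly swap adjacent out-of-order words."""
    ws = list(words)
    for i in range(len(ws)):
        for j in range(len(ws) - 1 - i):
            if ws[j] > ws[j + 1]:
                ws[j], ws[j + 1] = ws[j + 1], ws[j]
    return ws

def shell_sort(words: list[str]) -> list[str]:
    """Same result, typically faster: compare and move elements across shrinking gaps."""
    ws = list(words)
    gap = len(ws) // 2
    while gap > 0:
        for i in range(gap, len(ws)):
            current, j = ws[i], i
            while j >= gap and ws[j - gap] > current:
                ws[j] = ws[j - gap]
                j -= gap
            ws[j] = current
        gap //= 2
    return ws

words = ["pear", "apple", "kiwi", "banana"]
assert bubble_sort(words) == shell_sort(words) == sorted(words)
```

The sociologically interesting choices (why sort alphabetically at all, what counts as a word) sit outside either function.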

Of course, simple alphabetical sorting may be a misleading example to use here. The algorithms we’re concerned about today are rarely designed to reach a single and certifiable answer, like a correctly alphabetized list. More common are algorithms that must choose one of many possible results, none of which are certifiably “correct.” Algorithm designers must instead achieve some threshold of operator or user satisfaction — understood in the model, perhaps, in terms of percent clicks on the top results, or percentage of correctly identified human faces from digital images.

      This brings us to the second value-laden element around the algorithm. To efficiently design algorithms that achieve a target goal (rather than reaching a known answer), algorithms are “trained” on a corpus of known data. This data has been in some way certified, either by the designers or by past user practices: this photo is of a human face, this photo is not; this search result has been selected by many users in response to this query, this one has not. The algorithm is then run on this data so that it may “learn” to pair queries and results found satisfactory in the past, or to distinguish images with faces from images without.
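
A toy illustration of that training step, under my own simplifying assumption that each example has already been reduced to a single numeric feature score:

```python
def train_threshold(examples: list[tuple[float, bool]]) -> float:
    """Pick the cutoff score that best reproduces the trusted labels
    (e.g., 'this result was clicked' vs. 'this result was ignored')."""
    candidates = sorted({score for score, _ in examples})
    def accuracy(cut: float) -> float:
        return sum((score >= cut) == label for score, label in examples) / len(examples)
    return max(candidates, key=accuracy)

# Certified training data: (feature score, was it judged satisfactory?)
training_data = [(0.9, True), (0.8, True), (0.6, True), (0.4, False), (0.2, False)]
cutoff = train_threshold(training_data)
print(cutoff)  # the learned rule is only as good as these labeled examples
```

Real systems learn over thousands of features rather than one, but the logic is the same: the decision rule is whatever best reproduces the labels the designers chose to trust.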

The values, assumptions, and workarounds that go into the selection and preparation of this training data may also be of much more importance to our sociological concerns than the algorithm learning from it. For example, the training data must be a reasonable approximation of the data that the algorithm will operate on in the wild. The most common problem in algorithm design is that the new data turns out not to match the training data in some consequential way. Sometimes new phenomena emerge that the training data simply did not include and could not have anticipated; just as often, something important was overlooked as irrelevant, or was scrubbed from the training data in preparation for the development of the algorithm.

      Furthermore, improving an algorithm is rarely about redesigning it. Rather, designers will “tune” an array of parameters and thresholds, each of which represents a tiny assessment or distinction. In search, this might mean the weight given to a word based on where it appears in a webpage, or assigned when two words appear in proximity, or given to words that are categorically equivalent to the query term. These values have been assigned and are already part of the training data, or are thresholds that can be dialed up or down in the algorithm’s calculation of which webpage has a score high enough to warrant ranking it among the results returned to the user.
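
A hypothetical sketch of what such tuning looks like: the feature names, weights, and threshold below are invented stand-ins, not any real search engine’s parameters.

```python
# Tunable parameters: "tuning" means nudging these values, not rewriting the scoring steps.
WEIGHTS = {
    "term_in_title": 3.0,       # query word appears in the page title
    "term_in_body": 1.0,        # query word appears in the body text
    "terms_in_proximity": 2.0,  # query words appear near each other
}
RANKING_THRESHOLD = 4.0         # pages scoring below this never reach the results list

def page_score(feature_counts: dict[str, int]) -> float:
    """Weighted sum of feature counts for one candidate webpage."""
    return sum(WEIGHTS[name] * count for name, count in feature_counts.items() if name in WEIGHTS)

candidate = {"term_in_title": 1, "term_in_body": 4, "terms_in_proximity": 0}
print(page_score(candidate), page_score(candidate) >= RANKING_THRESHOLD)  # 7.0 True
```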

Finally, these exhaustively trained and finely tuned algorithms are instantiated inside of what we might call an application, which actually performs the functions we’re concerned with. For algorithm designers, the algorithm is the conceptual sequence of steps, which should be expressible in any computer language, or in human or logical language. They are instantiated in code, running on servers somewhere, attended to by other helper applications (Geiger 2014), triggered when a query comes in or an image is scanned. I find it easiest to think about the difference between the “book” in your hand and the “story” within it. These applications embody values as well, outside of their reliance on a particular algorithm.

To inquire into the implications of “algorithms,” if we meant what software engineers mean when they use the term, could only be something so picky as investigating the political implications of using a bubble sort or a shell sort — setting aside bigger questions like why “alphabetical” in the first place, or why train on this particular dataset. Perhaps there are lively insights to be had about the implications of different algorithms in this technical sense,[2] but by and large we in fact mean something else when we talk about algorithms as having “social implications.”

      algorithm as synecdoche

      While it is important to understand the technical specificity of the term, “algorithm” has now achieved some purchase in the broader public discourse about information technologies, where it is typically used to mean everything described in the previous section, combined. As Goffey puts it, “Algorithms act, but they do so as part of an ill-defined network of actions upon actions.” (19) “Algorithm” may in fact serve as an abbreviation for the sociotechnical assemblage that includes algorithm, model, target goal, data, training data, application, hardware — and connect it all to a broader social endeavor. Beyond the technical assemblage there are people at every point: people debating the models, cleaning the training data, designing the algorithms, tuning the parameters, deciding on which algorithms to depend on in which context. “These algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning, swapping out parts and experimenting with new arrangements… We need to examine the logic that guides the hands.” (Seaver 2013) Perhaps “algorithm” is just the name for one kind of socio-technical ensemble, part of a family of authoritative systems for knowledge production or decision-making: in this one, humans involved are rendered legible as data, are put into systematic / mathematical relationships with each other and with information, and then are given information resources based on calculated assessments of them and their inputs.

      But what is gained and lost by using “algorithm” this way? Calling the complex sociotechnical assemblage an “algorithm” avoids the need for the kind of expertise that could parse and understand the different elements; a reporter may not need to know the relationship between model, training data, thresholds, and application in order to call into question the impact of that “algorithm” in a specific instance. It also acknowledges that, when designed well, an algorithm is meant to function seamlessly as a tool; perhaps it can, in practice, be understood as a singular entity. Even algorithm designers, in their own discourse, shift between the more precise meaning, and using the term more broadly in this way.

On the other hand, this conflation risks obscuring the ways in which political values may come in elsewhere than at what designers call the “algorithm.” This helps account for the way many algorithm designers seem initially surprised by the interest of sociologists in what they do — because they may not see the values in their “algorithms” (precisely understood) that we see in their algorithms (broadly understood), because questions of value are very much bracketed in the early decisions about how to operationalize a social activity into a model and into the minuscule, mathematical moments of assigning scores and tuning thresholds.

In our own scholarship, this kind of synecdoche is perhaps unavoidable. Like the journalists, most sociologists do not have the technical expertise or the access to investigate each of the elements of what they call the algorithm. But when we settle uncritically on this shiny, alluring term, we risk reifying the processes that constitute it. All the classic problems we face when trying to unpack a technology, the term packs for us. It becomes too easy to treat it as a single artifact, when in the cases we’re most interested in it’s rarely one algorithm, but many tools functioning together, sometimes different tools for different users.[3] It also tends to erase the people involved, downplay their role, and distance them from accountability. In the end, whether this synecdoche is acceptable depends on our intellectual aims. Calling all these social and technical elements “the algorithm” may give us a handle with which to grip what we want to closely interrogate; at the same time it can produce a “mystified abstraction” (Striphas 2012) that, for other research questions, it might be better to demystify.

      algorithm as talisman

      The information industries have found value in the term “algorithm” in their public-facing discursive efforts as well. To call their service or process an algorithm is to lend a set of associations to that service: mathematical, logical, impartial, consistent. Algorithms seem to have a “disposition towards objectivity” (Hillis et al 2013: 37); this objectivity is regularly performed as a feature of algorithmic systems. (Gillespie 2014) Conclusions that can be described as having been generated by an algorithm come with a powerful legitimacy, much the way statistical data bolsters scientific claims, with the human hands yet another step removed. It is a very different kind of legitimacy than one that rests on the subjective expertise of an editor or a consultant, though it is important not to assume that it trumps such claims in all cases. A market prediction that is “algorithmic” is different from a prediction that comes from an expert broker highly respected for their expertise and acumen; a claim about an emergent social norm in a community generated by an algorithm is different from one generated ethnographically. Each makes its own play for legitimacy, and implies its own framework for what legitimacy is (quantification or interpretation, mechanical distance or human closeness). But in the context of nearly a century of celebration of the statistical production of knowledge and longstanding trust in automated calculation over human judgment, the algorithmic does enjoy a particular cultural authority.

      More than that, the term offers the corporate owner a powerful talisman to ward off criticism, when companies must justify themselves and their services to their audience, explain away errors and unwanted outcomes, and justify and defend the increasingly significant roles they play in public life. (Gillespie 2014) Information services can point to “the algorithm” as having been responsible for particular results or conclusions, as a way to distance those results from the providers. (Morozov, 2013: 142) The term generates an entity that is somehow separate, the assembly line inside the factory, that can be praised as efficient or blamed for mistakes.

      The term “algorithm” is also quite often used as a stand-in for its designer or corporate owner. When a critic says “Facebook’s algorithm” they often mean Facebook and the choices it makes, some of which are made in code. This may be another way of making the earlier point, that the singular term stands for a complex sociotechnical assemblage: Facebook’s algorithm really means “Facebook,” and Facebook really means the people, things, priorities, infrastructures, aims, and discourses that animate them. But it may also be a political economic conflation: this is Facebook acting through its algorithm, intervening in an algorithmic way, building a business precisely on its ability to construct complex models of social/expressive activity, train on an immense corpus of data, tune countless parameters, and reach formalized goals extremely efficiently.

      Maybe saying “Facebook’s algorithm” and really meaning the choices and interventions made by Facebook the company into our social practices is a way to assign accountability (Diakopoulos 2013, Ziewitz 2011). It makes the algorithm theirs in a powerful way, and works to reduce the distance some providers put between “them” (their aims, their business model, their footprint, their responsibility) and “the algorithm” (as somehow autonomous from all that). On the other hand, conflating the algorithmic mechanism and the corporate owner may obscure the ways these two entities are not always aligned. It is crucial that we discern between things done by the algorithmic system and things done in other ways, such as the deletion of obscene images from a content platform, which is sometimes handled algorithmically and sometimes performed manually. (Gillespie 2012b) It is crucial to note slippage between a provider’s financial or political aims and the way the algorithmic system actually functions. And conflating algorithmic mechanism and corporate owner misses how some algorithmic approaches are common to multiple stakeholders, circulate across them, and embody a tactic that exceeds any one implementation.

      algorithmic as committed to procedure

In recent scholarship on the social significance of algorithms, it is common for the term to appear not as a noun but as an adjective. To talk about “algorithmic identity” (Cheney-Lippold), “algorithmic regulation” (O’Reilly), “algorithmic power” (Bucher), “algorithmic publics” (Leavitt), “algorithmic culture” (Striphas, 2010) or the “algorithmic turn” (Uricchio, 2011) is to highlight a social phenomenon that is driven by and committed to algorithmic systems — which include not just algorithms themselves, but also the computational networks in which they function, the people who design and operate them, the data (and users) on which they act, and the institutions that provide these services.

      What we are really concerned with when we invoke the “algorithmic” here is not the algorithm per se but the insertion of procedure into human knowledge and social experience. What makes something algorithmic is that it is produced by or related to an information system that is committed (functionally and ideologically) to the computational generation of knowledge or decisions. This requires the formalization of social facts into measurable data and the “clarification” (Cheney-Lippold) of social phenomena into computational models that operationalize both problem and solution. These are often proxies for human judgment or action, meant to simulate it as nearly as possible. But the “algorithmic” intervenes in terms of step-by-step procedures that one (computer or human) can enact on this formalized information, such that it can be computed. This process is automated so that it can happen instantly, repetitively, and across many contexts, away from the guiding hand of its implementers. This is not the same as suggesting that knowledge is produced exclusively by a machine, abstracted from human agency or intervention. Information systems are always swarming with people, we just can’t always see them. (Downey, 2014; Kushner 2013) And an assembly line might be just as “algorithmic” in this sense of the word, or at least the parallels are important to consider. What is central is the commitment to procedure, and the way procedure distances its human operators from both the point of contact with others and the mantle of responsibility for the intervention they make. It is a principled commitment to the “if/then” logic of computation.

      Yet what does “algorithmic” refer to, exactly? To put it another way, what is it that is not “algorithmic”? What kind of “regulation” is being condemned as insufficient when Tim O’Reilly calls for “algorithmic regulation”? It would be all too easy to invoke the algorithmic as simply the opposite of what is done subjectively or by hand, or of what can only be accomplished with persistent human oversight, or of what is beholden to and limited by context. To do so would draw too stark a contrast between the algorithm and something either irretrievably subjective (if we are glorifying the impartiality of the algorithmic) or warmly human (if we’re condemning the algorithmic for its inhumanity). If “algorithmic” market predictions and search results are produced by a complex assemblage of people, machines, and procedures, what makes their particular arrangement feel different than other ways of producing information, which are also produced by a complex assemblage of people, machines, and procedures, such that it makes sense to peg them as “algorithmic?” It is imperative to look closely at those pre- and non-algorithmic practices that precede or stand in contrast to those we posit as algorithmic, and recognize how they too strike a balance between the procedural and the subjective, the machinic and the human, the measured and the ineffable. And it is crucial that we continue to examine algorithmic systems and their providers and users ethnographically, to explore how the systemic and the ad hoc coexist and are managed within them.

To highlight their automaticity and mathematical quality, then, is not to contrast algorithms to human judgment. Instead it is to recognize them as part of mechanisms that introduce and privilege quantification, proceduralization, and automation in human endeavors. Our concern for the politics of algorithms is an extension of worries about Taylorism and the automation of industrial labor; about actuarial accounting, the census, and the quantification of knowledge about people and populations; and about management theory and the dominion of bureaucracy. At the same time, we sometimes wish for more “algorithmic” interventions when the ones we face are discriminatory, nepotistic, and fraught with error; sometimes procedure is truly democratic. I’m reminded of the sensation of watching complex traffic patterns from a high vantage point: it is clear that this “algorithmic” system privileges the imposition of procedure, and users must in many ways accept it as a kind of provisional tyranny in order to even participate in such a complex social interaction. The elements can only be known in operational terms, so as to calculate the relations between them; every possible operationalized interaction within the system must be anticipated; and stakeholders often point to the system-ness of the system to explain success and explain away failure. The system always struggles with the tension between the operationalized aims and the way humanity inevitably undermines, alters, or exceeds those aims. At the same time, it’s not clear how to organize such complex behavior in any other way, and still have it be functional and fair. Commitment to the system and the complex scale at which it is expected to function makes us beholden to the algorithmic procedures that must manage it. From this vantage point, algorithms are merely the latest instantiation of the modern tension between ad hoc human sociality and procedural systemization — but one that is now powerfully installed as the beating heart of the network technologies we surround ourselves with and increasingly depend upon.


      Endnotes

1. This parallels Kowalski’s well-known definition of an algorithm as “logic + control”: “An algorithm can be regarded as consisting of a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used. The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency.” (Kowalski, 424) I prefer to use “model” because I want to reserve “logic” for the underlying premise of the entire algorithmic system and its deployment.

2. See Kockelman 2013 for a dense but superb example.

3. See Brian Christian, “The A/B Test: Inside the Technology That’s Changing the Rules of Business.” Wired, April 25, 2012. http://www.wired.com/2012/04/ff_abtesting/


      References

      Barocas, Solon, Sophie Hood, and Malte Ziewitz. 2013. “Governing Algorithms: A Provocation Piece.” Available at SSRN 2245322. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2245322

      Beer, David. 2009. “Power through the Algorithm? Participatory Web Cultures and the Technological Unconscious.” New Media & Society 11 (6): 985-1002.

      Bucher, T. 2012. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14 (7): 1164-80.

      Cheney-Lippold, J. 2011. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control.” Theory, Culture & Society 28 (6): 164-81.

      Diakopoulos, Nicholas. 2013. “Algorithmic Accountability Reporting: On the Investigation of Black Boxes.” A Tow/Knight Brief. Tow Center for Digital Journalism, Columbia Journalism School. http://towcenter.org/algorithmic-accountability-2/

      Downey, Gregory J. 2014. “Making Media Work: Time, Space, Identity, and Labor in the Analysis of Information and Communication Infrastructures.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A Foot, 141-66. Cambridge, MA: The MIT Press.

      Geiger, R. Stuart. 2014. “Bots, Bespoke, Code and the Materiality of Software Platforms.” Information, Communication & Society 17 (3): 342-56.

      Gillespie, Tarleton. 2012a. “Can an Algorithm Be Wrong?” Limn 1 (2). http://escholarship.org/uc/item/0jk9k4hj

      Gillespie, Tarleton. 2012b. “The Dirty Job of Keeping Facebook Clean.” Culture Digitally (Feb 22). http://culturedigitally.org/2012/02/the-dirty-job-of-keeping-facebook-clean/

      Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A Foot, 167-93. Cambridge, MA: The MIT Press.

      Gitelman, Lisa. 2006. Always Already New: Media, History and the Data of Culture. Cambridge, MA: MIT Press.

      Hillis, Ken, Michael Petit, and Kylie Jarrett. 2013. Google and the Culture of Search. Abingdon: Routledge.

Kockelman, Paul. 2013. “The Anthropology of an Equation: Sieves, Spam Filters, Agentive Algorithms, and Ontologies of Transformation.” HAU: Journal of Ethnographic Theory 3 (3): 33-61.

      Kowalski, Robert. 1979. “Algorithm = Logic + Control.” Communications of the ACM 22 (7): 424-36.

      Kushner, S. 2013. “The Freelance Translation Machine: Algorithmic Culture and the Invisible Industry.” New Media & Society 15 (8): 1241-58.

      MacCormick, John. 2012. 9 Algorithms That Changed the Future. Princeton: Princeton University Press.

      Mager, Astrid. 2012. “Algorithmic Ideology: How Capitalist Society Shapes Search Engines.” Information, Communication & Society 15 (5): 769-87.

Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

O’Reilly, Tim. 2013. “Open Data and Algorithmic Regulation.” In Beyond Transparency: Open Data and the Future of Civic Innovation, edited by Brett Goldstein and Lauren Dyson. San Francisco, Calif.: Code for America Press. http://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/

      Rieder, Bernhard. 2012. “What Is in PageRank? A Historical and Conceptual Investigation of a Recursive Status Index.” Computational Culture 2. http://computationalculture.net/article/what_is_in_pagerank

      Seaver, Nick. 2013. “Knowing Algorithms.” Media in Transition 8, Cambridge, MA. http://nickseaver.net/papers/seaverMiT8.pdf 

Striphas, Ted. 2010. “How to Have Culture in an Algorithmic Age.” The Late Age of Print, June 14. http://www.thelateageofprint.org/2010/06/14/how-to-have-culture-in-an-algorithmic-age/

Striphas, Ted. 2012. “What Is an Algorithm?” Culture Digitally, February 1. http://culturedigitally.org/2012/02/what-is-an-algorithm/

      Uricchio, William. 2011. “The Algorithmic Turn: Photosynth, Augmented Reality and the Changing Implications of the Image.” Visual Studies 26 (1): 25-35.

Williams, Raymond. 1976/1983. Keywords: A Vocabulary of Culture and Society. 2nd ed. Oxford: Oxford University Press.

Ziewitz, Malte. 2011. “How to Think about an Algorithm? Notes from a Not Quite Random Walk.” Discussion paper for the symposium “Knowledge Machines between Freedom and Control,” September 29. http://ziewitz.org/papers/ziewitz_algorithm.pdf

      -Contributed by ,  Cornell University Department of Communication-

