Culture Digitally // Examining Contemporary Cultural Production


  • With the generous support of the National Science Foundation we have developed Culture Digitally. The blog is meant to be a gathering point for scholars and others who study cultural production and information technologies. Welcome and please join our conversation.

     

    • Community [draft] [#digitalkeywords] Jul 11, 2014

      “Today, we speak of a global community, made possible by communications technologies, and our geographically-specific notions of community are disrupted by the possibilities of the digital, where disembodied and socially distant beings create what they – and we, as scholars – also call community. But are the features and affordances of digital community distinct from those we associate with embodied clanship and kinship?”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Community — Rosemary Avance, University of Pennsylvania

      The digital era poses new possibilities and challenges to our understanding of the nature and constitution of community. Hardly a techno buzzword, the term “community” has historic uses ranging from a general denotation of social organization, district, or state; to the holding of important things in common; to the existential togetherness and unity found in moments of communitas. Our English-language “community” originates from the Latin root communis, “common, public, general, shared by all or many,” which evolved into the 14th century Old French comunité meaning “commonness, everybody”. Originally the noun was affective, referencing a quality of fellowship, before it ever referred to an aggregation of souls. Traditionally the term has encompassed our neighborhoods, our religious centers, and our nation-states– historically, geographic and temporal birthrights, subjectivities unchosen by the individual. Today, we speak of a global community, made possible by communications technologies, and our geographically-specific notions of community are disrupted by the possibilities of the digital, where disembodied and socially distant beings create what they– and we, as scholars– also call community. But are the features and affordances of digital community distinct from those we associate with embodied clanship and kinship?

      Sharing etymological roots and many assumptions with “communication,” the term “community” is of central importance to the field of Communication. Social scientific taxonomies have long placed the elusive notion of community at the apex of human association, as a utopian model of connection and cohesion, a place where human wills unite for the good of the group. We long to commune, as John Peters argues[1], yet our inability to ever truly connect with another soul keeps us grounded in sympathies and persistent in attempts. Perhaps this elusiveness is where our collective disciplinary preoccupation with the notion of community arises.

      On- or offline, community is at best an idealized, imaginary structure, and that idealization obfuscates exploitation. Michel Foucault reminds us that pure community is at base a mechanism of control over social relations, a policing of our interactions.[2] Benedict Anderson reaffirms, too, the imaginary nature of community, which we conceive of as a “deep, horizontal comradeship”[3] in such a way that power differentials are jointly pretended away.

      Victor Turner[4], of course, takes up the root of the word in his theories of liminality and communitas, arguing after van Gennep that rites of passage move an individual from a state of social indeterminacy to a state of communal oneness and homogeneity. The outcome of an individual’s reincorporation into a group is a burdening, as the individual takes on obligation and responsibility toward defined others. This is the formation, the very root, of community: an ethical orientation outside oneself and toward others. Thus the community emerges as the social compact charged with policing the system. The implication of community, then, is citizenship-belonging. Community is an ideal, the result of individuals accepting and serving their obligations and responsibilities vis-à-vis the collective.

      What do we make of Internet-based communities, united over shared interests from the mundane to the elysian, evading easy classification due to wide ranging differences in participation, influence, and affective connection? Moral panics accompany all new media technologies, and the pronounced fear associated with global connectivity via the Internet, with no little irony, reflects the fear of disconnection. In Bowling Alone[5], Robert Putnam notoriously gives voice to this fear, suggesting that declines in community commitment, manifest in low civic and political engagement, declining religious participation, increasing distrust, decreasing social ties, and waning altruism are at least in part attributable to technology and mass media, as entertainment and news are tailored to the individual and consumed alone. Putnam paints a bleak image of Americans in dark houses, television lights flickering and TV dinners mindlessly consumed.

      Digital community seems to offer a panacea to both the problems of community as a mechanism of control and the fear of disconnection in a new media age. Indeed, Fred Turner shows that digital community has roots in countercultural movements, as “virtual community… translated a countercultural vision of the proper relationship between technology and sociability into a resource for imagining and managing life in the network economy”[6]. Digital community comes, then, as a solution to the “problem” of modernity: disembodied cyberspace somehow at once flattens and broadens our notions of self.

      So what do we mean by digital community? Though both are features of Internet culture, scholars differentiate between “virtual” and “digital” communities: the former denotes a quasi-geographical location (e.g., a particular URL), whereas digital communities are ephemeral, united around a shared interest or identity rather than a particular virtual location. Thus virtual gaming communities, for instance, may be located at a particular website, while digital gaming communities are dispersed across social platforms and virtual spaces, united around a shared interest.

      While past conceptions of community were generally outside one’s agential selection– you are born and die in your town, your religion is the faith of your parents– today’s diverse digital landscape means self-selection into communities of interest and affinity. But digital community does not entirely escape the deterministic, as availability still marks a very real digital divide between those with access to the technology and those without. Not only that, but the affordances of various platforms, both in intended and possible (read: disruptive) use, all inform what might be seen as a digital community’s blueprint. Online community formation relies on this peer-to-peer software architecture that pre-dates the community itself, so that communities evolve and adapt not in spite of but because of the affordances of the technological platform. These include format, space constraints, visuals, fixity vs. mutability, privacy vs. surveillance, peer feedback, report features/TOS, modality (cellular, tablet, desktop) — all features which inform what is possible in a given virtual community. Digital communities can evade some but not all of the fixity of these structural constraints, reaching across a variety of platforms and forums on both the light and dark web.

      Both types of online community networks are dynamic and self-organizing. Many social networking sites like Facebook and Myspace are pure “intentional communities” wherein self-selection into the platform and mutual “friending” secure one’s place[7]. Highly fragmented, niche communities redistribute power in both intangible and tangible ways — think only of the economic impact of peer-to-peer communities on the music industry, where file sharing challenges traditional conceptions of property rights and even our collective moral code[8]. Indeed, content sharing is the basis of online community– from photos, to text, to files and links– and users themselves decide their own level of engagement in these participatory cultures. Within the communities themselves, the flattening dynamic of Internet culture, where everyone[9] can have a platform and a voice, obfuscates the very real social hierarchies which are supported by social processes and norms– all of which evolve from platform affordances.

      Some scholars and observers remain reluctant to accept Facebook, Twitter, blogs, or forums as true examples of community. They see these spaces as primarily narcissistic expressions of what Manuel Castells calls the “culture of individualism”, emphasizing consumerism, networked individualism, and autonomy, rather than the “culture of communalism”, rooted in history and geography[10]. Ironically, perhaps, much of today’s “countercultural” vision involves little to no connectivity: by refusing to participate in the exploitation and grand social experiment that is Facebook, for instance, one might opt out of the forms of life available there.

      Yet users, if we take them at their word, say that online community provides a space to be “real” – or somehow more authentic – in ways that embodied community might sanction. An overabundance of narrative visibility and social support on the Internet allows users to foster difference in ways that limited offline social networks simply cannot sustain. That is to say, in today’s world, it is not uncommon for youth to self-identify as queer and first “come out” in digital spaces[11] or, to draw on my own ethnographic work, for Mormons to foster heterodox (e.g. liberal) identities in closed Facebook groups before what they too mark as a “coming out” to their conservative “real-world” family and friends[12]. We might do well to remember the origin of our term “community”, which referenced a quality of fellowship before it ever referred to an aggregation of souls. It seems our term has come full circle, as disembodied souls unite in fellowship mediated by the digital.


      Notes

      1. Peters, John Durham. (1999). Speaking into the air: A history of the idea of communication. Chicago: U. of Chicago Press.

      2. Foucault, Michel. (1977). Discipline and punish: The birth of the prison. New York: Random House.

      3. Anderson, Benedict. (1983). Imagined communities: Reflections on the origin and spread of nationalism. London: Verso.

      4. Turner, Victor. (1969). “Liminality and communitas.” in The Ritual Process: Structure and Anti-Structure, pp. 94-. New York: Aldine.

      5. Putnam, Robert. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.

      6. Turner, Fred. (2005, July). “Where the counterculture met the new economy: The WELL and the origins of virtual community.” Technology and Culture: 46, 491.

      7. Cf. boyd, danah. (2006, 4 December). “Friends, friendsters, and myspace top 8: Writing community into being on social network sites.” First Monday 11(12). Available at http://firstmonday.org/article/view/1418/1336

      8. See Hughes, Jerald & Karl Reiner Lang. (2003). “If I had a song: The culture of digital community networks and its impact on the music industry.” International Journal on Media Management 5(3):180-189.

      9. “Everyone”, that is, with access, equipment, technological savvy, and, presumably, an audience.

      10. Castells, Manuel. (2007). “Communication, power, and counter-power in the network society.” International Journal of Communication 1:238-266.

      11. Gray, Mary L. (2009, July). “Negotiating identities/queering desires: Coming out online and the remediation of the coming-out story.” Journal of Computer-Mediated Communication 14(4):1162-1189.

      12. Here I’m drawing on years of ethnographic work among Mormons on the Internet, with details forthcoming in my dissertation “Constructing Religion in the Digital Age: The Internet and Modern Mormon Identities”; for more on Mormon deconversion and online narratives see Avance, Rosemary. (2013). “Seeing the light: Mormon conversion and deconversion narratives in off- and online worlds.” Journal of Media and Religion 12(1):16-24.

      -Contributed by Rosemary Avance, University of Pennsylvania-


      When Science, Customer Service, and Human Subjects Research Collide. Now What? Jul 9, 2014

      My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: we are now, officially, doing human subjects research.

      I’m frustrated that the state of public intellectualism allows us, individually, to jump into the conversation about the recently published Facebook “Emotions” Study [1]. What we—from technology builders and interface designers to data scientists and ethnographers working in industry and at universities alike—really (really) need right now is to sit down together and talk. Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have, from data sharing and users’ rights to the drop in public funding for basic research itself. We need a dialogue—a thoughtful, compassionate conversation among those who are or will be training the next generation of researchers studying social media. And, like all matters of ethics, this discussion will become a personal one as we reflect on our doubts, disagreements, missteps, and misgivings. But the stakes are high. Why should the Public trust social media researchers and the platforms that make social media a thing? It is our collective job to earn and maintain the Public’s trust so that future research and social media builders have a fighting chance to learn and create more down the line. Science, in particular, is an investment in questions that precede and will live beyond the horizon of individual careers.

      As more and more of us crisscross disciplines and work together to study or build better social media, we are pressed to rethink our basic methods and the ethical obligations pinned to them. Indeed “ethical dilemmas” are often signs that our methodological techniques are stretched too thin and failing us. When is something a “naturalistic experiment” if the data are always undergoing A/B tweaks? How do we determine consent if we are studying an environment that is at once controllable, like a lab, but deeply social, like a backyard BBQ? When do we need to consider someone’s information “private” if we have no way to know, for sure, what they want us to do with what we can see them doing? When, if ever, is it ok to play with someone’s data if there’s no evident harm but we have no way to clearly test the long-term impact on a nebulous number of end users?

      There is nothing obvious about how to design and execute ethical research that examines people’s individual or social lives. The reality is, when it comes to studying human interaction or behavior (for profit or scientific glory), it is no more (or less) complicated whether we’re interviewing someone in their living room, watching them in a lab, testing them at the screen, or examining the content they post online. There is no clearer sign of this than the range of reactions to the news (impeccably curated here by James Grimmelmann) that for one week, back in January 2012, researchers manipulated (in the scientific sense) what 689,003 Facebook users read in their individual News Feed. Facebook’s researchers fed some users a diet containing fewer posts with “happy” and positive words than their usual News Feed; other users received a smaller-than-average allotment of posts ladled with sad words. Cornell-based researchers came in after the experiment was over to help sift through and crunch the massive data set. Here’s what the team found: By the experiment’s last day (which, coincidentally, landed on the day of the SOPA online protests! Whoops), it turned out that a negligible—but statistically detectable—number of people produced fewer positive posts and more negative ones if their Feed included fewer positive news posts from friends; when the researchers scaled back the number of posts with negative cues from friends, people posted fewer negative and more positive posts. This interesting, even if small, finding was published in the June 2014 issue of the Proceedings of the National Academy of Sciences (PNAS). That’s how Science works—one small finding at a time.

      At issue: the lead author, Facebook Data Scientist, Adam Kramer, never told users in the study that their News Feeds were part of this experiment, either before or after that week in January. And Cornell University’s researchers examining the secondary data set (fancy lingo for the digital records of more than half a million people’s interactions with each other) weren’t, technically, on the hook for explaining that to subjects either. Mind you, it’s often acceptable in human subjects research to conduct experiments without prior consent, as long as everyone discussing the case agrees that the experiment does not impose greater risk to the person than they might experience in a typical day. But even in those cases, at some point the research subjects are told (“debriefed”) about their participation in the study and given the option to withdraw data collected about them from the study. Researchers also have a chance to study the impact of the stimulus they introduced into the system. So, the question of the hour is: Do we cross a line when testing a product also asks a scientifically relevant question? If researchers or systems designers are “just” testing a product on end users (aka humans) and another group has access to all that luscious data, whose ethics apply? When does “testing” end and “real research” begin in the complicated world of “The Internet?”

      Canonical Science teaches us that the greater the distance between researchers and our subjects (often framed as objectivity), the easier it is for us to keep trouble at arm’s length. Having carried out what we call “human subjects research” for much of my scholarly life—all of it under the close scrutiny of Institutional Review Boards (IRBs)—I feel professionally qualified to say, “researching people ain’t easy.” And, you know what makes it even harder? We are only about 10 years into this thing we call “social media”—which can morph into a telephone, newspaper, reality TV show, or school chalkboard, depending on who’s wielding it and when we’re watching them in action. Online, we are just as likely to be passionately interacting with each other, skimming prose, or casually channel-surfing, depending on our individual context. Unfortunately, it’s hard for anyone studying the digital signs of humans interacting online to know what people mean for us to see—unless we ask them. We don’t have the methods (yet) to robustly study social media as sites of always-on, dynamic human interaction. So, to date, we’ve treated the Internet as a massive stack of flat, text files to scrape and mine. We have not had a reason to collectively question this common, methodological practice as long as we maintained users’ privacy. But is individual privacy really the issue?

      My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: We are now, officially, doing human subjects research. Here’s some background to orient us and the people who pay our research bills (and salaries) to this new reality.

      Genealogy of Human Subjects Research Oversight in the United States

      In 1966, the New England Journal of Medicine published an article by Harvard research physician, Henry Beecher, chronicling 22 ethically questionable scientific studies conducted between 1945 and 1965 (Rothman, 2003: 70-84). Dr. Beecher’s review wasn’t exposing fringe science on the margins. Federally and industry-funded experiments conducted by luminaries of biomedicine accounted for most of the work cited in his review. Even if today we feel like it’s a no brainer to call ethical foul on the studies Beecher cited, keep in mind that it took DECADES for people to reach consensus on what not to do. Take, for example, Beecher’s mention of Dr. Saul Krugman. From 1958-1964, Dr. Saul Krugman injected children with live hepatitis virus at Willowbrook State School on New York’s Staten Island, a publicly-funded institution for children with intellectual disabilities. The Office of the Surgeon General, U.S. Armed Forces Epidemiological Board, and New York State Department of Mental Hygiene funded and approved his research. Krugman directed staff to put the feces of infected children into milkshakes later fed to newly admitted children, to track the spread of the disease. Krugman pressed poor families to include their children in what he called “treatments” to secure their admission to Willowbrook, the only option for poor families with children suffering from mental disabilities. After infecting the children, Krugman experimented with their antibodies to develop what would later become the vaccines for the disease. Krugman was never called out for the lack of consent or failure to provide for the children he infected with the virus, now at risk of dying from liver disease. Indeed, he received the prestigious Lasker Prize for Medicine for developing the Hepatitis A and B vaccines and, in 1972, became the President of the American Pediatric Society. Pretty shocking. But, at the time, and for decades after that, Willowbrook did not register as unequivocally unethical. My point here is not to draw one to one comparisons of Willowbrook and the Facebook Emotions study. They are not even close to comparable. I bring up Willowbrook to point out that no matter how ethically egregious something might seem in hindsight, often such studies do not appear so at the time, especially when weighed against the good they might seem to offer in the moment. Those living in the present are never in the best position to judge what will or will not seem “obviously wrong.”

      News accounts of risky experiments carried out without prior or clear consent, often targeting marginalized communities with little power, catalyzed political will for federal regulations for biomedical and behavioral researchers’ experiments (Rothman, 2003: 183-184). Everyone agreed: there’s a conflict of interest when individual researchers are given unfettered license to decide if their research (and their reputations) are more valuable to Science than an individual’s rights to opt out of research, no matter how cool and important the findings might be. The balance between the greater good and individual risk of research involving human subjects must be adjudicated by a separate review committee, made up of peers and community members, with nothing to be gained by approving or denying a researcher’s proposed project.

      The Belmont Report

      The National Research Act of 1974 created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [2]. Five years later, the Commission released The Belmont Report: The Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The Belmont Report codified the call for “respect for persons, beneficence, and justice” (The Belmont Report, 1979). More concretely, it spelled out what newly mandated university and publicly funded agency-based IRBs should expect their researchers to do to safeguard subjects’ informed consent, address the risks and benefits their participation might accrue, and more fairly distribute science’s “burdens and benefits” (The Belmont Report, 1979). The Belmont Report now guides how we define human subjects research and the attendant ethical obligations of those who engage in it.

      Put simply, the Belmont Report put a Common Rule in place to manage ethics through a procedure focused on rooting out bad apples before something egregious happens or is uncovered, after the fact. But it did not position—and we have not positioned—ethics as an on-going, complicated discussion among researchers actively engaging fellow researchers and the human subjects we study. And we’ve only now recognized that human subjects research is core to technology companies’ product development and, by extension, bottom lines. However, there is an element of the Belmont Report that we could use to rethink guidance for technology companies, data scientists, and social media researchers alike: the lines drawn in the Belmont Report between “practice and research.”

      The fine line between practice and research

      The Belmont Report drew a clear line demarcating the “boundaries between biomedical and behavioral research and the accepted and routine practice of medicine”—the difference between research and therapeutic intervention (The Belmont Report 1979). This mandate, which was in fact the Report’s first order of business, indexes the Commission’s most pressing anxiety: how to rein in biomedicine’s professional tendencies to experiment in therapeutic contexts. The history of biomedical breakthroughs—from Walter Reed’s discovery of the causes of yellow fever to Jonas Salk’s polio vaccines—attests to the profession’s culture of experimentation (Halpern 2004: 41-96). However, this professional image of the renegade (mad) scientist pioneering medical advances was increasingly at odds with the need, pressing by the 1970s, for a more restrained and cautious scientific community driven first by an accountability to the public and only second by a desire for discovery.

      In redrawing the boundaries between research and practice, the Belmont Report positioned ethics as a wedge between competing interests. If a practitioner simply wanted to tweak a technique to see if it could improve an individual subject’s experience, the experiment did not meet the threshold of “real scientific inquiry” and could be excused from more formal procedures of consent, debriefing, and peer review. Why? Practitioners already have guiding codes of ethics (“do no harm”) and, as importantly, ongoing relationships built on communication and trust with the people in their care (at least, in theory). The assumption was that practitioners and “their” subjects could hold each other mutually accountable.

      But, once a researcher tests something out for testing’s sake or to work on, more broadly, a scientific puzzle, they are in the realm of research and must consider a new set of questions: Cui bono, who benefits? Will the risk or harm to an individual outweigh the benefits for the greater good? What if that researcher profits from the greater good? The truth is, in most cases, the researcher will benefit, whether they make money or not, because they will gain credibility and status through the experience of their research. Can we say the same for the individual contributing their experiences to our experiments? If not, that’s, typically, an ethical dilemma.

      Constructing ethical practice in a social media world

      Social media platforms and the technology companies that produce our shared social playgrounds blur the boundaries between practice and research. They (we?) have to, in many cases, to improve the products that companies provide users. That’s no easy thing if you’re in the business of providing a social experience through your technology! But that does not exempt companies, any more than it exempts researchers, from extending respect, beneficence, and justice to individuals sharing their daily interactions with us. So we need to, collectively, rethink when “testing a feature” transitions from improving customer experience to more than minimally impacting someone’s social life.

      Ethical stances on methodological practices are inextricably linked to how we conceptualize our objects of study. Issues of consent hinge on whether researchers believe they are studying texts or people’s private interactions. Who needs to be solicited for consent also depends on whether researchers feel they are engaged in a single site study or dealing with an infrastructure that crosses multiple boundaries. What ethical obligations, then, should I adhere to as I read people’s posts—particularly on commercial venues such as Facebook that are often considered “public domain”—even when they may involve participants who share personal details about their lives from the walled garden of their privacy settings? Are these obligations different from those I should heed with individuals not directly involved in my research? How can I use this information and in what settings? Does consent to use information from interviews with participants include the information they publicly post about themselves online? These questions are not easily grouped as solely methods issues or strictly ethical concerns.

      For me, the most pragmatic ethical practice follows from the reality that I will work with many of the people I meet through my fieldwork for years to come. And, importantly, if I burn bridges in my work, I am, literally, shutting out researchers who might want to follow in my footsteps. I can give us all a bad reputation that lasts a human subject’s lifetime. I, therefore, treat online materials as the voices of the people with whom I work. In the case of materials I would like to cite, I email the authors, tell them about my research, and ask if I may include their web pages in my analyses. I tread lightly and carefully.

      The Facebook Emotions study could have included a follow up email to all those in the study, sharing the cool results with participants and offering them a link to the happy and sad moments that they missed in their News Feed while the experiment was underway (tip of the hat to Tarleton Gillespie for those ideas). And, with more than half a million people participating, I’m sure a few hundred thousand would have opted-in to Science and to let Facebook keep the results.

      We do not always have the benefit of personal relationships, built over time with research participants to guide our practices. And, unfortunately, our personal identities or affinities with research participants do not safeguard us from making unethical decisions in our research. We have only just started (like, last week) to think through what might be comparable practices for data scientists or technology designers, who often never directly talk with the people they study. That means that clear, ethical frameworks will be even more vital as we build new toolkits to study social media as sites of human interaction and social life.

      Conclusion

      Considering that more and more of social media research links universities and industry-based labs, we must coordinate our methodologies and ethics no matter who pays us to do our research. None of us should be relieved from duty when it comes to making sure all facets of our collaborations are conducted with an explicit, ethical plan of action. There are, arguably, no secondary data sets in this new world.

      The Belmont Report was put in place to ensure that we have conversations with the Public, among ourselves, and with our institutions about the risks of the scientific enterprise. It’s there to help us come to some agreement as to how to address those risks and create contingency plans. While IRBs as classification systems can and have provided researchers with reflexive and sometimes necessary intervention, bureaucratic mechanisms and their notions of proper science are not the only or even the best source of good ethics for our work—ongoing and reflexive conversations among researchers and practitioners sharing their work with invested peers and participants are.

      Whether from the comfort of a computer or in the thick of a community gathering, studying what people do in their everyday lives is challenging. The seeming objectivity of a lab setting or the God’s eye view of a web scraping script may seem to avoid biases and desires that could, otherwise, interfere with the social situations playing out in front of us that we want to observe. But, no matter how removed we are, our presence as researchers does not evaporate when we come into contact with human interaction. One of the values of sustained, ethnographic engagement with people as we research their lives: it keeps researchers constantly accountable not only to our own scientific (and self) interests but also to the people we encounter in any observation, experiment, or engagement.

      Some of my peers argue that bothering people with requests for consent or efforts to debrief them will either “contaminate the data” or “seem creepy” after the fact. They argue that it’s less intrusive and more scientifically powerful to just study “the data” from a distance or adjust the interface design on the fly. I get it. It is not easy to talk with people about what they’re doing online. Keep in mind that by the end of USENET’s long life as the center of the Internet’s social world, many moderated newsgroups blocked two kinds of lurkers: journalists. And researchers. In the long run, keeping a distance can leave the general public more suspicious of companies’, designers’, and researchers’ intentions. People may also be less likely to talk to us down the road when we want to get a richer sense of what they’re doing online. Let’s move away from this legalistic, officious discussion of consent and frame this debate as a matter of trust.

      None of us would accept someone surreptitiously recording our conversations with others to learn what we’re thinking or feeling just because “it’s easier” or it’s not clear that we are interested in sharing them if asked outright. We would all want to understand what someone wants to know about us and why they want to study what we’re doing—what do they hope to learn and why does it matter? Those are completely reasonable questions. All of us have a right to be asked if we want to share our lives with strangers (even researchers or technology companies studying the world or providing a service) so that we have a chance to say, “nah, not right now, I’m going through a bad break up.” What would it look like for all of us—from LOLcat enthusiasts and hardcore gamers, to researchers and tech companies—to (re)build trust and move toward a collective enterprise of explicitly opting-in to understand this rich, social world that we call “The Internet?”

      Scientists and technology companies scrutinizing data bubbling up from the tweets, posts, driving patterns, or check-ins of people are coming to realize that we are also studying moments of humans interacting with each other. These moments call for respect, trust, mutuality. By default. Every time we even think we see social interactions online. Is working from this premise too much to ask of researchers or the companies and universities that employ us? I don’t think so.

       

      Addendum (added June 13, 2014)

      I realized after posting my thoughts on how to think about social media as a site of human interaction (and all the ethical and methodological implications of doing so) that I forgot to leave links to what are, bar none, the best resources on the planet for policy makers, researchers, and the general public thinking through all this stuff.

      Run, don’t walk, to download copies of the following must-reads:

      Charles Ess and the AOIR Ethics Committee (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Approved by the Association of Internet Researchers, November 27, 2002. Available at: http://aoir.org/reports/ethics.pdf

      Annette Markham and Elizabeth Buchanan (2012). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee (version 2.0). Approved by the Association of Internet Researchers, December 2012. Available at: http://aoir.org/reports/ethics2.pdf

       


      Notes/Bibliography/Additional Reading

      [1] The United States Department of Health, Education and Welfare (HEW) was a cabinet-level U.S. governmental department from 1953-1979. In 1979, HEW was reorganized into two separate cabinet-level departments: the Department of Education and the Department of Health and Human Services (HHS). HHS is in charge of all research integrity and compliance including research involving human subjects.

      [2] I wanted to thank my fellow MSR Ethics Advisory Board members, MSR New England Lab, and the Social Media Collective, as well as the following people for their thoughts on drafts of this essay: danah boyd, Henry Cohn, Kate Crawford, Tarleton Gillespie, James Grimmelmann, Jeff Hancock, Jaron Lanier, Tressie McMillan Cottom, Kate Miltner, Christian Sandvig, Kat Tiidenberg, Duncan Watts, and Kate Zyskowski.

       

      Bowker, Geoffrey C., and Susan Leigh Star

      1999 Sorting Things Out: Classification and Its Consequences, Inside Technology. Cambridge, Mass.: MIT Press.

      Brenneis, Donald

      2006 Partial Measures. American Ethnologist 33(4): 538-40.

      Brenneis, Donald

      1994 Discourse and Discipline at the National Research Council: A Bureaucratic Bildungsroman. Cultural Anthropology 9(1): 23-36.

      Epstein, Steven

      2007 Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.

      Gieryn, Thomas F.

      1983 Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48(6): 781-95.

      Halpern, Sydney A.

      2004 Lesser Harms: The Morality of Risk in Medical Research. Chicago: University of Chicago Press.

      Lederman, Rena

      2006 The Perils of Working at Home: Irb “Mission Creep” as Context and Content for an Ethnography of Disciplinary Knowledges. American Ethnologist 33(4): 482-91.

      Rothman, David J.

      2003 Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making. 2nd pbk. ed, Social Institutions and Social Change. New York: Aldine de Gruyter.

      Schrag, Zachary M.

      2010 Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009. Johns Hopkins University Press.

      Stark, Laura

      2012 Behind Closed Doors: IRBs and the Making of Ethical Research. University of Chicago Press.

      Strathern, Marilyn

      2000 Audit Cultures: Anthropological Studies in Accountability, Ethics, and the Academy. London; New York: Routledge.

      United States. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

      1978 Report and Recommendations: Institutional Review Boards. [Washington]: U.S. Dept. of Health, Education, and Welfare: for sale by the Supt. of Docs., U.S. Govt. Print. Off.

      This essay has been cross-posted from Ethnography Matters.

      -Contributed by Mary L. Gray, Microsoft Research New England / Associate Professor of Communication and Culture with affiliations in American Studies, Anthropology, and the Gender Studies Department at Indiana University-


      Facebook’s algorithm — why our assumptions are wrong, and our concerns are right Jul 4, 2014

      Many of us who study new media, whether we do so experimentally or qualitatively, our data big or small, are tracking the unfolding debate about the Facebook “emotional contagion” study, published recently in the Proceedings of the National Academy of Sciences. The research, by Kramer, Guillory, and Hancock, argued that small shifts in the emotions of those around us can shift our own moods, even online. To prove this experimentally, they made alterations in the News Feeds of 310,000 Facebook users, excluding a handful of status updates from friends that had either happy words or sad words, and measuring what those users subsequently posted for its emotional content. A matching number of users had posts left out of their News Feeds, but randomly selected, in order to serve as control groups. The lead author is a data scientist at Facebook, while the others have academic appointments at UCSF and Cornell University.

      I have been a bit reluctant to speak about this, as (full disclosure) I am both a colleague and friend of one of the co-authors of this study; Cornell is my home institution. And, I’m currently a visiting scholar at Microsoft Research, though I don’t conduct data science and am not on specific research projects for the Microsoft Corporation. So I’m going to leave the debates about ethics and methods in other, capable hands. (Press coverage: Forbes 1, 2, 3; Atlantic 1, 2, 3, 4, Chronicle of Higher Ed 1; Slate 1; NY Times 1, 2; WSJ 1, 2, 3; Guardian 1, 2, 3. Academic comments: Grimmelmann, Tufekci, Crawford, boyd, Peterson, Selinger and Hartzog, Solove, Lanier, Vertesi.) I will say that social science has moved into uncharted waters in the last decade, from the embrace of computational social scientific techniques, to the use of social media as experimental data stations, to new kinds of collaborations between university researchers and the information technology industry. It’s not surprising to me that we find it necessary to raise concerns about how that research should work, and look for clearer ethical guidelines when social media users are also “human subjects.” In many ways I think this piece of research happened to fall into a bigger moment of reckoning about computational social science that has been coming for a long time — and we have a responsibility to take up these questions at this moment.

      But a key issue, both in the research and in the reaction to it, is about Facebook and how it algorithmically curates our social connections, sometimes in the name of research and innovation, but also in the regular provision of Facebook’s service. And that I do have an opinion about. The researchers depended on the fact that Facebook already curates your News Feed, in myriad ways. When you log onto Facebook, the posts you’re immediately shown at the top of the News Feed are not every post from your friends in reverse chronological order. Of course Facebook has the technical ability to do this, and it would in many ways be simpler. But their worry is that users will be inundated with relatively uninteresting (but recent) posts, will not scroll down far enough to find the few among them that are engaging, and will eventually quit the service. So they’ve tailored their “EdgeRank” algorithm to consider, for each status update from each friend you might receive, not only when it was posted (more recent is better) but other factors, including how regularly you interact with that user (e.g. liking or commenting on their posts), how popular they are on the service and among your mutual friends, and so forth. A post with a high rating will show up, a post with a lower rating will not.
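      To make the shape of such a scoring scheme concrete, here is a minimal, purely illustrative sketch in Python. The feature names, weights, and threshold below are hypothetical stand-ins rather than Facebook’s actual EdgeRank; the point is only that each candidate post receives a composite score and only high-scoring posts surface.

```python
import math
import time

# Hypothetical weights, for illustration only -- not Facebook's actual EdgeRank values.
WEIGHTS = {"recency": 1.0, "affinity": 2.0, "popularity": 0.5}

def score_post(post, viewer, now=None):
    """Toy composite score for one candidate post in a viewer's News Feed.

    post:   dict with 'timestamp' (unix seconds), 'author_id', 'like_count'
    viewer: dict with 'interactions', mapping author_id -> count of the viewer's
            recent likes and comments on that author's posts
    """
    now = now if now is not None else time.time()
    hours_old = max((now - post["timestamp"]) / 3600.0, 0.0)
    recency = 1.0 / (1.0 + hours_old)          # newer posts score higher
    affinity = math.log1p(viewer["interactions"].get(post["author_id"], 0))
    popularity = math.log1p(post["like_count"])
    return (WEIGHTS["recency"] * recency
            + WEIGHTS["affinity"] * affinity
            + WEIGHTS["popularity"] * popularity)

def build_feed(candidate_posts, viewer, threshold=1.0):
    """Rank candidates by score and keep only those above a cutoff."""
    scored = [(score_post(p, viewer), p) for p in candidate_posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for score, post in scored if score >= threshold]
```

      Even in this toy form, the design choice described above is visible: what counts as “engaging” is decided entirely by which features are included and how they are weighted.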

      So, for the purposes of this study, it was easy to also factor in a numerical count of happy or sad emotion words in the posts, and use that as an experimental variable. The fact that this algorithm does what it does also provided legal justification for the research: that Facebook curates all users’ data is already part of the site’s Terms of Service and its Data Use Policy, so it is within their rights to make whatever adjustments they want. And the Institutional Review Board at Cornell did not see a reason to even consider this as a human subjects issue: all that the Cornell researchers got was the statistical data produced from this manipulation, manipulations that are a normal part of the inner workings of Facebook.
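      To picture the experimental variable itself, here is a similarly hedged sketch. The published study counted emotion words using the LIWC2007 word lists; the tiny lexicons and the withholding logic below are stand-ins, meant only to show how such a count could be folded into an existing filtering step.

```python
import random

# Stand-in lexicons; the published study relied on the much larger LIWC2007 word lists.
POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "glad"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "lonely", "awful"}

def emotion_counts(text):
    """Count positive and negative emotion words in one post (toy version)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(1 for w in words if w in POSITIVE_WORDS)
    neg = sum(1 for w in words if w in NEGATIVE_WORDS)
    return pos, neg

def withhold(post_text, condition, omit_probability, rng=random):
    """Hypothetical filtering step: in the 'reduce_positive' condition, posts
    containing positive words are dropped with some probability, and likewise
    for 'reduce_negative'; everything else passes through untouched."""
    pos, neg = emotion_counts(post_text)
    if condition == "reduce_positive" and pos > 0:
        return rng.random() < omit_probability
    if condition == "reduce_negative" and neg > 0:
        return rng.random() < omit_probability
    return False
```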

      Defenders of the research (1, 2, 3), including Facebook, have pointed to this as a reason to dismiss what they see as an overreaction. This takes a couple of forms, not entirely consistent with each other: Facebook curates users’ News Feed anyway, it’s within their right to do so. Facebook curates users’ News Feed anyway, probably already on factors such as emotion. Facebook curates users’ News Feed anyway, and needs to understand how to do so by engaging in all sorts of A/B testing, which this was an example of. Facebook curates users’ News Feed anyway, get over it. All of these imply that it’s simply naive to think of this research as a “manipulation” of an otherwise untouched list; your News Feed is a construction, built from some of the posts direct to you, according to any number of constantly shifting algorithmic criteria. This was just one more construction. Those who are upset about this research are, according to its defenders, just ignorant of the realities of Facebook and its algorithm.

      More and more of our culture is curated algorithmically; Facebook is a prime example, though certainly not the only one. But it’s easy for those of us who pay a lot of attention to how social media platforms work, engineers and observers alike, to forget how unfamiliar that is. I think, among the population of Facebook users — more than a billion people — there’s a huge range of awareness about these algorithms and their influence. And I don’t just mean that there are some poor saps who still think that Facebook delivers every post. In fact, there certainly are many, many Facebook users who still don’t know they’re receiving a curated subset of their friends’ posts, despite the fact that this has been true, and “known,” for some time. But it’s more than that. Many users know that they get some subset of their friends’ posts, but don’t understand the criteria at work. Many know, but do not think about it much as they use Facebook in any particular moment. Many know, and think they understand the criteria, but are mistaken. Just because we live with Facebook’s algorithm doesn’t mean we fully understand it. And even for those who know that Facebook curates our News Feeds algorithmically, it’s difficult as a culture to get beyond some very old and deeply sedimented ways to think about how information gets to us.

      The public reaction to this research is proof of these persistent beliefs — a collective groan from our society as it adjusts to a culture that is algorithmically organized. Because social media, and Facebook most of all, truly violates a century-old distinction we know very well, between what were two, distinct kinds of information services. On the one hand, we had “trusted interpersonal information conduits” — the telephone companies, the post office. Users gave them information aimed for others and the service was entrusted to deliver that information. We expected them not to curate or even monitor that content, in fact we made it illegal to do otherwise; we expected that our communication would be delivered, for a fee, and we understood the service as the commodity, not the information it conveyed. On the other hand, we had “media content producers” — radio, film, magazines, newspapers, television, video games — where the entertainment they made for us felt like the commodity we paid for (sometimes with money, sometimes with our attention to ads), and it was designed to be as gripping as possible. We knew that producers made careful selections based on appealing to us as audiences, and deliberately played on our emotions as part of their design. We were not surprised that a sitcom was designed to be funny, even that the network might conduct focus group research to decide which ending was funnier (A/B testing?). But we would be surprised, outraged, to find out that the post office delivered only some of the letters addressed to us, in order to give us the most emotionally engaging mail experience.

        
      Now we find ourselves dealing with a third category. Facebook promises to connect person to person, entrusted with our messages to be delivered to a prescribed audience (now it’s sometimes one person, sometimes a friend list, sometimes all Facebook users who might want to find it). But then, as a part of its service, it provides the News Feed, which appears to be a running list of those posts but is increasingly a constructed subset, carefully crafted to be an engaging flow of material. The information coming in is entrusted interpersonal communication, but it then becomes the raw material for an emotionally engaging commodity, the News Feed. All comes in, but only some comes out. It is this quiet curation that is so new, that makes Facebook different from anything before. (And it makes any research that changes the algorithmic factors in order to withhold posts quite different from other kinds of research we know Facebook to have done, including the A/B testing of the site’s design, the study of Facebook activity to understand the dynamics of social ties, or the selective addition of political information to understand the effect on voter turnout – but would include their effort to study the power of social ties by manipulating users’ feeds.)

      And Facebook is complicit in this confusion, as they often present themselves as a trusted information conduit, and have been oblique about the way they curate our content into their commodity. If Facebook promised “the BEST of what your friends have to say,” then we might have to acknowledge that their selection process is and should be designed, tested, improved. That’s where this research seems problematic to some, because it is submerged in the mechanical workings of the News Feed, a system that still seems to promise to merely deliver what your friends are saying and doing. The gaming of that delivery, be it for “making the best service” or for “research,” is still a tactic that takes cover under its promise of mere delivery. Facebook has helped create the gap between expectation and reality that it has currently fallen into.

      That to me is what bothers people, about this research and about a lot of what Facebook does. I don’t think it is merely naive users not understanding that Facebook tweaks its algorithm, or that people are just souring on Facebook as a service. I think it’s an increasing, and increasingly apparent, ambivalence about what it is, and its divergence from what we think it is. Despite the cries of those most familiar with their workings, it takes a while, years, for a culture to adjust itself to the subtle workings of a new information system, and to stop expecting of it what traditional systems provided.

      For each form of media, we as a public can raise concerns about its influence. For the telephone system, it was about whether they were providing service fairly and universally: a conduit’s promise is that all users will have the opportunity to connect, and as a nation we forced the telephone system to ensure universal service, even when it wasn’t profitable. Their preferred design was acceptable only until it ran up against a competing concern: public access. For media content, we have little concern about being “emotionally manipulated” by a sitcom or a tear-jerker drama. But we do worry about that kind of emotional manipulation in news, like the fear mongering of cable news pundits. Here again, their preferred design is acceptable until it runs up against a competing concern: a journalistic obligation to the public interest. So what is the competing interest here? What kind of interventions are acceptable in an algorithmically curated platform, and what competing concern do they run up against?

      Is it naive to continue to want Facebook to be a trusted information conduit? Is it too late? Maybe so. Though I think there is still a different obligation when you’re delivering the communication of others — an obligation Facebook has increasingly foregone. Some of the discussion of this research suggests that the competing concern here is science: that the ethics are different because this manipulation was presented as scientific discovery, a knowledge project for which we have different standards and obligations. But, frankly, that’s a troublingly narrow view. Just because this algorithmic manipulation came to light because it was published as science doesn’t mean that it was the science that was the problem. The responsibility may extend well beyond, to Facebook’s fundamental practices.

      Is there any room for a public interest concern, like for journalism? Some have argued that Facebook and other social media are now a kind of quasi-public sphere. They not only serve our desire to interact with others socially, they are also important venues for public engagement and debate. The research on emotional contagion was conducted during the week of January 11-18, 2012. What was going on then, not just in the emotional lives of these users, but in the world around them? There was ongoing violence and protest in Syria. The Costa Concordia cruise ship ran aground in the Mediterranean. The U.S. Republican party was in the midst of its nomination process: Jon Huntsman dropped out of the race that week, and Rick Perry the day after. January 18th was the SOPA protest blackout day, something that was hotly (emotionally?) debated during the preceding week. Social media platforms like Facebook and Twitter were in many ways the primary venues for activism and broader discussion of this particular issue. Whether or not the posts that were excluded by this research pertained to any of these topics, there’s a bigger question at hand: does Facebook have an obligation to be fair-minded, or impartial, or representative, or exhaustive, in its selection of posts that address public concerns?

      The answers to these questions, I believe, are not clear. And this goes well beyond one research study, it is a much broader question about Facebook’s responsibility. But the intense response to this research, on the part of press, academics, and Facebook users, should speak to them. Maybe we latch onto specific incidents like a research intervention, maybe we grab onto scary bogeymen like the NSA, maybe we get hooked on critical angles on the problem like the debate about “free labor,” maybe we lash out only when the opportunity is provided like when Facebook tries to use our posts as advertising. But together, I think these represent a deeper discomfort about an information environment where the content is ours but the selection is theirs.

      -Contributed by Tarleton Gillespie, Cornell University Department of Communication-


      Algorithm [draft] [#digitalkeywords] Jun 25, 2014

      “What we are really concerned with when we invoke the “algorithmic” here is not the algorithm per se but the insertion of procedure into human knowledge and social experience. What makes something algorithmic is that it is produced by or related to an information system that is committed (functionally and ideologically) to the computational generation of knowledge or decisions.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Algorithm — Tarleton Gillespie, Cornell University

      In Keywords, Raymond Williams urges us to think about how our use of a term has changed over time. But the concern with many of these “digital keywords” is the simultaneous and competing uses of a term by different communities, particularly those inside and outside of technical professions, who seem often to share common words but speak different languages. Williams points to this concern too: “When we come to say ‘we just don’t speak the same language’ we mean something more general: that we have different immediate values or different kinds of valuation, or that we are aware, often intangibly, of different formations and distributions of energy and interest.” (11)

For “algorithm,” there is a sense that the technical communities, the social scientists, and the broader public are using the word in different ways. For software engineers, algorithms are often quite simple things; for the broader public they name something unattainably complex. For social scientists there is danger in the way “algorithm” lures us away from the technical meaning, offering an inscrutable artifact that nevertheless has some elusive and explanatory power (Barocas et al, 3). We find ourselves more ready to proclaim the impact of algorithms than to say what they are. I’m not insisting that critique requires settling on a singular meaning, or that technical meanings necessarily trump others. But we do need to be cognizant of the multiple meanings of “algorithm” as well as the type of discursive work it does in our own scholarship.

      algorithm as a technical solution to a technical problem

In the scholarly effort to pinpoint the values that are enacted, or even embedded, in computational technology, it may in fact not be the “algorithms” that we need be most concerned about — if what we meant by algorithm was restricted to software engineers’ use of the term. For their makers, “algorithm” refers specifically to the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome. MacCormick (2012), in an attempt to explain algorithms to a general audience, calls them “tricks,” (5) by which he means “tricks of the trade” more than tricks in the magical sense — or perhaps like magic, but as a magician understands it. An algorithm is a recipe composed in programmable steps; most of the “values” that concern us lie elsewhere in the technical systems and the work that produces them.

      For its designers, the “algorithm” comes after the generation of a “model,” i.e. the formalization of the problem and the goal in computational terms. So, the task of giving a user the most relevant search results for their queries might be operationalized into a model for efficiently calculating the combined values of pre-weighted objects in the index database, in order to improve the percentage likelihood that the user clicks on one of the first five results.[1] This is where the complex social activity and the values held about it are translated into a functional interaction of variables, indicators, and outcomes. Measurable relationships are posited as existing between some of these elements; a strategic target is selected, as a proxy for some broader social goal; a threshold is determined as an indication of success, at least for this iteration.
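To make this concrete, here is a minimal sketch of what such an operationalized model might look like. It is written in Python purely for illustration; every name, field, and weight in it is an assumption invented for the example, not a description of any actual search engine.

```python
# A toy "model" of relevance: each indexed page carries pre-assigned feature values
# for each term, relevance is their weighted combination, and "success" is defined
# as a click on one of the first five results. All structures here are hypothetical.

def score(page, query_terms, weights):
    """Combine the pre-weighted values recorded for each query term into one number."""
    total = 0.0
    for term in query_terms:
        for feature, value in page["features"].get(term, {}).items():
            total += weights.get(feature, 0.0) * value
    return total

def top_results(index, query_terms, weights, k=5):
    """Return the k highest-scoring pages; the model's target is a click on one of these."""
    return sorted(index, key=lambda p: score(p, query_terms, weights), reverse=True)[:k]

# A tiny two-page index, just to show the pieces fitting together.
index = [
    {"url": "a.example", "features": {"keyword": {"in_title": 1.0, "in_body": 3.0}}},
    {"url": "b.example", "features": {"keyword": {"in_title": 0.0, "in_body": 5.0}}},
]
print(top_results(index, ["keyword"], {"in_title": 3.0, "in_body": 1.0}))
```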

The “algorithm” that might follow, then, is merely the steps for aggregating those assigned values efficiently, or delivering the results rapidly, or identifying the strongest relationships according to some operationalized notion of “strong.” All is in the service of the model’s understanding of the data and what it represents, and in service of the model’s goal and how it has been formalized. There may be many algorithms that would reach the same result inside a given model, just like bubble sorts and shell sorts both put lists of words into alphabetical order. Engineers choose between them based on values such as how quickly they return the result, the load they impose on the system’s available memory, perhaps their computational elegance. The embedded values that make a sociological difference are probably more about the problem being solved, the way it has been modeled, the goal chosen, and the way that goal has been operationalized (Rieder).
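The point about interchangeable algorithms inside a single model can be shown directly. The sketch below, again illustrative Python rather than anyone’s production code, implements a bubble sort and a shell sort; both return the identical alphabetized list, and an engineer would choose between them on grounds of speed, memory, or elegance rather than outcome.

```python
# Two different algorithms, one model ("put these words in alphabetical order").

def bubble_sort(words):
    items = list(words)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def shell_sort(words):
    items = list(words)
    gap = len(items) // 2
    while gap > 0:
        for i in range(gap, len(items)):
            current, j = items[i], i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2
    return items

words = ["query", "algorithm", "model", "threshold"]
# Same result either way; only the path to it (and its cost) differs.
assert bubble_sort(words) == shell_sort(words) == sorted(words)
```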

Of course, simple alphabetical sorting may be a misleading example to use here. The algorithms we’re concerned about today are rarely designed to reach a single and certifiable answer, like a correctly alphabetized list. More common are algorithms that must choose one of many possible results, none of which are certifiably “correct.” Algorithm designers must instead achieve some threshold of operator or user satisfaction — understood in the model, perhaps, in terms of percent clicks on the top results, or percentage of correctly identified human faces from digital images.

This brings us to the second value-laden element around the algorithm. To design an algorithm that efficiently achieves a target goal (rather than reaching a known answer), engineers “train” it on a corpus of known data. This data has been in some way certified, either by the designers or by past user practices: this photo is of a human face, this photo is not; this search result has been selected by many users in response to this query, this one has not. The algorithm is then run on this data so that it may “learn” to pair queries and results found satisfactory in the past, or to distinguish images with faces from images without.
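A toy example may help fix the idea of “training,” with the caveat that real systems are vastly more elaborate: here a single cut-off is learned from a handful of labeled examples and then applied to new inputs. The feature, the labels, and the numbers are all invented for illustration.

```python
# Hypothetical training corpus: (feature_value, label) pairs the designers have certified,
# e.g. some measured property of an image and whether it shows a face.
training_data = [(0.9, "face"), (0.8, "face"), (0.3, "not_face"), (0.2, "not_face")]

def train_threshold(examples):
    """Learn a single cut-off halfway between the lowest 'face' and highest 'not_face' value."""
    faces = [x for x, label in examples if label == "face"]
    others = [x for x, label in examples if label == "not_face"]
    return (min(faces) + max(others)) / 2

def classify(x, threshold):
    """Apply the learned parameter to a new, unlabeled input."""
    return "face" if x >= threshold else "not_face"

threshold = train_threshold(training_data)
print(classify(0.75, threshold))  # judged entirely by what past certifications taught the system
```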

The values, assumptions, and workarounds that go into the selection and preparation of this training data may also be of much more importance to our sociological concerns than the algorithm learning from it. For example, the training data must be a reasonable approximation of the data the algorithm will operate on in the wild. The most common problem in algorithm design is that the new data turns out not to match the training data in some consequential way. Sometimes new phenomena emerge that the training data simply did not include and could not have anticipated; just as often, something important was overlooked as irrelevant, or was scrubbed from the training data in preparation for the development of the algorithm.
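One crude way such a mismatch might be noticed, sketched here with invented numbers and an arbitrary tolerance, is simply to check whether the data now arriving still resembles the corpus the parameters were learned from; nothing in the check says what the right response would be, or who decides.

```python
training_scores = [0.2, 0.3, 0.8, 0.9]  # feature values the designers certified and trained on
wild_scores = [0.7, 0.8, 0.85, 0.9]     # values arriving later, after some new phenomenon emerges

def mean(values):
    return sum(values) / len(values)

drift = abs(mean(wild_scores) - mean(training_scores))
if drift > 0.1:  # an arbitrary, illustrative tolerance
    print("incoming data no longer resembles the training data; learned parameters may mislead")
```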

      Furthermore, improving an algorithm is rarely about redesigning it. Rather, designers will “tune” an array of parameters and thresholds, each of which represents a tiny assessment or distinction. In search, this might mean the weight given to a word based on where it appears in a webpage, or assigned when two words appear in proximity, or given to words that are categorically equivalent to the query term. These values have been assigned and are already part of the training data, or are thresholds that can be dialed up or down in the algorithm’s calculation of which webpage has a score high enough to warrant ranking it among the results returned to the user.
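A sketch of this kind of tuning, with parameter names and values invented solely for illustration, might look like the following: the scoring logic stays put while one small weight is dialed up.

```python
params = {
    "weight_title": 3.0,      # weight for a query word appearing in a page title
    "weight_body": 1.0,       # weight for a query word appearing in the body text
    "weight_proximity": 2.0,  # bonus when two query words appear near each other
    "rank_threshold": 5.0,    # minimum combined score a page needs to be returned at all
}

def tune(parameters, name, factor):
    """Dial a single parameter up or down, leaving the rest of the configuration untouched."""
    adjusted = dict(parameters)
    adjusted[name] *= factor
    return adjusted

# e.g. after an evaluation round, the team decides title matches should count a little more.
params_v2 = tune(params, "weight_title", 1.1)
print(params_v2["weight_title"])  # roughly 3.3
```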

Finally, these exhaustively trained and finely tuned algorithms are instantiated inside of what we might call an application, which actually performs the functions we’re concerned with. For algorithm designers, the algorithm is the conceptual sequence of steps, which should be expressible in any computer language, or in human or logical language. It is instantiated in code, running on servers somewhere, attended to by other helper applications (Geiger 2014), triggered when a query comes in or an image is scanned. I find it easiest to think about the difference between the “book” in your hand and the “story” within it. These applications embody values as well, outside of their reliance on a particular algorithm.

To inquire into the implications of “algorithms,” if we meant what software engineers mean when they use the term, could only be something so picky as investigating the political implications of using a bubble sort or a shell sort — setting aside bigger questions like why “alphabetical” in the first place, or why train on this particular dataset. Perhaps there are lively insights to be had about the implications of different algorithms in this technical sense,[2] but by and large we in fact mean something else when we talk about algorithms as having “social implications.”

      algorithm as synecdoche

      While it is important to understand the technical specificity of the term, “algorithm” has now achieved some purchase in the broader public discourse about information technologies, where it is typically used to mean everything described in the previous section, combined. As Goffey puts it, “Algorithms act, but they do so as part of an ill-defined network of actions upon actions.” (19) “Algorithm” may in fact serve as an abbreviation for the sociotechnical assemblage that includes algorithm, model, target goal, data, training data, application, hardware — and connect it all to a broader social endeavor. Beyond the technical assemblage there are people at every point: people debating the models, cleaning the training data, designing the algorithms, tuning the parameters, deciding on which algorithms to depend on in which context. “These algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning, swapping out parts and experimenting with new arrangements… We need to examine the logic that guides the hands.” (Seaver 2013) Perhaps “algorithm” is just the name for one kind of socio-technical ensemble, part of a family of authoritative systems for knowledge production or decision-making: in this one, humans involved are rendered legible as data, are put into systematic / mathematical relationships with each other and with information, and then are given information resources based on calculated assessments of them and their inputs.

      But what is gained and lost by using “algorithm” this way? Calling the complex sociotechnical assemblage an “algorithm” avoids the need for the kind of expertise that could parse and understand the different elements; a reporter may not need to know the relationship between model, training data, thresholds, and application in order to call into question the impact of that “algorithm” in a specific instance. It also acknowledges that, when designed well, an algorithm is meant to function seamlessly as a tool; perhaps it can, in practice, be understood as a singular entity. Even algorithm designers, in their own discourse, shift between the more precise meaning, and using the term more broadly in this way.

      On the other hand, this conflation risks obscuring the ways in which political values may come in elsewhere than at what designers call the “algorithm.” This helps account for the way many algorithm designers seem initially surprised by the interest of sociologists in what they do — because they may not see the values in their “algorithms” (precisely understood) that we see in their algorithms (broadly understood), because questions of value are very much bracketed in the early decisions about how to operationalize a social activity into a model and into the miniscule, mathematical moments of assigning scores and tuning thresholds.

In our own scholarship, this kind of synecdoche is perhaps unavoidable. Like the journalists, most sociologists do not have the technical expertise or the access to investigate each of the elements of what they call the algorithm. But when we settle uncritically on this shiny, alluring term, we risk reifying the processes that constitute it. All the classic problems we face when trying to unpack a technology, the term packs for us. It becomes too easy to treat it as a single artifact, when in the cases we’re most interested in it’s rarely one algorithm, but many tools functioning together, sometimes different tools for different users.[3] It also tends to erase the people involved, downplay their role, and distance them from accountability. In the end, whether this synecdoche is acceptable depends on our intellectual aims. Calling all these social and technical elements “the algorithm” may give us a handle with which to grip what we want to closely interrogate; at the same time it can produce a “mystified abstraction” (Striphas 2012) that, for other research questions, it might be better to demystify.

      algorithm as talisman

      The information industries have found value in the term “algorithm” in their public-facing discursive efforts as well. To call their service or process an algorithm is to lend a set of associations to that service: mathematical, logical, impartial, consistent. Algorithms seem to have a “disposition towards objectivity” (Hillis et al 2013: 37); this objectivity is regularly performed as a feature of algorithmic systems. (Gillespie 2014) Conclusions that can be described as having been generated by an algorithm come with a powerful legitimacy, much the way statistical data bolsters scientific claims, with the human hands yet another step removed. It is a very different kind of legitimacy than one that rests on the subjective expertise of an editor or a consultant, though it is important not to assume that it trumps such claims in all cases. A market prediction that is “algorithmic” is different from a prediction that comes from an expert broker highly respected for their expertise and acumen; a claim about an emergent social norm in a community generated by an algorithm is different from one generated ethnographically. Each makes its own play for legitimacy, and implies its own framework for what legitimacy is (quantification or interpretation, mechanical distance or human closeness). But in the context of nearly a century of celebration of the statistical production of knowledge and longstanding trust in automated calculation over human judgment, the algorithmic does enjoy a particular cultural authority.

      More than that, the term offers the corporate owner a powerful talisman to ward off criticism, when companies must justify themselves and their services to their audience, explain away errors and unwanted outcomes, and justify and defend the increasingly significant roles they play in public life. (Gillespie 2014) Information services can point to “the algorithm” as having been responsible for particular results or conclusions, as a way to distance those results from the providers. (Morozov, 2013: 142) The term generates an entity that is somehow separate, the assembly line inside the factory, that can be praised as efficient or blamed for mistakes.

      The term “algorithm” is also quite often used as a stand-in for its designer or corporate owner. When a critic says “Facebook’s algorithm” they often mean Facebook and the choices it makes, some of which are made in code. This may be another way of making the earlier point, that the singular term stands for a complex sociotechnical assemblage: Facebook’s algorithm really means “Facebook,” and Facebook really means the people, things, priorities, infrastructures, aims, and discourses that animate them. But it may also be a political economic conflation: this is Facebook acting through its algorithm, intervening in an algorithmic way, building a business precisely on its ability to construct complex models of social/expressive activity, train on an immense corpus of data, tune countless parameters, and reach formalized goals extremely efficiently.

      Maybe saying “Facebook’s algorithm” and really meaning the choices and interventions made by Facebook the company into our social practices is a way to assign accountability (Diakopoulos 2013, Ziewitz 2011). It makes the algorithm theirs in a powerful way, and works to reduce the distance some providers put between “them” (their aims, their business model, their footprint, their responsibility) and “the algorithm” (as somehow autonomous from all that). On the other hand, conflating the algorithmic mechanism and the corporate owner may obscure the ways these two entities are not always aligned. It is crucial that we discern between things done by the algorithmic system and things done in other ways, such as the deletion of obscene images from a content platform, which is sometimes handled algorithmically and sometimes performed manually. (Gillespie 2012b) It is crucial to note slippage between a provider’s financial or political aims and the way the algorithmic system actually functions. And conflating algorithmic mechanism and corporate owner misses how some algorithmic approaches are common to multiple stakeholders, circulate across them, and embody a tactic that exceeds any one implementation.

      algorithmic as committed to procedure

In recent scholarship on the social significance of algorithms, it is common for the term to appear not as a noun but as an adjective. To talk about “algorithmic identity” (Cheney-Lippold), “algorithmic regulation” (O’Reilly), “algorithmic power” (Bucher), “algorithmic publics” (Leavitt), “algorithmic culture” (Striphas, 2010) or the “algorithmic turn” (Uricchio, 2011) is to highlight a social phenomenon that is driven by and committed to algorithmic systems — which include not just algorithms themselves, but also the computational networks in which they function, the people who design and operate them, the data (and users) on which they act, and the institutions that provide these services.

What we are really concerned with when we invoke the “algorithmic” here is not the algorithm per se but the insertion of procedure into human knowledge and social experience. What makes something algorithmic is that it is produced by or related to an information system that is committed (functionally and ideologically) to the computational generation of knowledge or decisions. This requires the formalization of social facts into measurable data and the “clarification” (Cheney-Lippold) of social phenomena into computational models that operationalize both problem and solution. These are often proxies for human judgment or action, meant to simulate it as nearly as possible. But the “algorithmic” intervenes in terms of step-by-step procedures that one (computer or human) can enact on this formalized information, such that it can be computed. This process is automated so that it can happen instantly, repetitively, and across many contexts, away from the guiding hand of its implementers. This is not the same as suggesting that knowledge is produced exclusively by a machine, abstracted from human agency or intervention. Information systems are always swarming with people; we just can’t always see them. (Downey, 2014; Kushner 2013) And an assembly line might be just as “algorithmic” in this sense of the word, or at least the parallels are important to consider. What is central is the commitment to procedure, and the way procedure distances its human operators from both the point of contact with others and the mantle of responsibility for the intervention they make. It is a principled commitment to the “if/then” logic of computation.
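That “if/then” commitment can be stated very literally. The fragment below is a deliberately small, hypothetical example of a social judgment (“should this post be shown?”) formalized as data and enacted as a fixed, repeatable procedure; the fields and cut-offs are inventions for the sake of the illustration.

```python
def decide(post):
    """A step-by-step rule that a machine, or a person following instructions, could apply."""
    if post["flag_count"] >= 3:
        return "hold for review"
    if post["predicted_relevance"] >= 0.5:
        return "show"
    return "demote"

print(decide({"flag_count": 0, "predicted_relevance": 0.8}))  # -> show
```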

      Yet what does “algorithmic” refer to, exactly? To put it another way, what is it that is not “algorithmic”? What kind of “regulation” is being condemned as insufficient when Tim O’Reilly calls for “algorithmic regulation”? It would be all too easy to invoke the algorithmic as simply the opposite of what is done subjectively or by hand, or of what can only be accomplished with persistent human oversight, or of what is beholden to and limited by context. To do so would draw too stark a contrast between the algorithm and something either irretrievably subjective (if we are glorifying the impartiality of the algorithmic) or warmly human (if we’re condemning the algorithmic for its inhumanity). If “algorithmic” market predictions and search results are produced by a complex assemblage of people, machines, and procedures, what makes their particular arrangement feel different than other ways of producing information, which are also produced by a complex assemblage of people, machines, and procedures, such that it makes sense to peg them as “algorithmic?” It is imperative to look closely at those pre- and non-algorithmic practices that precede or stand in contrast to those we posit as algorithmic, and recognize how they too strike a balance between the procedural and the subjective, the machinic and the human, the measured and the ineffable. And it is crucial that we continue to examine algorithmic systems and their providers and users ethnographically, to explore how the systemic and the ad hoc coexist and are managed within them.

      To highlight their automaticity and mathematical quality, then, is not to contrast algorithms to human judgment. Instead it is to recognize them as part of mechanisms that introduce and privilege quantification, proceduralization, and automation in human endeavors. Our concern for the politics of algorithms is an extension of worries about Taylorism and the automation of industrial labor; to actuarial accounting, the census, and the quantification of knowledge about people and populations; and to management theory and the dominion of bureaucracy. At the same time, we sometimes wish for more “algorithmic” interventions when the ones we face are discriminatory, nepotistic, and fraught with error; sometimes procedure is truly democratic. I’m reminded of the sensation of watching complex traffic patterns from a high vantage point: it is clear that this “algorithmic” system privileges the imposition of procedure, and users must in many ways accept it as a kind of provisional tyranny in order to even participate in such a complex social interaction. The elements can only be known in operational terms, so as to calculate the relations between them; every possible operationalized interaction within the system must be anticipated; and stakeholders often point to the system-ness of the system to explain success and explain away failure. The system always struggles with the tension between the operationalized aims and the way humanity inevitably undermines, alters, or exceeds those aims. At the same time, it’s not clear how to organize such complex behavior in any other way, and still have it be functional and fair. Commitment to the system and the complex scale at which it is expected to function makes us beholden to the algorithmic procedures that must manage it. From this vantage point, algorithms are merely the latest instantiation of the modern tension between ad hoc human sociality and procedural systemization — but one that is now powerfully installed as the beating heart of the network technologies we surround ourselves with and increasingly depend upon.


      Endnotes

1. This parallels Kowalski’s well-known definition of an algorithm as “logic + control”: “An algorithm can be regarded as consisting of a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used. The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency.” (Kowalski, 424) I prefer to use “model” because I want to reserve “logic” for the underlying premise of the entire algorithmic system and its deployment.

2. See Kockelman 2013 for a dense but superb example.

3. See Brian Christian, “The A/B Test: Inside the Technology That’s Changing the Rules of Business.” Wired, April 25, 2012. http://www.wired.com/2012/04/ff_abtesting/


      References

      Barocas, Solon, Sophie Hood, and Malte Ziewitz. 2013. “Governing Algorithms: A Provocation Piece.” Available at SSRN 2245322. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2245322

      Beer, David. 2009. “Power through the Algorithm? Participatory Web Cultures and the Technological Unconscious.” New Media & Society 11 (6): 985-1002.

      Bucher, T. 2012. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14 (7): 1164-80.

      Cheney-Lippold, J. 2011. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control.” Theory, Culture & Society 28 (6): 164-81.

      Diakopoulos, Nicholas. 2013. “Algorithmic Accountability Reporting: On the Investigation of Black Boxes.” A Tow/Knight Brief. Tow Center for Digital Journalism, Columbia Journalism School. http://towcenter.org/algorithmic-accountability-2/

      Downey, Gregory J. 2014. “Making Media Work: Time, Space, Identity, and Labor in the Analysis of Information and Communication Infrastructures.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A Foot, 141-66. Cambridge, MA: The MIT Press.

      Geiger, R. Stuart. 2014. “Bots, Bespoke, Code and the Materiality of Software Platforms.” Information, Communication & Society 17 (3): 342-56.

      Gillespie, Tarleton. 2012a. “Can an Algorithm Be Wrong?” Limn 1 (2). http://escholarship.org/uc/item/0jk9k4hj

      Gillespie, Tarleton. 2012b. “The Dirty Job of Keeping Facebook Clean.” Culture Digitally (Feb 22). http://culturedigitally.org/2012/02/the-dirty-job-of-keeping-facebook-clean/

      Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A Foot, 167-93. Cambridge, MA: The MIT Press.

      Gitelman, Lisa. 2006. Always Already New: Media, History and the Data of Culture. Cambridge, MA: MIT Press.

      Hillis, Ken, Michael Petit, and Kylie Jarrett. 2013. Google and the Culture of Search. Abingdon: Routledge.

      Kockelman, Paul. 2013. “The Anthropology of an Equation. Sieves, Spam Filters, Agentive Algorithms, and Ontologies of Transformation.” HAU: Journal of Ethnographic Theory 3 (3): 33-61.

      Kowalski, Robert. 1979. “Algorithm = Logic + Control.” Communications of the ACM 22 (7): 424-36.

      Kushner, S. 2013. “The Freelance Translation Machine: Algorithmic Culture and the Invisible Industry.” New Media & Society 15 (8): 1241-58.

      MacCormick, John. 2012. 9 Algorithms That Changed the Future. Princeton: Princeton University Press.

      Mager, Astrid. 2012. “Algorithmic Ideology: How Capitalist Society Shapes Search Engines.” Information, Communication & Society 15 (5): 769-87.

Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

O’Reilly, Tim. 2013. “Open Data and Algorithmic Regulation.” In Beyond Transparency: Open Data and the Future of Civic Innovation, edited by Brett Goldstein and Lauren Dyson. San Francisco, Calif.: Code for America Press. http://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/

      Rieder, Bernhard. 2012. “What Is in PageRank? A Historical and Conceptual Investigation of a Recursive Status Index.” Computational Culture 2. http://computationalculture.net/article/what_is_in_pagerank

      Seaver, Nick. 2013. “Knowing Algorithms.” Media in Transition 8, Cambridge, MA. http://nickseaver.net/papers/seaverMiT8.pdf 

Striphas, Ted. 2010. “How to Have Culture in an Algorithmic Age.” The Late Age of Print, June 14. http://www.thelateageofprint.org/2010/06/14/how-to-have-culture-in-an-algorithmic-age/

Striphas, Ted. 2012. “What Is an Algorithm?” Culture Digitally, Feb 1. http://culturedigitally.org/2012/02/what-is-an-algorithm/

      Uricchio, William. 2011. “The Algorithmic Turn: Photosynth, Augmented Reality and the Changing Implications of the Image.” Visual Studies 26 (1): 25-35.

Williams, Raymond. 1976/1983. Keywords: A Vocabulary of Culture and Society. 2nd ed. Oxford: Oxford University Press.

Ziewitz, Malte. 2011. “How to Think about an Algorithm? Notes from a Not Quite Random Walk.” Discussion paper for the symposium “Knowledge Machines between Freedom and Control,” September 29. http://ziewitz.org/papers/ziewitz_algorithm.pdf

      -Contributed by ,  Cornell University Department of Communication-


      Prototype [draft] [#digitalkeywords] Jun 19, 2014

      “…the material, technical and organizational elements of prototypes are always also potentially symbolic. Advocates within an engineering firm or a political campaign can turn them into stories. Outsiders such as journalists can also take them up and turn them into the elements of national or even global memes. In each case, particular sociotechnical configurations become available as potential visions of a larger and presumably better way of organizing society as a whole.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Prototype — Fred Turner, Stanford University

Silicon Valley is a land of prototypes. From cramped, back-room start-ups to the glass-walled cubicle farms of Apple and Oracle, engineers labor day and night to produce working models of new software and new devices on which to run it. These prototypes need not function especially well; indeed, they need hardly function at all. What they have to do is make a possible future visible. With a prototype in hand, a project ceases to be a pipedream. It becomes something an engineer, a manager, and a marketing team can get behind.

      But this is only one kind of prototype, and in many ways, it’s the easiest to describe. Silicon Valley produces others, sometimes alongside software and hardware, in the stories salesmen tell about their products, and sometimes well away from the digital factory floor, in the lives that engineers and their colleagues lead. When salesmen pitch a new iPhone or, say, new software for mapping your local neighborhood, they often also pitch a new vision of the social world. Their devices Will Change Human History For The Better – and you can glimpse the changes to come right there, these hucksters suggest, in the stories they tell. As they enter the marketplace, the technology-centered worlds these storytellers have talked into being become models for society at large. Likewise, when engineers and their colleagues gather at festivals like Burning Man, or even when they huddle in the tiny, under-financed, hyper-flexible teams that drive start up development, they engage in modeling and testing new forms of social organization, often self-consciously. Like the constellations of people and machines described in marketing campaigns, these modes of gathering have technologies at their center, but they are also prototypes in their own right – of an idealized form of society.

      These social prototypes present a puzzle for those who take “prototype” to be a digital key word: How is it that a term so closely wedded to engineering practice should also be so clearly applicable to the non-technical social world? Much of the answer depends on the work of hardware and software engineers, who have exported their modes of thinking and working far beyond the confines of Silicon Valley. But much also depends on the peculiarly American context in which these engineers work. In the United States, the concept of the “prototype” has a dual history. It is rooted in engineering practice, but it is also rooted in Protestant and especially Puritan theology. By briefly tracing these two traditions, I hope not only to excavate the history of the term, but through it, to begin to explain how and why Silicon Valley has itself become a model metropolis in the minds of many around the world.

      The Prototype in Software Engineering

      Within the world of software and computer engineering, the prototype is a relatively new arrival. In other industries, three-dimensional models of forthcoming products have been the norm for generations. Architects have long built scale models of houses, for instance, just as ship-makers have built scale models of their vessels. These models give three-dimensional life to measurements first defined on a blueprint, just as the blueprint gives two-dimensional form to ideas that emerged in conversations between the architect, the ship-maker, and their clients. For industries such as these, prototypes have long constituted an ordinary link in a chain of activities by which ideas become defined, modeled, and built.

      Until the late 1980s, most software architects approached a new project simply by attempting to define its features on paper in something called a “requirements document.”[1] Many still do today. One technical writer describes the process thus: “Take a 60-page requirements document. Bring 15 people into a room. Hand it out. Let them all read it.” [2] This process has a number of advantages. First, such documentation produces very precise specifications in a language that all developers can understand. Second, the document can be edited as the project evolves. Third, because it lives on paper and usually in a binder somewhere in an office, the continuously updated requirements document can serve as a repository, a passive reminder of what the team has agreed to do.

      Unfortunately, requirements documents can also leave developers unable to see their work whole. After handing out a large requirements document and letting everyone read it, the technical writer above says, “Now ask them what you’re building. You’re going to get 15 different answers.” Requirements documents can confuse developers as well as inform them. They can also leave out users. Developers routinely talk with their clients before drafting requirements documentation, but they often discover that users’ actual needs change as systems come online. Translating these changes into the requirements documents and then back again into the product can be complicated and time-consuming. Finally, diagrams do little to help systems developers and clients create a shared language in which to discuss these changes.[3]

      Enter the prototype. In a 1990 manual for developers entitled Prototyping, Roland Vonk argued that building a working if buggy software system could transform the requirements definition phase of system development. The prototype could become an object, like an architect’s model, around which engineers and clients could gather and through which they could articulate their needs to one another. It would speed development, improve communication, and help all parties arrive at a better definition of requirements for the system.

      It would also be fun. “Prototypes encourage play,” wrote one developer.[4] In the process, they also allow various stakeholders to make an emotional investment in the future suggested by the model at hand. Being by definition incomplete, prototypes encourage stakeholders to work at completing the object. Playing with prototypes helps stakeholders not only imagine, but to a limited degree, act out the future the prototype exemplifies. The experiential aspect of prototypes also renders the projects they represent especially available to the kinds of performances and stories out of which marketing campaigns are made. Consider this brief account, penned by the designer of a computer joystick:

      Our first prototypes gave [the client firm] Novint and its investors a first peek at what was an exciting, yet nascent, concept. We started with sexy prototypes (we call them appearance models) that captured a vision for what the product might become down the road. By sexy, I mean models in translucent white plastic and stainless steel that took their cues from the special effects found in science fiction movies that gamers enjoy. This created a target for what the final product could be and also helped the company build investor enthusiasm around the product idea.

      With…our first prototypes in hand, Novint could create a narrative about where it was headed with this product. It was a story that now had some tangible components and emotional appeal, thanks to the physical models prototyped by [our] designers. That was a promising start.[5]

      As Lucy Suchman and others have pointed out, information technologies represent “socio-material configurations, aligned into more or less durable forms.” [6] Prototypes represent sites at which those configurations come into being. Prototypes simultaneously make visible technical possibilities and actively convene new constituencies. These stakeholders can help bring the technology to market, but they also represent new social possibilities in their own right. The pattern in which they’ve gathered can itself become a model for future gatherings, within and even beyond the industry in question.

Daniel Kreiss has put this point succinctly: “While most of the literature on prototypes focuses on small-scale artifacts and research labs, there is no theoretical reason why prototypes do not also exist at the field level.”[7] Kreiss has tracked the use of what he calls “prototype campaigns” across several presidential voting cycles. In a 2013 paper for Culture Digitally, he explored two: the 2004 Howard Dean campaign and the 2008 Barack Obama campaign.[8] The Dean campaign took exceptional advantage of digital technologies. It recruited leading consultants and computer scientists, built powerful databases of voters, and established a visible web presence. Dean staffers called their work an “open-source” campaign. In the process, as Kreiss explains, they not only aligned various stakeholders around computers and data; they also turned their use of computers and data into evidence that they belonged at the center of a much larger cultural story. Through that story, they claimed the kind of cultural centrality and national legitimacy that most outsider candidates can only dream of.

      When the Dean campaign imploded, the Obama campaign was only too happy to adopt key members of his technology team and to claim that Obama too was running a bottom-up, technology enabled campaign. As Kreiss has shown, they were not. On the contrary, the Obama campaign used computers to centralize and manage the same kinds of data and power on which elections have always depended.[9] But as a symbol, the Obama campaign seemed to model a world emerging simultaneously in the computer industry, a world that Americans could imagine would be open, networked, individualistic and free.

      Change by Design

      There is a tension here between the sense of the campaign itself as a prototype and its depiction as a prototype. In Suchman’s account, information technologies generate social arrangements. In Kreiss’s, the sociotechnical arrangements of campaigns become elements of stories that in turn legitimate future actions. For the designers of the Novint joystick, prototypes play both roles. Taken together, these three accounts remind us that the material, technical and organizational elements of prototypes are always also potentially symbolic. Advocates within an engineering firm or a political campaign can turn them into stories. Outsiders such as journalists can also take them up and turn them into the elements of national or even global memes. In each case, particular sociotechnical configurations become available as potential visions of a larger and presumably better way of organizing society as a whole.

Within Silicon Valley, there are a host of organizations devoted to identifying and promulgating promising social prototypes. These include futurist outfits, research firms, and venture capitalists, among many others. Few firms transform engineering prototypes into social prototypes more self-consciously or more visibly than the Palo Alto-based design firm IDEO. Founded in 1991, the firm applies what it calls “design thinking” to every aspect of its client organizations, including individual products and brands, as well as software development, communication strategy, and organizational structure. For any given product, the firm can coordinate every aspect of the prototyping process at the engineering level and at the same time, it can link the devices and processes that emerge to new kinds of stories.

To get a feel for how IDEO transforms engineering prototypes into social prototypes, one need only consult CEO and President Tim Brown’s 2009 book, Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. Part business how-to, part advertisement for IDEO, the book outlines the firm’s philosophy of “design thinking” and shows how it has worked in a variety of specific cases. Within design thinking, prototyping occupies two places. The first would be easy for most anyone in Silicon Valley to recognize as an ordinary part of manufacturing. Prototyping stands as the opposite of “specification led, planning driven abstract thinking.”[10] IDEO founder David Kelley calls it “thinking with your hands.”[11] As Tim Brown points out, prototyping can be cheaper and faster than simply drawing diagrams, and it can engage users in shaping products as they emerge. Brown also argues that to enable prototypes to have real impact, designers need to embed them in stories. These “plausible fictions,” says Brown, help designers keep their end users in mind and help potential customers, within and outside the firm, imagine what they might do with the objects and processes being prototyped.[12]

      Thus far, Brown’s discussion of prototypes echoes conversations in most any prototype-oriented engineering space. But toward the end of his book, Brown takes a millenarian turn. “We are in the midst of an epochal shift in the balance of power,” he argues. Corporations have turned from producing goods to producing services and experiences. Customers have become something more than mere buyers. According to Brown, they have become collaborators, co-constructors of the product-experiences they acquire. Lest the reader imagine this to be a purely commercial transformation, Brown argues that “What is emerging is nothing less than a new social contract” – a contract so revolutionary that it could save the planet: “Left to its own, the vicious circle of design-manufacture-marketing-consumption will exhaust itself and Spaceship Earth will run out of fuel. With the active participation of people at every level, we may just be able to extend this journey for a while longer.”[13]

      The notion that consumer choice and political choice can be fused and that together, they can save humanity from itself, has haunted the marketing of digital media for more than twenty years. But there is more than marketing at stake in Change by Design. For Brown, prototyping has become a way to transform the local, everyday work of engineering into a mode of personal spiritual development. “Above all, think of life as a prototype,” writes Brown:

We can conduct experiments, make discoveries, and change our perspectives. We can look for opportunities to turn processes into projects that have tangible outcomes. We can learn how to take joy in the things we create whether they take the form of a fleeting experience or an heirloom that will last for generations. We can learn the reward comes in creation and re-creation, not just in the consumption of the world around us. Active participation in the process of creation is our right and our privilege. We can learn to measure the success of our ideas not by our bank accounts but by their impact on the world.[14]

      For engineers, prototypes must be things or stories. For analysts like Suchman and Kreiss, as well as for engineers, they can be constellations of people and things that become elements in narratives that in turn have marketing or political force. But for Brown, prototyping is something much more. Prototypes as he describes them belong to a way of looking at the world in which individuals constantly remake themselves, in which they test themselves against the world and if they find themselves wanting, improve themselves. Their quest for self-improvement in turn models the possibility of global transformation. In this vision, making a better product in the factory models and justifies the process of making a better self in everyday life. Making both together, through the process of participation and with proper attention to metrics and measurement, might even prevent the apocalyptic crash of Spaceship Earth.

      Puritan Typology

      Brown’s world-saving rhetoric is a staple of Silicon Valley. But it did not originate there. To understand how Brown and his readers could imagine themselves as prototypes, we need to turn backward in time, trek three thousand miles to the east, and revisit the Puritans of colonial New England. When the Pilgrims landed on Cape Cod, they brought with them an extraordinarily rich practice of Biblical exegesis that they called “typology.” In their view, as in the view of Biblical scholars all the way back to Saint Augustine, events in the Old Testament served as “types” – which we would now call “prototypes” – of events in the life of Christ recounted in the New Testament.[15] When Jonah spent three days in the belly of a whale, for example, he foreshadowed Christ’s burial and resurrection.[16] For the Puritans, types were not simply symbols in stories; rather, they represented God’s efforts to speak to fallen man through his limited senses. In this view, Jonah really did go down under water and when he rose up, he sent word out through time that soon Christ himself would go down under the earth and rise up too. The Bible simply recorded these facts.

      For the Puritans, typology did not stop at the level of the text. Rather, it offered them a vision of the world as a text. In the typological view, God had written his will into time. History consisted of a series of prophecies, rendered in the world as prototypical events, and fulfilled by later happenings. The Biblical exodus of the Israelites, for instance, foreshadowed the migration of the Puritans themselves from England to the New World. To their congregants, the Puritan ministers of Boston and Cambridge seemed to have been prefigured by the saints of the Bible and to serve as types of saints yet to come. Each individual’s life was little more than a single link in a chain of types. On the one hand, an individual such as Cotton Mather might see himself as the fulfillment of a mode of sainthood prophesied in the Bible. And on the other, his congregation might see him as an example to follow into a heavenly future. For the Puritans, history moved ever forward toward the completion of divine prophecy. But the type – or, again, prototype – pointed both forward and backward in time. The Puritan type was a hinge between past and present, mortal and divine.

For individual Puritans, the ability to read the world as a series of types carried enormous meaning. The doctrine of predestination, to which all New England Puritans subscribed, asserted that God had already decided whom to save and whom to send to hell. There was nothing anyone could do about their fate. This belief, however, set off an extraordinary effort among living Puritans to spot signs of their possible election.[17] After all, what God could be so cruel as to curse in life those He was about to save for all eternity? By the early 1700s, the signs of likely salvation included, most prominently, the ability to read the natural world of New England as a series of types, written into history by God.

      By now, you might have begun to wonder what, if anything, seventeenth and early eighteenth century theology might have to do with contemporary science and engineering. One answer is that it was in early eighteenth century New England that Newtonian physics met Puritan theology and it was there that American scientists and engineers first linked scientific progress and Puritan teleology. No one did this more gracefully than the minister Jonathan Edwards. Though many remember Edwards today as the author of the quintessential fire-and-brimstone sermon “Sinners in the Hands of an Angry God,” Edwards also wrote widely on science and philosophy. Throughout his life he kept a notebook in which he recorded his struggles to fuse the scientific and the divine. Published under the title Images or Shadows of Divine Things in 1948, the notebook simply records the types that Edwards believed he saw in nature.

      Consider the following, fairly typical entry:

      The whole material universe is preserved by gravity or attraction, or the mutual tendency of all bodies to each other. One part of the universe is hereby made beneficial to another; the beauty, harmony, and order, regular progress, life, and motion, and in short all the well-being of the whole frame depends on it. This is a type of love or charity in the spiritual world.[18]

For Edwards, gravity explicitly modeled God’s love for man. But implicitly, Newton’s discovery of gravity and Edwards’ own ability to recognize gravity as a type marked Newton and Edwards as potential members of God’s elect. In Edwards’ typological history, theology and science marched hand in hand toward the end of time, each illuminating God’s will and each producing saints to do that work.

      Which brings us back to Tim Brown, IDEO, and Silicon Valley. For some time now, analysts have suggested that the digital utopianism that continues to permeate Northern California came to life only there. In fact, an archeological exploration of the term “prototype” reveals that the habit of linking scientific and engineering practice to a historical teleology rooted in Christian theology can be traced back to New England, if not farther. As he declaims the power of design thinking to save the world, Tim Brown echoes the Puritan divines of centuries past. They too called on their readers to see their lives as prototypes and to see prototyping as a project that might save their souls and perhaps even the fallen world. Though Brown nowhere refers to God, his volume fairly aches with a longing to find a global meaning in his life and work, to know that he and IDEO are on the side of the angels, that they are not just fallen souls, marketing their wares as best they can, in the corrupt metropoles of capitalism.

      So What Are Prototypes?

With this brief history of Puritan typology in hand, we can begin to complicate the picture of prototypes that we have received from engineering. In computer science and many other disciplines, engineers build prototypes to look forward in time. They hope to anticipate challenges, reveal user desires, and engage stakeholders in the kinds of experiences that will generate buzz about the product, within and beyond the boundaries of the firm. In Silicon Valley, as elsewhere, intermediaries such as IDEO turn these constellations of technologies and people into elements in stories which can in turn serve to legitimate and even model new social forms. To the extent that we see prototypes as exclusively forward-looking, the process of turning engineering and its products into models of ideal social worlds may look simply like another stage in the conquest of everyday life by the information industries.

      Yet, as Puritan typology reminds us, prototypes always look backward in time as well as forward. The means by which they gather society and technology have their roots in worlds that precede and prefigure the futures they will call out for. And the particular mode of prototyping practiced by Tim Brown and many others in Silicon Valley has its roots not only in the world of engineering, but in the theology of Puritan New England. When he and others turn individual products and processes into prototypes of an ideal social world, they are following in the footsteps of Puritan divines like Jonathan Edwards. They are hardly Puritans in any theological sense. Yet they too are seeking to reveal a hidden order to everyday life. They too hope to uncover a hidden road to heaven and to take their place as saints along the way. They too are wondering whether they have been chosen. And they are offering prototyping to their readers as a method by which they too might discover their own election.

      The affordances of engineering prototypes assist in this process. Because prototypes are incomplete, half-cooked, in need of development, they solicit the collaboration of users and others in the building of a particular future. Because prototypes emerge from the laboratory or the office, they can seem to have no politics. They become enormously difficult to recognize as carriers of a particular teleology. Even as they begin to shadow forth a new social order, one in which engineers and marketers become ministers and the marketplace a kind of congregation, the sheer a-historicity of the prototype shields its makers and their structural ambitions from recognition.

      As scholars then, we need to ask new questions of the prototypes we encounter. We need to ask, How does a given prototype summon the past, as well as foreshadow a particular future? For what purposes? What sort of teleology does it invoke? And what sort of historiography does it require? How do prototypes leave the lab bench and the coder’s cubicle to become elements in stories about the world as a whole? How do engineering prototypes become social prototypes? And who wins when they do?

      By answering these questions, we might finally begin to stop thinking of our lives as prototypes and of new technologies as foreshadowings of a divine future.


      Endnotes

      1.  Vonk, Roland. Prototyping: The Effective Use of CASE Technology.  New York: Prentice Hall International, 1990, X-XI.

      2. Warfel, Todd Zaki. Prototyping. Rosenfeld Media; November 1, 2009; Safari Books Online, accessed May 12, 2014; section 1.3.

      3.  Vonk, Prototyping, X.

      4.  Warfel, Prototyping, 1.3

      5.  Edson, John. Design Like Apple: Seven Principles For Creating Insanely Great Products, Services, and Experiences; John Wiley & Sons; July 10, 2012; Safari Books Online, accessed May 12, 2014; section “Prototype and the Object.”

      6.  Suchman, Lucy, Randall Trigg, and Jeanette Blomberg. “Working Artefacts: Ethnomethods of the Prototype.” British Journal of Sociology 53, no. 2 (June 2002): 163-79; 163.

      7.  Kreiss, Daniel. “Political Prototypes: Why Performances and Narratives Matter,” Culture Digitally, http://culturedigitally.org/2013/11/political-prototypes-why-performances-and-narratives-matter/; posted November 22, 2013; accessed May 12, 2014.

      8.  Ibid.

      9.  Kreiss, Daniel. Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama.  New York: Oxford University Press, 2012.

      10.  Brown, Tim, and Barry Katz. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. New York: Harper Business, 2009, 89.

      11.  Kelly, quoted ibid.

      12.  Brown, Change by Design, 94.

      13.  Ibid., 178.

      14.  Ibid., 241.

      15.  Brumm, Ursula. American Thought and Religious Typology.  New Brunswick, N.J.: Rutgers University Press, 1970, 26

      16.  Perry Miller, “Introduction,” in Edwards, Jonathan, and Perry Miller. Images or Shadows of Divine Things.  New Haven: Yale Univ. Press, 1948, 1-42; 6.

      17.  Ibid., 27.

      18.  Edwards, Jonathan, Images or Shadows of Divine Things, entry 79, page 79.

      -Contributed by ,  Stanford University Department of Communication-

