Prototype — Fred Turner, Stanford University
Silicon Valley is a land of prototypes. From cramped, back-room start-ups to the glass-walled cubicle farms of Apple and Oracle, engineers labor day and night to produce working models of new software and new devices on which to run it. These prototypes need not function especially well; they may hardly function at all. What they have to do is make a possible future visible. With a prototype in hand, a project ceases to be a pipe dream. It becomes something an engineer, a manager, and a marketing team can get behind.
But this is only one kind of prototype, and in many ways, it’s the easiest to describe. Silicon Valley produces others, sometimes alongside software and hardware, in the stories salesmen tell about their products, and sometimes well away from the digital factory floor, in the lives that engineers and their colleagues lead. When salesmen pitch a new iPhone or, say, new software for mapping your local neighborhood, they often also pitch a new vision of the social world. Their devices Will Change Human History For The Better – and you can glimpse the changes to come right there, these hucksters suggest, in the stories they tell. As they enter the marketplace, the technology-centered worlds these storytellers have talked into being become models for society at large. Likewise, when engineers and their colleagues gather at festivals like Burning Man, or even when they huddle in the tiny, under-financed, hyper-flexible teams that drive start-up development, they engage in modeling and testing new forms of social organization, often self-consciously. Like the constellations of people and machines described in marketing campaigns, these modes of gathering have technologies at their center, but they are also prototypes in their own right – of an idealized form of society.
These social prototypes present a puzzle for those who take “prototype” to be a digital key word: How is it that a term so closely wedded to engineering practice should also be so clearly applicable to the non-technical social world? Much of the answer depends on the work of hardware and software engineers, who have exported their modes of thinking and working far beyond the confines of Silicon Valley. But much also depends on the peculiarly American context in which these engineers work. In the United States, the concept of the “prototype” has a dual history. It is rooted in engineering practice, but it is also rooted in Protestant and especially Puritan theology. By briefly tracing these two traditions, I hope not only to excavate the history of the term, but through it, to begin to explain how and why Silicon Valley has itself become a model metropolis in the minds of many around the world.
The Prototype in Software Engineering
Within the world of software and computer engineering, the prototype is a relatively new arrival. In other industries, three-dimensional models of forthcoming products have been the norm for generations. Architects have long built scale models of houses, for instance, just as ship-makers have built scale models of their vessels. These models give three-dimensional life to measurements first defined on a blueprint, just as the blueprint gives two-dimensional form to ideas that emerged in conversations between the architect, the ship-maker, and their clients. For industries such as these, prototypes have long constituted an ordinary link in a chain of activities by which ideas become defined, modeled, and built.
Until the late 1980s, most software architects approached a new project simply by attempting to define its features on paper in something called a “requirements document.” Many still do today. One technical writer describes the process thus: “Take a 60-page requirements document. Bring 15 people into a room. Hand it out. Let them all read it.”  This process has a number of advantages. First, such documentation produces very precise specifications in a language that all developers can understand. Second, the document can be edited as the project evolves. Third, because it lives on paper and usually in a binder somewhere in an office, the continuously updated requirements document can serve as a repository, a passive reminder of what the team has agreed to do.
Unfortunately, requirements documents can also leave developers unable to see their work whole. After handing out a large requirements document and letting everyone read it, the technical writer above says, “Now ask them what you’re building. You’re going to get 15 different answers.” Requirements documents can confuse developers as well as inform them. They can also leave out users. Developers routinely talk with their clients before drafting requirements documentation, but they often discover that users’ actual needs change as systems come online. Translating these changes into the requirements documents and then back again into the product can be complicated and time-consuming. Finally, diagrams do little to help systems developers and clients create a shared language in which to discuss these changes.
Enter the prototype. In a 1990 manual for developers entitled Prototyping, Roland Vonk argued that building a working if buggy software system could transform the requirements definition phase of system development. The prototype could become an object, like an architect’s model, around which engineers and clients could gather and through which they could articulate their needs to one another. It would speed development, improve communication, and help all parties arrive at a better definition of requirements for the system.
It would also be fun. “Prototypes encourage play,” wrote one developer. In the process, they also allow various stakeholders to make an emotional investment in the future suggested by the model at hand. Being by definition incomplete, prototypes encourage stakeholders to work at completing the object. Playing with prototypes helps stakeholders not only imagine, but to a limited degree, act out the future the prototype exemplifies. The experiential aspect of prototypes also renders the projects they represent especially available to the kinds of performances and stories out of which marketing campaigns are made. Consider this brief account, penned by the designer of a computer joystick:
Our first prototypes gave [the client firm] Novint and its investors a first peek at what was an exciting, yet nascent, concept. We started with sexy prototypes (we call them appearance models) that captured a vision for what the product might become down the road. By sexy, I mean models in translucent white plastic and stainless steel that took their cues from the special effects found in science fiction movies that gamers enjoy. This created a target for what the final product could be and also helped the company build investor enthusiasm around the product idea.
With…our first prototypes in hand, Novint could create a narrative about where it was headed with this product. It was a story that now had some tangible components and emotional appeal, thanks to the physical models prototyped by [our] designers. That was a promising start.
As Lucy Suchman and others have pointed out, information technologies represent “socio-material configurations, aligned into more or less durable forms.” Prototypes are sites at which those configurations come into being. Prototypes simultaneously make visible technical possibilities and actively convene new constituencies. These stakeholders can help bring the technology to market, but they also represent new social possibilities in their own right. The pattern in which they’ve gathered can itself become a model for future gatherings, within and even beyond the industry in question.
Daniel Kreiss has put this point succinctly: “While most of the literature on prototypes focuses on small-scale artifacts and research labs, there is no theoretical reason why prototypes do not also exist at the field level.” Kreiss has tracked the use of what he calls “prototype campaigns” across several presidential voting cycles. In a 2013 paper for Culture Digitally, he explored two: the Howard Dean campaign of 2004 and the Barack Obama campaign of 2008. The Dean campaign took exceptional advantage of digital technologies. It recruited leading consultants and computer scientists, built powerful databases of voters, and established a visible web presence. Dean staffers called their work an “open-source” campaign. In the process, as Kreiss explains, they not only aligned various stakeholders around computers and data; they also turned their use of computers and data into evidence that they belonged at the center of a much larger cultural story. Through that story, they claimed the kind of cultural centrality and national legitimacy that most outsider candidates can only dream of.
When the Dean campaign imploded, the Obama campaign was only too happy to adopt key members of Dean’s technology team and to claim that Obama too was running a bottom-up, technology-enabled campaign. As Kreiss has shown, it was not. On the contrary, the Obama campaign used computers to centralize and manage the same kinds of data and power on which elections have always depended. But as a symbol, the Obama campaign seemed to model a world emerging simultaneously in the computer industry, a world that Americans could imagine would be open, networked, individualistic and free.
Change by Design
There is a tension here between the sense of the campaign itself as a prototype and its depiction as a prototype. In Suchman’s account, information technologies generate social arrangements. In Kreiss’s, the sociotechnical arrangements of campaigns become elements of stories that in turn legitimate future actions. For the designers of the Novint joystick, prototypes play both roles. Taken together, these three accounts remind us that the material, technical and organizational elements of prototypes are always also potentially symbolic. Advocates within an engineering firm or a political campaign can turn them into stories. Outsiders such as journalists can also take them up and turn them into the elements of national or even global memes. In each case, particular sociotechnical configurations become available as potential visions of a larger and presumably better way of organizing society as a whole.
Within Silicon Valley, there are a host of organizations devoted to identifying and promulgating promising social prototypes. These include futurist outfits, research firms, and venture capitalists, among many others. Few firms transform engineering prototypes into social prototypes more self-consciously or more visibly than the Palo Alto-based design firm IDEO. Founded in 1991, the firm applies what it calls “design thinking” to every aspect of its client organizations, including individual products and brands, as well as software development, communication strategy, and organizational structure. For any given product, the firm can coordinate every aspect of the prototyping process at the engineering level and at the same time, it can link the devices and processes that emerge to new kinds of stories.
To get a feel for how IDEO transforms engineering prototypes into social prototypes, one need only consult CEO and President Tim Brown’s 2009 book, Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. Part business how-to, part advertisement for IDEO, the book outlines the firm’s philosophy of “design thinking” and shows how it has worked in a variety of specific cases. Within design thinking, prototyping occupies two places. The first would be easy for most anyone in Silicon Valley to recognize as an ordinary part of manufacturing. Prototyping stands as the opposite of “specification-led, planning-driven abstract thinking.” IDEO founder David Kelley calls it “thinking with your hands.” As Tim Brown points out, prototyping can be cheaper and faster than simply drawing diagrams, and it can engage users in shaping products as they emerge. Brown also argues that to enable prototypes to have real impact, designers need to embed them in stories. These “plausible fictions,” says Brown, help designers keep their end users in mind and help potential customers, within and outside the firm, imagine what they might do with the objects and processes being prototyped.
Thus far, Brown’s discussion of prototypes echoes conversations in most any prototype-oriented engineering space. But toward the end of his book, Brown takes a millenarian turn. “We are in the midst of an epochal shift in the balance of power,” he argues. Corporations have turned from producing goods to producing services and experiences. Customers have become something more than mere buyers. According to Brown, they have become collaborators, co-constructors of the product-experiences they acquire. Lest the reader imagine this to be a purely commercial transformation, Brown argues that “What is emerging is nothing less than a new social contract” – a contract so revolutionary that it could save the planet: “Left to its own, the vicious circle of design-manufacture-marketing-consumption will exhaust itself and Spaceship Earth will run out of fuel. With the active participation of people at every level, we may just be able to extend this journey for a while longer.”
The notion that consumer choice and political choice can be fused and that together, they can save humanity from itself, has haunted the marketing of digital media for more than twenty years. But there is more than marketing at stake in Change by Design. For Brown, prototyping has become a way to transform the local, everyday work of engineering into a mode of personal spiritual development. “Above all, think of life as a prototype,” writes Brown:
We can conduct experiments, make discoveries, and change our perspectives. We can look for opportunities to turn processes into projects that have tangible outcomes. We can learn how to take joy in the things we create whether they take the form of a fleeting experience or an heirloom that will last for generations. We can learn the reward comes in creation and re-creation, not just in the consumption of the world around us. Active participation in the process of creation is our right and our privilege. We can learn to measure the success of our ideas not by our bank accounts but by their impact on the world.
For engineers, prototypes may be things or stories. For analysts like Suchman and Kreiss, as well as for engineers, they can be constellations of people and things that become elements in narratives that in turn have marketing or political force. But for Brown, prototyping is something much more. Prototypes as he describes them belong to a way of looking at the world in which individuals constantly remake themselves, in which they test themselves against the world and if they find themselves wanting, improve themselves. Their quest for self-improvement in turn models the possibility of global transformation. In this vision, making a better product in the factory models and justifies the process of making a better self in everyday life. Making both together, through the process of participation and with proper attention to metrics and measurement, might even prevent the apocalyptic crash of Spaceship Earth.
Brown’s world-saving rhetoric is a staple of Silicon Valley. But it did not originate there. To understand how Brown and his readers could imagine themselves as prototypes, we need to turn backward in time, trek three thousand miles to the east, and revisit the Puritans of colonial New England. When the Pilgrims landed on Cape Cod, they brought with them an extraordinarily rich practice of Biblical exegesis that they called “typology.” In their view, as in the view of Biblical scholars all the way back to Saint Augustine, events in the Old Testament served as “types” – which we would now call “prototypes” – of events in the life of Christ recounted in the New Testament. When Jonah spent three days in the belly of a whale, for example, he foreshadowed Christ’s burial and resurrection. For the Puritans, types were not simply symbols in stories; rather, they represented God’s efforts to speak to fallen man through his limited senses. In this view, Jonah really did go down under water and when he rose up, he sent word out through time that soon Christ himself would go down under the earth and rise up too. The Bible simply recorded these facts.
For the Puritans, typology did not stop at the level of the text. Rather, it offered them a vision of the world as a text. In the typological view, God had written his will into time. History consisted of a series of prophecies, rendered in the world as prototypical events, and fulfilled by later happenings. The Biblical exodus of the Israelites, for instance, foreshadowed the migration of the Puritans themselves from England to the New World. To their congregants, the Puritan ministers of Boston and Cambridge seemed to have been prefigured by the saints of the Bible and to serve as types of saints yet to come. Each individual’s life was little more than a single link in a chain of types. On the one hand, an individual such as Cotton Mather might see himself as the fulfillment of a mode of sainthood prophesied in the Bible. And on the other, his congregation might see him as an example to follow into a heavenly future. For the Puritans, history moved ever forward toward the completion of divine prophecy. But the type – or, again, prototype – pointed both forward and backward in time. The Puritan type was a hinge between past and present, mortal and divine.
For individual Puritans, the ability to read the world as a series of types carried enormous meaning. The doctrine of predestination, to which all New England Puritans subscribed, asserted that God had already decided whom to save and whom to send to hell. There was nothing anyone could do about their fate. This belief, however, set off an extraordinary effort among living Puritans to spot signs of their possible election. After all, what God could be so cruel as to curse in life those He was about to save for all eternity? By the early 1700s, the signs of likely salvation prominently included the ability to read the natural world of New England as a series of types, written into history by God.
By now, you might have begun to wonder what, if anything, seventeenth- and early eighteenth-century theology might have to do with contemporary science and engineering. One answer is that it was in early eighteenth-century New England that Newtonian physics met Puritan theology, and it was there that American scientists and engineers first linked scientific progress and Puritan teleology. No one did this more gracefully than the minister Jonathan Edwards. Though many remember Edwards today as the author of the quintessential fire-and-brimstone sermon “Sinners in the Hands of an Angry God,” Edwards also wrote widely on science and philosophy. Throughout his life he kept a notebook in which he recorded his struggles to fuse the scientific and the divine. Published under the title Images or Shadows of Divine Things in 1948, the notebook simply records the types that Edwards believed he saw in nature.
Consider the following, fairly typical entry:
The whole material universe is preserved by gravity or attraction, or the mutual tendency of all bodies to each other. One part of the universe is hereby made beneficial to another; the beauty, harmony, and order, regular progress, life, and motion, and in short all the well-being of the whole frame depends on it. This is a type of love or charity in the spiritual world.
For Edwards, gravity explicitly modeled God’s love for man. But implicitly, Newton’s discovery of gravity, and Edwards’ own ability to recognize gravity as a type, marked Newton and Edwards as potential members of God’s elect. In Edwards’ typological history, theology and science marched hand in hand toward the end of time, each illuminating God’s will and each producing saints to do that work.
Which brings us back to Tim Brown, IDEO, and Silicon Valley. For some time now, analysts have suggested that the digital utopianism that continues to permeate Northern California came to life only there. In fact, an archeological exploration of the term “prototype” reveals that the habit of linking scientific and engineering practice to a historical teleology rooted in Christian theology can be traced back to New England, if not farther. As he declaims the power of design thinking to save the world, Tim Brown echoes the Puritan divines of centuries past. They too called on their readers to see their lives as prototypes and to see prototyping as a project that might save their souls and perhaps even the fallen world. Though Brown nowhere refers to God, his volume fairly aches with a longing to find a global meaning in his life and work, to know that he and IDEO are on the side of the angels, that they are not just fallen souls, marketing their wares as best they can, in the corrupt metropoles of capitalism.
So What Are Prototypes?
With this brief history of Puritan typology in hand, we can begin to complicate the picture of prototypes that we have received from engineering. In computer science and many other disciplines, engineers build prototypes to look forward in time. They hope to anticipate challenges, reveal user desires, and engage stakeholders in the kinds of experiences that will generate buzz about the product, within and beyond the boundaries of the firm. In Silicon Valley, as elsewhere, intermediaries such as IDEO turn these constellations of technologies and people into elements in stories, which can in turn serve to legitimate and even model new social forms. To the extent that we see prototypes as exclusively forward-looking, the process of turning engineering and its products into models of ideal social worlds may look simply like another stage in the conquest of everyday life by the information industries.
Yet, as Puritan typology reminds us, prototypes always look backward in time as well as forward. The means by which they gather society and technology have their roots in worlds that precede and prefigure the futures they call forth. And the particular mode of prototyping practiced by Tim Brown and many others in Silicon Valley has its roots not only in the world of engineering, but in the theology of Puritan New England. When he and others turn individual products and processes into prototypes of an ideal social world, they are following in the footsteps of Puritan divines like Jonathan Edwards. They are hardly Puritans in any theological sense. Yet they too are seeking to reveal a hidden order to everyday life. They too hope to uncover a hidden road to heaven and to take their place as saints along the way. They too are wondering whether they have been chosen. And they are offering prototyping to their readers as a method by which they too might discover their own election.
The affordances of engineering prototypes assist in this process. Because prototypes are incomplete, half-cooked, in need of development, they solicit the collaboration of users and others in the building of a particular future. Because prototypes emerge from the laboratory or the office, they can seem to have no politics. They become enormously difficult to recognize as carriers of a particular teleology. Even as they begin to shadow forth a new social order, one in which engineers and marketers become ministers and the marketplace a kind of congregation, the sheer ahistoricity of the prototype shields its makers and their structural ambitions from recognition.
As scholars, then, we need to ask new questions of the prototypes we encounter. We need to ask: How does a given prototype summon the past, as well as foreshadow a particular future? For what purposes? What sort of teleology does it invoke? And what sort of historiography does it require? How do prototypes leave the lab bench and the coder’s cubicle to become elements in stories about the world as a whole? How do engineering prototypes become social prototypes? And who wins when they do?
By answering these questions, we might finally begin to stop thinking of our lives as prototypes and of new technologies as foreshadowings of a divine future.
1. Vonk, Roland. Prototyping: The Effective Use of CASE Technology. New York: Prentice Hall International, 1990, X-XI.
2. Warfel, Todd Zaki. Prototyping. Rosenfeld Media, November 1, 2009; Safari Books Online, accessed May 12, 2014; section 1.3.
3. Vonk, Prototyping, X.
4. Warfel, Prototyping, 1.3.
5. Edson, John. Design Like Apple: Seven Principles for Creating Insanely Great Products, Services, and Experiences. John Wiley & Sons, July 10, 2012; Safari Books Online, accessed May 12, 2014; section “Prototype and the Object.”
6. Suchman, Lucy, Randall Trigg, and Jeanette Blomberg. “Working Artefacts: Ethnomethods of the Prototype.” British Journal of Sociology 53, no. 2 (June 2002): 163-79; 163.
7. Kreiss, Daniel. “Political Prototypes: Why Performances and Narratives Matter,” Culture Digitally, http://culturedigitally.org/2013/11/political-prototypes-why-performances-and-narratives-matter/; posted November 22, 2013; accessed May 12, 2014.
8. Kreiss, Daniel. Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama. New York: Oxford University Press, 2012.
9. Brown, Tim, and Barry Katz. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. New York: Harper Business, 2009, 89.
10. Kelley, quoted ibid.
11. Brown, Change by Design, 94.
12. Ibid., 178.
13. Ibid., 241.
14. Brumm, Ursula. American Thought and Religious Typology. New Brunswick, N.J.: Rutgers University Press, 1970, 26.
15. Miller, Perry. “Introduction.” In Jonathan Edwards, Images or Shadows of Divine Things, ed. Perry Miller. New Haven: Yale University Press, 1948, 1-42; 6.
16. Ibid., 27.
17. Edwards, Images or Shadows of Divine Things, entry 79, page 79.
-Contributed by Fred Turner, Stanford University Department of Communication-
Memory — Steven Schrag, University of Pennsylvania
“If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder.”
- Plato, The Phaedrus
“Reminders. Now nothing slips your mind.”
- Apple Inc., “OS X Apps”
Memory – from the Latin memoria (the faculty of remembering, remembrance, a historical account), and “mnemonic” from Mnemosyne, mother of the nine Muses in Greek myth – is one of the most fundamental concepts of human identity, and one of its oldest technologies. It is a process, at both the individual and collective level, of narrating and making sense of experience, of storage and recovery. While computer memory (itself an expansive category of devices used to store and recall data or programs) is but one technology of memory among many, increasingly ubiquitous digital data storage has had a profound effect on contemporary practices of history and remembrance – and even on the way humans construct and perceive their identities. Discussions of a “modernity that forgets” or an “Internet that remembers,” however, often risk conflating individual cognitive memory, collective and cultural memory, history, storage media, and the archive.
Memory, like identity, is a polysemic term expressed as a series of dualities. Both publicly and privately constructed, it comprises the particular and the universal, the natural faculty and the artificial mnemonic, the internal and the external. Historically construed as an art, practiced as a technique in oral societies, retained in objects and architectures, sites and realms, it both mediates (and is mediated by) the analog structures of the brain and the digital records of the hard drive. This essay will outline several of these dualities – the paradoxical relation between remembrance and the archiving of the past, the tension between technologically induced amnesia and hypermnesia, and the emergent/ever-present gaps between mnemonic persistence and ephemerality that shape social structures of domination and control – and address the question of whether digital memory interacts with cognitive and cultural memory as a surrogate or as a symbiote.
Paradox and Prosthetic
“The paradox of a culture which manifests so many symptoms of hypermnesia and which yet at the same time is post-mnemonic is a paradox that is resolvable once we see the causal relationship between these two features. Our world is hypermnesic in many of its cultural manifestations, and post-mnemonic in the structures of the political economy. The cultural symptoms of hypermnesia are caused by a political-economic system which systemically generates a post-mnemonic culture – a Modernity which forgets.”
- Paul Connerton, How Modernity Forgets
“The most common transformation of memory concerns what has been regarded generally as memory undone – amnesia or forgetting. How memories are erased, forgotten, or willed absent has come to be seen as equally important to the ways in which memories are set in place.”
- Barbie Zelizer, Reading the Past Against the Grain: The Shape of Memory Studies
Conversations about the relationship between technology and memory stretch back to Plato’s injunction in the Phaedrus. The act of archiving (of an email on a server, a file in a cabinet, or the commitment of an argument to paper) is fundamentally an act of forgetting, of “externalizing memory” via storage media rather than committing it to biological memory. Walter Ong, who considers Plato’s polemic within a larger history of literacy, casts the mnemonic technology of writing as the catalyst for a fundamental epistemic shift between cultures of orality and literacy; however, unlike Plato, he claims that writing can be both destructive and reconstitutive of memory (1982). Ong thus highlights a contradiction inherent in prosthetic memory – the same contradiction Derrida pathologizes as “archive fever,” an endless reiteration akin to the death-drive. Memory, an active process that revives and recreates, calls to mind from the archive that which it archives. Thus the act of remembering, in (re)constructing and (re)mediating the present, traps us paradoxically in the past.
This paradox of memory replicates the ambiguity of the science-fictional “prosthetic,” perceived either as a liberating cybernetic extension of the self or the dangerous, disembodied Other of cyberpunk. The cyborg, whose embodied and machine-prosthetic memory function as a holistic ubiquitous-feedback mechanism, forms a hybrid communication-control system – the endpoint of which would be, perhaps, a transcendent, McLuhan-esque global consciousness. In striking contrast, the postmodern mythology of cyberpunk asks: are the memories we experience actually ours? For a scholarly understanding of memory, the answer is always both yes and no – as uncertain as Deckard’s status as human or Replicant at the conclusion of Blade Runner – but unlike the hopeful hybridity of Haraway’s cyborg, the cyberpunk myth (epitomized by William Gibson’s foundational novels) separates the “consensual hallucination” of cyberspace from “meatspace.” Scholars of memory studies describe uniquely modern forms of cultural amnesia: the loss of the art of memory amid the displaced rush of liquid modern life. Pierre Nora places history and memory in conflict, narrating the “conquest” of memory by an ever-accelerating history, a vast standing-reserve of documents piled skyscraper-high, burying the past in an act of archival “terrorism.” These fears of amnesia, (dis)embodied in “neuromantic” cyberpunk mythology, confront the ‘otherness’ of prosthetic memory (Landsberg 1996; Csicsery-Ronay 1988). In a Cartesian schism of virtualized mind and subjugated meat, the cyberpunk myth depicts the self as “the victim…helpless and sad, against the powers of exteriorized mind” (Csicsery-Ronay 1988, p. 277).
Oblivion and the Archive
“Who controls the past controls the future; who controls the present controls the past.”
- George Orwell, 1984
“The dead past is just another name for the living present. What if you focus the chronoscope in the past of one-hundredth of a second ago? Aren’t you watching the present?”
- Isaac Asimov, “The Dead Past”
Discussions of digital memory often begin with the vision of Vannevar Bush, the "memex" (memory-index) – a permanent, electromechanical archive that would link documents to each other by means of associative trails and annotations, resembling the hyperlinked structure of today's Internet. Bush's own position on whether the documents within the memex should be permanent changed over time, however; his unpublished "Memex II" addresses the need for "a readily alterable record" whose entries can be rewritten or deleted. But with alterability comes the threat of revisionism, whose purest incarnation is the infinitely alterable "memory hole" of Orwell's 1984: a granular control mechanism into which every inconvenient document is deposited and made to disappear – or reappear – in service of an official historical narrative. Specters of genocide and trauma haunt us, imploring that we "never forget," yet Avishai Margalit calls for an "ethics of memory" that ensures that the descendants of genocidal trauma do not find themselves shackled by the duty to commemorate (2002). Such concerns echo Plato's condemnation of writing: that the text can only repeat its "one unvarying answer" to future questioners, lacking the fluidity of transitory orality, unable to evolve and learn. These benefits of forgetting – its potential for redemption, regeneration, and reconciliation, its hope for radical change that "escapes" the past, and its evolutionary capacity for public deliberation and production of knowledge – create tension at the ambiguous interface between the transitory individual and the enduring collective.
Jeffrey Rosen and Viktor Mayer-Schönberger similarly argue for the "virtue" of forgetting in counterposition to the threat of permanent "comprehensive memory": "all citizens face the difficulty of escaping their past now that the Internet records everything and forgets nothing" (Mayer-Schönberger 2009; Rosen 2010). Individuals' past preferences and actions, archived by ubiquitous computing technologies – imperfectly contextualized, in ways that often prove misleading or even inaccurate – haunt them in the present, exposing previous transgressions to public and state scrutiny and foreclosing on the possibility of rehabilitation or even of change. Rosen and Mayer-Schönberger thus characterize forgetting as a vital form of information control: specifically, individual control over one's personal information. Their anti-archival policy prescriptions (expungement, in legal terms, or Mayer-Schönberger's "expiration dates" on some types of archival data) seek to protect this allegedly disappearing faculty of forgetting because of its importance for personal privacy and autonomy. "Comprehensive memory" (rather, comprehensive history) suggests the creation of a "temporal panopticon," such as that allegorized in Asimov's short story "The Dead Past": a fictional "chronoscope," built for looking into the past, actually functions to eradicate present privacy. Le droit à l'oubli – the "right of oblivion" – creates a legal separation between past crime and present identity by denying our chronoscope-analogues full access to history, allowing a fresh start: the chance to escape one's past.
Persistence, Ephemerality, and Power
“A major source of forgetting…is associated with processes that separate social life from locality and from human dimensions: superhuman speed, megacities that are so enormous as to be unmemorable, consumerism disconnected from the labour process, the short lifespan of urban architecture, the disappearance of walkable cities. What is being forgotten in modernity is profound, the human-scale-ness of life, the experience of living and working in a world of social relationships that are known. There is some kind of deep transformation in what might be described as the meaning of life based on shared memories, and that meaning is eroded by a structural transformation in the life-spaces of modernity.”
- Paul Connerton, How Modernity Forgets
“Quite obviously, remembering has become the norm, and forgetting the exception. Four main technological drivers have facilitated this shift: digitization, cheap storage, easy retrieval, and global reach.”
- Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age
The purported “comprehensive memory” of the digital age is, in fact, neither comprehensive nor permanent. “The World Wide Web still is not a library,” concludes Wallace Koehler after conducting a longitudinal study of the “half-life” of online documents – much less the universal archive of the memex (2004). “Link rot” (hyperlinks whose destination pages are no longer available) introduces significant decay into the associative trails we build through hypertext, complicating access and retrieval; “bit rot” and “data rot” similarly force us to grapple with the degeneration of software through the accumulation of errors over time, and the fragility of the physical discs and drives that comprise our storage media; website providers can go out of business, causing thousands of pages to disappear overnight. The “dark Internet” and the “Deep Web” remind us that our means of indexing even the archival data we have are incomplete and impermanent.
Much of the Internet’s content is still characterized by its practical ephemerality: in one striking example, the average thread on 4chan’s popular /b/ message board spends just five seconds on the first page, and five minutes on the site in total, before its content vanishes (Bernstein et al. 2011). “Ephemeral technologies” like Snapchat, which delete information shortly after its receipt, once again make forgetting rather than remembering the default. While these software solutions are defeated by a “hack” as simple as a screenshot, requiring users to rely on social convention for the privacy and security of their correspondence, the lack of a persistent, searchable archive of data in such applications demonstrates the development of new social norms and technological architectures in which experiences are fleeting by design.
But the temporal gaps and untraceable depths of our networked archives intensify, rather than diminish, the need for ongoing critical scrutiny of historical and archival practices. Asymmetries in information control and "information flux," as well as in the ability to analyze and interpret, affect power dynamics between individuals and institutions. David Brin's unrealizable "transparent society," in which individuals and organizations have equal access to each other's data, "postulates the end of privacy," according to Bossewitch and Sinnreich, "but it fails to adequately account for the differential access to analytic processing power available to different individuals and organizations in making sense – and use – of this data" (2013). In turn, however, the lacunae in archival memory are shaped by these power dynamics; as Susan Brison notes, "As a society, we live with the unbearable by pressuring those who have been traumatized to forget and by rejecting the testimonies of those who are forced by fate to remember" (1996). The desire for recovery and forgiveness, among the hidden virtues of ephemerality, can also reflect institutional power – resulting in the erasure of trauma or systemic injustice not only from archives, but from societal and individual memory as well. In the absence of a symmetrically transparent utopia, crucial questions remain, bound up with traditional concerns about selective cultural amnesia, surveillance, and power: who controls the archives, the official histories that modulate collective memory? Who surveils the past, who is surveilled, and who has the capacity to evade surveillance? How can we disrupt hegemonic narratives of the past – violent impositions of identity akin to the "memory implants" of Total Recall – and create new and emancipatory narratives?
Surrogacy or Symbiosis?
“Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.”
- John R. Searle, Minds, Brains and Science
“The way we conceive of natural symbol systems depends to a large degree on the computational metaphors we use to understand them, and machine learning suggests an understanding of symbolic thought that is very different to traditional views…Our analysis of [predictive, probabilistic symbolic communication] arose out of the idea that the mind can be modeled as a kind of learning machine.”
- Michael Ramscar, “Computing Machinery and Understanding”
Our understanding of the mind, and thus of memory, remains in flux. In practice, individuals’ everyday use of mnemonic technologies is contingent, subject to constant change in the form and function of their devices (Kalnikaite and Whittaker 2007). The relationship between organic and prosthetic memory appears to be one of synergy and symbiosis rather than surrogacy. Our metaphors for mind have historically modeled cognition as a pneumatic system, a clockwork automaton, a helmsman steering a ship, an enchanted mechanical loom; today, we may more readily compare the mind to a search engine, algorithmically retrieving stored data from a disorganized network, “learning” from each new delve into its archive.
But even as we embrace this new metaphor of memory, and use it to imagine both our individual and our collective identities, we derive meaning from the mind-metaphors of past eras, which possess their own political and poetic histories. Contemporary “cloud computing” extends cyberpunk notions of disembodied mind into the present day, while proliferating “augmented reality” technologies reinvigorate hopes and fears about the potential of cyborg remembrance either to emancipate or dehumanize. Digitally reconstructed memory and forgetting continue to exist in a state of paradox and plurality, inviting continued conversation about our archives, our histories, and ourselves.
New mnemonic technologies revitalize timeless questions about the contradictory nature of memory – constantly reconstructing the past while prospecting potential futures, in acts as simple as reading old letters from a friend or writing a shopping list – and resurrect familiar specters as well. But as individuals and corporations (at least, those who can afford to do so) increasingly seek out professional reputation management services to influence their archival afterimages, and as the European Court of Justice navigates the tension between privacy and free expression implicated in a (limited) "right to be forgotten" from the index of search engines, these questions and anxieties gain urgency and force. By tracing prevalent themes of information control, surveillance, and power against the background of prosthetic memory, we may remind ourselves that the term "memory" itself represents more than either synapses or hard disks – that, far from signaling either the end of memory or the end of forgetting, our shifting metaphors for memory and mind represent the complex and multivalent interplay of future and past.
Bernstein, M. S., et al. (2011). 4chan and/b: An Analysis of Anonymity and Ephemerality in a Large Online Community. ICWSM.
Biddick, K. (1993). “Humanist History and the Haunting of Virtual Worlds: Problems of Memory and Rememoration.” Genders 0(18): 47-66.
Biocca, F. and M. R. Levy (1995). Communication in the age of virtual reality, Routledge.
Bossewitch, J. and A. Sinnreich (2013). “The end of forgetting: Strategic agency beyond the panopticon.” New Media & Society 15(2): 224-242.
Bowker, G. C. (2005). Memory Practices in the Sciences, MIT Press.
Brison, S. J. (1996). “Outliving oneself: trauma, memory and personal identity.” In Diana T. Meyers (ed.), Feminists Rethink the Self. Westview Press.
Bugeja, M. and D. V. Dimitrova (2006). “The Half-Life Phenomenon.” The Serials Librarian 49(3): 115-123.
Cavallaro, D. (2000). Cyberpunk and cyberculture: Science fiction and the work of William Gibson, Continuum.
Chun, W. H. K. (2008). “The enduring ephemeral, or the future is a memory.” Critical Inquiry 35(1): 148-171.
Csicsery-Ronay, I. (1988). “Cyberpunk and Neuromanticism.” Mississippi Review 16(2/3): 266-278.
Connerton, P. (1989). How Societies Remember, Cambridge.
Connerton, P. (2009). How Modernity Forgets, Cambridge.
Daugman, J. G. (2001). “Brain metaphor and brain theory.” In William P. Bechtel, Pete Mandik, Jennifer Mundale & Robert S. Stufflebeam (eds.), Philosophy and the Neurosciences: A Reader. Blackwell.
Ernst, W. and J. Parikka (2012). Digital Memory and the Archive, University of Minnesota Press.
Featherstone, M. and R. Burrows (1996). Cyberspace/cyberbodies/cyberpunk: Cultures of technological embodiment, Sage.
Foucault, M. (1977). Language, counter-memory, practice: selected essays and interviews, Cornell University Press.
Garde-Hansen, J., et al. (2009). Save as… digital memories, Palgrave Macmillan.
Haskins, E. (2007). “Between Archive and Participation: Public Memory in a Digital Age.” Rhetoric Society Quarterly 37(4): 401-422.
Kalnikaite, V. and S. Whittaker (2007). Software or wetware?: discovering when and why people use digital prosthetic memory. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. San Jose, California, USA, ACM: 71-80.
Keightley, E. and M. Pickering (2012). The Mnemonic Imagination: Remembering as Creative Practice, Palgrave Macmillan.
Kennedy, V. (1999). “The Computational Metaphor of Mind: More Bugs in the Program.” Metaphor and Symbol 14(4): 281-292.
Koehler, W. (2004). “A longitudinal study of web pages continued: A report after six years.” Information Research 9(2), 1-9.
Landsberg, A. (2004). Prosthetic memory: The transformation of American remembrance in the age of mass culture, Columbia University Press.
Margalit, Avishai (2002). The ethics of memory, Harvard University Press.
Markwell, J. and D. W. Brooks (2003). “‘Link rot’ limits the usefulness of web-based educational materials in biochemistry and molecular biology.” Biochemistry and Molecular Biology Education 31(1): 69-72.
Mayer-Schönberger, Viktor (2009). Delete: The Virtue of Forgetting in the Digital Age, Princeton University Press.
McGlone, M. S. (2007). “What is the explanatory value of a conceptual metaphor?” Language & Communication 27(2): 109-126.
Muri, A. (2003). “Of Shit and the Soul: Tropes of cybernetic disembodiment in contemporary culture.” Body & Society 9(3): 73-92.
Nichols, B. (1988). “The work of culture in the age of cybernetic systems.” Screen 29(1): 22-46.
Parker, A. (2007). “Link rot: how the inaccessibility of electronic citations affects the quality of New Zealand scholarly literature.” New Zealand Library & Information Management Journal 50(2): 172-192.
Pruchnic, J. and K. Lacey (2011). “The Future of Forgetting: Rhetoric, Memory, Affect.” Rhetoric Society Quarterly 41(5): 472-494.
Ramscar, M. (2010) “Computing machinery and understanding.” Cognitive Science, 34(6), 966-971.
Ruiz de Mendoza Ibáñez, F. J. and L. Pérez Hernández (2011). “The Contemporary Theory of Metaphor: Myths, Developments and Challenges.” Metaphor and Symbol 26(3): 161-185.
Savoie, H. (2010). “Memory Work in the Digital Age: Exploring the Boundary Between Universal and Particular Memory Online.” Global Media Journal: American Edition 9(16).
Searle, J. R. (1984). Minds, Brains, and Science, Harvard University Press.
Sellen, A. J. and S. Whittaker (2010). “Beyond total capture: a constructive critique of lifelogging.” Commun. ACM 53(5): 70-77.
Steiner, L. and B. Zelizer (1995). “Competing memories: Reading the past against the grain: The shape of memory studies.”
Sternberg, R. J. (1990). Metaphors of mind: Conceptions of the nature of intelligence, Cambridge University Press.
Van Dijck, J. (2007). Mediated Memories in the Digital Age, Stanford University Press.
Van Dijck, J. (2013). “‘You have one identity’: performing the self on Facebook and LinkedIn.” Media, Culture & Society 35(2): 199-215.
Van House, N. and E. F. Churchill (2008). “Technologies of memory: Key issues and critical perspectives.” Memory Studies 1(3): 295-310.
Viégas, F. B., et al. (2004). Digital artifacts for remembering and storytelling: Posthistory and social network fragments. System Sciences, 2004. Proceedings of the 37th Annual Hawaii International Conference on, IEEE.
Wiener, N. (1988). The Human Use of Human Beings: Cybernetics and Society, Da Capo Press, Incorporated.
- Contributed by Steven Schrag
Event — Julia Sonnevend, University of Michigan
An event – not life, as John Lennon put it – is what happens when you are busy doing other things. Some events happen expectedly, for instance weddings and presidential inaugurations, while other events are sudden shocks, like cancer diagnoses and assassinations. Certain events gain significance beyond a family or a community and become public events. Events can even turn into global iconic events that international media cover extensively and remember ritually. These events produce peaks and crashes on social media and massive global viewing audiences for television. But be they planned or unplanned, minor or earthshattering, all these events do the same thing: they structure our social and public lives and give reference points for our life narratives and national histories.
While events are essential for individuals, societies and media, they are not the sweethearts of media scholars. Events are like ill-behaved teenagers: they are hard to fit into any rigid system of thought. Many events are idiosyncratic, contour-less and quite resistant to typification, while others are too often repeated to attract scholarly attention. What then might be the purpose of an essay on the relationship between media research and events? Is the task simply impossible?
Possibly. But some disciplines have already tried the impossible. Sociology, which according to Daniel Bell specializes in generalization, has considered the general within the singular in its own burgeoning event literature (Abbott, 1983, 1990, 2001; Abrams, 1982; Alexander, 2002, 2009, 2012; Eyerman, 2011; Jacobs, 2000; Mast, 2012; Vinitzky-Serroussi, 2002, 2011; Wagner-Pacifici, 2010). History, otherwise dedicated to singularity, has also detected repetitive features in the narration of events (Bailyn, 1963, 1982; White, 1973; Sewell, 1996). And philosophy has produced a small bookshelf of literature on the elusive concept of events (Badiou, 2005; Danto, 1985; Hegel 1831; Ricoeur, 1984).
Some media researchers too have wrestled with events. For instance, Amit Pinchevski and Tamar Liebes (2010) wrote about the media coverage of the Eichmann trial as a public event. Daniel Hallin (1986) and Marita Sturken (1997) analyzed the media constructions of the Vietnam War. Barbie Zelizer (1992) examined the media representations and retellings of the Kennedy assassination. Some scholars have moved beyond the particular and singular to define whole genres of media events: these genres include for instance media scandals (Lull & Hinerman, 1997), disaster marathons (Liebes, 1998), media spectacles (Kellner 2003), social dramas of apology (Kampf, 2009; Kampf & Löwenheim, 2012), rituals of excommunication (Carey, 1998), live-covered events (Scannell, 2014) and mediatized rituals (Cottle, 2006).
But very few studies in media research place the concept of "event" at the center of their analysis. In most cases, events are taken-for-granted entities, simply ready for narration or identical with their narratives. An important exception is Daniel Dayan and Elihu Katz's Media Events: The Live Broadcasting of History (1992). Inspired by the television coverage of a major historic event, Egyptian President Anwar el-Sadat's peace-making visit to Israel in 1977, Media Events developed a taxonomy for "media events." A "media event" had to (1) be broadcast live, (2) constitute an interruption of everyday life and everyday broadcasting, (3) be preplanned and scripted, and (4) be viewed by a large audience. There should also be (5) a normative expectation that viewing was obligatory and (6) a reverent narration. Moreover, the event had to be (7) integrative of society and (8) mostly conciliatory (Dayan & Katz, 1992; Katz & Liebes, 2007).
Building on Max Weber’s concept of rational-legal, charismatic and traditional authority, Dayan and Katz also presented three scripts of media events. These were contests (for instance the Olympic Games and the Watergate hearings), conquests (such as the landing on the Moon and Pope John Paul II’s visit to Communist Poland) and coronations (for example the funeral of President Kennedy and the Royal Wedding of Prince Charles). Many scholars have subsequently critiqued and built on Dayan and Katz’s understanding of media events (Rothenbuhler, 1988; Zelizer, 1992; Scannell, 1996, 2014; Schudson, 1993; Price & Dayan, 2008; Couldry, Hepp & Krotz, 2010).
Dayan and Katz provided us with a strong concept of a "media events" genre, but they also somewhat limited the scope of the general theoretical discussion on "events in media." What about events that do not have live coverage (like the Cambodian genocide), events that are not covered by television (like the Eichmann trial in Israel) and events that are celebrated in one country but not in another (the fall of the Berlin Wall in American and Soviet media)? In other words, what about events that are covered by media but not by the canonic Media Events?
In this keyword essay I will consider “events in media,” including but not limited to the narrow genre of “media events.” I focus on four aspects of events in media: (1) the power of the occurrence vis-à-vis its narrative as an “event,” (2) the witnesses who tell the story of an “event,” (3) the embodiments of the “event” in various media, and (4) the travel of “events” through cultural and geographic boundaries.
(1) The power of the occurrence vis-à-vis its narrative as an event
Every event consists of some happening on the ground and a related narrative of an event. Four planes were deliberately crashed in the United States on September 11, 2001; together, these happenings received the name "9/11." On November 9, 1989, after the desperate East German leadership mistakenly announced a new travel regulation as taking immediate effect, West German broadcast media convinced people to test the border in Berlin – an awkward occurrence that quickly precipitated the event later called "the fall of the Berlin Wall." Or, to take another example, the systematic mass murder perpetrated during World War II, originally narrated as an "atrocity," became a moral universal in the West, described over time as the "Holocaust" (Alexander, 2002). In all these cases a myriad of occurrences were pulled together in a narrative of an "event."
But while narratives seem powerful tools in shaping events, they are not omnipotent. Consider the example of terrorist attacks. Seemingly they can be narrated in opposing ways, as acts of wanton destruction or as acts in observance of a higher moral order. A good example of framing a terrorist attack as a regrettable but unavoidable necessity is presented by a plaque on the King David Hotel in Jerusalem: “The hotel housed the Mandate Secretariat as well as the Army Headquarters. On July 22, 1946, [Zionist paramilitary] Irgun fighters at the order of the Hebrew Resistance Movement planted explosives in the basement. Warning phone calls had been made urging the hotel’s occupants to leave immediately. For reasons known only to the British, the hotel was not evacuated and after 25 minutes the bombs exploded, and to the Irgun’s regret and dismay, 91 persons were killed.” This original wording infuriated the British for suggesting that the British, not the Irgun, were responsible for the attack. Although the wording was subsequently revised, the final sentence of “regret and dismay” remained.
This excerpt shows the power of narratives in shaping occurrences into certain types of events, but it does not prove that narratives are capable of everything. We can narrate a terrorist attack as a crime or as an accident, but not as a wedding. Our narratives are flexible, but we cannot do whatever we want with them. As Michael Schudson summarized the limits of our narrative power: “there are events in the world we can shape, distort, reinterpret but not fundamentally change. President Kennedy was killed by an assassin. There are lots of ways to read this fact but none of them restore John F. Kennedy to life. He really died” (Schudson, 2008, p. 92). There are many ways to read events, but those ways remain limited.
(2) The witnesses of the event
Who sees and tells the story of an event, who writes its “birth certificate,” is central to every event’s existence. Storytellers are required to bind occurrences together and elevate them into an “event.” In other words, events need witnesses (Peters, 2001). Media witnessing occurs in three distinct forms: witnesses in media (when witnesses of the occurrence share their experiences in media), witnessing by media (when media bear witness to occurrences) and witnessing through media (when audiences are positioned by media as witnesses to occurrences) (Frosh & Pinchevski, 2009). These diverse forms of witnessing all shape the boundaries of events and communicate them to distinct primary and secondary audiences. Events also have competing witnesses, leading to contrasting counter-narratives. All events have diverse witnesses: even interpretations of events that some scholars call “hegemonic” are more multi-colored than they seem to be. Competition among witnesses and among narratives can keep events alive and can destabilize their meanings.
(3) The embodiments of the event in media
Events are more vulnerable than we would think. We easily forget them. We do this not only with wedding anniversaries, but also with major historic events. Each generation has its own events that it regards as earthshattering. For instance, certain generations have flashbulb memories of the atomic bombing of Hiroshima or the Kennedy assassination, while other generations will never forget the moment they received the news of the attack on the Twin Towers. But an iconic event of one generation often appears as nothing but boring history to the next. Events are heavy: it is hard to carry them across time, space, and media.
Therefore, in order to last and get recited across generations, occurrences need memorable narratives that construct them as mythical, resonant “events.” These narratives will also need to be carried by a diversity of media. Even the seemingly most powerful and visually spectacular event cannot survive the passing of time without substantial narrative efforts. A lasting narrative needs to be simple and universal, removed from the event’s original complexity, and transportable through diverse media platforms.
For instance, consider all the efforts of commemoration to keep the memory of 9/11 alive. Names of the victims are read aloud at Ground Zero at every anniversary, a huge cosmopolitan museum has recently opened in New York, and the event’s story is embodied in social media campaigns, souvenirs, documentaries and history books. Those who remember the day of September 11, 2001 may think it is unforgettable, but it is not. Few college freshmen today have acute personal memories of the event that took place over a decade ago; its lasting resonance will require promotion of the event’s simple narrative and spectacular imagery across “old” and “new” media alike.
(4) The travel of events across cultural and geographic boundaries
Some events have to be narrated "only" on the national, regional or social group level. But global iconic events, which resonate internationally and over time, obviously need transnational media narration. There are five dimensions of their narration in transnational contexts: (1) foundation: the event's narrative prerequisites; (2) universalization: the development of the event's mythical message; (3) condensation: the event's encapsulation in a brand of a simple phrase, a short narrative, and a recognizable visual scene; (4) counter-narration: competing stories about the event; and (5) diffusion: the travel of the event's brand across multiple media platforms and changing social and political contexts (Sonnevend, 2013b).
Let’s take the example of the fall of the Berlin Wall.
(1) Its story had narrative prerequisites: the story of the Berlin Wall itself, the global resonance of the city of Berlin, the Berlin airlift, Kennedy’s “Ich bin ein Berliner” speech, among others. The lasting transnational story of the fall of the Berlin Wall has built on these narrative “foundations.”
(2) The happenings of November 9, 1989 were initially confusing, contradictory, and complex, but soon after, the mythical message of the "end of division" turned those happenings into an "event." In other words, the story went through the narrative process of simplification and universalization.
(3) This mythical “event” over time got condensed into a branded phrase (“the fall of the Berlin Wall”), a short narrative of freedom, and a recognizable visual scene. This brand became ready for global travel and trading.
(4) Unlike most global iconic events, the fall of the Berlin Wall is an exceptionally consensual event. Few deny its importance. But at its own time, in 1989, East German and Soviet media counter-narrated it as a minor occurrence, a small happening in a substantial and deliberate reform process they were championing. This counter-narrative did not survive the passing of time.
(5) The fall of the Berlin Wall as a global iconic event is now embodied in a diversity of media: it travels from mass media to social media to monuments, memorials, exhibitions, souvenirs, and many other media embodiments. The simple and universal narrative of the fall of the Berlin Wall has permeated the world from China to Israel to the United States, providing us with a contemporary social myth.
Through the above five-dimensional process of transnational storytelling, a global iconic event comes into being. Some global iconic events are more universal than others, some have more counter-narratives than others, but these five dimensions are generally present in their narration.
In sum, this brief sketch has examined four aspects of "events in media": (1) the power of the occurrence vis-à-vis its narrative as an "event," (2) the variable witnesses who tell the story of an "event," (3) the embodiments of the "event" in various media, and (4) the travel of "events" across cultural and geographic boundaries. Events are diverse and tricky creatures. Capturing their elusive meaning remains a challenge. Events also move in and out of public memory, gaining or losing significance and meaning. Nonetheless, events keep shaping our international and personal relations, and continue to occupy our media. While scholarship may fall short of fully capturing global events in media, such events continue all the while to engage our imagination.
- Contributed by Julia Sonnevend
Cyber-Activism — Guobin Yang, University of Pennsylvania
In the English language, “cyber-activism” is a compound word that came into currency in the early 1990s. Although the first half of the word, “cyber,” appeared later than the second half, “activism,” both components are, in their current meanings, twentieth-century inventions. “Cyber-” is traced to Norbert Wiener’s Cybernetics: Or Control and Communication in the Animal and the Machine (1948), but is more often associated with the science fiction of William Gibson, who is credited with popularizing the word “cyberspace” in his novel Neuromancer (1984). This origin gives cyber-activism, as opposed to the interchangeable terms “online activism” and “digital activism,” the special connotation of magical new possibilities associated with cyberspace in science fiction (Jordan 1999).
Since the 1990s, a host of synonymous terms has appeared to make cyber-activism part of an extended linguistic family. It includes: electronic activism, online activism, internet activism, web activism, and digital activism. Because the word “protest” is often used interchangeably with “activism,” the family also includes cyber-protest, electronic protest, internet protest, online protest, and digital protest.
In addition, there are many other terms of kith and kin, such as tactical media (Garcia and Lovink 1997), radical media (Downing 2001), new media activism (Kahn and Kellner 2004; Lievrouw 2011), alternative media (Couldry and Curran 2003; Lievrouw 2011), hacktivism (Denning 1999; Jordan and Taylor 2004), and networked social movements (Juris 2004; Castells 2012).
Although these terms are often used interchangeably, they have slightly different shades of meaning. Terms used more often in the 1990s, such as radical media and tactical media, seem to carry more radical connotations than later coinages. Digital activism has not replaced the other synonymous terms, but it now appears more often than cyber-activism, while lacking the sci-fi, magical connotations of cyber-activism.
That there are so many words for describing what purport to be comparable phenomena suggests that something of magnitude is at stake. It betrays deep ambiguities and anxieties concerning the meaning and significance of cyber-activism and its cognates. This essay identifies four such ambiguities and traces how they form an integral part of a set of discourses about cyber-activism that produces political effects distinct from the effects of cyber-activism as political praxis.
The first type of ambiguity concerns the very object of inquiry. What is meant by cyber-activism, online activism, or digital activism? The difficulty of defining cyber-activism is reflected in the approach adopted by one of the few books with “cyberactivism” in its title. Recognizing that “defining cyberactivism is as difficult as defining activism before the internet,” the editors of the volume Cyberactivism: Online Activism in Theory and Practice decline to offer a definition (McCaughey and Ayers 2003, 14). One author represented in the volume, however, defines online activism “as a politically motivated movement relying on the Internet” (Vegh 2003, 71). He then distinguishes among three types of online activism: awareness/advocacy, organization/mobilization, and action/reaction (Vegh 2003, 72). This often-cited definition takes into account both the content of online activism and its technological method. Howard (2011) takes a similar approach when he defines cyber-activism as “the act of using the internet to advance a political cause that is difficult to advance offline” (p. 145).
In both these definitions, the emphasis on the political nature of cyber-activism raises the question of exactly what counts as political and what does not. Vegh’s notion of online activism as a politically motivated movement further complicates the issue by raising the additional question of what exactly counts as a movement. Is it a protest event? A series of events? A campaign? A particular condition of being active? A mode of action? A process? Or does it refer to social movement actors and organizations?
There are others who view cyber-activism as mainly a set of methods, tactics, and practices associated with the use of new technologies without stressing its political nature. The literature on advocacy in social work, for example, views cyber-activism as methods: “New advocacy methods that use technology to change public policy have been developed and provide us with new avenues to address the changed political economy of social welfare. Collectively called cyberactivism, these techniques can be used to advantage by social work advocates.” (McNutt & Menon, 2008, 33). Yang (2009, p. 3) defines online activism as “contentious activities associated with the use of the Internet and other new communication technologies,” but stresses it as a cultural and political form. Earl and Kimport (2011) propose a continuum of online activism ranging from e-movements that happen purely online to e-mobilization that uses the internet to organize offline protest. For Lievrouw (2011, p. 19), “alternative/activist new media employ or modify the communication artifacts, practices, and social arrangements of new information and communication technologies to challenge or alter dominant, expected, or accepted ways of doing society, culture, and politics.”
There is a long-standing debate about whether social movements are phenomena or meanings (McGee 1980; Melucci 2006). Is cyber-activism a meaning or a phenomenon? Current discourse seldom asks this question, assuming instead that cyber-activism and its varieties are phenomena existing objectively outside human consciousness. Without the question of subjective meanings and social constructions ever being raised, what may in fact be a bias (toward objectivity) becomes naturalized as taken-for-granted truth. Certainly, in popular discourse, terms like cyber-activism, online activism, and digital activism are used to mean so many different things that they lose specific meaning. In this way, they can be deployed conveniently by critics and proponents alike for whatever purposes they want them to serve. Critical reflexivity about the subjective meanings of cyber-activism would at least make clear that what are taken as natural and objective phenomena may often be matters of interpretation.
The second type of ambiguity arises out of the first part of the term. Cyber, online, internet, and digital – these are mostly used to refer to the spatial features of these technologies. Cyber- or online activism is often thought of in spatial terms. Because these technological spaces are different from conventional spaces, there are persistent efforts to dichotomize cyber-activism and offline activism, with clear preference given to offline activism.
This distinction was useful in the earlier stage of cyber-activism, when the technology was still limited in its reach and the use of the internet was not yet a routine part of activism. Today, however, the internet, social media, and smartphones are far more prevalent than in the 1990s, and online and offline action have become deeply intertwined. In this new digital environment, it is hard to imagine street protest taking place without at least some use of digital media communication. Activism of all varieties, it might be argued, is now digitized to some degree.
A more serious consequence of the spatial bias in the cyber-activism family is a hidden bias against time. Whether the internet as a technology has a space bias or a time bias is open to debate and is not my concern here (see Frost 2003 for an argument in favor of the internet’s space bias). But the discourse about cyber-activism clearly has a spatial bias. By fixating our attention on the dichotomy of online versus offline spaces, this discourse reifies the differences between the two and prevents us from asking other important questions. It is generally recognized that new communication technologies decouple time from place. This allows cyber-activism to happen in ways that are not limited by time and space as street protests were in earlier times. People in different time zones and on different continents now routinely take part in the same online protest event at the same time. Another consequence of the spatial bias is that it neglects the temporal and historical dimension of cyber-activism itself – today’s cyber-activism is not the same as it was twenty years ago. If cyber-activism has been accused of degenerating into slacktivism, is this an inherent attribute of cyber-activism or the outcome of historical political struggles? If contemporary cyber-activism is not living up to the revolutionary potential envisioned by its radical advocates in the earlier days, is that due to its inherent weaknesses or because it is up against forces far more powerful?
The third type of ambiguity derives from the word activism. What is the boundary between activism and non-activism? Where does activism begin and end? In social movement studies, high-risk protest has been called activism (McAdam 1986), but so has everyday behavior with purportedly activist motivations (Almanzar, Sullivan-Catlin, and Deane 1998). There seems to be a tendency to conflate the more radical types of cyber-activism with its moderate varieties.
The etymology of activism contains such ambivalence. Activism came from the German word Aktivismus, first used by the German philosopher Rudolf Eucken in his 1907 book The Fundamentals of a New Philosophy of Life to refer to “the theory or belief that truth is arrived at through action or active striving after the spiritual life” (OED, 3rd ed.). In continental Europe during World War I, activism meant “advocacy of a policy of supporting Germany in the war; pro-German feeling or activity” (OED, 3rd ed.). Only by 1920 did activism come to mean “active participation or engagement in a particular sphere of activity; spec. the use of vigorous campaigning to bring about political or social change.” The earlier usage in this sense stresses its “vigorous” character. The OED’s 1920 quotation for activism reads: “Above these people is the ‘brain proletariat’, restless, alert, dissatisfied, repressed… The thought of this brain proletariat has many aspects – from Buddhist passivism to Bolshevist activism.” And its 1960 quotation: “The sizzling flame of activism is visible in both the agricultural and pastoral districts.” In the same vein, the OED defines an activist as “a person engaged in or advocating vigorous political activity.”
Activism thus had several different meanings in its early history – an orientation to life in Rudolf Eucken’s philosophy, a pro-German activity during WWI, and a vigorous political activity. Activism and activists could be oppositional to the state, but could also be supportive of it (Hoofd 2008).
Current discourse about cyber-activism retains both the “vigorous” and radical meanings of activism and its less vigorous, more moderate meanings. The moderate type of activism has been called civic action. As opposed to protest, civic action such as a community festival has only implicit (or latent) purposes and no explicit claims (Sampson, McAdam, MacIndoe, and Weffer-Elizondo 2005, 685). Thus cyber-activism may refer to conflictual direct action such as hacking and denial-of-service attacks, but it may also mean consensus action of the civic type, such as the use of Twitter by non-profit organizations for community building or information sharing.
The conflation of radical with consensus cyber-activism is an important feature of a history of domesticating and institutionalizing cyber-activism: a subtle historical shift takes place whereby the more radical elements of cyber-activism are underplayed or even dislodged. On the one hand, there are government efforts to criminalize radical cyber-activists and corporate efforts to co-opt them. Thus, over time, hacktivism has taken on connotations of illegality, as opposed to its early meanings of countercultural creativity and individual heroism (Jordan 1999). Radical cyber-activist organizations and practices like Indymedia and the Occupy Wall Street movement have been subject to policing (Downing 2001; Pickard 2006; Sullivan, Spicer, and Böhm 2011; Gillham, Edwards, and Noakes 2011). On the other hand, a discourse is produced about the necessity of channeling cyber-activism into institutional politics. “The digirati needs to learn how to make friends and win influence in Washington,” Richard F. O’Donnell warned in 1996; otherwise they would be “courting irrelevance” (O’Donnell 1996). Thus, important cyber movements like MoveOn.org eventually became member-based non-profit organizations. Like mirror images, these two tendencies (and two sets of discourses) have the same effect of undercutting the potency of cyber-activism as an extra-institutional praxis and absorbing it into normal institutional politics. This might be called the institutionalization bias.
This leads to the last ambiguity I will address, namely, confusion about the political efficacy of cyber-activism. There is, to say the least, an obsession with causation in the discourse about cyber-activism. Social movement scholars recognize the importance of studying outcomes (Giugni 1998; Amenta et al. 2010), but they are also aware that specifying the causes of outcomes is methodologically more challenging than identifying the conditions for the emergence of a social movement. While social movement organizers and activists exert at least some control over the shape of their movements by designing strategies, framing issues, and shaping identities, they cannot directly control the outcomes of their movements (Amenta et al. 2010). Furthermore, beyond their pronounced goals, social movements may have unintended consequences and may incur repression and backlashes. Consequently, most works in this area subscribe to the theory that the outcomes of social movements are mediated by multiple factors (Amenta, Caren, and Olasky 2005); outcomes are indirect, not direct. Although in the communication field there is a fine literature on the mediated effects of internet use on civic participation (e.g., Xenos and Moy 2007), to my knowledge this literature has not received the attention it deserves in the discourse about the impact of cyber-activism.
A second confusion concerns the spurious specification of causes and outcomes. Although cyber-activism consists of multiple varieties, there is a curious tendency to cherry-pick the types of cyber-activism and then reject cyber-activism wholesale by claiming that that particular type does not cause an anticipated effect, such as democratization. Thus, email petitions and online comments become clicktivism (Shulman 2009), which is alleged to be politically ineffective. Clicktivism then becomes a synecdoche for cyber-activism, and cyber-activism is then rejected on the ground that it is merely clicktivism. Meanwhile, the more radical manifestations of cyber-activism are omitted.
The third confusion reflects an ideological imprint in current discourse about cyber-activism. In the debate about cyber-activism and democratization, a question that often arises concerns China: does cyber-activism weaken or strengthen authoritarianism? The logic of this argument runs as follows: China has an authoritarian government. Cyber-activism in China makes the authoritarian government more aware of its vulnerabilities, forcing it to improve governance and therefore making it more resilient. Conclusion: cyber-activism is good for authoritarianism. The problem with this argument is that it not only simplifies the meanings and practices of cyber-activism in China, but also presumes that authoritarian governments are incapable of change while implicitly putting the blame on citizens and activists seeking change. Here, the workings of a hidden efficacy bias turn cyber-activism into its own enemy.
How to account for these ambiguities? Certainly, they reflect the difficulties of understanding rapid social and technological change. We should also recognize that cyber-activism is so diverse and fluid that it inevitably comes with ambiguities. Yet insofar as we can identify hidden biases underlying these ambiguities and confusions, I would argue that the existence of these ambiguities is not accidental, but political. Ambiguities serve political purposes.
The four types of ambiguities are roughly associated with four hidden biases, which I have called the objectivity bias, the space bias, the institutionalization bias, and the efficacy bias. The objectivity bias hinders a more reflexive approach to cyber-activism. The space bias diverts attention from seeing cyber-activism as a historically contingent political struggle. The institutionalization bias favors consensus and institutionalized over radical cyber-activism by first lumping together the radical with the moderate and then omitting the history of a parallel process of policing, institutionalization, and co-optation. The efficacy bias dismisses cyber-activism as ineffective by focusing on its moderate side and ignoring its radical wing. Ironically, when cyber-activism does seem to have clear political impact, as in China, this efficacy bias is turned around to serve the argument that the effectiveness of cyber-activism actually works to stabilize rather than subvert authoritarian rule. Efficacy undoes itself.
What does an account of the ambiguities of cyber-activism as politics tell us about the nature of cyber-activism? More than anything else, it shows the importance of understanding cyber-activism and its family of words as a set of discourses with political effects of their own, distinct from the effects of cyber-activism as political praxis. “There was a steady proliferation of discourses concerned with sex,” Foucault wrote, “…an institutional incitement to speak about it, and to do so more and more; a determination on the part of the agencies of power to hear it spoken about, and to cause it to speak through explicit articulation and endlessly accumulated detail” (History of Sexuality, Vol. 1, p. 18). As Foucault wrote of sex, so we might write of cyber-activism and the internet. The steady proliferation of discourses concerned with cyber-activism, I have argued, has weakened rather than strengthened it as a political practice. The ambiguities about cyber-activism are elements of a discursive formation that undercuts the power of cyber-activism. Such a discursive formation is, in proper Foucauldian fashion, a formation of power.
Almanzar, N. A. P., Sullivan-Catlin, H., & Deane, G. (1998). Is the political personal? Everyday behaviors as forms of environmental movement participation. Mobilization 3(2), 185-205.
Amenta, E., Caren, N., & Olasky, S. J. (2005). Age for leisure? Political mediation and the impact of the pension movement on US old-age policy. American Sociological Review 70, 516-538.
Amenta, E., Caren, N., Chiarello, E., & Su, Y. (2010). The political consequences of social movements. Annual Review of Sociology 36, 287-307.
Couldry, N. and J. Curran (eds) (2003) Contesting Media Power. Alternative Media in a Networked world. Boulder, CO and Lanham, MD: Rowman and Littleﬁeld.
Critical Art Ensemble. 1996. Electronic Civil Disobedience and Other Unpopular Ideas. New York: Autonomedia.
Denning, D. (1999) ‘Activism, Hacktivism, and Cyberterrorism: the Internet as a tool for influencing foreign policy’; available at http://www.nautilus.org/infopolicy/workshop/papers/denning.html.
Downing, J. (ed.) (2001) Radical Media: rebellious communication and social movements. London: Sage.
Frost, C. (2003). How Prometheus is bound: Applying the Innis method of communications analysis to the internet. Canadian Journal of Communication 28, 9-24.
Gilboa, N. (1996) ‘Elites, Lamers, Narcs and Whores: exploring the computer underground’, in Cherny, L. and Weise, E. (eds) (1996) Wired Women: gender and new realities in cyberspace, Seattle: Seal Press: 98-113.
Gillham, P. F., Edwards, B., & Noakes, J. A. Strategic incapacitation and the policing of Occupy Wall Street protests in New York City, 2011. Policing and Society: An International Journal of Research and Policy.
Giugni, M. (1998). Was it worth the effort? The outcomes and consequences of social movements. Annual Review of Sociology 24, 371-393.
Hoofd, I. M. (2008). Complicit subversions: Cultural new media activism and ‘high’ theory. First Monday 13(10).
Hoofd, I. M. (2012). Ambiguities of activism: Alter-globalism and the imperatives of speed. Routledge.
Illia, L. (2006). Passage to cyberactivism: How dynamics of activism change. Journal of Public Affairs 3(4).
Jordan, T. (1999a) Cyberpower: the culture and politics of cyberspace and the Internet. London: Routledge.
Jordan, T. (1999b) ‘New Space, New Politics?: cyberpolitics and the Electronic Frontier Foundation’, in Jordan, T. and Lent, A. (eds) Storming the Millennium: the new politics of change, London: Lawrence and Wishart: 80-107.
Jordan, T. (2002) Activism!: direct action, hacktivism and the future of society, London: Reaktion.
Jordan, T. and Taylor, P. (1998) ‘A Sociology of Hackers’, Sociological Review 46 (4): 757-80.
Jordan, T., & Taylor, P. (2004). Hacktivism and cyberwars: Rebels with a cause? Routledge.
Kahn, R., & Kellner, D. (2004). New media activism: From the ‘Battle of Seattle’ to blogging. New Media & Society 6(1), 87-95.
Lievrouw, L. A. (2011). Alternative and activist new media. Polity.
Levy, S. (1984) Hackers: heroes of the computer revolution, New York: Bantam Doubleday Dell.
Lovink, G. (2002). Dark ﬁber: Tracking critical internet culture. Cambridge, MA: MIT Press.
McAdam, D. (1986). Recruitment to high-risk activism: The case of Freedom Summer. American Journal of Sociology 92(1), 64-90.
McKay, G. (ed.) (1998) DiY Culture: party and protest in nineties Britain, London: Verso.
McCaughey, M., & Ayers, M. D. (Eds.). (2003). Cyberactivism: Online activism in theory and practice. New York: Routledge.
McNutt, J. G., & Menon, G. M. (2008). The rise of cyberactivism: Implications for the future of advocacy in the human services. Families in Society: The Journal of Contemporary Social Services.
Meikle, G. (2002). Future active: Media activism and the internet. London: Routledge.
Meyer, G. and Thomas, J. (1990) ‘(Witch)hunting for the Computer Underground: Joe McCarthy in a leisure suit’, The Critical Criminologist, 2 September: 225-53.
Miller, L. (1995) ‘Women and Children First: gender and the settling of the electronic frontier’, in Boal, I. and Brooks, J. (eds) Resisting the Virtual Life: the culture and politics of information, San Francisco: City Lights Books: 49-57.
Pickard, V. W. (2006) United yet autonomous: Indymedia and the struggle to sustain a radical democratic network, Media, Culture & Society, 28(3), pp. 315-336.
Sampson, R. J., McAdam, D., MacIndoe, H., & Weffer-Elizondo, S. (2005). Civil society reconsidered: The durable nature and community structure of collective civic action. American Journal of Sociology 111(3), 673-714.
Shulman, S. (2009). The case against mass e-mails: Perverse incentives and low quality public participation in U.S. federal rulemaking. Policy & Internet 1(1), 23-53.
Sullivan, S., Spicer, A., & Böhm, S. (2011). Becoming global (un)civil society: Counter-hegemonic struggle and the Indymedia network. Globalizations 8(5), 703-717.
Turkle, S. (1984) The Second Self: computers and the human spirit, London: Granada.
Xenos, M. and Moy, P. (2007), Direct and Differential Effects of the Internet on Political and Civic Engagement. Journal of Communication, 57: 704-718.
- Contributed by Guobin Yang
Gaming — Saugata Bhaduri, Jawaharlal Nehru University
‘Gaming’ is generally understood as the act of playing games – especially, in the current context, video games or games with a digital interface. Accordingly, it is often erroneously presumed that while the use of ‘game’ as a noun or adjective can be traced far back in the history of the English language, ‘game’ as a verb (with ‘gaming’ as its present participle) is of fairly recent origin. However, the Merriam-Webster Dictionary traces the use of ‘gaming’ to 1501, and the Random House Kernerman Webster’s College Dictionary traces it to 1495-1505, both sources connecting this participial form etymologically to ‘gambling’. As for etymology, the Online Etymology Dictionary traces the word ‘game’ to
Old English gamen “game, joy, fun, amusement”, common Germanic (cognates: Old Frisian game “joy, glee”, Old Norse gaman, Old Saxon, Old High German gaman “sport, merriment”, Danish gamen, Swedish gamman “merriment”), regarded as identical with Gothic gaman “participation, communion”, from Proto-Germanic *ga-collective prefix +*mann “person”, giving a sense of “people together”.
There are thus two important components that make up the sense of a word like ‘game’ – an original sense of ‘communion’ and a derivative sense of ‘enjoyment’ – and when it comes to the participial form ‘gaming’, our keyword here, a third derivative sense of taking risks or ‘wagering’ is also factored in. ‘Gaming’ as a keyword is thus to be understood not in terms of its literal functional meaning of playing games, but in terms of its always entailing the three features of collectivization, enjoyment, and excess.
What is also to be noted is that while the word ‘gaming’ was available from the end of the 15th century, it was used rarely, making its case even more interesting. What would account for this rarity? How can one analyse the reticence of a language to deploy the verbal – and more specifically, the present participial, and thus always continuing and never foreclosed, never-ending, open-ended – form of a word whose nominal and adjectival use is so frequent? Could it be that while ‘game’ itself was rooted in communion and enjoyment, the excess of the suffix ‘ing’ makes it border a little too much on the risqué? Could it be that ‘gaming’ is thus essentially subversive, connected ontically as it is to the dangerous wastefulness of gambling, and uncontainable as it is in its participial form? Is it precisely because of this that gaming in the present context – as in the way video games have often worked out to be – becomes the veritable site of role-playing and identity alteration, of contestations and negotiations vis-à-vis the normative life-world, of a Dionysian joyful disruption of the austere world of utility? It is in attempting to answer these and like questions that the myriad senses of the universe of ‘Gaming’ today can be understood.
But before one can move to the present participial form ‘Gaming’ and its implications, it may be worthwhile to take a look at what ‘Game’ itself is. There are two, presumably contradictory, elements that make up a game. On the one hand, a game has to have a structure, fairly set rules, and definable goals and objectives; on the other, a game is supposed to lead to enjoyment – that supposed other to regimented, structured normativity. It is in this duality, then, that the primary feature of Game lies: it cannot be utterly de-structured, or de-structive as one may put it, based as it has to be on a structured set of rules and goals; and yet its foundations in the undergrowth of enjoyment have the potential to constantly challenge and subvert structurality itself. Games have to be understood in relation to this immanently subversive duplicity, and ‘Gaming’, as the present participial form of the same, as a further extension of this duplicity unto the forever continuous and forever deferred zone of the indeterminable ‘to come’.
It is probably for this reason that Wittgenstein, in his Philosophical Investigations (1953), aphorisms 66-70, would think of ‘games’ as undefinable as such:
66. Consider for example the proceedings that we call “games”. I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? – Don’t say: “There must be something common, or they would not be called ‘games’” – but look and see whether there is anything common to all. – For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. […]
67. I can think of no better expression to characterize these similarities than “family resemblances”; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. – And I shall say: ‘games’ form a family. […]
68. [… ] What still counts as a game and what no longer does?
Can you give the boundary? No.
You can draw one; for none has so far been drawn.
(But that never troubled you before when you used the word “game”.)
“But then the use of the word is unregulated, the ‘game’ we play with it is unregulated.”
It is not everywhere circumscribed by rules; but no more are there any rules for how high one throws the ball in tennis, or how hard; yet tennis is a game for all that and has rules too.
69. How should we explain to someone what a game is?
I imagine that we should describe games to him, and we might add: “This and similar things are called ‘games’”. And do we know any more about it ourselves? Is it only other people whom we cannot tell exactly what a game is? […]
70. “But if the concept ‘game’ is uncircumscribed like that, you don’t really know what you mean by a ‘game’.” (31-33)
And Lyotard and Thébaud would further extend this sense of indeterminacy and undefinability associated with the word in their Just Gaming (1985). But are games really undefinable, or is there a certain ontic primacy to the phenomenon that can be cognized and defined, albeit in terms of the slippery duplicity mentioned above?
That games may constitute the very basis of modes of being human, and more so in the bearing out of this duplicity, is best demonstrated in Johan Huizinga’s 1938 book Homo Ludens, which suggests that ‘play’ is the primary and fundamental condition for the formation of human culture, with its other forms – language, law, war, knowledge, poetry, philosophy, and art – all based on the notion of play. Huizinga says, “The view we take in the following pages is that culture arises in the form of play, that it is played from the very beginning… Social life is endued with supra-biological forms, in the shape of play, which enhances its value.” (46) In fact, for Huizinga, play is fundamental to life itself, as it seems to precede human culture too: “Play is older than culture, for culture, however inadequately defined, always presupposes human society, and animals have not waited for man to teach them their playing.” (1) Huizinga suggests further that this primary institution of play is to be credited for the very beginnings of human civilization, less as its source and more as its very form: “We have to conclude, therefore, that civilization is, in its earliest phases, played. It does not come from play like a baby detaching itself from the womb: it arises in and as play, and never leaves it.” (173) More importantly, however, play can perform this function precisely because of a duality that it exhibits, and Huizinga’s suggestion that play at one and the same time demands and creates order, and yet also is the means to freedom itself (8-10), succinctly sums up the very duplicity that was mentioned above as the basic feature of Game.
But it should be noted that Huizinga (or, rather, his translator) uses the word ‘play’ and not ‘game’. Are the two words the same? Because if they are, an attempt at defining the specificities of the keyword ‘Gaming’ may run into serious issues. It is imperative, therefore, at this point to look into three words often presumed to be cognate – ‘game’, ‘play’, and ‘sports’ – and, in trying to locate their differences, theorize upon the specific imports of ‘Gaming’. To define ‘play’, one can turn again to Huizinga and his classic definition:
Summing up the formal characteristic of play, we might call it a free activity standing quite consciously outside ‘ordinary’ life as being ‘not serious’ but at the same time absorbing the player intensely and utterly. It is an activity connected with no material interest, and no profit can be gained by it. It proceeds within its own proper boundaries of time and space according to fixed rules and in an orderly manner. It promotes the formation of social groupings that tend to surround themselves with secrecy and to stress the difference from the common world by disguise or other means. (13)
This fairly benign and de-individuated definition of ‘play’, however, undergoes a singular and almost sinister twist when the word gets combined with ‘game’ to form the word ‘gameplay’. ‘Gameplay’ refers to the interactive and experiential component of a player’s act of playing, involving the contingent strategies the player evolves in the process. As Craig Lindley puts it, “gameplay [… is] understood as a pattern of interaction with the game system … In general, it is a particular way of thinking about the game state from the perspective of a player” (2004: 186), and further, “The experience of gameplay is one of interacting with a game design in the performance of cognitive tasks, with a variety of emotions arising from or associated with different elements of motivation, task performance and completion” (2008, 9). It is evident that the bringing together of ‘game’ and ‘play’, or the insertion of the element of ‘game’ into ‘play’, brings an element of individuation, experientiality, strategization, and contingency to the play. Further, ‘gameplay’ has within it the possibility, as Gonzalo Frasca (2003) points out, of introducing “manipulation rules”, or what an individual player can do in an act of playing, beyond the set “goal rules” and “meta-rules” of the game (231-32). This possibility of manipulation and strategizing that ‘gaming’ may entail is best brought out, however, when one contrasts a word like ‘gamesmanship’ – or the art of strategically manipulating rules to win a game, as so succinctly put as early as 1947 by Stephen Potter – with ‘sportsmanship’ – or playing by the rules and accepting defeat with grace – thus pointing out the essential difference between ‘games’ and ‘sports’.
Thus the essential specificity of ‘game’, as a keyword, in contradistinction to the presumably synonymous words ‘play’ and ‘sports’, lies in this subversive feature: ‘to game’, or ‘gaming’, means being able to strategize and manipulate the system while apparently playing by its rules.
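Frasca’s three-way distinction between manipulation rules, goal rules, and meta-rules can be sketched as a toy program. The counting game below is invented purely for illustration (it appears nowhere in the literature cited here): its goal rule is to reach a target total, its manipulation rules fix which moves a player may make, and a meta-rule lets the manipulation rules themselves be changed, as a modder might.

```python
# A hypothetical illustration of Frasca's taxonomy of game rules.
class CountingGame:
    def __init__(self, target=10):
        self.total = 0
        self.target = target          # goal rule: reach the target exactly
        self.allowed_moves = {1, 2}   # manipulation rules: what a player may do

    def move(self, n):
        """A player's act is constrained by the manipulation rules."""
        if n not in self.allowed_moves:
            raise ValueError("move not permitted by the manipulation rules")
        self.total += n

    def won(self):
        """The goal rule decides what counts as winning."""
        return self.total == self.target

    def mod(self, new_moves):
        """Meta-rule: the manipulation rules themselves may be rewritten."""
        self.allowed_moves = set(new_moves)

game = CountingGame()
for _ in range(5):
    game.move(2)
print(game.won())  # True: five moves of 2 reach the target of 10
```

The point of the sketch is that ‘gaming’, in the sense developed above, lives in the gap between these layers: the player strategizes within the manipulation rules, and the modder exploits the meta-rule, all while the goal rule stays formally intact.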
Needless to say, there is therefore a phrase like ‘gaming the system’, which inductively extends this fundamental feature of gaming – of playing by the rules while potentially subverting them – to the wider world, bringing out once again the subversive essence of Gaming. And this aspect of Gaming is extended further beyond the immediate domain of games themselves through a practice like ‘gamification’. We are told that “Though the term ‘gamification’ was coined in 2002 by Nick Pelling, a British-born computer programmer and inventor, it did not gain popularity until 2010” (Wikipedia), by when it came to be accepted widely as the mode of extending the essence of gaming to non-gaming contexts like education, business, etc. Gamification typically works by bringing in elements of enjoyment, competition, and the principle of rewards into other work, thus inducting ‘gaming’ as a phenomenon towards achieving goals beyond the ordinary and the normative.
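The reward mechanics that gamification layers onto ordinary work can be made concrete in a few lines. Everything below – the Learner class, the point values, the badge thresholds – is a hypothetical illustration of the general pattern, not a description of any actual platform.

```python
# A toy sketch of gamification: points and badges attached to non-game tasks.
BADGES = {10: "Novice", 50: "Adept", 100: "Master"}  # illustrative thresholds

class Learner:
    def __init__(self, name):
        self.name = name
        self.points = 0
        self.badges = []

    def complete_task(self, difficulty):
        """Completing ordinary work earns game-style rewards."""
        self.points += difficulty
        for threshold, badge in sorted(BADGES.items()):
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

learner = Learner("A")
for _ in range(6):
    learner.complete_task(difficulty=10)
print(learner.points, learner.badges)  # 60 ['Novice', 'Adept']
```

The design choice worth noticing is that the task itself is untouched; only a scoring layer is added, which is precisely how gamification inducts ‘gaming’ into contexts like education or business.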
The myriad possibilities of Gaming have been suitably studied under the discipline of Gaming Theory or Game Studies or Ludology, which has emerged as a vibrant interdisciplinary field that critically analyses games and gaming in relation to their implications for society. The fact that this field of intellectual exploration has been able to combine the otherwise discrete disciplines of science and technology, the social sciences, and the humanities shows the particular efficacy of Gaming as a discourse in being able almost to undo what the Frankfurt School identified as the primary problem of modernity, where rationality stood splintered along the three axes of science, morality, and art. This meta-realization – as to whether engaging with Gaming can thus address the very problems of modernity – apart, some of the raging debates within Gaming Theory – whether games, especially some video games, have a negative impact on the youth and society, with their emphasis on graphic violence; whether contemporary gaming cultures further promote digital and class divides across the world; or whether games are indeed beneficial – further highlight the great importance that Gaming as a phenomenon enjoys in today’s intellectual world.
Of course, before the advent of Gaming Theory, Game Theory – initiated, so to say, by John von Neumann through his 1928 article “On the Theory of Games of Strategy” and his 1944 book Theory of Games and Economic Behaviour - had already firmly placed the notion of the game within the ambit of serious intellectual deliberations. But as I would argue, the crucially missing ‘ing’ suffix in the latter would mark a major difference between the presumptions of the two bodies of theory, and form the basis of our understanding of the keyword ‘Gaming’. The fact that Game Theory is definable as “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers” (Myerson: 1) suggests how different it is from an attempt to theorize ‘Gaming’, where rationality, predictability, and determinability – as explained above – are definitely not the primary criteria. To understand what the primary precepts and presumptions of Game Theory are, one can quote Eric Rasmusen:
The essential elements of a game are players, actions, payoffs, and information – PAPI, for short. These are collectively known as the rules of the game, and the modeller’s objective is to describe a situation in terms of the rules of a game so as to explain what will happen in that situation. Trying to maximize their payoffs, the players will devise plans known as strategies that pick actions depending on the information that has arrived each moment. The combination of strategies chosen by each player is known as the equilibrium. Given an equilibrium, the modeller can see what actions come out of the conjunctions of all the players’ plans, and this tells him the outcome of the game. (31-32)
Clearly, the PAPI-based models that a game theorist tries to evolve out of games are aimed at successfully predicting outcomes, with ‘equilibrium’ being the keyword – a far cry indeed from ‘Gaming’, where being continuously thrown out of balance into the abyss of uncertainties is probably the key to theorization. Thus, while Game Theory has been successfully adapted to economics, political science, evolutionary biology, and certain forms of pragmatist philosophy, it is not to be conflated with ‘Gaming’ theory, and my hypothesis is that the ‘ing’ suffix holds the key to this crucial difference, adding specificity to the current keyword.
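The equilibrium idea in Rasmusen’s PAPI scheme can be shown in a worked sketch. The payoff matrix below is the textbook Prisoner’s Dilemma – supplied here as an illustration, not drawn from any of the texts cited – and the function simply checks, for every strategy profile, whether either player could gain by deviating unilaterally.

```python
# A minimal Nash-equilibrium check on a two-player, two-action game.
from itertools import product

actions = ["cooperate", "defect"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def nash_equilibria():
    """Profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in actions)
        col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in actions)
        if row_ok and col_ok:
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria())  # [('defect', 'defect')] -- the lone equilibrium
```

The computation is wholly determinate: given the rules, the outcome follows. That determinacy is exactly what ‘Gaming’ in the sense argued here exceeds, which is why the two bodies of theory should not be conflated.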
While the discussion so far has been devoted to bringing out the subversive essence of the word ‘gaming’ in a very broad way, it may be fruitful now to look at the precise field of video games or computer games – not only because this collection is one of ‘digital’ keywords, but also because the word ‘gaming’, as stated right at the beginning, more often than not pertains to this domain itself – and see if this definitional presumption holds there too. A foray into the history of video games or computer games – their journey from fairly non-manipulable, single-player, closed games to far more interactive, role-playing, simulative, sandbox-style games, and the initiation of ‘openness’ with the possibility of ‘mods’, whereby users can modify games – may well suggest the same. A cursory look at any good history of video gaming – say, by Steven Kent (2001) – and a good account of online gaming – say, by T.L. Taylor (2006) – will already suggest such a trajectory towards openness and indeterminacy, and I need not go into the details here.
The seven-pronged history of the form’s journey – (i) from a simple “Cathode Ray Tube Amusement Device” invented by Thomas T. Goldsmith Jr. and Estle Ray Mann in 1947; (ii) to the introduction of dedicated gaming machines with Nimrod – the first specialized computer to play a game – introduced by the British company Ferranti in 1951; (iii) to computer games coming to simulate real games, like “Draughts” developed by Christopher Strachey in 1951, “OXO”, based on tic-tac-toe, created by Alexander S. Douglas in 1952, “Checkers” developed by Arthur Samuel in 1956, “Chess” developed at Carnegie Mellon University in 1958, and “Tennis for Two” designed by William Higinbotham in 1958; (iv) to computer games becoming simulative futuristic shooting games – something that would continue to be their most prominent avatar – beginning with “Hutspiel”, a war game developed by the US army in 1955, and culminating with MIT students Martin Graetz, Steve Russell, and Wayne Wiitanen designing “Spacewar” in 1961; (v) to gaming entering the public and private domains of consumption, with the first coin-operated arcade video game, “Galaxy Game”, being developed at Stanford University by Bill Pitts and Hugh Tuck in 1971, and the first commercially available coin-operated game, “Computer Space”, being created by Nolan Bushnell and Ted Dabney in 1971; (vi) to the “Magnavox Odyssey”, the first home console playable on a television, invented by Ralph Baer in 1972, leading to the tussle between the public arcade and the private console (with the later emergence of hand-held gaming devices, including mobile telephones) as gaming platforms, ending in a veritable victory of the latter over the former; (vii) to the emergence of online gaming, starting with “Mazewar” in 1974, “Multi-User Dungeon” or “MUD1” in 1978, and “Snipes” in 1983, truly gaining momentum with the wide percolation of the internet, and culminating in the MMORPGs and other multiplayer online games of today – is too well known to merit a further detailed account. Suffice it to say that the direction of this development is what I think is crucial to understanding the dynamic phenomenon of Gaming, as opposed to the static ‘game’.
As my discussion above has shown, an understanding of the true import of the keyword ‘Gaming’ lies in understanding it as an ongoing process, contingent upon strategies and often leading to potential subversion, rather than as an object. It is therefore but fitting that I close with a note on the Gamer and the Gaming community, the ones entrusted with the actual realization of this import. The role of the gamer is well analysed by the likes of McKenzie Wark (2007), and can be broadly understood, in terms of the creative and subversive appropriation that has been deemed crucial to Gaming, under three heads. First, the gamer as a loner, and the gaming community as a subculture, have the potential for subverting societal norms the way any subculture does; second, more specifically, the gamer can, in the act of gameplay, choose to subvert the official narrative of the game through hacking, modding, cheating, etc. (needless to say, Sandbox or Open World games can enhance such activism on the part of the gamer, but that should not undermine the subversive role of the Gamer in the actual act of Gaming vis-à-vis the Game); and, finally, the gamer can extend elements of the gameworld to other worlds through cosplay, fanfiction, machinima, etc. All three possibilities suggest how ‘Gaming’ as an act can be the site of tactical appropriation of a game on the part of the gamer, and while the likes of David Getsy (2011) have pointed out how games – originally created for diversion – can become veritable sites of subversion, I would conclude this exploration of the keyword with a reiteration that the ‘ing’-ing, as it were, of the ‘game’, or the rendering of the nominal into the present participial, is where the specificity of Gaming as a keyword has to be located.
Frasca, Gonzalo. “Simulation versus narrative: introduction to ludology”, in Mark J.P. Wolf and Bernard Perron (eds.), The Videogame Theory Reader, New York: Routledge, 2003, pp. 221-235.
Getsy, David J. (ed.). From Diversion to Subversion: Games, Play and Twentieth-Century Art, University Park PA: Pennsylvania State University Press, 2011.
Huizinga, Johan. Homo ludens; a study of the play-element in culture (1938). (trans.) C. Van Schendel, Boston: Beacon Press, 1955.
Kent, Steven L. The ultimate history of video games: From Pong to Pokémon and Beyond – The Story Behind the Craze That Touched Our Lives and Changed the World, New York: Three Rivers Press, 2001.
Lindley, Craig. “Narrative, Game Play, and Alternative Time Structures for Virtual Environments”. in Stefan Göbel, et al. (eds.), Technologies for Interactive Digital Storytelling and Entertainment: Second International Conference TIDSE 2004, Darmstadt, Germany, June 2004, Proceedings, Berlin & Heidelberg: Springer Verlag, 2004, pp. 183-194.
Lindley, Craig, Lennart Nacke, and Charlotte Sennersten. “Dissecting Play – Investigating the Cognitive and Emotional Motivations and Affects of Computer Gameplay”. CGAMES 08: Proceedings of 13th International Conference on Computer Games, November 3-5, 2008, Wolverhampton, UK: University of Wolverhampton, 2008, pp. 9-17. <http://www.academia.edu/365971/Dissecting_Play-Investigating_the_Cognitive_and_Emotional_Motivations_and_Affects_of_Computer_Gameplay>, accessed on April 29, 2014.
Lyotard, Jean-François and Jean-Loup Thébaud. Just Gaming, (trans.) Wlad Godzich, Minneapolis: Minnesota University Press, 1985.
Merriam-Webster Dictionary “Gaming”, accessed on April 29, 2014.
Myerson, Roger B. Game Theory: Analysis of Conflict, Cambridge MA: Harvard University Press, 1991.
Neumann, John von, and Oskar Morgenstern. Theory of Games and Economic Behaviour (1944), Princeton: Princeton University Press, 1953.
Neumann, John von. “On the Theory of Games of Strategy” (1928), (trans.) Sonya Bargmann, in A.W. Tucker and R. D. Luce (eds.), Contributions to the Theory of Games, Vol. IV, Princeton: Princeton University Press, 1959, pp. 13-42.
Online Etymology Dictionary “Game”, accessed on April 29, 2014.
Potter, Stephen. The Theory and Practice of Gamesmanship: The Art of Winning Games Without Actually Cheating, London: Rupert Hart-Davis, 1947.
Random House Kernerman Webster’s College Dictionary “Gaming”, accessed on April 29, 2014.
Rasmusen, Eric. Games and Information: An Introduction to Game Theory. Third Edition, Oxford: Basil Blackwell, 2001.
Taylor, T.L. Play between Worlds: Exploring Online Game Culture. Cambridge MA: MIT Press, 2006.
Wark, McKenzie. Gamer Theory, Cambridge MA: Harvard University Press, 2007.
Wikipedia. “Gamification”, <http://en.wikipedia.org/wiki/Gamification>, accessed on April 29, 2014.
Wittgenstein, Ludwig. Philosophical Investigations, (trans.) G.E.M. Anscombe, Oxford: Basil Blackwell, (1953) 1986.
Bogost, Ian. Unit Operations: an Approach to Videogame Criticism, Cambridge MA: MIT Press, 2006.
Mäyrä, Frans. An Introduction to Game Studies: Games in Culture, London: Sage Publications, 2008.
McAllister, Ken S. Gamework: Language, Power, and Computer Game Culture, Tuscaloosa AL: University of Alabama Press, 2004.
Thompson, Jason C. and Marc A. Ouellette (eds.). The Game Culture Reader, Newcastle upon Tyne: Cambridge Scholars Press, 2013.
Wolf, Mark J.P. and Bernard Perron (eds.). The Video Game Theory Reader. London & New York: Routledge, 2003.
-Contributed by Saugata Bhaduri, -