Culture Digitally // Examining Contemporary Cultural Production

  • With the generous support of the National Science Foundation we have developed Culture Digitally. The blog is meant to be a gathering point for scholars and others who study cultural production and information technologies. Welcome and please join our conversation.

     

    • Hackers [draft] [#digitalkeywords] Oct 6, 2014

      “Hacking, across its various manifestations, can be seen as a site where craft and craftiness converge.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Hackers — Gabriella Coleman, McGill University

      The Culture vs. the Cultures of Hacking

      In the 1950s a small group of MIT-based computer enthusiasts, many of them model train builders and tinkerers, adopted the term “hacker” to differentiate their freewheeling attitude from that of their peers. While most MIT engineers relied on convention to deliver proven results, hackers courted contingency, disregarding norms or rules they thought likely to stifle creative invention. These hackers, like the engineers they distinguished themselves from, were primarily students, but a handful of outsiders, some of them pre-teens, were also deemed to possess the desire and intellectual chops required to hack, and were adopted into the informal club. In the eyes of this group, hackers re-purposed tools in the service of beauty and utility, while those students “who insisted on studying for courses”[1] were considered “tools” themselves.

      Since this coinage sixty years ago, the range of activity wedded to the term “hacking” has expanded exponentially. Bloggers share tips about “life hacks” (tricks for managing time or overcoming the challenges of everyday life); corporations, governments, and NGOs host “hackathon” coding sprints[2]; and the “hacktivist,” once a marginal political actor, now lies at the center of geopolitical life.[3] Since the early 1980s, the hacker archetype has also become a staple of our mass media diet. Rarely does a day pass without an article detailing a massive security breach at the hands of shadowy hackers, who have ransacked corporate servers to pilfer lucrative personal data. Alongside these newspaper headlines, hackers often feature prominently in popular film, magazines, literature, and TV.[4]

      Despite this pervasiveness, academic books on the subject of hacking are scant. To date the most substantive historical accounts have been penned by journalists, while academics have written a handful of sociological, anthropological, and philosophical books, typically with a media studies orientation.[5] Surveying the popular, journalistic, and academic material on hackers, it is clear that few words in the English language evoke such a bundle of simultaneously negative and positive (even sexy) connotations: mysterious, criminal, impulsive, brilliant, chauvinistic, white knight, digital Robin Hood, young, white, male, politically naïve, libertarian, wizardly, entitled, skilled, mystical, monastic, creepy, creative, obsessive, methodical, quirky, asocial, pathological.

      Some of these associations carry with them a kernel of truth, especially in North America and Europe: conferences are populated by seas of mostly white men; their professionalizable skills, which encompass the distinct technical arts of programming, security research, hardware building, and system/network administration, land them mostly in a middle class or higher tax bracket (they are among the few professionals who can scramble up corporate ladders without a college degree); and their much-vaunted libertarianism does, indeed, thrive in particular regions like Silicon Valley, the world’s start-up capital, and select projects like the cryptocurrency Bitcoin.

      Yet many other popular and entrenched ideas about hacking are more fable than reality. Hackers, so often tagged as asocial lone wolves, are in fact highly social, as evidenced by the hundreds of hacker and developer cons which typically repeat annually and boast impressive attendance records.[6] Another misconception concerns the core political sensibility of the hacker. Many articles ascribe libertarianism to the entirety of hacking practitioners in the west. Whether appraising them positively as freedom fighters or deriding them as naïve miscreants, journalists and academics often pin the origins of their practice on an anti-authoritarian distrust of government combined with an ardent support for free market capitalism. This posited libertarianism is most often mentioned in passing as simple fact, or marshaled to explain everything from their (supposedly naïve) behavior to the nature of their political activity (or inactivity).[7]

      What is the source of this association, and why has it proved so tenacious? The reasons are complex, but we can identify at least two clear contributing factors. First, many hackers, especially in the west, do demonstrate an enthusiastic commitment to anti-authoritarianism and a variety of civil liberties. Most notably, hackers advocate privacy and free speech rights, a propensity erroneously (if perhaps understandably) flattened into a perception of libertarianism. While these sensibilities are wholly compatible and hold affinities with a libertarian agenda, the two are by no means co-constitutive, nor does one necessarily follow from the other.[8]

      The second source propping up the myth of the libertarian hacker concerns the framing and uptake of published accounts. Certain depictions of particular aspects of hacking, or of specific geographic regions where libertarianism does indeed dominate, are routinely represented, and subsequently taken up, as indicative of hacker culture in its entirety.[9] This is only magnified by the fact that Silicon Valley technologists, many of whom promulgate what Richard Barbrook and Andy Cameron have named the “Californian Ideology” (“a mix of cybernetics, free market economics, and counter-culture”), are so well resourced that their activities and values, however specific, circulate in the public more pervasively than those at work in other domains of hacker practice.[10] There is no question the Californian Ideology remains salient,[11] but it by no means qualifies as a singular hacker worldview homogeneous across regions, generations, projects, and styles of hacking.

      This disproportionately fortified stereotype of the libertarian hacker, along with the paucity of historical studies and contemporary research regarding other values at work in hacking, forms the terrain from which scholars of hackers currently work and write. But this seems, slowly, to be changing. Increasingly, scholars are tracing the genealogies of hacking practices, ethics, and values to heterodox, multiplicitous origins.[12] For instance, the inception of the “hacker underground” (an archipelago of tightknit crews who embrace transgression, enact secrecy, and excel in the art of computer intrusion) can be traced to the phone phreaks: proto-hackers who, operating both independently and collectively, made it their mission to covertly explore phone systems for a variety of reasons which rarely involved financial gain.[13] Conversely, “free software” hackers are far more transparent in their constitution and activities, as they utilize legal mechanisms which aim to guarantee perpetual access to their creations. Meanwhile, “open source” hackers, close cousins to their equivalents in the free software movement, downplay the language of rights, emphasizing methodological benefits and freedom of choice in how software may be used over the perpetual freedom of the software itself; as a result, open source ideology maintains an affinity with neoliberal logics, while free software runs directly against this current.[14] Still another engagement is displayed by the “crypto-warriors,” covered in great detail by journalist Andy Greenberg, who concern themselves with technical means for securing anonymity and privacy. Their reasons and ideologies differ, but they align in the desire for, and development of, tools which might ensure these ends.[15]

      So while libertarianism is an important worldview to consider, especially in various regions and particular projects, it fails to function effectively as a thread to connect different styles and genres of hacking. However, this doesn’t mean we can’t consider other commitments around which hackers do, indeed, seem to share a common grounding.

      The Craftiness of Craft

      Hacking, across its various manifestations, can be seen as a site where craft and craftiness converge: building a 3-D printer that can replicate itself; stealing a botnet (an army of zombie computers) to blast a website in a political DDoS campaign; inventing a license called copyleft that aims to guarantee openness of distribution by redeploying the logic inherent to copyright itself; showcasing a robot that mixes cocktails at a scientific-geek festival devoted entirely to, well, the art of cocktail robotics; inventing a programming language called Brainfuck which, as you might guess, is primarily designed to humorously mess with people’s heads (see the sketch below); the list goes on. The alignment of craft and craftiness is perhaps the best location to find a unifying thread which runs throughout the diverse technical and ethical worlds of hacking.
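
      To give a concrete sense of just how spare Brainfuck is, here is a minimal, illustrative interpreter sketch in Python; it is my own example rather than anything drawn from a particular hacker project. The language’s entire vocabulary is eight one-character commands operating on a tape of memory cells, which is precisely what makes its programs so gleefully unreadable. (The input command “,” is omitted for brevity.)

          # A minimal sketch of a Brainfuck interpreter: eight one-character
          # commands operating on a tape of byte-sized memory cells.
          def run(program, tape_size=30000):
              tape = [0] * tape_size
              ptr = 0  # data pointer into the tape
              pc = 0   # program counter into the source string
              # Pre-match brackets so '[' and ']' can jump to each other.
              jumps, stack = {}, []
              for i, ch in enumerate(program):
                  if ch == '[':
                      stack.append(i)
                  elif ch == ']':
                      j = stack.pop()
                      jumps[i], jumps[j] = j, i
              while pc < len(program):
                  ch = program[pc]
                  if ch == '>':
                      ptr += 1
                  elif ch == '<':
                      ptr -= 1
                  elif ch == '+':
                      tape[ptr] = (tape[ptr] + 1) % 256
                  elif ch == '-':
                      tape[ptr] = (tape[ptr] - 1) % 256
                  elif ch == '.':
                      print(chr(tape[ptr]), end='')
                  elif ch == '[' and tape[ptr] == 0:
                      pc = jumps[pc]  # skip the loop body
                  elif ch == ']' and tape[ptr] != 0:
                      pc = jumps[pc]  # loop back to the matching '['
                  pc += 1

          # Prints "Hi": count a cell up to 72 ('H'), print it,
          # then add 33 more to reach 105 ('i') and print again.
          run('+' * 72 + '.' + '+' * 33 + '.')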

      To hack is to seek quality and excellence in technological production. In this regard, all hackers fit the bill as quintessential “craftspeople,” as defined by sociologist Richard Sennett: “Craftsmanship names an enduring, basic human impulse, the desire to do a job well for its own sake.”[16] In the 20th century, with the dominance of Fordist styles of factory labor and other bureaucratic mandates, crafting suffered a precipitous decline in Western mainstream economies, argues Sennett. Among hackers, however, this style of laboring still runs remarkably deep and strong.[17]

      Even if craftspeople tend to work in solitude, crafting is by definition a collectivist pursuit based on shared rules of engagement and standards for quality. Craftspeople gather in social spaces, like the workshop, to learn, mentor each other, and establish guidelines for exchange and making. Among hackers this ethic has remained intact, in part because they have built the necessary social spaces (mailing lists, code repositories, free software projects, hacker and maker spaces, Internet Relay Chat) where they can freely associate and work semi-autonomously, free from the imperatives and mandates of their day jobs.[18]

      Large free and open source projects are even similar to the guilds of yore, where fraternity was cultivated through labor. F/OSS institutions are supported by brick-and-mortar infrastructure (servers, code repositories) along with sophisticated and elaborate organizational mechanisms. The largest such project is undoubtedly Debian, boasting over a thousand members who maintain the 25,000 pieces of software which together constitute this Linux-based operating system. In existence now for twenty-one years, Debian is a federation sustained by procedures for vetting new members (including tests of their philosophical and legal knowledge regarding free software), intricate voting procedures, and a yearly developer conference which functions as a sort of pilgrimage.[19]

      Craft and all the social processes entailed (the establishment of rules, norms, pedagogy, traditions, social spaces, and institutions) nevertheless co-exist with countervailing, but equally prevalent, dispositions: notably individualism, anti-authoritarianism, and craftiness. Hackers routinely seek to display their creativity and individuality and are well known for balking at convention and bending (or simply breaking) the rules. If a hacker inherits a code base she dislikes, she is likely to simply reinvent it. One core definition of a hack is a ruthlessly clever and unique prank or technical solution; by association, its creator is also designated as unique.

      Craftiness is a primarily aesthetic disposition, finding expression in a plethora of practical engagements which include wily pranks and the writing of code, which is sometimes sparsely elegant and at other times densely obfuscated.[20] Its purest manifestation, I have argued elsewhere, lies in the joking and humor so common to the hacker habitat.[21] “Easter eggs” provide the classic example: clever and often non-functional jokes commonly integrated into software, instructions, or manuals (one well-known instance is shown below).
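
      For a concrete and easily verifiable instance, the Python interpreter itself ships with a famous Easter egg: importing the standard-library module “this” does nothing functional for a program; it simply prints “The Zen of Python,” a set of aphorisms hidden in the interpreter purely for the reader’s amusement.

          # A real, widely known Easter egg in the Python standard library:
          # importing "this" has no functional effect on a program; it just
          # prints "The Zen of Python", an in-joke shipped with the
          # interpreter. (Its sibling gag, "import antigravity", opens an
          # xkcd comic in the browser.)
          import this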

      Hacking is not the only crafting endeavor which straddles this line between collectivism and individualism, between tradition and craftiness; the tensions between these poles are apparent among academics, who depend upon convention and the referenced work of peers while simultaneously striving to advance clever, novel, counter-intuitive arguments and win individual recognition. Craftspeople who build and maintain technologies must be similarly enterprising, especially when improvising a fix for something like an old engine or obsolete photocopying machine.[22] Indeed, the craft-vocation of the security hacker requires what we might describe as intellectual guile. One security researcher described the mentality: “You have to, like, have an innate understanding that [a security measure is] arbitrary, it’s an arbitrary mechanism that does something that’s unnatural and therefore can be circumvented in all likelihood.” Craftiness, then, can be seen as thinking outside the box, or the circumvention of inherent technological limitations in pursuit of craft. But we can also understand craftiness as exceeding mere instrumentality. Among hackers, the performance of this functional aspect becomes an aesthetic pursuit, a thing valued in and of itself.

      The Power and Politics of Hacking

      The interplay between craft and craftiness can thus be treated as something of a hacking universal. But it would be wrong to claim that these two attributes are alone capable of sparking political awareness or activism, or even that all hacking qualifies as political, much less politically progressive. Indeed, for a fuller accounting of the politics of hacking it is necessary to consider the variable cultures and ethics of hacking which underwrite craft and craftiness. Hacker political interventions must also be historically situated, in light of regional differences,[23] notable “critical events”[24] (like the release of diplomatic cables by the whistleblowing hacker organization WikiLeaks), and the broader socio-economic conditions which frame the labor of hacking.[25]

      Indeed, there is little doubt that commercial opportunities fundamentally shape and alter the ethical tenor and political possibilities of hacking. Many hacker sensibilities, projects, and products are motivated by, threatened by, or easily folded into corporate imperatives.[26] Take, for instance, the hacker commitment to autonomy. Technology giant Google, seeking to lure top talent, instituted the “20% policy.”[27] The company affords its engineers, many of whom value technical sovereignty as part of their ethos, the freedom to work one day a week on their own self-directed projects. And Google is not unique; the informal policy is found in a slew of Silicon Valley firms like Twitter, Facebook, Yahoo, and LinkedIn. Of course, critics rightly charge that this so-called freedom simply translates into even longer and more grueling work weeks. Corporations advertise and institutionalize “hackathons” as a way to capitalize on the feel-good mythology of the hacker freedom fighter, all while reaping the fruits of the labor performed therein. In high-tech Chinese cities like Shanghai, where hacker spaces are currently mushrooming, ethics of openness have been harnessed to bolster entrepreneurial goals beyond those of any individual or unaffiliated collective.[28]

      It is nevertheless remarkable that hackers, so deeply entwined in the economy, have managed to preserve pockets of meaningful social autonomy and have frequently instigated or catalyzed political change. They do so through diverse tactical modalities that stretch from policy reform to the fomenting of digital direct action.[29] If the past five years are any indication, this is a trend which we can expect to grow. What, then, are the sociological and historical conditions that have helped secure and sustain this vibrant sphere of hacker-led political action, especially in light of the economic privilege hackers enjoy?

      Part of the answer lies in craft and the “workshops,” like IRC, mailing lists, and maker spaces, where hackers collectively labor. Taken together they constitute what anthropologist Chris Kelty defines as a recursive public: “a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives”[30] (emphasis my own). What Kelty highlights with his theory of recursive publics is not so much its politics but its power, a point also extended in a different manner by McKenzie Wark in A Hacker Manifesto.[31] Hackers hold the knowledge, and thus the power, to build and maintain the technological spaces that are partly, or fully, independent from the institutions where they work. These spaces are where they labor, but also the locales where hacker identities are forged and communities emerge to discuss values deemed essential to the practice of their craft.

      Taken from another disciplinary vantage point, these spaces qualify as what sociologists of social movements call “free spaces,” historically identified in radical book shops, bars, block clubs, tenant associations, and the like. Generally these are “settings within a community or movement that are removed from the direct control of dominant groups, are voluntarily participated in, and generate the cultural challenge that precedes or accompanies political mobilization.”[32] The vibrancy of hacker politics is contingent on the geeky varieties of such free spaces.

      It is important to emphasize, however, that recursive publics and free spaces do not, in and of themselves, guarantee the emergence of hacker political sensibilities; they remain, nevertheless, vital stage settings for the possibility of activism. Regional differences also figure prominently. For instance, much hacker-based political activism emanates from Europe. Compared to their North American counterparts (especially those in the United States), European hackers tend to tout their political commitments in easily recognizable ways, often aligning themselves with explicitly political hacker groups and spaces.[33] The continent boasts dozens of autonomous, anti-capitalist technology collectives, from Spain to Croatia, and has a developed activist practice which fuses art with hacking.[34] One of the oldest collectives, the German-based Chaos Computer Club (established in 1984), has worked to shape technology policy in dialogue with government for over a decade.[35] A great majority of the participants populating the insurgent protest ensemble Anonymous are European.[36] Perhaps most tellingly, the first robust, formalized geek political organization, the Pirate Party, was founded in Sweden.[37]

      Not all hackers seek to promote social transformation, however. But we can nevertheless consider how many of their legal and technical artifacts catalyze enduring and pervasive political changes regardless of intent. Craft autonomy figures heavily in this unexpected dynamic, one which can be observed, perhaps most clearly, in the production of Free and Open Source Software (F/OSS). Productive autonomy and access to the underlying structures of code are enshrined values in this community, and politics seems to be a natural outcome of such commitments. Irrespective of personal motivation or a project’s stated political position, F/OSS has functioned as a sort of icon, a living example from which other actors in fields like law, journalism, and education have made cases for open access. To give but one example, Free Software licensing directly inspired the chartering of the Creative Commons non-profit, which has developed a suite of open access licenses for modes of cultural production extending far beyond the purview of hacking.[38] Additionally, F/OSS practices have enabled radical thinkers and activists to showcase and advocate the vitality, persistence, and possibility of non-alienated labor.[39]

      Like F/OSS hackers, those in the underground also strive for and enact craft autonomy with interesting political effects, though here autonomy is understood and enacted differently. Often referred to as blackhats, these hackers pursue forbidden knowledge. While often lured by the thrills offered by subversion and transgression alone, their acts also serve pedagogical purposes, and many have emerged from this illegal underground into the realm of respected security research. Their hands-on experience locating vulnerabilities and sleuthing systems is easily transferable into efforts to fortify, rather than penetrate, technical systems. Predictably, the establishment of a profitable security industry is seen by some underground hackers as a threat to their autonomy: some critics deride their fellow hackers for selling out to the man.[40] A much larger number have no problem with the aim of securitization per se, but nevertheless chastise those attracted to the field by lucrative salaries rather than a passionate allegiance to quality. In one piece declaring the death of the hacker underground, a hacker bemoans: “unfortunately, fewer and fewer people are willing, or indeed capable of following this path, of pursuing that ever-unattainable goal of technical perfection. Instead, the current trend is to pursue the lowest common denominator, to do the least amount of work to gain the most fame, respect or money.”[41]

      A major, and perhaps unsurprising, motivator of hacker politicization comes in the wake of state intervention. The most potent periods of hacker politicization (at least in the American context) are undoubtedly those following arrests of underground hackers like Craig Neidorf[42] or Kevin Mitnick.[43] The criminalization of software can also do the trick; hacker-cryptographer Phil Zimmermann broke numerous munitions and intellectual property laws when he released PGP (Pretty Good Privacy) encryption to the world, a fact governments did not fail to notice or act upon.[44] But this act of civil disobedience helped engender the now firmly established hacker notion that software deserves free speech protections.[45]

      In many such instances, the pushback against criminalization spills beyond hacker concerns, engaging questions of civil liberties more generally. Activists outside the hacker discipline are inevitably drawn in, and the political language they deploy creates a sort of positive feedback loop for the hackers initially activated. We saw this precise pattern with the release and attempted suppression of DeCSS, a short program which could be used to circumvent copy and regional access controls on DVDs. In the United States, hackers who shared or published this code were sued under the Digital Millennium Copyright Act, and its author was subsequently arrested in Norway. State criminalization led to a surge of protest activity among hackers across Europe and North America as they insisted upon free speech rights to write and release code, indisputably cementing the association between free speech and code. As alliances were forged with civil liberties groups, lawyers, and librarians, what is now popularly known as the “digital rights movement” was more fully constituted.[46]


      ENDNOTES

      1. Levy, Steven. Hackers. Sebastopol, CA: O’Reilly Media, 2010, p. 10.

      2. DiSalvo, Carl and Melissa Gregg. “The Trouble With White Hats.” The New Inquiry, November 21, 2013.

      3. Beyer, Jessica L. Expect Us: Online Communities and Political Mobilization. Oxford; New York: Oxford University Press, 2014; Jordan, Tim, and Paul Taylor. Hacktivism and Cyberwars: Rebels with a Cause?. Routledge, 2004; Coleman, Gabriella. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London; Brooklyn, NY: Verso, 2014; Sauter, Molly. The Coming Swarm: DDOS Actions, Hacktivism, and Civil Disobedience on the Internet. New York: Bloomsbury Academic, 2014.

      4. Alper, Meryl. “‘Can Our Kids Hack It With Computers?’ Constructing Youth Hackers in Family Computing Magazines.” International Journal of Communication 8 (2014): 673-98.

      5. For a history of phone phreaking see Lapsley, Phil. Exploding the Phone: The Untold Story of the Teenagers and Outlaws Who Hacked Ma Bell, 2013; for a history of the first coordinated state crackdowns against American black hats see Sterling, Bruce. The Hacker Crackdown: Law and Disorder on the Electronic Frontier. New York: Bantam Books, 1992. The history of the intersection between hacking and cryptography has been written by Greenberg, Andy. This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World’s Information. New York: Dutton Adult, 2012. Finally, for the classic account of the birth of university-based hacking and early hardware hacking, see Steven Levy, ibid. For academic accounts also see: Jordan, Tim, and Paul Taylor. Hacktivism and Cyberwars: Rebels with a Cause?. Routledge, 2004; Thomas, Douglas. Hacker Culture. Minneapolis: University of Minnesota Press, 2002; Wark, McKenzie. A Hacker Manifesto. Cambridge, MA: Harvard University Press, 2004; Kelty, Christopher M. Two Bits: The Cultural Significance of Free Software. Durham: Duke University Press, 2008; and Coleman, Gabriella. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press, 2013.

      6. Coleman, Gabriella. “The Hacker Conference: A Ritual Condensation and Celebration of a Lifeworld.” Anthropological Quarterly 83, no. 1 (2010): 47-72.

      7. Borsook, Paulina. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. PublicAffairs, 2000.

      8. This is but one of many examples in which civil liberties commitments are equated with libertarianism: Schulte, Stephanie, and Bret Schulte. “Muckraking in the Digital Age: Hacker Journalism and Cyber Activism in Legacy Media.” NMEDIAC, The Journal of New Media and Culture 9, no. 1 (February 25, 2014).

      9. Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago Press, 2006.

      10. Barbrook, Richard, and Andy Cameron. “The Californian Ideology.” Science as Culture, no. 26 (1996): 44-72.

      11. Marwick, Alice E. Status Update: Celebrity, Publicity, and Branding in the Social Media Age. Yale University Press, 2013 and Morozov, Evgeny. To Save Everything, Click Here: The Folly of Technological Solutionism. PublicAffairs, 2013.

      12. Jordan, Tim. Hacking: Digital Media and Technological Determinism. Cambridge, UK; Malden, MA: Polity Press, 2008; Coleman, Gabriella, and Alex Golub. “Hacker Practice.” Anthropological Theory 8, no. 3 (2008): 255-77.

      13. Lapsley, Ibid.

      14. Berry, David. Copy, Rip, Burn: The Politics of Copyleft and Open Source. London: Pluto Press, 2008.

      15. Greenberg, ibid.

      16. Sennett, Richard. The Craftsman. New Haven: Yale University Press, 2009.

      17. Hannemyr, Gisle. “Technology and Pleasure: Considering Hacking Constructive.” First Monday 4, no. 2 (February 1, 1999).

      18. For an in-depth account of how these spaces function pedagogically, see Schrock, Andrew Richard. “‘Education in Disguise’: Culture of a Hacker and Maker Space.” InterActions: UCLA Journal of Education and Information Studies 10, no. 1 (January 1, 2014).

      19. O’Neil, Mathieu. Cyberchiefs: Autonomy and Authority in Online Tribes. London; New York: Pluto Press, 2009; Coleman, Gabriella. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press, 2013.

      20. Montfort, Nick. “Obfuscated Code.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, Mass.: MIT Press, 2008.

      21. See “Codes of Value” section in Coleman, ibid; and also Goriunova, Olga, ed. Fun and Software: Exploring Pleasure, Paradox and Pain in Computing. New York: Bloomsbury Academic, 2014.

      22. Orr, Julian E. Talking about Machines: An Ethnography of a Modern Job. Ithaca, N.Y: ILR Press, 1996.

      23. See, for example, Takhteyev, Yuri. Coding Places: Software Practice in a South American City. Cambridge, Mass: The MIT Press, 2012; and Chan, Anita Say. Networking Peripheries: Technological Futures and the Myth of Digital Universalism. Cambridge, Mass: The MIT Press, 2014.

      24. Sewell Jr., William H. Logics of History: Social Theory and Social Transformation. Chicago: University Of Chicago Press, 2005.

      25. Wark, ibid.

      26. Delfanti, Alessandro and Johan Soderberg. “Hacking Hacked! The Life Cycles of Digital Innovation.” Science, Technology and Human Values, Forthcoming.

      27. Tate, Ryan. “Google Couldn’t Kill 20 Percent Time Even If It Wanted To.” Wired, August 21, 2013.

      28. Lindtner, Silvia, and David Li. “Created in China.” Interactions 19, no. 6 (2012): 18.

      29. [LEFT BLANK] ****

      30. Kelty, ibid.

      31. Wark, ibid.

      32. Polletta, F. “‘Free Spaces’ in Collective Action.” Theory and Society 28, no. 1 (1999): 1-38.

      33. Bazzichelli, Tatiana. Networked Disruption: Rethinking Oppositions in Art, Hacktivism and the Business of Social Networking. Aarhus N, Denmark: Aarhus Universitet Multimedieuddannelsen, 2013.

      34. Maxigas. “Hacklabs and Hackerspaces – Tracing Two Genealogies.” Journal of Peer Production, no. 2. Accessed October 2, 2014.

      35. Kubitschko, Sebastian. “Hacking Authority.” Edited by Craig Calhoun and Richard Sennett. New York: NYU Press, Forthcoming.

      36. Coleman, Gabriella. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London; Brooklyn, NY: Verso, 2014.

      37. Burkart, Patrick. Pirate Politics: The New Information Policy Contests. The MIT Press, 2014.

      38. Coleman, Gabriella, and Mako Hill. “How Free Became Open and Everything Else Under the Sun.” M/C Journal 7, no. 3 (July 2004).

      39. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Books, 2005.

      40. Anonymous. “Lines in the Sand: Which Side Are You On in the Hacker Class War.” Phrack Inc. 0x0e, no. 0x44 (April 2012).

      41. Anonymous. “The Underground Myth.” Phrack Inc. 0x0c, no. 0x41 (November 2008).

      42. Sterling, Ibid.

      43. Thomas, Ibid.

      44. Levy, Steven. Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age. 1st edition. London: Penguin Books, 2001.

      45. Coleman, Gabriella. “Code Is Speech: Legal Tinkering, Expertise, and Protest among Free and Open Source Software Developers.” Cultural Anthropology 24, no. 3 (November 2, 2012): 420-54.

      46. Postigo, Hector. The Digital Rights Movement: The Role of Technology in Subverting Digital Copyright. Cambridge, Mass: The MIT Press, 2012.


      BIBLIOGRAPHY

      Alper, Meryl. “‘Can Our Kids Hack It With Computers?’ Constructing Youth Hackers in Family Computing Magazines.” International Journal of Communication 8 (2014): 673-98.

      Anonymous. “Lines in the Sand: Which Side Are You On in the Hacker Class War.” Phrack Inc. 0x0e, no. 0x44 (April 2012).

      Anonymous. “The Underground Myth.” Phrack Inc. 0x0c, no. 0x41 (November 2008).

      Barbrook, Richard, and Andy Cameron. “The Californian Ideology.” Science as Culture, no. 26 (1996): 44-72.

      Bazzichelli, Tatiana. Networked Disruption: Rethinking Oppositions in Art, Hacktivism and the Business of Social Networking. Aarhus N, Denmark: Aarhus Universitet Multimedieuddannelsen, 2013.

      Berry, David. Copy, Rip, Burn: The Politics of Copyleft and Open Source. London: Pluto Press, 2008.

      Beyer, Jessica L. Expect Us: Online Communities and Political Mobilization. Oxford; New York: Oxford University Press, 2014.

      Borsook, Paulina. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. PublicAffairs, 2000.

      Burkart, Patrick. Pirate Politics: The New Information Policy Contests. The MIT Press, 2014.

      Chan, Anita Say. Networking Peripheries: Technological Futures and the Myth of Digital Universalism. Cambridge, Mass: The MIT Press, 2014.

      Coleman, Gabriella, and Alex Golub. “Hacker Practice.” Anthropological Theory 8, no. 3 (2008): 255-77.

      Coleman, Gabriella. “Code Is Speech: Legal Tinkering, Expertise, and Protest among Free and Open Source Software Developers.” Cultural Anthropology 24, no. 3 (November 2, 2012): 420-54.

      Coleman, Gabriella. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London; Brooklyn, NY: Verso, 2014.

      Coleman, Gabriella. “The Hacker Conference: A Ritual Condensation and Celebration of a Lifeworld.” Anthropological Quarterly 83, no. 1 (2010): 47-72.

      Coleman, Gabriella. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press, 2013.

      Coleman, Gabriella, and Mako Hill. “How Free Became Open and Everything Else Under the Sun.” M/C Journal 7, no. 3 (July 2004).

      Delfanti, Alessandro and Johan Soderberg. “Hacking Hacked! The Life Cycles of Digital Innovation.” Science, Technology and Human Values, Forthcoming.

      DiSalvo, Carl and Melissa Gregg. “The Trouble With White Hats.” The New Inquiry, November 21, 2013. http://thenewinquiry.com/essays/the-trouble-with-white-hats/.

      Goriunova, Olga, ed. Fun and Software: Exploring Pleasure, Paradox and Pain in Computing. New York: Bloomsbury Academic, 2014.

      Greenberg, Andy. This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World’s Information. New York: Dutton Adult, 2012.

      Hannemyr, Gisle. “Technology and Pleasure: Considering Hacking Constructive.” First Monday 4, no. 2 (February 1, 1999).

      Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Books, 2005.

      Jordan, Tim. Hacking: Digital Media and Technological Determinism. Cambridge, UK; Malden, MA: Polity Press, 2008.

      Jordan, Tim, and Paul Taylor. Hacktivism and Cyberwars: Rebels with a Cause?. Routledge, 2004.

      Kelty, Christopher M. Two Bits: The Cultural Significance of Free Software. Durham: Duke University Press, 2008.

      Kubitschko, Sebastian. “Hacking Authority.” Edited by Craig Calhoun and Richard Sennett. New York: NYU Press, Forthcoming.

      Lapsley, Phil. Exploding the Phone: The Untold Story of the Teenagers and Outlaws Who Hacked Ma Bell, 2013.

      Levy, Steven. Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age. London: Penguin Books, 2001.

      Levy, Steven. Hackers. Sebastopol, CA: O’Reilly Media, 2010.

      Lindtner, Silvia, and David Li. “Created in China.” Interactions 19, no. 6 (2012): 18.

      Marwick, Alice E. Status Update: Celebrity, Publicity, and Branding in the Social Media Age. Yale University Press, 2013.

      Maxigas. “Hacklabs and Hackerspaces – Tracing Two Genealogies.” Journal of Peer Production, no. 2. Accessed October 2, 2014.

      Montfort, Nick. “Obfuscated Code.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, Mass.: MIT Press, 2008.

      Morozov, Evgeny. To Save Everything, Click Here: The Folly of Technological Solutionism. PublicAffairs, 2013.

      O’Neil, Mathieu. Cyberchiefs: Autonomy and Authority in Online Tribes. London; New York: Pluto Press, 2009.

      Orr, Julian E. Talking about Machines: An Ethnography of a Modern Job. Ithaca, N.Y: ILR Press, 1996.

      Polletta, F. “‘Free Spaces’ in Collective Action.” Theory and Society 28, no. 1 (1999): 1-38.

      Postigo, Hector. The Digital Rights Movement: The Role of Technology in Subverting Digital Copyright. Cambridge, Mass: The MIT Press, 2012.

      Sauter, Molly. The Coming Swarm: DDOS Actions, Hacktivism, and Civil Disobedience on the Internet. New York: Bloomsbury Academic, 2014.

      Schrock, Andrew Richard. “‘Education in Disguise’: Culture of a Hacker and Maker Space.” InterActions: UCLA Journal of Education and Information Studies 10, no. 1 (January 1, 2014). http://escholarship.org/uc/item/0js1n1qg.

      Schulte, Stephanie, and Bret Schulte. “Muckraking in the Digital Age: Hacker Journalism and Cyber Activism in Legacy Media.” NMEDIAC, The Journal Of New Media And Culture 9, no. 1 (February 25, 2014).

      Sennett, Richard. The Craftsman. New Haven: Yale University Press, 2009.

      Sewell Jr., William H. Logics of History: Social Theory and Social Transformation. Chicago: University Of Chicago Press, 2005.

      Sterling, Bruce. The Hacker Crackdown: Law and Disorder on the Electronic Frontier. New York: Bantam Books, 1992.

      Takhteyev, Yuri. Coding Places: Software Practice in a South American City. Cambridge, Mass: The MIT Press, 2012.

      Tate, Ryan. “Google Couldn’t Kill 20 Percent Time Even If It Wanted To.” Wired, August 21, 2013.

      Thomas, Douglas. Hacker Culture. Minneapolis: University of Minnesota Press, 2002.

      Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago Press, 2006.

      Wark, McKenzie. A Hacker Manifesto. Cambridge, MA: Harvard University Press, 2004.

      Contributed by Gabriella Coleman, Wolfe Chair in Scientific and Technological Literacy, Department of Art History & Communication Studies, McGill University


      Internet [draft] [#digitalkeywords] Sep 29, 2014

      “the blurriness in how we use “internet” has a history and a function: it has allowed the word to become a metonymy – a part that stands for the whole – for a complex, shifting, intertwined mix of institutions, technologies, and practices. In this it is similar to “the Church,” “the press,” “Hollywood,” or “television.” …This metonymic pattern is much more than a convenience. It is an assertion of power.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Internet — Tom Streeter, University of Vermont

      Introduction

      The “internet” has many meanings: hardware, software, protocols, institutional arrangements, practices, and social values. More often than not, which meaning we are using goes unspecified. Someone in a coffee shop might ask the laptop-wielding person next to them “are you getting internet?” when they mean “are you getting a wifi signal?” – which is actually a local, not internetwork, technology. The term “internet” is often used to refer to a host of different technologies, from non-TCP/IP systems of connection like local area networks and mobile phone data networks, to major “internet backbone” connections involving core routers, fiber optic long distance lines, and undersea cables. An “internet connected computer” can mean variously a computer running its own TCP/IP server with its own IP address, or simply any kind of gadget capable of sending some kind of data to and/or from global data networks. (A recent news clip referred to “an internet connected umbrella,” the handle of which glows when rain is expected, as if “the internet” were the distinguishing technology here rather than, say, the equally essential microchips or wireless technologies.[1])

      The range of meanings goes well beyond the technological. A recent headline read, “The 35 Writers Who Run the Literary Internet.”[2] This locution assumes that the internet is a separate space or forum apart from other kinds of discussions of literature, even though the community of literary reviewers and their readers actually spans outlets that vary both in terms of technology (print, digital) and economic organization (profit, non-profit, advertising supported, subscription, etc.). “Netroots,” a portmanteau of “internet” and “grassroots,” generally refers to progressive left-wing activists who use a mix of traditional and internet forms of political organizing; one does not talk about the Tea Party as a Netroots organization, though it also makes heavy use of the internet. The “internet” foregrounded in “netroots” is thus actually a modest part of a politically inflected whole.[3]

      My point here is not simply to denounce the vagueness with which we use the word.[4] Rather, the blurriness in how we use “internet” has a history and a function: it has allowed the word to become a metonymy – a part that stands for the whole – for a complex, shifting, intertwined mix of institutions, technologies, and practices. In this it is similar to “the Church,” “the press,” “Hollywood,” or “television.” In each case, the use of a part – a building, technology, geographical location, or box in our living room – stands for the whole whatever-it-is. This metonymic pattern is much more than a convenience. It is an assertion of power. It treats fluid, complex relationships as a self-evident thing, and thereby can cover up instabilities and contested elements within the institutions being considered. This reification, in turn, can help perpetuate, for better or worse, a specific set of social arrangements. The metonymy shapes the processes it purports to describe. Unpacking “the internet” as a keyword,[5] therefore, offers a window into both the history of the last thirty years and some key political issues of the present.

      Early History: an internet vs. the Internet

      The root word “network” itself has a history of multiple meanings, in the last century principally divided between an understanding of networks as webs of face-to-face contact without any necessary implication of technological mediation,[6] and networks as technological systems that materially interconnect individuals across distances, such as railroads or telephone systems.[7]

      The dual sociological and technological meanings of “network” served as a backdrop when the word internet emerged in the 1970s. From the beginning the term expressed some of the tensions and hopes involved in the intertwined problems of technological design and the organization of social relations. “Internetwork” appeared among computer engineers as shorthand for a network of networks or interconnected network. This was not just a technical problem. It was a social condition, namely that the first connections of computers across distance occurred in a context of private corporations which sold competing systems based on incompatible telecommunications standards. An internetwork was thus something intended to overcome the existing incompatibilities among computer systems from different firms and institutions. Soon shortened further to “internet,” it thus began life as a colloquial term for a particular kind of technological solution to an institutional (rather than purely technical) problem.[8]

      A 1977 technical document by Jon Postel, for example, opens,

      This memo suggests an approach to protocols used in internetwork systems. . . . The position taken here is that internetwork communication should be view [sic] as having two components: the hop by hop relaying of a message, and the end to end control of the conversation. This leads to a proposal for a hop by hop oriented internet protocol, an end to end oriented host level protocol, and the interface between them. . . . We are screwing up in our design of internet protocols by violating the principle of layering.[9]

      In this passage one can see not only the shift from “internetwork” to the shorter “internet,” but also a move from speaking of networks of networks in general (“internetwork systems”) towards speaking of the specific system being constructed (“our design of internet protocols”). Later in the memo this use of “internet” to refer to a specific system becomes even clearer: “An analogy may be drawn between the internet situation and the ARPANET.” In this last passage, “the internet” is clearly being used to refer to the specific system being designed at the time, and thus contrasted with its predecessor network of networks, the ARPANET.
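
      Postel’s two components survive in today’s protocol stack as IP (the hop-by-hop relaying of packets) and TCP (the end-to-end control of the conversation), with the socket as the interface between them. A minimal, illustrative sketch in Python of that division of labor follows; example.com is simply a stand-in host, not anything from Postel’s memo.

          # End-to-end versus hop-by-hop, in modern terms: this code opens
          # a TCP conversation with a remote host through the socket
          # interface. The hop-by-hop relaying of each packet (the IP
          # layer's job) happens entirely beneath it, invisible to the
          # application; this is the layering Postel advocates.
          import socket

          with socket.create_connection(("example.com", 80)) as conn:
              # We name only the two ends of the conversation (host, port);
              # no router or intermediate hop ever appears in this code.
              conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
              print(conn.recv(200).decode("ascii", errors="replace"))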

      In the next decade, a colloquial use of “internet” to refer to a specific institution under construction continued alongside other uses. (And more colloquialisms emerged during this time, such as the even further shortened “the Net.”) During this period, confusion between “internet” as a general principle vs. a specific system became of enough concern for engineers of the day to begin to capitalize the latter: an internet vs. the Internet.[10] But this use of “Internet” to refer to a specific system remained relatively colloquial through the 1980s. At a key moment in 1983, when the existing ARPANET was split into military and research-oriented halves, press reports described the military side as “Milnet” and the civilian side as “R&DNet.”[11] While “R&DNet” as a term never caught on, its direct descendant, funded by the National Science Foundation (NSF), was officially described as NSFNET through the 1980s. In May 1989, the Federal Research Internet Coordinating Committee released a “Program Plan for the National Research and Education Network”; in this instance the committee devoted to internetworking in general uses “Internet” in its self-description, but the proper noun, the specific thing being proposed, is called NREN.[12]

      For the next few years, “Internet” remained an insider’s colloquial term for one internetwork among many others, such as BITNET, BBS systems, USENET, etc. As late as December 1992, a famous exchange between Vice President-elect Al Gore and the CEO of AT&T, about whether or not government should be involved in the construction of nationwide computer networks, did not contain the word “internet.”[13] The first issue of Wired, released the following month, referred to the internet only occasionally in passing, largely as one instance of computer communication systems among others, not as the network of networks, not as the center of the “digital revolution” that the magazine was created to celebrate.[14] A May 1993 article in Newsweek about the future of computer networks did not mention the internet at all.[15]

      The metonymy consolidates, 1993-95

      All this changed between the fall of 1993 and late 1995, when the contemporary use of “internet” emerged explosively into broad usage, and “the Internet” went from being an internetwork to the network of networks. By early 1996, the remaining consumer computer communication systems from the 1980s, like CompuServe and Prodigy, were all selling themselves as means of access to the internet rather than the other way around; the U.S. Congress heavily revised its communications law for the first time in more than half a century in the ’96 Telecommunications Act; major corporations from the phone companies to Microsoft to the television networks were radically revamping core strategies to adapt to the internet; and television ads for Coke and Pepsi routinely displayed URLs.[16] The previously colloquial and unstable term became the fixed name of a global phenomenon.

      Though the word became fixed, the phenomenon it referred to was not. For example, the U.S. Telecommunications Act of 1996 was often said to be in part motivated by the rise of the internet, and it referred to “the Internet” several times, defining it rather circularly as “the international computer network of both Federal and non-Federal interoperable packet switched data networks.”[17] (X.25 networks, inherited from the 1970s and still in use at the time by banks and other large institutions, were international and packet-switched but were not what the ’96 Act was referring to.) The use of “the international computer network” instead of “an international computer network” thus indicates a referent that was assumed rather than precisely delineated.

      What changed in the 1992-96 period was not so much the technology or its reach, but the way it was imagined: the shared assumptions, ideas, and values invested in the term took on a new cast and intensity, which in turn shaped collective behavior. It is true that in 1996 there existed a system of material TCP/IP-based computer networking technologies of increasing effectiveness. But the number of nodes and users in that system had been growing exponentially for several years before 1992, when “internet” was a relatively obscure term, and by the end of 1996 the total number of users remained less than 1% of the world population, and less than 8% of the U.S. population.[18] By 1996 “the internet” was crystallized as a term, but it was not by any stretch an established central form of communication or means of doing business, and the specific wires and computer systems of which it was made would largely be replaced and transformed within a decade. The material technologies associated with the “internet” therefore were not by themselves as yet all that dominant or settled. The designs, hopes, and money that started flowing towards the thing called the internet in 1996 were based on future expectations, on a shared set of beliefs and visions, as much as on material facts. The Internet thus was as much a set of ideas and expectations as it was any specific object, yet the habit of referring to it as an object (the metonymy) played a major role in coagulating those ideas and expectations.

      Internet as social vision: interactivity, forum, telos

      So what did the term “internet” refer to, if it did not only refer to an existing technology? One connotation of the term was a particular experience of interactivity that was widely accessible and designed to be used in an unplanned, playful, or exploratory way, rather than merely as a means to a known end.[19] Most occurrences of “Internet” in the ’96 Act are accompanied by the phrase “and other interactive computer services.”[20] While not made explicit, the “interaction” referred to here was not just any social interaction. In its sociological sense, talking on the telephone is an interaction, a bank official transmitting financial data via an X.25 network is an interaction, but these were already old hat and thus not what was being referred to. The “interaction” in question assumed a certain ease and immediacy, an unplanned kind of horizontal connection via connected computers, and wide availability and open access; fears about the latter with regard to children surfaced in the “Communications Decency” portion of the ’96 Act, which forbade pornography on the internet and was subsequently found unconstitutional.

      A second important connotation of the internet that emerged was a spatial metaphor, tied to an understanding of it as a kind of forum, rather than, say, as a conduit. In the syllabus of the 1997 decision that overturned the Communications Decency part of the ’96 Act, the U.S. Supreme Court defined the Internet as “an international network of interconnected computers that enables millions of people to communicate with one another in ‘cyberspace’ and to access vast amounts of information from around the world.”[21] (The term cyberspace occurs twice more in the Decision, without quotes.) Here, to articulate what the internet is, the U.S. Supreme Court casually adopts a spatial metaphor from science fiction (replacing the conduit-oriented “information superhighway” metaphor that dominated in the culture a few years earlier). This spatial metaphor helped ground the Court’s description of the internet as what it called “this new forum,” as a space within which citizens interact and deliberate, which thereby underwrote the Court’s judgment that the internet is worthy of stronger free speech protections than, say, broadcasters.

      During this period, the internet also came to be described as having a kind of agency, a force of its own or a teleology. The surprising way in which the internet emerged into broad public consciousness in this period is arguably due to a set of peculiar historical circumstances.[22] But those circumstances were eclipsed by the pressures of the time; it all just seemed to happen as if from nowhere. The resulting shared sense of surprise underwrote a habit of speaking as if it all came from some kind of force attributable to technology alone, without human agency or design. The first issue of Wired magazine, in January 1993, flamboyantly attributed to the “Digital Revolution” the disruptive force of “a Bengali Typhoon.” Over the course of 1993, as the internet came to broad public attention, the magazine began to make such attributions of agency directly to the internet, and thus, while not inventing the sense of internet-as-force, certainly contributed to its momentum. This all led to a proliferation of slogans such as “The Net interprets censorship as damage and routes around it,”[23] and a generalized sense that the internet, whatever it was, contained within it a set of inherent traits that had a causal force on society. An entire genre of punditry emerged that exploited the discursive possibilities of this sense of telos: speaking as though one had special insight into the mysterious internet suggested one had a unique insight into the future, imparting a kind of speaker’s benefit similar to the one Foucault pointed out accrues to experts on sexuality.[24]

      Much of this talk is now easily seen as hyperbolic: it is now routine for various governments to censor their internet, and the late 1990s claim that the internet had somehow suspended the laws of economics led to a record-setting stock bubble, with painful consequences when it collapsed. But the sense that the internet has a kind of social force of its own, separable from the intentions and social context of the individuals that construct and use it, persists to this day.

      What emerged at the end of the 1992-96 period, in sum, was a meaning of “internet” that unreflectively mixed a shifting set of technologies, protocols, and institutions with connotations of accessible exploratory interaction, a forum, and a sense that the whole “thing” had a teleological causal force. Because this was at a time when the actual systems that we now use were just beginning to be built out, the mix can be seen to have played a constitutive role in those systems’ creation.

      All these tendencies combine to give the word “internet” an outsized gravitational force in the description of any emerging social practice that has anything at all to do with computer networks. The sense of the internet possessing a kind of agency or telos in particular remains vivid in political and social debates. For example, contemporary net neutrality proponents proclaim “save the internet!” – which presumes that the internet, like a National Park or a species of animal, has a kind of natural state of openness, inherent in the internet itself. (It is only recently that less teleological arguments have been advanced, such as the argument that net neutrality would help uphold the values of democracy.) An assumption that the internet has a natural telos is also evident in the still common framing of internet trends as if they represented a natural unfolding rather than economic and social choices. The term “Web 2.0,” borrowing from the tradition of numbered software upgrades, carried with it a sense of an unstoppable progression. A current discourse about a coming “internet of things” similarly implies a kind of “next phase” logic of progression, while implying that the use of wireless and data technologies in home appliances has grand implications, rather than representing merely a continuation of the more than century-long trend in the automation of consumer durables.[25]

      Conclusion

In the future, the word internet might fall into disuse, and historians may wonder why this ill-defined thing called “the internet” received so much attention. Between 1990 and 2015, after all, the mobile phone and television both grew dramatically in reach and impact globally, and as of this writing each still has more users than the internet, however defined. Furthermore, platforms like Wikipedia, Facebook, and Netflix may come to be framed not as things “on the internet” but as the quite distinct institutions they are, with different social, economic, technological, and political implications. A future is conceivable in which an internet expert is no more intriguing than a plumbing expert.

      Yet future historians will not be able to explain the political, economic, or social histories of the 1990-2015 period without considering the impact of talk about “the internet.” Laws were passed, stock bubbles inflated and collapsed, political campaigns launched, and a host of influential and broadly shared expectations about politics, economics, and social life were shaped by the term internet and the sets of assumptions it carried with it. The word may be vague, but it has mattered nonetheless.

At this point in history, scholars should avoid referring to the internet as a self-evident, single object. But they should also pay explicit attention to the hopes, values, and struggles that have been embedded in both the term and the phenomena. The “internet” may not be the answer, but the questions the term raises are nonetheless crucial. The question of how society designs technologies while organizing social relations, implicit already in the first casual uses of the term in the 1970s, remains a crucial intellectual and political problem. The “internet” may not be the solution to the problem of democracy, but a democratic future will still need to consider, among other things, questions about technological systems of interconnection and the related political, legal, and economic questions. And finally, it is telling that one of the great technological triumphs of history was to a significant degree shaped by widely shared hopes and visions of democracy and horizontal interactivity, by desires for open fora. The internet may not be inherently democratic, but the fact that we have imagined it so, that we have invested it with widely shared hopes for democracy, deserves our attention.


      Footnotes

      1. Kristyn Ulanday, “The Internet of Things,” The New York Times, July 16, 2014, http://www.nytimes.com/video/garden/100000003003809/the-internet-of-things.html.

      2. http://flavorwire.com/467152/the-35-writers-who-run-the-literary-internet

3. It has become the norm to speak of information and events as on the internet: “I found it on the internet”; “I was arguing with someone on the internet”; “I looked it up on the internet”; “check on the internet.” While we say “I talked to her on the telephone,” we would not say “I found it on the telephone.” The telephone is not viewed as its own place so much as a tool to get in touch with specific individuals across space. Arguably, one could say the internet is more telephone-like: it is the conduit, whereas individual websites or platforms provide the conditions within which we are getting information, interacting with others, and so forth. “I saw it on Facebook” or “I looked it up on Wikipedia” are in that sense more accurate. Yet finding or doing something “on the internet,” as if it were a location rather than a conduit, remains an entirely common way of speaking. The locution is more like television, where we say “I saw it on television,” than cinema, where we are more likely to say “I saw it in a movie.”

      4. For a lively example of denunciation, see Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (PublicAffairs, 2013), 21.

      5. The internet is thus a keyword in two senses: the sense that “the problems of its meanings [are] inextricably bound up with the problems it [is] being used to discuss,” (Williams, 15) but also that its meanings are “primarily embedded in actual relationships, . . . . within the structures of particular social orders and the processes of social and historical change.” (Williams, 22).

      6. The tradition is usually said to have begun with Georg Simmel. See e.g., Mark S. Granovetter, “The Strength of Weak Ties,” American Journal of Sociology, 1973, 1360-80.

      7. E.g., NBC’s Red and Blue radio “networks” of the mid-1920s – though the U.S. 1927 Radio Act and subsequent legal documents referred not to networks but to “chain broadcasting,” putting more emphasis on the economic and contractual relationships than technological ones.

      8. The continued availability of purely social connotations of “network” and its derivatives, however, is evident in the title of the “Human Rights Internet,” appearing in 1981 or earlier, which was a clearinghouse for information about human rights abuses worldwide; to my knowledge it was organized entirely without the use or consideration of computers. See http://www.hri.ca/ or e.g., David Ziskind, “Labor Laws in the Vortex of Human Rights Protection,” Comp. Lab. L. 5 (1982): 131.

9. Jon Postel, “Comments on Internet Protocol and TCP,” IEN 2, August 15, 1977, http://www.rfc-editor.org/ien/ien2.txt.

      10. So, for example, in 1989, an IBM technical manual stated, “when written with a capital ‘I,’ the Internet refers to the worldwide set of interconnected networks. Hence, the Internet is an internet, but the reverse does not apply.” TCP/IP Tutorial and Technical Overview (ISBN 0-7384-2165-0), cited in “Capitalization of ‘Internet’,” Wikipedia, the Free Encyclopedia, June 30, 2014, http://en.wikipedia.org/w/index.php?title=Capitalization_of_%22Internet%22&oldid=614895170. A discussion list created in 1990 to discuss technical and institutional problems with the evolving system was called “Commercialization and Privatization of the Internet” (“com-priv” for short). In this title, the emphasis was already on the Internet, not an internet. Thomas Streeter, The Net Effect: Romanticism, Capitalism, and the Internet (NYU Press, 2011), 110.

      11. William J. Broad, “Pentagon Curbing Computer Access; Global Network Split in a Bid to Increase Its Security,” The New York Times, October 5, 1983.

      12. Streeter, The Net Effect, 107.

      13. “THE TRANSITION; Excerpts From Clinton’s Conference on State of the Economy,” New York Times, December 15, 1992, New York edition, sec. B.

      14. Using the “premiere issue” distributed as an iPad-only reissue in 2012 as a guide, only two out of seven feature articles mention the Internet at all, each case in the sense of a specific system alongside others, such as BBS’s, Britain’s JANET, and so forth.

      15. Jim Impoco, “Technology Titans Sound Off on the Digital Future,” U.S. News and World Report, May 3, 1993.

      16. Streeter, The Net Effect, 133-134.

      17. Most of the 106-page ’96 Act addresses well-established telecommunications systems, e.g., “the general duties of telecommunications carriers.” Federal Communications Commission and others, “Telecommunications Act of 1996,” Public Law 104, no. 104 (1996): 84.

      18. http://www.internetworldstats.com/emarketing.htm; Farhad Manjoo, “Jurassic Web,” Slate, February 24, 2009, http://www.slate.com/articles/technology/technology/2009/02/jurassic_web.html.

      19. The once-common term “information retrieval” captures the opposing sense of the use of online communication for a pre-planned purpose.

20. E.g., “The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance.” Telecommunications Act of 1996, 83.

21. Reno v. American Civil Liberties Union, 521 U.S. 844 (1997).

      22. Streeter, The Net Effect, 119-137.

      23. http://en.wikiquote.org/wiki/John_Gilmore

      24. One might rewrite what Foucault said about the repressive hypothesis by replacing references to sexuality with “internet revolution,” thusly:

      [T]here may be another reason that makes it so gratifying for us to define the relationship [between technology and society in terms of revolution]: something that one might call the speaker’s benefit. [If the internet is revolutionary], then the mere fact that one is speaking about it has the appearance of a deliberate transgression. A person who holds forth in such language places himself to a certain extent outside the reach of power; he upsets established law; he somehow anticipates the coming freedom. . . . [when we speak about the internet] we are conscious of defying established power, our tone of voice shows that we know we are being subversive, and we ardently conjure away the present and appeal to the future, whose day will be hastened by the contribution we believe we are making. Something that smacks of revolt, of promised freedom, of the coming age of a different law, slips easily into this discourse.” History of Sexuality, Vol. 1, pp. 6-7.

      25. http://en.wikipedia.org/wiki/Internet_of_Things

-Contributed by Thomas Streeter, University of Vermont Department of Sociology-


      Surrogate [draft] [#digitalkeywords] Sep 25, 2014

      “There has been much theorization of the ways in which new media contain the old, but scholars involved in historicist criticism are increasingly making print simulacra into an effigy. Archives of digitized print materials do not pretend to replace the experience of the original but nonetheless promise, implicitly if not explicitly, a way of engaging with the attributes of the original objects to facilitate scholarly judgments about them.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Surrogate — Jeffrey Drouin, University of Tulsa

Historical scholarship in literary studies is increasingly dependent upon digital objects that stand in as substitutes for printed or manuscript material. The operational features of digital surrogates often attempt to mimic the functionalities of codices and other material formats (ostensibly to reproduce the experience of handling the originals) while taking advantage of the vastly different cognitive and representational possibilities afforded by the new medium. There has been much theorization of the ways in which new media contain the old, but scholars involved in historicist criticism are increasingly making print simulacra into an effigy. Archives of digitized print materials do not pretend to replace the experience of the original but nonetheless promise, implicitly if not explicitly, a way of engaging with the attributes of the original objects to facilitate scholarly judgments about them. Thus digitized editions embody the ecclesiastical origins of the surrogate (“[a] person appointed by authority to act in place of another; a deputy,” usually standing in for a bishop) and its related concepts that impinge upon scholarly and institutional authority. When the concept of office as the symbol of an ultimate power is transferred to the realm of text, a digital edition which duplicates a print or manuscript document comes not only to embody but also to symbolize the power inherent in the original it stands in for. This paper will examine the digital surrogate as an effigy: an image taking the place of an original that is simultaneously worshipped and desecrated in the act of interpretation.

A digital edition is a surrogate in that it stands in for and takes the place of a print original. We gain many practical benefits from using digital surrogates in literary scholarship: they protect fragile originals when a copy would suffice, increase access to rare materials, and render such documents searchable and interoperable with other networked resources. Libraries have been major proponents of digital surrogates, which have long been touted by digital humanists, archivists, and special collections departments. Digital surrogates have also become levelers of class inequalities among researchers, allowing access to those who cannot afford to travel to the archives that house the often rare originals. As digital humanities has flourished as a field over the past decade or so, the searchability and interoperability of digital texts through the TEI encoding guidelines and Dublin Core metadata standards have expanded the usefulness of digital surrogates in making large gestures about literary history, especially when they form the basis of large datasets (much larger than can be processed by scholars individually or in aggregate) that facilitate corpus analysis. There is no denying the innovative possibilities that accrue from corpora of digitized documents. However, when the move toward corpus-level analysis entails inferences about texts in the aggregate, we necessarily ignore the individual works that make up the corpus, at least to some degree. Each work says something from a particular point of view, so how can we be sure that our corpus-level inferences are accurate? Is the singular text lost in the move toward searchability? Is it possible to develop a methodology that synthesizes search-based queries and the uniqueness of the underlying texts? When using digital methods upon a digitized text, are we really studying the object? And, if we attempt to compensate for the blind spots of large-scale analysis by selecting individual works from the digital corpus, are we adequately filling in the gaps?

While a digital edition offers built-in functionalities and research possibilities unavailable in a printed object, the interface also erases many physical traits of the original, such as size, weight, paper quality, and ink saturation, all of which are crucial in matters of historical, technical, and bibliographic analysis. For instance, The Modernist Journals Project (MJP) features an edition of BLAST, an important avant-garde magazine from 100 years ago known for its radical experiments in typography and poetics. Even though the MJP offers high-fidelity scans of the original pages, the physical impact of the magazine is lost in translation. The bibliographic information supplied on the landing page of the digital edition indicates that the 212 pages of the first issue (June 1914) are 30.5 cm long and 24.8 cm wide (roughly 12 by 10 inches). A reader could use a ruler or tape measure as a visual aid in comprehending the size, since it will almost certainly be smaller on a screen. Yet in no way does the comprehension of measurements equal the aesthetic apprehension of seeing, holding, and smelling a codex that is roughly the area of a small poster, which is twice as wide when opened up, and whose thick paper renders it roughly 6.35 cm (2.5 inches) deep, weighing around 1 kg (2.25 lbs), and supporting the heavily saturated black block letters that often stand over 2.5 cm (1 inch) tall on the page as if they are autonomous objects.
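For anyone who wants to double-check the arithmetic packed into that paragraph, here is a quick, optional sanity check in Python; a minimal sketch, using only the figures quoted above, with any rounding my own:

    # Sanity check of the BLAST measurements quoted above.
    # Standard conversions: 1 inch = 2.54 cm; 1 kg is about 2.2046 lb.
    CM_PER_IN = 2.54
    LB_PER_KG = 2.2046

    height_cm, width_cm = 30.5, 24.8  # a single page of BLAST no. 1
    print(height_cm / CM_PER_IN, width_cm / CM_PER_IN)  # ~12.0 x 9.8 inches

    spread_cm2 = height_cm * (2 * width_cm)  # two pages opened flat
    print(spread_cm2, spread_cm2 / CM_PER_IN ** 2)  # ~1512.8 cm2, ~234.5 sq in

    print(1.0 * LB_PER_KG)  # a 1 kg codex is about 2.2 lb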

      The physical experience of reading BLAST necessarily contributes to the interpretation of its content, since such a solid, impactful object is diametrically opposed to the ephemerality normally expected of magazines: it is a Vorticist manifesto attempting to break art and literature aesthetically, morally, and physically: to “be an avenue for all those vivid and violent ideas that could reach the Public in no other way” by bringing “to the surface a laugh like a bomb” (“Long Live” 7, “MANIFESTO” 31).

      Indeed, the kinetic typography that often spans juxtaposed pages produces a visual effect whose immensity corroborates its revolutionary assertions.

[Image: screenshot of two juxtaposed pages from BLAST no. 1]

This image presents a digital imitation of two juxtaposed pages from BLAST that demonstrate the interplay of typography and ideology. The series of “Blasts” and “Blesses” comprising this section of the manifesto take aim at the passé while asserting an English art that is nationalist in temper. Throughout most of modern history, English artists and writers looked up to their French colleagues as being more advanced. Here, however, the attacks upon French culture by the magazine’s “Primitive Mercenaries in the Modern World” (“MANIFESTO” 30) seek to create a new space for English art that far surpasses its rival. A key tactic in surpassing the French is to embrace the opposing energies of an explosion: “We fight first on one side, then on the other, but always for the SAME cause, which is neither side or both sides and ours” (30). Hence, Vorticism, taking its cue from the vortex or whirlpool (as well as adolescence), deliberately embodies opposing forces at their point of greatest concentration, which is simultaneously their point of cancellation. These position statements explain the typographical interplay of absence and presence on the magazine’s pages, where bullet lists occupy the left or right side of a page while the other side remains blank (taking a cue from commercial advertising), or where there seems to be a diagonal line separating absence and presence across two juxtaposed pages, as in the screenshot above. The reader can only fully appreciate the amount of energy required to embody these principles while situated before the text arrayed across an area of roughly 1513 square centimeters (235 square inches) plunked solidly upon a table.

      I must admit that the image above is not part of the MJP. In my quest to view digital pages of BLAST juxtaposed as they are in the original, I submitted a PDF version of the magazine to FlipSnack so that I could behold its glory onscreen (or at least the first sixteen pages of it that are made available to those too pathetic to pay for the service) and embed it on a teaching blog: BLAST no. 1 (June 1914); BLAST no. 2 (July 1915, “War Number”). It is also possible to achieve a similar result by viewing the MJP’s PDF in Adobe Acrobat, using Two Page View with the selected option to Show Cover Page so that the left-right orientation is correct. The codex-simulating view option is not yet available on the MJP website, which at present offers only PDF download, a single-page view option, and a tiled thumbnail overview of an entire issue. In other words, because of dissatisfaction with the lack of a codex-like viewer, I have created a simulacrum so as to approach the condition of the original.

The Oxford English Dictionary informs us that a simulacrum is a “material image, made as a representation of some deity, person, or thing”; it possesses “merely the form or appearance of a certain thing, without possessing its substance or proper qualities”; it is “a mere image, a specious imitation or likeness, of something.” In other words, it fulfills the role of surrogate as a substitute deputed by authority, yet lacks the true substance of that for which it stands. The association of a simulacrum with a deity (and its inherent inadequacy) seems apt in the light of the digital BLAST and electronic editing and scholarship in general. One of the built-in goals of the simulacrum is to return to some originary state, “to see the thing as in itself it really is” or was, to paraphrase Matthew Arnold. But in translating BLAST into the new medium, which cannot adequately duplicate the physical attributes inherent to its meaning, are we not moving the reader further from that originary state?

We are in effect creating an effigy: a likeness, portrait, or image that lacks the true character of the original yet stands in for our pursuit of it. Like the other terms in this conceptual cluster, surrogate and simulacrum, effigy bears the undertones of a symbol of something holy to be revered, as well as the substitute for something profane to be desecrated. It is telling that the various definitions of effigy relate both to ecclesiastical and judicial terminology. In that light, my hasty decision to feed BLAST to FlipSnack in effigy betrays an attempt to incarcerate the Original: “fig. … to inflict upon an image the semblance of the punishment which the original is considered to have deserved; formerly done by way of carrying out a judicial sentence on a criminal who had escaped.”

Lest these ramblings be misconstrued as a Proustian obsession with “The Sweet Cheat Gone,” we must ask whether it is illusory to demand total knowledge of our Albertine. In hunting the fugitive original (whether an object, a contextual state, or something else), is the data aspect of the digital fundamentally separate from the object from which it derives? I do not seek to answer that question within the scope of this draft, and will leave it for further development following our conversations. However, regardless of what that answer might be, the question subsequently arises as to whether there is a digital materiality and, if so, how it might work in this line of inquiry. Already the digital surrogate-cum-effigy seems to approach the character of the fetish: “a means of enchantment… or superstitious dread”; “an inanimate object worshipped by preliterate peoples on account of its supposed inherent magical powers, or as being animated by a spirit”; “something irrationally reverenced.”


      Bibliography

      Lewis, Wyndham. “Long Live the Vortex!” BLAST 1:1 (June 1914): 7-8.

      —. “MANIFESTO.” BLAST 1:1 (June 1914): 30-43.

-Contributed by Jeffrey Drouin, University of Tulsa-


      How to Give Up the I-Word, Pt. 2 Sep 23, 2014

      This is the second part of a two-part essay, which I originally presented at conferences in the spring of 2014. The first part is available here. The full version of the essay, which I’m happy to share with anyone interested, included a section on the place of innovation speak in the academic sub-discipline of business history.

      Innovation as the Self-Image of an Age

In the last section, I examined some general drivers of the rise of innovation speak. In this section, I would like to narrow my analysis to focus on how a specific sector responded to these trends. Business schools have played a significant part in promulgating talk of innovation, both within academia and in popular discourse. During the last half of the twentieth century, business schools increasingly became core institutions of cultural production. Business professors often aspire not only to produce works, like case studies, that will be read and used in academic settings but also to create products, whether books or articles or consultancies, that will have broader appeal. In this context, the trend toward innovation speak can be seen easily enough by tracing the publications of individual writers based at business schools. Michael E. Porter, a professor at Harvard Business School and the dean of competitive strategy analysis, used the word innovation 46 times in his 1980 book, Competitive Strategy, but 123 times in his 1998 book, On Competition.
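Counts of this sort are easy to reproduce for any text one has in machine-readable form. A minimal Python sketch follows; the file name is a placeholder for whatever plain-text copy of a book you actually have on hand:

    import re

    def count_word(path: str, stem: str = "innovation") -> int:
        """Count occurrences of a word (and its plural) in a plain-text file."""
        with open(path, encoding="utf-8") as f:
            text = f.read().lower()
        # \b word boundaries match "innovation"/"innovations" but not "innovative"
        return len(re.findall(rf"\b{stem}s?\b", text))

    print(count_word("competitive_strategy_1980.txt"))  # hypothetical file name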

One person whose work nicely illustrates a relationship to the drivers mentioned in the previous section is William J. Abernathy, like Porter a professor at Harvard Business School. Abernathy’s 1978 book, The Productivity Dilemma: Roadblock to Innovation in the Automobile Industry, was written at a time when the automakers were suffering from a well-known decline. Chrysler, especially, was in bad financial shape and would be bailed out a year later. In The Productivity Dilemma, Abernathy examined the trade-offs of adopting rigid but highly effective production techniques: while such techniques yielded productivity gains in the short term, their adoption made it difficult, if not impossible, to internalize new innovations. Abernathy made a distinction between incremental and radical forms of innovation, arguing that the latter tended to interrupt settled production. Thus, we have our dilemma. How do we balance innovation and productivity? The historical context for The Productivity Dilemma—falling profitability in the auto industry and a general sense of industrial degeneration—was hardly mentioned in the book and acted only as the backdrop. The same silence about current events would not be true of Abernathy’s subsequent works.

In 1980, Abernathy published a co-authored essay, titled “Managing Our Way to Economic Decline,” in the eminent Harvard Business Review. Abernathy and his co-author, Robert H. Hayes, argued against the assertions of supply-side economics that declining productivity in the U.S. was the result of high taxes, energy crises, and too much regulation. The authors instead put the blame on shifting managerial experiences and priorities, including a lack of “hands-on” knowledge and a preference for “short-term cost reduction rather than long-term development of technological competitiveness.” Later in the essay, the authors characterized this latter priority as a choice between “Imitative vs. Innovative Product Design.” By casting the economic decline of the 1970s as a problem of management, Abernathy and Hayes described the problem as something that could be solved through a change in mindset or an acquisition of fresh knowledge. The framing encouraged scholars to find and communicate lessons for how to foster innovation. Their analysis also implicitly built on long traditions in the West, contained in volumes like Benjamin Franklin’s Autobiography and Poor Richard’s Almanack, that condemned the pursuit of short-term gain, overly attentive to the opinions of the market, at the expense of wise long-term growth.

In 1983, along with Kim B. Clark, another professor at Harvard Business School, and Alan M. Kantrow, an associate editor of the Harvard Business Review, Abernathy published Industrial Renaissance: Producing a Competitive Future for America. In a sense, the book brought together Abernathy’s previous insights, including the “productivity dilemma” and the problem of managers overly focused on short-term profits, with a new theme, namely the puzzle and threat of Japanese productivity, especially in the auto industry. The lesson for Clark et al. was clear. As one summary of the book puts it, “Examines the failure of American companies to compete under conditions produced by new technologies.” Abernathy and his co-authors described technologies that vastly changed conditions as “disruptive.” Abernathy no doubt knew about Schumpeter, though he did not spend much time in his writings meditating on Schumpeter’s thought. Yet, once Schumpeter was re-discovered by others in the 1980s (which I will discuss in a moment), the basic ideas of Industrial Renaissance would be recast in Schumpeterian terms: the “gale of creative destruction.” Abernathy contracted cancer in 1979 and died in 1983, the year Industrial Renaissance was published, at the age of only fifty. His untimely death cut short a brilliant career. In the context of this essay, it is hard to resist counterfactual questions. If Abernathy had lived, how would his subsequent work have fit into the innovation speak that followed? Would he have continued down the path he laid? Or would he have eventually turned down some other road?

The academic focus on Japan continued after Abernathy’s death. Cambridge, Mass., both at Harvard Business School and at MIT, was home to many studies of Japanese production techniques. The most famous product to come from these studies was The Machine that Changed the World by James P. Womack, Daniel T. Jones, and Daniel Roos. The book bore the subtitle “Toyota’s Secret Weapon in the Global Car Wars That Is Now Revolutionizing World Industry.” Like many pop business books, it hinted at forms of secret knowledge and promised to initiate the reader into its ways. The covers of the first edition of the book proclaimed, “Based on The Massachusetts Institute of Technology 5-Million-Dollar 5-Year Study on the Future of the Automobile.” Womack and Jones followed up this book with Lean Thinking: Banish Waste and Create Wealth in Your Corporation, an even more explicitly pop business book. In 1997, they founded a consultancy, Lean Enterprise Institute, Inc. (“Compared with traditional ‘think’ tanks, we are a ‘do’ tank.”) The team behind The Machine that Changed the World coined the term “lean production” to describe Japanese production, and that term has taken on a life of its own. Even though we are only a little over twenty years on from the term’s coining, it is easy enough to make out its ideological baggage: worries about national competitiveness and the coming Japanese hordes (of businessmen), long-held cultural taboos against waste, and—in the case of Lean Thinking—words, notions, and desires that could just as comfortably fit a fad diet book.

      US preoccupation with the international scene went well beyond Japan, no doubt partly due to the discourse of globalization that was also ascendant during this period. Perhaps the best example of this phenomenon was the academic concept of “national innovation systems.” The history of innovation speak is properly an international and transnational history, though this essay cannot yet aspire to that level of completeness. Like industrial policy before it, the idea of national innovation systems was a product of European intellectuals. Christopher Freeman and Bengt-Åke Lundvall began using the term in the mid-to-late-1980s, and Lundvall published an edited volume on the topic in 1992. Again, this work arose, in part, from an effort to explain Japan’s economic boom, and, again, it was focused on the role of technological change in economic growth and international economic competitiveness. Perhaps the primary contribution of the innovation systems literature was to move the discussion beyond an emphasis on individuals, whether managers or entrepreneurs, to an examination of institutions that fostered certain kinds of activity. The early literature on entrepreneurs especially concentrated on the psychological and characterological makeup of risk takers. Put bluntly, the work on innovation systems made the field more social scientific.

Attention to place went well beyond the national level to “regional innovation systems” and “innovation clusters.” In some ways, the roots of this thinking lay in an awareness of economic growth in Silicon Valley. By the early 1980s, books on Silicon Valley were beginning to hit the market. But by the mid-1980s, Silicon Valley had become a model—perhaps the model—community with supposed implications for how other places should shape their policies. Books, such as Roger-Emile Miller and Marcel Cote’s Growing the Next Silicon Valley: A Guide for Successful Regional Planning (1987), tried to impart the teachings of the place. The notion of “innovative” locales was further valorized and given a stamp of legitimacy when Michael Porter published his 1990 book, The Competitive Advantage of Nations, which focused on the role of regional “clusters” in fostering economic growth. The locality theme was heightened in Richard Florida’s The Rise of the Creative Class (2002), a work that mentions “innovation” over ninety times while heavily idealizing Silicon Valley. And in general, studies of economic geography, often looking back to places like Detroit and Hartford, Connecticut, flourished during this period. This mode of thinking led to the popularity of certain policies, such as science parks, business “incubators,” and other forms of so-called “technology-based economic development” (TBED).

Attempts to learn from Silicon Valley have never really relented; nor apparently has the (book) market for such lessons. To give a small sampling: Success Secrets from Silicon Valley: How to Make Your Teams More Effective (No Matter What Business) (1998); Relentless Growth: How Silicon Valley Innovation Strategies Can Work in Your Business (1998); The Silicon Valley Boys and Their Valley of Dreams (1999); Understanding Silicon Valley: The Anatomy of an Entrepreneurial Region (2000); The Silicon Valley Edge: A Habitat for Innovation and Entrepreneurship (2000); Champions of Silicon Valley: Visionary Thinking from Today’s Technology Pioneers (2000); TechVenture: New Rules on Value and Profit from Silicon Valley (2001); Clusters of Creativity: Enduring Lessons on Innovation and Entrepreneurship from Silicon Valley and Europe’s Silicon Fen (2002); Once You’re Lucky, Twice You’re Good: The Rebirth of Silicon Valley and the Rise of Web 2.0 (2008); Secrets of Silicon Valley: What Everyone Else Can Learn from the Innovation Capital of the World (2013). Once again, just as in the case of Michael Crichton’s Japan-fear classic, Rising Sun, which was published around the same time that the Japanese economy faltered, it’s easy to see that a publishing boom on Silicon Valley came around the year 2000, just as the dot-com bubble was set to burst. It’s a lesson we should keep in mind today.

The late 1980s and early 1990s were also the moment of the “rediscovery” of Joseph Schumpeter. Again, this trend was international, and European, especially Scandinavian, scholars played an important part in it. Schumpeter’s model of entrepreneurship, innovation, and economic growth is now so well-known it hardly bears rehearsing. I will deal with his thought a bit more substantively in the next section, but here I just want to ask: Was the rediscovery of Schumpeter driven by new readers seeing his previously unrecognized genius? Or was it driven by ideology; that is, did Schumpeter’s ideas merely fit the self-image of the age of his rediscovery? The answer is no doubt both/and, but Schumpeter’s fans have insufficiently accounted for his ideological valences.

In conversation, some Schumpeterians have told me that the popular, ideological meanings of “innovation” can be held at bay by remaining true to Schumpeter’s original definition of the idea: that innovation is the successful exploitation of new ideas, that there are five basic types of innovation, that the entrepreneur is a special kind of actor in society, etc., etc. But almost all academic thought militates against the idea that any kind of definitional purity can be maintained in the face of the kinds of linguistic waves depicted (in the Ngrams) at the beginning of this essay. If you are using a buzzword during one of those waves, you are falling prey to a fad. What could be more obvious?

But Schumpeter’s thought is more than a mere fad, as cat videos and Bronies are mere fads; Schumpeter’s thought serves and glorifies particular interests. Schumpeter wrote a lullaby for the business class. Or, perhaps it was more a fairy tale, because there were some scary parts. You could be blown away by the gale of creative destruction. Or, maybe most of all, it was a myth, a hero’s tale, the “entrepreneur with a thousand faces.” A business historian friend put it to me like this: Schumpeter (at least the popular version of him) justifies American-style capitalism, which has forsaken hope of full employment, which sees jobs lost to “innovation” as natural and unavoidable, and which has taken technological novelty as its ultimate end.

[Image: A Google Ngram for the word “Schumpeterian” from 1800 to 2000]

Just like reflections on the “lessons” of Silicon Valley, neo-Schumpeterian thought has gone far beyond the ivory tower, most famously in Clayton M. Christensen’s The Innovator’s Dilemma: The Revolutionary Book that Will Change the Way You Do Business (1997). Christensen’s writings are the culmination of much of what I have described in this section: he is a professor at Harvard Business School who combined Abernathy’s ideas about radical innovation with a basic Schumpeterian vision. He put it into a neat package that could be sold to aspiring leaders out of airport bookstores. He has consulted, opened up the Innosight Institute, and calls himself a “Disruptive Innovation Expert.” Meanwhile, Christensen’s works have resulted in flocks of Silicon-Valley-brained college students, all hopped up on TED Talks, going around wanting to “disrupt” everything by creating the next “Killer App” or whatever.

A core part of the Western tradition is the idea that serious thinking should resist the self-images of the age: the easy, widespread opinions that Plato called doxa, that Francis Bacon named the “idols of the marketplace,” and that Karl Marx described as ideology. But participants in innovation speak have done precisely the opposite. They have celebrated and legitimated the reigning orthodoxy.

      InnoAnon: A Twelve Step Program

      I believe that we should give up—or at least drastically curtail—innovation speak. I believe this for multiple reasons. The foremost reasons to my mind are moral and political, but I realize that many people will simply not go along with my thinking here. Many will find these moral and political reasons tendentious. So, before turning to the moral reasons for abandoning “innovation,” I will focus first on the social scientific reasons for doing so.

For historians, one worry should be that “innovation” is not an actor’s category. It’s an analytical one that we import into the past. This kind of presentism risks obscuring historical actors’ thoughts, cares, and wishes. We lose sight of the notions that guided their actions. Presentism and lousy historical method might seem unimportant to some scholars, however. A greater concern is that a focus on “innovation”—which often is a stand-in for a narrow conception of technological change—concentrates too much on the technological cutting edge and on the value of change. This focus draws our attention away from the many other factors that contribute to organizational vitality. Moreover, we shouldn’t be interested only in vital organizations because, let’s face it, most aren’t. If we focus on vital organizations, then our social science does not account for much. David Edgerton tried to draw attention to the historical profession’s overemphasis on cutting-edge and high technologies in his book, The Shock of the Old. Edgerton argues that old and mundane technologies are the norm, not novel ones. It remains to be seen if scholars will follow his advice and broaden their purview. But his challenge holds also for those who have chosen to write about “innovation.” We have put too much energy into such writings, and it has left our accounts thin, narrow, hollow. I believe that these inadequacies also have moral implications.

If, in the grand scope of social science, asking what factors encourage innovation is incredibly narrow, in the context of our society’s problems, it’s myopic. As a society, we have come to talk as if innovation is a core value, like love, fraternity, courage, beauty, dignity, responsibility, you name it. I do not believe, however, that, if you asked people to name their core values, innovation would appear on most of their lists. Innovation speak worships at the altar of change, but it too rarely asks whom those changes benefit. It acts as if change is a good in itself. Too often, when it does take perspective into account, it proceeds either from the viewpoint of the manager or the shareholder, that is, from the perspective of people who are interested in profits, or from the viewpoint of the consumer interested in cheap goods. Other social roles largely drop out of the analysis. To give an example from the historical profession, Christophe Lecuyer’s Making Silicon Valley: Innovation and the Growth of High-Tech, 1930–1970 contains 75 instances of the word “innovation” but exactly zero instances of the words “poverty” or “inequality,” even though that region is famously unequal. Christophe is a nice and good guy. I do not mean to besmirch his reputation. What I mean to point out is that we have so narrowly defined our studies that we have left out the most important parts. Since the 1970s, Silicon Valley has become the image and model of an innovative locality, with many other places around the United States and the world hoping to imitate it. But what would successful imitation mean for the local population?

As I was writing this essay, a new publication, the Journal of Responsible Innovation, was released. The journal will house the growing literature on “anticipatory governance,” which tries to foresee potential risks in emerging technologies and make policies to preempt them. I have qualms about this literature, not least because it puts too much emphasis on emerging technologies and not enough on the mundane technologies that fill most people’s daily lives. Yet the title of this journal contains an insight largely missing from most writings on innovation, namely that not all innovation is responsible. The introduction of crack cocaine in American cities in the mid-1980s was a major innovation in Schumpeterian terms, but for some reason scholars in innovation studies have not focused on that case. The same goes for how landlords in Hoboken, New Jersey used arson to burn tenants out of rent-controlled apartments and make way for the gentrifying yuppies who were increasingly interested in the real estate just across the river from Manhattan. Very innovative. Of course, William Baumol realized that innovation had many moral faces when he wrote his essay, “Entrepreneurship: Productive, Unproductive, and Destructive,” but few have followed his lead. This paucity partly explains why Assaf Moghadam’s “How Al Qaeda Innovates” was so heavily passed around among scholars last year. Scholars wonder about “the challenge of remaining innovative.” To what end? For whose benefit? To counter the amorality of innovation speak, we might return to a slightly older notion and go along with Lewis Mumford, who, following Nietzsche, insisted that technological changes should be aimed at enhancing and serving life. And we should broaden the (phenomenological) perspectives taken into account. If, as social scientists, we wish to produce work that is morally and politically salient, this broader scope is our only option.

-Contributed by Lee Vinsel, Assistant Professor in Science and Technology Studies, Stevens Institute of Technology-


      How to Give up the I-Word, Pt. 1 Sep 22, 2014

      This is the first part of a two-part essay, which I originally presented at conferences in the spring of 2014. Part two will be posted tomorrow. The full version of the essay, which I’m happy to share with anyone interested, included a section on the place of innovation speak in the academic sub-discipline of business history.

      Use of the word “innovation” began rising in the mid-1940s, and it’s never stopped since. At times, its growth has been exponential; at others, merely linear; but we hear the word today more than ever.

[Image: A Google Ngram for the word “innovation” from 1800 to 2000]
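Curves like this one are straightforward to reproduce: the Ngram Viewer charts a word’s yearly share of all words in the Google Books corpus. Here is a minimal Python sketch, assuming you have exported those yearly frequencies to a local CSV with “year” and “frequency” columns (the file name is hypothetical):

    import csv
    import matplotlib.pyplot as plt

    # Plot a word's relative frequency over time from a local CSV export.
    years, freqs = [], []
    with open("innovation_ngram.csv", newline="") as f:  # placeholder file
        for row in csv.DictReader(f):
            years.append(int(row["year"]))
            freqs.append(float(row["frequency"]))

    plt.plot(years, freqs)
    plt.xlabel("year")
    plt.ylabel("share of all words in corpus")
    plt.title('Frequency of "innovation," 1800-2000')
    plt.show()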

      This curve has consequences. It results in concrete stories like this one: a few years ago, a friend of mine was teaching a course that touched on science and technology policy. One of his class sessions that semester fell on the day of President Barack Obama’s State of the Union Address. Somehow the topic of innovation came up during class discussion. My friend joked that the students should pay attention to how many times the President used the word “innovation.” Perhaps, he said, they should use it as the basis of a drinking game: take a sip every time Obama utters the word. The students chuckled. The class moved on. That night, my friend watched the State of the Union Address himself. Part of the way into it, as the word innovation flew from the President’s mouth again and again, my friend was suddenly overcome with fear. What if his students had taken him seriously? What if they decided to use shots of hard liquor in their game instead of merely sipping something less alcoholic? He had anxious visions of his students getting alcohol poisoning from playing The Innovation Drinking Game and himself being fingered for it by the other students. Because the President had used the word so very often, my friend went to bed wondering if his earlier misplaced joke would jeopardize his tenure. Such is life in the era of innovation.

      Of course, the rise of innovation speak has greater consequences than college drinking games and the worries of tenure-track assistant professors, or it would hardly be worth considering. Innovation speak has come to guide social action, or at least structure desirable actions for some. These trends look much more dramatic if you consider terms and phrases, such as “innovation policy,” which virtually came out of nowhere in the mid-1970s.

[Image: A Google Ngram for the phrase “innovation policy” from 1800 to 2000]

      Terms like innovation policy move us beyond an earlier moment where scholars and science and technology policy wonks were simply noting innovation as a process that happened in the world. The point became that innovation could be fostered, manipulated, instrumentalized. Policy-makers could take actions that would increase innovative activity, and, therefore, it became important to learn what factors gave rise to innovation. What are the “sources of innovation”? (Von Hippel) What kind of national and regional “systems” fostered its growth? (Lundvall, Freeman, Nelson) How could managers harness rather than be destroyed by the “gales of creative destruction”? (Christensen) How could localities learn from successful hubs of innovative activity, most paradigmatically during this period, Silicon Valley? (Porter) Many academic disciplines entered this growing space, searching for the roots of innovation, hoping to capture it in a bottle. Historians were recruited into this effort, since they can supposedly draw lessons from the past about innovation.

Yet, ironies appear. During this same period—the moment of high innovation speak—economic inequality has dramatically risen in the United States, as have prison populations. Middle-class wages have more or less stagnated, while executive salaries have skyrocketed. The United States’ life expectancy, while rising, has not kept pace with other nations, and its standing on infant mortality has worsened relative to other countries since 1980. It has also fallen behind in most measures of education. One could go on with statistics like this for a long time. Put most severely, during the era of innovation speak, we have become a worse people. This claim is not a causal one. Certainly many other complicated factors have contributed to these sad statistics, but it is also not clear that thinking about innovation policy, which has taken up so much time and energy of so many bright people, has done much to alleviate our problems. And some actions done in the name of innovation and entrepreneurship, like building science parks, probably do little more than give money to the highly educated and fairly well-to-do.

      There are still questions to ask, however, even if the link between innovation speak and certain forms of decline is not causal. What has driven the adoption of innovation speak? Why have academics glommed on to this idea when things around them were falling apart? What have they missed by focusing on innovation and its related topics? How do they justify working on innovation when so much else is wrong in our world?

When we cast our eyes on academics using the I-Word, what we see are certain habitual patterns of thought, a reliance on specific concepts and metaphors, and a dependence on questions related to those concepts. Let me give one example: The sociologist Fred Block has co-edited a volume, titled State of Innovation: The U.S. Government’s Role in Technology Development. He also wrote the introductory essay for it. In the opening of that essay, Block rehearses three well-known problems—the trade deficit, global climate change, and unemployment—that President Barack Obama faced as he entered office. Then Block writes, “For all three of these reasons, the new administration has strongly emphasized strengthening the U.S. economy’s capacity for innovation.” From one perspective, Block’s statement makes perfect sense. From another point of view, when taking into account all of the problems listed above, the statement seems slightly crazed. Why would innovation policy be an answer to anything but the most superficial of these issues? Consider all of the assumptions and patterns of thought and life that must be in place for Block’s words to seem like common sense.

The goal for academic analysis must be to uncover these assumptions and explain their historical genesis. In striving for this objective, we have a great deal of help because this is precisely the kind of work that anthropologists, cultural historians, scholars of cultural studies, and critics of ideology have been doing for generations. We also know of historical analogies that share structures with innovation speak. For instance, over fifty years ago, Samuel P. Hays published his book, Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890–1920. Hays explained the notion of efficiency’s rise to prominence among certain groups of actors during the period he investigated. Ponder then how the utterances of a specific actor fit into this overall trend. After World War I, in a moment slightly after the period Hays’ book covers, Herbert Hoover took over as Secretary of Commerce and used the position to push an agenda of efficiency. As Rex Cochrane writes in his official history of the National Bureau of Standards, a division in the Commerce Department, “Recalling the scene of widespread unrest and unemployment as he took office, Hoover was later to say: ‘There was no special outstanding industrial revolution in sight. We had to make one.’ His prescription for the recovery of industry ‘from [its] war deterioration’ was through ‘elimination of waste and increasing the efficiency of our commercial and industrial system all along the line.’” Hoover’s formulation of the hopes of efficiency closely resembles Block’s formulation of the hopes of innovation. Thanks to Hays and others, we now recognize the ideological status of Hoover’s words. Although Hoover may have believed that he originated his own thoughts, we can see them as part of a larger trend. Moreover, at least to the degree that efficiency formed the basis of Frederick Winslow Taylor’s “scientific management” and other cultural products of the era, we know that efficiency was often a mixed blessing. Its value depended on who you were. Managers loved efficiency; workers loathed it. The same often holds true for innovation.

      The goal of this essay is to offer a first take on the rise of innovation speak. A classic question in the literature on the Progressive Era, to which Hays’ book was a major contribution, was, “who were the progressives?” I cannot yet describe any clear “innovation movement” as Hays described a conservation movement. It will take more time and perhaps a longer historical perspective to answer the question, “who were the innovation speakers?” My aims here are humbler. As I have argued elsewhere, many incentives drive people to use the word “innovation.” In a review of Philip Mirowski’s book, Science-Mart, I wrote, “In the second chapter, “The ‘Economics of Science’ as Repeat Offender,” Mirowski lays much of the blame for our current state at the feet of economists of science, technology, and innovation. Mirowski is right to go after economists as the chief theorists of our moment. ‘Innovation’ has become the watchword, and public policy has congealed around fostering it as the primary source of economic growth. But Mirowski goes too far. Many share responsibility for our current myopic focus on innovation as the key to societal improvement, including politicians reaching for appealing rhetoric and numerous groups opportunistically looking for handouts (e.g., university administrators; academic and industry scientists and engineers; firms, from large corporations to small startups; bureaucrats in government labs).” If this is right, then the goal must be to examine why different groups have taken to innovation speak. For example, academic researchers live not in the land of milk and honey but in the land of grants and soft money and, to win treasure, are bidden to speak of innovation, particularly by the National Institutes of Health, which requires a whole section on the I-word in its proposals.

This essay moves through three sections. This introduction and the first section are published today; the second and third sections, tomorrow. The first section examines some general drivers that have led to an increased focus on innovation in US culture. In the second section, I narrow my focus to the world of academia and the rise of innovation speak in that sphere. This rise tracks with the drivers described in the first section, in part because academics acted as advisers to government during the general problems that increased the focus on innovation. Economists, as Philip Mirowski has described, played a major role here, but I will also focus on the business school as a core mediator between academics and non-academics on the topic of innovation. Finally, in the third and concluding section, I will argue that, for both moral and social scientific reasons, we should give up the word “innovation.”

      Drivers

Several factors drove the rise of innovation speak, including some long, deep trends in Western Civilization. For instance, the word “progress” had crashed upon the shoals of the late 1960s and early 1970s: a trio of factors typically remembered as the Vietnam War, Watergate, and perceptions of environmental crisis. This growing skepticism included doubt that technology would inevitably lead to a better future. Innovation became a kind of stand-in for progress. Innovation had two faces when it came to this matter, however. On the one hand, in its vaguest sense, the sense often used in political speeches and State of the Union Addresses, innovation means little more than “better.” In these instances, innovation is as close to progress as one could come without saying the word “progress.” On the other hand, in its more technical definitions, “innovation” lacked progress’s sense of “social justice” or the betterment of all. Innovation need not serve the interests of all; in fact, it typically doesn’t.

[Image: A Google Ngram for the word “progress” from 1900 to 2000, showing a decline in the word’s usage beginning—no surprises—in the late 1960s]

These two faces of innovation allowed a certain kind of rhetorical slipperiness on the part of speakers. They could use the word to mean progress but, when pushed, retreat to a technical definition. This slipperiness also meant that innovation was politically catholic. Both liberals and conservatives felt innovation’s pull. The term was vague enough that no one needed to feel as if it conflicted with his or her beliefs.

Another factor related to a long trend was a change in thinking about science and technology during this period. The pure science ideal that emerged in the 19th century began faltering in the 1970s and collapsed almost completely in 1980 with the Diamond v. Chakrabarty decision, which ruled that genetically engineered organisms could be patented, and the passage of the Bayh-Dole Act, which allowed recipients of federal money to patent their inventions. These events signaled and pushed forward deep changes in the nature of scientific practice in the United States. In the early 20th century, scientists were blacklisted if they went to work for industry. Beginning in the 1980s, academic scientists weren’t hip unless they had a startup or two. One result of these changes was the death of the so-called Mertonian norms: communalism (or the sharing of all information), universalism (or the ability of anyone to participate in science regardless of personal identity), disinterestedness (or the willingness to suspend personal interests for the sake of the scientific enterprise), and organized skepticism. Scholars in science and technology studies have shown again and again that science never lived up to these norms, that secrecy and other counter-norms were just as prevalent in ordinary scientific practice as the ones Merton identified. In the late 20th century, however, these norms no longer functioned even as ideals. How can you aspire to communalism when you are striving for patents and the commercialization of proprietary knowledge?

      [Figure: “Pure Science.” A Google Ngram for the exact phrase “pure science” from 1800 to 2000.]
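
      For readers who want to reproduce the two Ngram figures above, or to check the trajectories of “innovation policy” and “industrial policy” discussed below, here is a minimal sketch in Python. It assumes the unofficial JSON endpoint behind the Google Books Ngram Viewer; the endpoint, the corpus code, and the response shape are undocumented conventions that may change without notice.

```python
# Minimal sketch: fetch relative word/phrase frequencies from the
# *unofficial* JSON endpoint behind the Google Books Ngram Viewer.
# The endpoint URL, the corpus code, and the response shape are
# assumptions here, not documented guarantees.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"

def ngram_series(phrase, year_start, year_end, corpus=26, smoothing=3):
    """Return {year: relative frequency} for a word or exact phrase."""
    params = {
        "content": phrase,       # e.g. "progress" or "pure science"
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,        # 26 is commonly used for English (2019)
        "smoothing": smoothing,  # same default smoothing as the Viewer
    }
    resp = requests.get(NGRAM_URL, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()           # a list, one entry per matched ngram
    if not data:
        return {}
    return dict(zip(range(year_start, year_end + 1), data[0]["timeseries"]))

# The figures referenced in this section, plus the policy-term
# comparison taken up below:
progress = ngram_series("progress", 1900, 2000)
pure_science = ngram_series("pure science", 1800, 2000)
innovation_policy = ngram_series("innovation policy", 1950, 2000)
industrial_policy = ngram_series("industrial policy", 1950, 2000)
```

      Plotting these series side by side is enough to eyeball the rises, plateaus, and declines described in this essay.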

      In a series of recent essays, the historian of science Paul Forman has tried to characterize the shifts inherent in this post-Mertonian, postmodern moment. In one formulation, Forman argues that, before the 1970s, science took precedence over technology, both because it was believed to precede it (science gave rise to technology) and because it held higher prestige. The intellectual portion of this priority was best captured in the so-called “linear model,” which asserted that scientific discoveries led to technological change. During the postmodern moment, however, the relationship between science and technology inverted, so that practical outcomes (and, one might add, profit) became the foremost goal, or so Forman argues. These changes are readily apparent in recent discussions of, and fretting over, STEM (science, technology, engineering, and math) in debates about secondary and higher education in the United States. While the term ostensibly includes science, it isn’t science in the sense of knowledge for knowledge’s sake. It is almost always science that is actionable, usable, commercially viable, science that will make the nation globally competitive. And this focus on competition brings us to our next factor.

      So far, I have focused on two factors related to long trends in the history of Western culture: the death of progress and shifts in scientific ideals. But there were also more immediate causes of the rise of innovation talk. Perhaps most important among them was the stagflation and declining industrial vitality that marked the 1970s. Of course, this downturn was itself related to longer trends, including the period of perceived industrial expansion that, excepting the Great Depression, had begun in the late 19th century. The downturn seemed especially dire in the context of the postwar economic boom of the 1950s and early 1960s. Policy-makers worried aloud about the health of industry, and members of the Carter administration discussed adopting an “industrial policy” to spur on “sick industries” and halt the country’s slide into obsolescence.

      The terms “innovation policy” and “industrial policy” shot up around the same time, but industrial policy faltered in the late 1980s and plateaued throughout the 1990s before sliding into disuse. The victory of innovation policy over industrial policy had consequences. Industrial policy was broader than innovation policy: it included the whole sweep of industrial technologies, inputs, and outputs, while innovation policy placed heavy emphasis on the cutting edge, the so-called “emerging technologies.” Industrial policy also included a focus on labor, while innovation policy typically does not, except to worry about the availability of knowledge workers trained up in STEM; that is, it attends to labor only as seen through the human resources paradigm.

      The emerging innovation speakers were not content to focus on the economic recession at hand, however. Economic decline, for them, was much worse because it was decline relative to other nations. In other words, innovation speakers needed an other. And innovation speak, to this day, typically involves members of a global superpower worrying about its standing in the world. It’s a worry of empire. In the 1980s, the other was Japan, whose managers and workers were making great strides because of some mysterious thing hitherto unknown in the West, because of something rooted in their culture. Focus on Japan developed in the context of a new discourse about “globalization,” a term whose use skyrocketed beginning in the late 1980s. Fear of Japan was expressed not just in academic tomes but also in popular culture, in movies and novels such as Gung Ho (1986) and Rising Sun (novel, 1992; film, 1993), and academic analysis of Japanese production techniques became a cottage industry during this period. Analysts of Japan were often central contributors to the rise of innovation speak, at least amongst academics, as I will explore in the next section.

      Ironically, Michael Crichton’s Rising Sun, perhaps the most vehemently anti-Japanese expression of American pop culture, came out a few years after Japan’s economy fell into a crisis from which it has never fully recovered. But we have found other others, perhaps because we cannot live without them. In the early 1990s, the Federal Bureau of Investigation was already quizzing American scientists who went to professional conferences that also had Chinese scholars in attendance, and current anxieties about American innovativeness often mention China’s investments in science and technology. Talk of innovation is often talk about our relative position in the world. The same holds true of current worries about STEM education, which are almost always about economic competitiveness, not the beauties of science. As the physicist Brian Greene explained on the radio program On Being, “The urgency to fund STEM education largely comes from this fear of America falling behind, of America not being prepared. And, sure, I mean, that’s a good motivation. But it certainly doesn’t tell the full story by any means. Because we who go into science generally don’t do it in order that America will be prepared for the future, right? We go into it because we’re captivated by the ideas.” Technology trumps idle curiosity.

      -Contributed by , Assistant Professor in Science and Technology Studies, Stevens Institute of Technology-
