
Culture Digitally // Examining Contemporary Cultural Production

  • With the generous support of the National Science Foundation we have developed Culture Digitally. The blog is meant to be a gathering point for scholars and others who study cultural production and information technologies. Welcome and please join our conversation.

     

    • Surrogate [draft] [#digitalkeywords] Sep 25, 2014

      “There has been much theorization of the ways in which new media contain the old, but scholars involved in historicist criticism are increasingly making print simulacra into an effigy. Archives of digitized print materials do not pretend to replace the experience of the original but nonetheless promise, implicitly if not explicitly, a way of engaging with the attributes of the original objects to facilitate scholarly judgments about them.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Surrogate — Jeffrey Drouin, University of Tulsa

      Historical scholarship in literary studies is increasingly dependent upon digital objects that stand in as substitutes for printed or manuscript material. The operational features of digital surrogates often attempt to mimic the functionalities of codices and other material formats (ostensibly to reproduce the experience of handling the originals) while taking advantage of the vastly different cognitive and representational possibilities afforded by the new medium. There has been much theorization of the ways in which new media contain the old, but scholars involved in historicist criticism are increasingly making print simulacra into an effigy. Archives of digitized print materials do not pretend to replace the experience of the original but nonetheless promise, implicitly if not explicitly, a way of engaging with the attributes of the original objects to facilitate scholarly judgments about them. Thus digitized editions embody the ecclesiastical origins of the surrogate (“[a] person appointed by authority to act in place of another; a deputy,” usually standing in for a bishop) and its related concepts that impinge upon scholarly and institutional authority. When the concept of office as the symbol of an ultimate power is transferred to the realm of text, a digital edition which duplicates a print or manuscript document comes not only to embody but also to symbolize the power inherent in the original it stands in for. This paper will examine the digital surrogate as an effigy: an image taking the place of an original that is simultaneously worshipped and desecrated in the act of interpretation.

      A digital edition is a surrogate in that it stands in for and takes the place of a print original. We gain many practical benefits from using digital surrogates in literary scholarship, including protection of the fragile original when a copy would suffice, increased access to rare materials, and the rendering of such documents searchable and interoperable with other networked resources. Libraries have been major proponents of digital surrogates, which have long been touted by digital humanists, archivists, and special collections departments. Digital surrogates have also become levelers of class inequalities among researchers, allowing access to those who cannot afford to travel to the archives that house the often rare originals. As digital humanities has flourished as a field over the past decade or so, the searchability and interoperability of digital texts through the TEI encoding guidelines and Dublin Core metadata standards have expanded the usefulness of digital surrogates in making large gestures about literary history, especially when they form the basis of large datasets (much larger than can be processed by scholars individually or in aggregate) that facilitate corpus analysis. There is no denying the innovative possibilities that accrue from corpora of digitized documents. However, when the move toward corpus-level analysis entails inferences about texts in the aggregate, we necessarily ignore the individual works that make up the corpus, at least to some degree. Each work says something from a particular point of view, so how can we be sure that our corpus-level inferences are accurate? Is the singular text lost in the move toward searchability? Is it possible to develop a methodology that synthesizes search-based queries and the uniqueness of the underlying texts? When using digital methods upon a digitized text, are we really studying the object? And, if we attempt to compensate for the blind spots of large-scale analysis by selecting individual works from the digital corpus, are we adequately filling in the gaps?
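      To make the tension concrete, here is a minimal, hypothetical sketch (in Python) of the kind of corpus-level query at issue: counting a term across a folder of plain-text surrogates while also keeping track of which individual works supply the aggregate figure. The directory name and file layout are illustrative assumptions, not features of any particular archive or of the TEI or Dublin Core standards.

      ```python
      # Hypothetical sketch: aggregate a term count over a corpus of digitized
      # texts, then list the individual works that contribute most to the total.
      # "digitized_corpus/" and the *.txt layout are assumptions for illustration.

      from collections import Counter
      from pathlib import Path
      import re


      def term_counts(corpus_dir: str, term: str) -> Counter:
          """Count whole-word occurrences of `term` in each text file under corpus_dir."""
          counts = Counter()
          pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
          for path in Path(corpus_dir).glob("*.txt"):
              text = path.read_text(encoding="utf-8", errors="ignore")
              counts[path.name] = len(pattern.findall(text))
          return counts


      if __name__ == "__main__":
          counts = term_counts("digitized_corpus", "vortex")
          print("corpus-level total:", sum(counts.values()))
          # The aggregate hides the singular texts; listing the top contributors is
          # one small way of moving back from the corpus to the individual work.
          for name, n in counts.most_common(5):
              print(f"{name}: {n}")
      ```

      Even a toy query like this makes the blind spot visible: the total says nothing about the point of view of any one work unless the analyst deliberately descends back to the individual files.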

      While a digital edition offers built-in functionalities and research possibilities unavailable in a printed object, the interface also erases many physical traits of the original, such as size, weight, paper quality, and ink saturation, all of which are crucial in matters of historical, technical, and bibliographic analysis. For instance, The Modernist Journals Project (MJP) features an edition of BLAST, an important avant-garde magazine from 100 years ago known for its radical experiments in typography and poetics. Even though the MJP offers high-fidelity scans of the original pages, the physical impact of the magazine is lost in translation. The bibliographic information supplied on the landing page of the digital edition indicates that the 212 pages of the first issue (June 1914) are 30.5 cm long and 24.8 cm wide (roughly 12 inches by 10 inches). A reader could use a ruler or tape measure as a visual aid in comprehending the size, since it will almost certainly be smaller on a screen. Yet in no way does the comprehension of measurements equal the aesthetic apprehension of seeing, holding, and smelling a codex that is roughly the area of a small poster, which is twice as wide when opened up, and whose thick paper renders it roughly 6.35 cm (2.5 inches) deep, weighing around 1 kg (2.25 lbs), and supporting the heavily saturated black block letters that often stand over 2.5 cm (1 inch) tall on the page as if they are autonomous objects.

      The physical experience of reading BLAST necessarily contributes to the interpretation of its content, since such a solid, impactful object is diametrically opposed to the ephemerality normally expected of magazines: it is a Vorticist manifesto attempting to break art and literature aesthetically, morally, and physically: to “be an avenue for all those vivid and violent ideas that could reach the Public in no other way” by bringing “to the surface a laugh like a bomb” (“Long Live” 7, “MANIFESTO” 31).

      Indeed, the kinetic typography that often spans juxtaposed pages produces a visual effect whose immensity corroborates its revolutionary assertions.

      [Image: two juxtaposed pages from BLAST no. 1 (June 1914)]

      This image presents a digital imitation of two juxtaposed pages from BLAST that demonstrate the interplay of typography and ideology. The series of “Blasts” and “Blesses” comprising this section of the manifesto take aim at the passé while asserting an English art that is nationalist in temper. Throughout most of modern history, English artists and writers looked up to their French colleagues as being more advanced. Here, however, the attacks upon French culture by the magazine’s “Primitive Mercenaries in the Modern World” (“MANIFESTO” 30) seek to create a new space for English art that far surpasses its rival. A key tactic in surpassing the French is to embrace the opposing energies of an explosion: “We fight first on one side, then on the other, but always for the SAME cause, which is neither side or both sides and ours” (30). Hence, Vorticism, taking its cue from the vortex or whirlpool (as well as adolescence), deliberately embodies opposing forces at their point of greatest concentration, which is simultaneously their point of cancellation. These position statements explain the typographical interplay of absence and presence on the magazine’s pages, where bullet lists occupy the left or right side of a page while the other side remains blank (taking a cue from commercial advertising), or where there seems to be a diagonal line separating absence and presence across two juxtaposed pages, as in the screenshot above. The reader can only fully appreciate the amount of energy required to embody these principles while situated before the text arrayed across an area of 1513 square centimeters (240 square inches) plunked solidly upon a table.

      I must admit that the image above is not part of the MJP. In my quest to view digital pages of BLAST juxtaposed as they are in the original, I submitted a PDF version of the magazine to FlipSnack so that I could behold its glory onscreen (or at least the first sixteen pages of it that are made available to those too pathetic to pay for the service) and embed it on a teaching blog: BLAST no. 1 (June 1914); BLAST no. 2 (July 1915, “War Number”). It is also possible to achieve a similar result by viewing the MJP’s PDF in Adobe Acrobat, using Two Page View with the selected option to Show Cover Page so that the left-right orientation is correct. The codex-simulating view option is not yet available on the MJP website, which at present offers only PDF download, a single-page view option, and a tiled thumbnail overview of an entire issue. In other words, because of dissatisfaction with the lack of a codex-like viewer, I have created a simulacrum so as to approach the condition of the original.

      The Oxford English Dictionary informs us that a simulacrum is a “material image, made as a representation of some deity, person, or thing”; it possesses “merely the form or appearance of a certain thing, without possessing its substance or proper qualities”; it is “a mere image, a specious imitation or likeness, of something.” In other words, it fulfills the role of surrogate as a substitute deputed by authority, yet lacks the true substance of that for which it stands. The association of a simulacrum with a deity (and its inherent inadequacy) seems apt in the light of the digital BLAST and electronic editing and scholarship in general. One of the built-in goals of the simulacrum is to return to some originary state, “to see the thing as in itself it really is” or was, to paraphrase Matthew Arnold. But in translating BLAST into the new medium, which cannot adequately duplicate the physical attributes inherent to its meaning, are we not moving the reader further from that originary state?

      We are in effect creating an effigy: a likeness, portrait, or image that lacks the true character of the original yet stands in for our pursuit of it. Like the other terms in this conceptual cluster (surrogate and simulacrum), effigy bears the undertones of a symbol of something holy to be revered, as well as the substitute for something profane to be desecrated. It is telling that the various definitions of effigy relate both to ecclesiastical and judicial terminology. In that light, my hasty decision to feed BLAST to FlipSnack in effigy betrays an attempt to incarcerate the Original: “fig. … to inflict upon an image the semblance of the punishment which the original is considered to have deserved; formerly done by way of carrying out a judicial sentence on a criminal who had escaped.”

      Lest these ramblings be misconstrued as a Proustian obsession with “The Sweet Cheat Gone,” we must ask whether it is illusory to demand total knowledge of our Albertine. In hunting the fugitive original (whether an object, a contextual state, or something else), is the data aspect of the digital fundamentally separate from the object from which it derives? I do not seek to answer that question within the scope of this draft, and will leave it for further development following our conversations. However, regardless of what that answer might be, the question subsequently arises as to whether there is a digital materiality and, if so, how it might work in this line of inquiry. Already the digital surrogate-cum-effigy seems to approach the character of the fetish: “a means of enchantment… or superstitious dread”; “an inanimate object worshipped by preliterate peoples on account of its supposed inherent magical powers, or as being animated by a spirit”; “something irrationally reverenced.”


      Bibliography

      Lewis, Wyndham. “Long Live the Vortex!” BLAST 1:1 (June 1914): 7-8.

      —. “MANIFESTO.” BLAST 1:1 (June 1914): 30-43.



      How to Give Up the I-Word, Pt. 2 Sep 23, 2014

      This is the second part of a two-part essay, which I originally presented at conferences in the spring of 2014. The first part is available here. The full version of the essay, which I’m happy to share with anyone interested, included a section on the place of innovation speak in the academic sub-discipline of business history.

      Innovation as the Self-Image of an Age

      In the last section, I examined some general drivers of the rise of innovation speak. In this section, I would like to narrow my analysis to focus on how a specific sector responded to these trends. Business schools have played a significant part in promulgating talk of innovation, both within academia and in popular discourse. During the last half of the twentieth century, business schools increasingly became core institutions of cultural production. Business professors often aspire not only to produce works, like case studies, that will be read and used mainly in academic settings but also to create products, whether books or articles or consultancies, that will have broader appeal. In this context, the trend towards innovation speak can be seen easily enough by tracing the publications of individual writers based at business schools. Michael E. Porter, a professor at Harvard Business School and the dean of competitive strategy analysis, used the word innovation 46 times in his 1980 book, Competitive Strategy, but 123 times in his 1998 book, On Competition.

      One person whose work nicely illustrates a relationship to the drivers mentioned in the previous section is William J. Abernathy, like Porter, a professor at Harvard Business School. Abernathy’s 1978 book, The Productivity Dilemma: Roadblock to Innovation in the Automobile Industry, was written at a time when the automakers were suffering from a well-known decline. Chrysler, especially, was in bad financial shape and would be bailed out a year later. In The Productivity Dilemma, Abernathy examined the trade-offs of adopting rigid but highly effective production techniques: while the adoption of such production techniques won productivity gains in the short term, it made it difficult, if not impossible, to internalize new innovations. Abernathy made a distinction between incremental and radical forms of innovation, arguing that the latter tended to interrupt settled production. Thus, we have our dilemma. How do we balance innovation and productivity? The historical context for The Productivity Dilemma—falling profitability in the auto industry and a general sense of industrial degeneration—was hardly mentioned at all in the book but acted only as the backdrop. This silence about current events was not true of Abernathy’s following works.

      In 1980, Abernathy published a co-authored essay, titled “Managing Our Way to Economic Decline,” in the eminent Harvard Business Review. Abernathy and his co-author, Robert H. Hayes, argued against the assertions of supply-side economics that declining productivity in the U.S. was the result of high taxes, energy crises, and too much regulation. The authors instead put the blame on shifting managerial experiences and priorities, including a lack of “hands-on” knowledge and “short-term cost reduction rather than long-term development of technological competitiveness.” Later in the essay, the authors characterized this latter priority as a choice between “Imitative vs. Innovative Product Design.” By casting the economic decline of the 1970s as a problem of management, Abernathy and Hayes described the problem as something that could be solved through a change in mindset or an acquisition of fresh knowledge. The framing encouraged scholars to find and communicate lessons for how to foster innovation. Their analysis also implicitly built on long traditions in the West, contained in volumes like Benjamin Franklin’s Autobiography and Poor Richard’s Almanack, that condemned actions favoring short-term gain, overly attentive to the opinions of the market, over wise long-term growth.

      In 1983, along with Kim B. Clark, another professor at Harvard Business School, and Alan M. Kantrow, an associate editor of the Harvard Business Review, Abernathy published Industrial Renaissance: Producing a Competitive Future for America. In a sense, the book brought together Abernathy’s previous insights, including the “productivity dilemma” and the problem of managers overly focused on short-term profits, with a new theme, namely the puzzle and threat of Japanese productivity, especially in the auto industry. The lesson for Clark et al. was clear. As one summary of the book puts it, “Examines the failure of American companies to compete under conditions produced by new technologies.” Abernathy and his co-authors described technologies that vastly changed conditions as “disruptive.” Abernathy no doubt knew about Schumpeter, though he did not spend much time in his writings meditating on Schumpeter’s thought. Yet, once Schumpeter was re-discovered by others in the 1980s (which I will discuss in a moment), the basic ideas of Industrial Renaissance would be recast in Schumpeterian terms: the “gale of creative destruction.” Abernathy contracted cancer in 1979 and died in 1983, the year Industrial Renaissance was published, at the age of only fifty. Abernathy’s untimely death cut short a brilliant career. In the context of this essay, it is hard to resist counterfactual questions. If Abernathy had lived, how would his subsequent work have fit into the innovation speak that followed? Would he have continued down the path he laid? Or would he have eventually turned down some other road?

      The academic focus on Japan continued after Abernathy’s death. Cambridge, Mass., both at Harvard Business School and at MIT, was home to many studies of Japanese production techniques. The most famous product to come from these studies was The Machine that Changed the World by James P. Womack, Daniel T. Jones, and Daniel Roos. The book bore the subtitle “Toyota’s Secret Weapon in the Global Car Wars That Is Now Revolutionizing World Industry.” Like many pop business books, it hinted at forms of secret knowledge and promised to initiate the reader into its ways. The covers of the first edition of the book proclaimed, “Based on The Massachusetts Institute of Technology 5-Million-Dollar 5-Year Study on the Future of the Automobile.” Womack and Jones followed up this book with Lean Thinking: Banish Waste and Create Wealth in Your Corporation, an even more explicitly pop business book. In 1997, they founded a consultancy, Lean Enterprise Institute, Inc. (“Compared with traditional ‘think’ tanks, we are a ‘do’ tank.”) The team behind The Machine that Changed the World coined the term “lean production” to describe Japanese production, and that term has taken on a life of its own. Even though we are only a little over twenty years on from the term’s coining, it is easy enough to make out its ideological baggage: worries about national competitiveness and the coming Japanese hordes (of businessmen), long-held cultural taboos against waste, and—in the case of Lean Thinking—words, notions, and desires that could just as comfortably fit a fad diet book.

      US preoccupation with the international scene went well beyond Japan, no doubt partly due to the discourse of globalization that was also ascendant during this period. Perhaps the best example of this phenomenon was the academic concept of “national innovation systems.” The history of innovation speak is properly an international and transnational history, though this essay cannot yet aspire to that level of completeness. Like industrial policy before it, the idea of national innovation systems was a product of European intellectuals. Christopher Freeman and Bengt-Åke Lundvall began using the term in the mid-to-late-1980s, and Lundvall published an edited volume on the topic in 1992. Again, this work arose, in part, from an effort to explain Japan’s economic boom, and, again, it was focused on the role of technological change in economic growth and international economic competitiveness. Perhaps the primary contribution of the innovation systems literature was to move the discussion beyond an emphasis on individuals, whether managers or entrepreneurs, to an examination of institutions that fostered certain kinds of activity. The early literature on entrepreneurs especially concentrated on the psychological and characterological makeup of risk takers. Put bluntly, the work on innovation systems made the field more social scientific.

      Attention to place went well beyond the national level to “regional innovation systems” and “innovation clusters.” In some ways, the roots of this thinking lay in an awareness of economic growth in Silicon Valley. By the early 1980s, books on Silicon Valley were beginning to hit the market. But by the mid-1980s, Silicon Valley had become a model—perhaps the model—community with supposed implications for how other places should shape their policies. Books, such as Roger-Emile Miller’s and Marcel Cote’s Growing the Next Silicon Valley: A Guide for Successful Regional Planning (1987), tried to impart the teachings of the place. The notion of “innovative” locales was further valorized and given a stamp of legitimacy when Michael Porter published his 1990 book, The Competitive Advantage of Nations, which focused on the role of regional “clusters” in fostering economic growth. The locality theme was heightened in Richard Florida’s The Rise of the Creative Class (2002), a work that mentions “innovation” over ninety times while heavily idealizing Silicon Valley. And in general, studies of economic geography, often looking back to places like Detroit and Hartford, Connecticut, flourished during this period. This mode of thinking led to the popularity of certain policies, such as science parks, business “incubators,” and other forms of so-called “technology-based economic development” (TBED).

      Attempts to learn from Silicon Valley have never really relented; nor apparently has the (book) market for such lessons. To give a small sampling: Success Secrets from Silicon Valley: How to Make Your Teams More Effective (No Matter What Business) (1998); Relentless Growth: How Silicon Valley Innovation Strategies Can Work in Your Business (1998); The Silicon Valley Boys and Their Valley of Dreams (1999); Understanding Silicon Valley: The Anatomy of an Entrepreneurial Region (2000); The Silicon Valley Edge: A Habitat for Innovation and Entrepreneurship (2000); Champions of Silicon Valley: Visionary Thinking from Today’s Technology Pioneers (2000); TechVenture: New Rules on Value and Profit from Silicon Valley (2001); Clusters of Creativity: Enduring Lessons on Innovation and Entrepreneurship from Silicon Valley and Europe’s Silicon Fen (2002); Once You’re Lucky, Twice You’re Good: The Rebirth of Silicon Valley and the Rise of Web 2.0 (2008); Secrets of Silicon Valley: What Everyone Else Can Learn from the Innovation Capital of the World (2013). Once again, just as in the case of Michael Crichton’s Japan-fear classic, Rising Sun, which was published around the same time that the Japanese economy faltered, it’s easy to see that a publishing boom on Silicon Valley came around the year 2000, just as the dot.com bubble was set to burst. It’s a lesson we should keep in mind today.

      The late 1980s and early 1990s was also the moment of the “rediscovery” of Joseph Schumpeter. Again, this trend was international, and European, especially Scandinavian, scholars played an important part in it. Schumpeter’s model of entrepreneurship, innovation, and economic growth is now so well-known it hardly bears rehearsing. I will deal with his thought a bit more substantively in the next section, but here I just want to ask, Was the rediscovery of Schumpeter driven by new readers seeing his hitherto largely unrecognized genius? Or was it driven by ideology, that is, did Schumpeter’s ideas merely fit the self-image of the age of his rediscovery? The answer is no doubt both/and, but Schumpeter’s fans have insufficiently accounted for his ideological valences.

      In conversation, some Schumpeterians have told me that the popular, ideological meanings of “innovation” can be held at bay by remaining true to Schumpeter’s original definition of that idea: that innovation is the successful exploitation of new ideas, that there are five basic types of innovation, that there is a need to focus on the entrepreneur as a special kind of actor in society, etc., etc. But almost all academic thought militates against the idea that any kind of definitional purity can be maintained in the face of the kinds of linguistic waves depicted (in the Ngrams) at the beginning of this essay. If you are using a buzzword during one of those waves, you are falling prey to a fad. What could be more obvious?

      But Schumpeter’s thought is more than a mere fad, as cat videos and Bronies are mere fads; Schumpeter’s thought serves and glorifies particular interests. Schumpeter wrote a lullaby for the business class. Or, perhaps it was more a fairy tale, because there were some scary parts. You could be blown away by the gale of creative destruction. Or, maybe most of all it was a myth, a hero’s tale, the “entrepreneur with a thousand faces.” A business historian friend put it to me like this: (at least the popular version of) Schumpeter justifies American-style capitalism, which has forsaken hope in full employment, which sees jobs lost to “innovation” as natural and unavoidable, which has taken technological novelty as its ultimate end.


      A Google Ngram for the word “Schumpeterian” from 1800 to 2000

      Just like reflections on the “lessons” of Silicon Valley, neo-Schumpeterian thought has gone far beyond the ivory tower, most famously in Clayton M. Christensen’s The Innovator’s Dilemma: The Revolutionary Book that Will Change the Way You Do Business (1997). Christensen’s writings are the culmination of much of what I have described in this section: he is a professor at Harvard Business School who combined Abernathy’s ideas about radical innovation with a basic Schumpeterian vision. He put it into a neat package that could be sold to aspiring leaders out of airport bookstores. He has consulted, opened up the Innosight Institute, and calls himself a “Disruptive Innovation Expert.” Meanwhile, Christensen’s works have resulted in flocks of Silicon-Valley-brained college students, all hopped up on TED Talks, going around wanting to “disrupt” everything by creating the next “Killer App” or whatever.

      A core part of the Western tradition is the idea that serious thinking should resist the self-images of the age, the easy, widespread opinions that Plato called doxa and Francis Bacon named the “idol of the marketplace” and Karl Marx described as ideology. But participants in innovation speak have done precisely the opposite of this. They have celebrated and legitimated the reigning orthodoxy.

      InnoAnon: A Twelve Step Program

      I believe that we should give up—or at least drastically curtail—innovation speak. I believe this for multiple reasons. The foremost reasons to my mind are moral and political, but I realize that many people will simply not go along with my thinking here. Many will find these moral and political reasons tendentious. So, before turning to the moral reasons for abandoning “innovation,” I will focus first on the social scientific reasons for doing so.

      For historians, one worry should be that “innovation” is not an actor’s category. It’s an analytical one that we import into the past. This kind of presentism risks obscuring historical actors’ thoughts, cares, and wishes. We lose sight of the notions that guided their actions. Presentism and lousy historical method might seem unimportant to some scholars, however. A greater concern is that a focus on “innovation”—which often is a stand-in for a narrow conception of technological change—concentrates too much on the technological cutting-edge and on the value of change. This focus draws our attention away from so many other factors that contribute to organizational vitality. Moreover, we shouldn’t be interested only in vital organizations because, let’s face it, most aren’t. If we focus on vital organizations, then our social science does not account for much. David Edgerton tried to draw attention to the historical profession’s overemphasis on cutting-edge and high technologies in his book, The Shock of the Old. Edgerton argues that old and mundane technologies are the norm, not novel ones. It remains to be seen if scholars will follow his advice and broaden their purview. But his challenge holds also for those who have chosen to write about “innovation.” We have put too much energy into such writings, and it has left our accounts thin, narrow, hollow. I believe that these inadequacies also have moral implications.

      If in the grand scope of social science, asking what factors encourage innovation is incredibly narrow, in the context of our society’s problems, it’s myopic. As a society, we have come to talk as if innovation is a core value, like love, fraternity, courage, beauty, dignity, responsibility, you name it. I do not believe, however, that, if you asked people to name their core values, innovation would appear on most of their lists. Innovation speak worships at the altar of change, but it too rarely asks who those changes are benefitting. It acts as if change is a good in itself. Too often, when it does take perspective into account, it proceeds either from the viewpoint of the manager or the shareholder, that is, from the perspective of people who are interested in profits, or from the viewpoint of the consumer interested in cheap goods. Other social roles largely drop out of the analysis. To give an example from the historical profession, Christophe Lecuyer’s Making Silicon Valley: Innovation and the Growth of High-Tech, 1930–1970 contains 75 instances of the word “innovation” but exactly zero instances of the words “poverty” or “inequality,” even though that region is famously unequal. Christophe is a nice and good guy. I do not mean to besmirch his reputation. What I mean to point out is that we have so narrowly defined our studies that we have left out the most important parts. Since the 1970s, Silicon Valley has become the image and model of an innovative locality, with many other places around the United States and the world hoping to imitate it. But what would successful imitation mean for the local population?

      As I was writing this essay, a new publication, the Journal of Responsible Innovation, was released. The journal will house the increasing literature on “anticipatory governance,” which tries to foresee potential risks in emerging technologies and make policies to preempt them. I have qualms about this literature, not least because it puts too much emphasis on emerging technologies and not enough on the mundane technologies that fill most people’s daily lives. Yet, the title of this journal contains an insight largely missing from most writings on innovation, namely that not all innovation is responsible. The introduction of crack cocaine in American cities in the mid-1980s was a major innovation in Schumpeterian terms, but for some reason scholars in innovation studies have not focused on that case. The same goes for how landlords in Hoboken, New Jersey, used arson to burn tenants out of rent-controlled apartments and make way for the gentrifying yuppies who were increasingly interested in the real estate just across the river from Manhattan. Very innovative. Of course, William Baumol realized that innovation had many moral faces when he wrote his essay, “Entrepreneurship: Productive, Unproductive, and Destructive,” but few have followed his lead. This paucity partly explains why Assaf Moghadam’s “How Al Qaeda Innovates” was so heavily passed around among scholars last year. Scholars wonder about “the challenge of remaining innovative.” To what end? To whose end? To counter the amorality of innovation speak, we might return to a slightly older notion and go along with Lewis Mumford, who, following Nietzsche, insisted that technological changes should be aimed at enhancing and serving life. And we should broaden the (phenomenological) perspectives taken into account. If, as social scientists, we wish to produce work that is morally and politically salient, this broader scope is our only option.

      -Contributed by ,  Assistant Professor in Science and Technology Studies, Stevens Institute of Technology-


      How to Give up the I-Word, Pt. 1 Sep 22, 2014

      This is the first part of a two-part essay, which I originally presented at conferences in the spring of 2014. Part two will be posted tomorrow. The full version of the essay, which I’m happy to share with anyone interested, included a section on the place of innovation speak in the academic sub-discipline of business history.

      Use of the word “innovation” began rising in the mid-1940s, and it’s never stopped since. At times, its growth has been exponential; at others, merely linear; but we hear the word today more than ever.


      A Google Ngram for the word “innovation” from 1800 to 2000
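      For readers curious how such a curve is produced, the following is a rough sketch, under stated assumptions, of rebuilding a frequency series from the publicly released Google Books Ngram export files rather than from the web viewer. The file name and the assumed tab-separated layout (ngram, year, match count, volume count) are illustrative; check the documentation of whichever export version you download. Note also that this plots raw match counts, not the normalized percentages the Ngram Viewer displays.

      ```python
      # Hypothetical sketch: plot yearly occurrences of a word from a downloaded
      # Google Books Ngram export file. The file name and the assumption that each
      # line reads "ngram<TAB>year<TAB>match_count<TAB>volume_count" are illustrative.

      import csv
      from collections import defaultdict

      import matplotlib.pyplot as plt


      def yearly_counts(path: str, word: str) -> dict:
          """Sum match counts per year for `word` from a tab-separated export file."""
          totals = defaultdict(int)
          with open(path, newline="", encoding="utf-8") as handle:
              for ngram, year, match_count, _volumes in csv.reader(handle, delimiter="\t"):
                  if ngram.lower() == word.lower():
                      totals[int(year)] += int(match_count)
          return dict(totals)


      if __name__ == "__main__":
          counts = yearly_counts("eng-1gram-sample.tsv", "innovation")  # hypothetical file
          years = sorted(y for y in counts if 1800 <= y <= 2000)
          plt.plot(years, [counts[y] for y in years])
          plt.xlabel("year")
          plt.ylabel("raw match count")
          plt.title("Occurrences of 'innovation', 1800-2000")
          plt.show()
      ```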

      This curve has consequences. It results in concrete stories like this one: a few years ago, a friend of mine was teaching a course that touched on science and technology policy. One of his class sessions that semester fell on the day of President Barack Obama’s State of the Union Address. Somehow the topic of innovation came up during class discussion. My friend joked that the students should pay attention to how many times the President used the word “innovation.” Perhaps, he said, they should use it as the basis of a drinking game: take a sip every time Obama utters the word. The students chuckled. The class moved on. That night, my friend watched the State of the Union Address himself. Part of the way into it, as the word innovation flew from the President’s mouth again and again, my friend was suddenly overcome with fear. What if his students had taken him seriously? What if they decided to use shots of hard liquor in their game instead of merely sipping something less alcoholic? He had anxious visions of his students getting alcohol poisoning from playing The Innovation Drinking Game and himself being fingered for it by the other students. Because the President had used the word so very often, my friend went to bed wondering if his earlier misplaced joke would jeopardize his tenure. Such is life in the era of innovation.

      Of course, the rise of innovation speak has greater consequences than college drinking games and the worries of tenure-track assistant professors, or it would hardly be worth considering. Innovation speak has come to guide social action, or at least structure desirable actions for some. These trends look much more dramatic if you consider terms and phrases, such as “innovation policy,” which virtually came out of nowhere in the mid-1970s.


      A Google Ngram for the phrase “innovation policy” from 1800 to 2000

      Terms like innovation policy move us beyond an earlier moment where scholars and science and technology policy wonks were simply noting innovation as a process that happened in the world. The point became that innovation could be fostered, manipulated, instrumentalized. Policy-makers could take actions that would increase innovative activity, and, therefore, it became important to learn what factors gave rise to innovation. What are the “sources of innovation”? (Von Hippel) What kind of national and regional “systems” fostered its growth? (Lundvall, Freeman, Nelson) How could managers harness rather than be destroyed by the “gales of creative destruction”? (Christensen) How could localities learn from successful hubs of innovative activity, most paradigmatically during this period, Silicon Valley? (Porter) Many academic disciplines entered this growing space, searching for the roots of innovation, hoping to capture it in a bottle. Historians were recruited into this effort, since they can supposedly draw lessons from the past about innovation.

      Yet, ironies appear. During this same period—the moment of high innovation speak—economic inequality has dramatically risen in the United States as have prison populations. Middle-class wages have more or less stagnated, while executive salaries have skyrocketed. The United States’ life expectancy, while rising, has not kept pace with other nations, and its infant mortality rate has worsened relative to other countries since 1980. It has also fallen behind in most measures of education. One could go on with statistics like this for a long time. Put most severely, during the era of innovation speak, we have become a worse people. This claim is not causal. Certainly many other complicated factors have contributed to these sad statistics, but it is also not clear that thinking about innovation policy, which has taken so much time and energy of so many bright people, has done much to alleviate our problems. And some actions done in the name of innovation and entrepreneurship, like building science parks, probably do little more than give money to the highly educated and fairly well-to-do.

      There are still questions to ask, however, even if the link between innovation speak and certain forms of decline is not causal. What has driven the adoption of innovation speak? Why have academics glommed on to this idea when things around them were falling apart? What have they missed by focusing on innovation and its related topics? How do they justify working on innovation when so much else is wrong in our world?

      When we cast our eyes on academics using the I-Word, what we see are certain habitual patterns of thought, the reliance on specific concepts and metaphors, and a dependence on questions related to those concepts. Let me give one example: The sociologist Fred Block has co-edited a volume, titled State of Innovation: The U.S. Government’s Role in Technology Development. He also wrote the introductory essay for it. In the opening of that essay, Block rehearses three well-known problems—the trade deficit, global climate change, and unemployment—that President Barack Obama faced as he entered office. Then Block writes, “For all three of these reasons, the new administration has strongly emphasized strengthening the U.S. economy’s capacity for innovation.” From one perspective, Block’s statement makes perfect sense. From another point of view, when taking into account all of the problems listed above, the statement seems slightly crazed. Why would innovation policy be an answer to anything but the most superficial of these issues? Consider all of the assumptions and patterns of thought and life that must be in place for Block’s words to seem like common sense.

      The goal for academic analysis must be to uncover these assumptions and explain their historical genesis. In striving for this objective, we have a great deal of help because this is precisely the kind of work that anthropologists, cultural historians, scholars of cultural studies, and critics of ideology have been doing for generations. We also know of historical analogies that share structures with innovation speak. For instance, over fifty years ago, Samuel P. Hays published his book, Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890–1920. Hays explained the notion of efficiency’s rise to prominence among certain groups of actors during the period he investigated. Ponder then how the utterances of a specific actor fit into this overall trend. After World War I, in a moment slightly after the period Hays’ book covers, Herbert Hoover took over as Secretary of Commerce and used the position to push an agenda of efficiency. As Rex Cochrane writes in his official history of the National Bureau of Standards, a division in the Commerce Department, “Recalling the scene of widespread unrest and unemployment as he took office, Hoover was later to say: ‘There was no special outstanding industrial revolution in sight. We had to make one.’ His prescription for the recovery of industry ‘from [its] war deterioration’ was through ‘elimination of waste and increasing the efficiency of our commercial and industrial system all along the line.’” Hoover’s formulation of the hopes of efficiency closely resembles Block’s formulation of the hopes of innovation. Thanks to Hays and others, we now recognize the ideological status of Hoover’s words. Although Hoover may have believed that he originated his own thoughts, we can see them as part of a larger trend. Moreover, at least to the degree that efficiency formed the basis of Frederick Winslow Taylor’s “scientific management” and other cultural products of the era, we know that efficiency was often a mixed blessing. Its value depended on who you were. Managers loved efficiency; workers loathed it. The same often holds true for innovation.

      The goal of this essay is to offer a first take on the rise of innovation speak. A classic question in the literature on the Progressive Era, to which Hays’ book was a major contribution, was, “who were the progressives?” I cannot yet describe any clear “innovation movement” as Hays described a conservation movement. It will take more time and perhaps a longer historical perspective to answer the question, “who were the innovation speakers?” My aims here are humbler. As I have argued elsewhere, many incentives drive people to use the word “innovation.” In a review of Philip Mirowski’s book, Science-Mart, I wrote, “In the second chapter, “The ‘Economics of Science’ as Repeat Offender,” Mirowski lays much of the blame for our current state at the feet of economists of science, technology, and innovation. Mirowski is right to go after economists as the chief theorists of our moment. ‘Innovation’ has become the watchword, and public policy has congealed around fostering it as the primary source of economic growth. But Mirowski goes too far. Many share responsibility for our current myopic focus on innovation as the key to societal improvement, including politicians reaching for appealing rhetoric and numerous groups opportunistically looking for handouts (e.g., university administrators; academic and industry scientists and engineers; firms, from large corporations to small startups; bureaucrats in government labs).” If this is right, then the goal must be to examine why different groups have taken to innovation speak. For example, academic researchers live not in the land of milk and honey but in the land of grants and soft money and, to win treasure, are bidden to speak of innovation, particularly by the National Institutes of Health, which requires a whole section on the I-word in its proposals.

      This essay moves through three sections. This introduction and the first section will be published today; the second and third sections, on Wednesday. The first section examines some general drivers that have led to an increased focus on innovation in US culture. In the second section, I narrow my focus to the world of academia and the rise of innovation speak in that sphere. This rise tracks with the drivers described in the first section, in part because academics acted as advisers to government during the general problems that increased focus on innovation. Economists, as Philip Mirowski has described, played a major role here, but I will also focus on the business school as a core mediator between academics and non-academics on the topic of innovation. Finally, in the third and concluding section, I will argue that, for both moral and social scientific reasons, we should give up the word “innovation.”

      Drivers

      Several factors drove the rise of innovation speak, including some long, deep trends in Western Civilization. For instance, the word “progress” had crashed upon the shoals of the late 1960s and early 1970s: a triumvirate of factors typically remembered as the Vietnam War, Watergate, and perceptions of environmental crisis. This growing skepticism included doubt that technology would inevitably lead to a better future. Innovation became a kind of stand-in for progress. Innovation had two faces when it came to this matter, however. On the one hand, in its vaguest sense, the sense often used in political speeches and State of the Union Addresses, innovation means little more than “better.” In these instances, innovation is as close to progress as one could come without saying the word “progress.” On the other hand, in its more technical definitions, “innovation” lacked progress’s sense of “social justice” or the betterment of all. Innovation need not serve the interests of all; in fact, it typically doesn’t.


      A Google Ngram for the word “progress” from 1900 to 2000 shows a decline in the word’s usage beginning—no surprises—in the late 1960s

      These two faces of innovation allowed a certain kind of rhetorical slipperiness on the part of speakers. They could use it to mean progress but, when pushed, retreat to a technical definition. This slipperiness also meant that innovation was catholic politically. Both liberals and conservatives felt innovation’s pull. The term was vague enough that no one needed to feel as if it conflicted with his or her beliefs.

      Another factor related to a long trend was changes in thinking about science and technology during this period. The pure science ideal that emerged in the 19th century began faltering in the 1970s and collapsed almost completely in 1980 with the Diamond v. Chakrabarty decision, which ruled that genetically engineered organisms could be patented, and the passage of the Bayh-Dole Act, which allowed recipients of federal money to patent their inventions. These events signaled and pushed forward deep changes in the nature of scientific practice in the United States. In the early 20th century, scientists were blacklisted if they went to work for industry. Beginning in the 1980s, academic scientists weren’t hip unless they had a startup or two. One result of these changes was the death of the so-called Mertonian norms: communalism (or the sharing of all information), universalism (or the ability of anyone to participate in science regardless of personal identity), disinterestedness (or the willingness to suspend personal interests for the sake of the scientific enterprise), and organized skepticism. Scholars in science and technology studies have shown again and again that science never lived up to these norms, that secrecy and other counter-norms were just as prevalent in ordinary scientific practice as the ones Merton identified. In the late 20th century, however, these norms no longer functioned even as ideals. How can you aspire to communalism when you are striving for patents and the commercialization of proprietary knowledge?


      A Google Ngram for the exact phrase “pure science” from 1800 to 2000

      In a series of recent essays, the historian of science Paul Forman has tried to characterize the shifts inherent in this post-Mertonian, postmodern moment. In one formulation, Forman argues that, before the 1970s, science took precedence over technology, both because it was believed to precede it (science gave rise to technology) and because it held higher prestige. The intellectual portion of this priority was best captured in the so-called “linear model,” which asserted that scientific discoveries led to technological change. During the postmodern moment, however, the relationship between science and technology inverted, so that practical outcomes—and one might add, profit—became the foremost goal, or so Forman argues. These changes are easily perceivable in recent discussions about and fretting over STEM, or science, technology, engineering, and math, in debates about secondary and higher education in the United States. While the term ostensibly includes science, it isn’t science in the sense of knowledge-for-knowledge’s sake. It is almost always science that is actionable, useable, commercially viable, science that will make the nation globally competitive. And this focus on competition brings us to our next factor.

      So far, I have focused on two factors—the death of progress and shifts in scientific ideals—that were related to long trends in the history of Western culture, but there were more immediate causes of the rise of innovation talk. Perhaps most important among them was the stagflation and declining industrial vitality that marked the 1970s. Of course, this downturn was itself related to longer trends, including the period of perceived industrial expansion that, excepting the Great Depression, had begun in the late 19th century. The downturn seemed dire in the context of the post-War economic boom of the 1950s and early 1960s. Policy-makers worried aloud about the health of industry, and members of the Carter administration discussed the adoption of an “industrial policy” to spur on “sick industries” and halt the country’s slide into obsolescence.

      The terms “innovation policy” and “industrial policy” shot up around the same time, but industrial policy faltered in the late 1980s and plateaued throughout the 1990s before sliding into disuse. There were some consequences of the victory of innovation policy over industrial policy. Industrial policy was broader than innovation policy. It included the whole sweep of industrial technologies, inputs, and outputs, while innovation policy placed heavy emphasis on the cutting-edge, the so-called “emerging technologies.” Industrial policy also included a focus on labor, while innovation policy typically doesn’t, unless it is to worry about the availability of knowledge workers, trained up in STEM, that is, unless it is labor as seen through the human resources paradigm.

      The emerging innovation speakers were not content to focus on the economic recession at hand, however. Economic decline, for them, was much worse given that it was a decline relative to other nations. In other words, innovation speakers needed an other. And innovation speak, to this day, typically involves members of a global superpower worrying about its state in the world. It’s a worry of empire. In the 1980s, the other was Japan, whose managers and workers were making great strides because of some mysterious thing hitherto unknown in the West, because of something rooted in their culture. Focus on Japan developed in the context of a new discourse about “globalization,” a term whose use skyrocketed beginning in the late 1980s. Fear of Japan was expressed not just in academic tomes but also in popular culture, in movies and novels such as Gung Ho (1986) and Rising Sun (novel, 1992; film, 1993), and academic analysis of Japanese production techniques did indeed become a cottage industry during this period. Analysts of Japan were often central contributors to the rise of innovation speak, at least amongst academics, as I will explore in the next section.

      Ironically, Michael Crichton’s Rising Sun, the most vehemently anti-Japanese expression of American pop culture, came out a few years after Japan’s economy fell into a crisis from which it has never fully recovered. But we have found other others, perhaps because we cannot live without them. In the early 1990s, the Federal Bureau of Investigation was already quizzing American scientists who went to professional conferences that also had Chinese scholars in attendance, and current anxieties about American innovativeness often mention China’s investments in science and technology. Talk of innovation is often talk about our relative position in the world. The same holds true of current worries about STEM education. STEM is almost always worry about economic competitiveness, not about the beauties of science. As the physicist Brian Greene explained on the radio program On Being, “The urgency to fund STEM education largely comes from this fear of America falling behind, of America not being prepared. And, sure, I mean, that’s a good motivation. But it certainly doesn’t tell the full story by any means. Because we who go into science generally don’t do it in order that America will be prepared for the future, right? We go into it because we’re captivated by the ideas.” Technology trumps idle curiosity.

      -Contributed by ,  Assistant Professor in Science and Technology Studies, Stevens Institute of Technology-


      Fighting for Which Future? When Google Met Wikileaks Sep 18, 2014

      In the summer of 2011, in the midst of the Cablegate affair (the leaking of some 250,000 diplomatic cable transmissions between the US State Department and American embassies by WikiLeaks), at a time of far-reaching changes in the regimes of Tunisia and Egypt, and while public demonstrations against the existing social order swept various places in the world, a meeting was held between WikiLeaks founder Julian Assange and Google CEO Eric Schmidt and his associates (Jared Cohen, Director of Google Ideas and previously a member of the State Department’s policy planning staff; Lisa Shields, VP of Communications & Marketing at the Council on Foreign Relations; and Scott Malcomson, director of communication at the International Crisis Group and previously an advisor at the US State Department).

      The published transcript of their discussion provides a rare glimpse into a clash between conflicting worldviews, a clash which reflects various power and ideological struggles raging over the past twenty years with regard to technology’s role in our society, usually away from public view.


      Assange-Schmidt — Derivative Work: Colin Green. Original Photos by: Cancillería del Ecuador, Guillaume Paumier, and Wikimedia (Photos are licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license)

      Four such interrelated struggles are particularly worthy of mention. First, what will control of cyberspace look like? Will it be well organized and centrally controlled by states and corporations, or left to individual users and professional experts? The debate on whether the Internet needs to operate without state regulation has already been decided both normatively and practically. The naive belief expressed in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, that the web can exist independently, relying on self-regulation only, is no longer tenable. Cyberspace, just like any other human space, is used for both positive and negative activities. The power struggle over the regulation of cyberspace is still ongoing: will it be the crowds whose “collective wisdom” and direct connectivity self-regulate online conduct (relying on professional experts and civil society to establish standards governing the way we communicate with each other), or will control be left to various state or corporate power elites who regulate our behavior by virtue of their control of political and economic resources, including technological platforms? In recent years, we have witnessed a trend of growth in both state and self-regulation of the Internet. However, how the balance between these foci of power will be struck remains to be seen.

      Second, is technology, and the web in particular, neutral or political? Technocrats tend to argue that because technology is based on algorithms devoid of human interference, it makes possible consistently neutral and non-discriminatory processes. However, by the very fact that it is designed by humans, every technology is inherently political, involving values and interests cast in the image of its developers, and subsequently shaped by its users.

The third struggle is over the number and identity of mediators. In the information age, the ability to control flows of information is a significant element of power. Technological improvements have immensely increased the individual user’s ability both to produce and to disseminate data. Despite this ability, however, true control of information is in the hands of mediators. The huge amount of information produced every second, as well as the need to create, share, and read content, requires the user to rely on mediators. These network gatekeepers assist users in all their activities in cyberspace, from filtering excess information, through connectivity with others, to producing new content. We rely on Google to find what we search for, or on Facebook and Twitter to show us the posts uploaded by our friends. But Facebook does not show us all of our friends’ posts, only those it selects. This is a struggle over control of the agenda of information conveyed and transferred from one person to another — the essence of power — another aspect of the politics of information, if you will.

Finally, the fourth struggle rages over transparency. Tim Berners-Lee, one of the founding fathers of the World Wide Web, open-source developers, and multiple forces in civil society and the business sector have all been working to make information more open and publicly accessible. For some, information openness has become a means to an end. The boundaries of openness have become a critical issue in the struggle to shape the image of cyberspace and of society in general. What are the checks and balances involved? Is revealing sensitive information at a cost to security justifiable solely on the basis of the freedom-of-information principle? Does blowing the whistle on systematic surveillance and tracking of civilians and users justify any means? And if limits are drawn, who should determine the boundaries? Information can be open, but its flows will certainly not be equal.

These power struggles are waged between conflicting sides, but framing them as good against evil, anarchist versus conformist, freedom fighter against the power hungry is simplistic and ignores the complexity of these debates. Google is presented as promoting a model of white, liberal, and secular values, while WikiLeaks is presented as promoting various shades of gray. But in fact, WikiLeaks’s orthodox position against censorship at all costs is designed to allow it total control over the freedom of publication, deciding how and how much it publishes. Will this allow other narratives it does not espouse to be freely expressed? It is reasonable to assume that WikiLeaks too, in its capacity as a mediator, will become an alternative form of censorship.

The complexity of these power struggles is also revealed when Assange and Schmidt talk about the reduced extent of mediation required in the “new world.” Assange talks about relying on the masses as a way of bypassing intermediaries, while Schmidt is content to cite the empowering potential of technology as the explanation for the user’s growing power. Both ignore the fact that the degree of mediation has not decreased but increased. Today, Google is the greatest platform mediator in human history, spanning its cloud services, the Android operating system, its mapping service and search engine, YouTube, chat and telephony in Hangouts, photos on Picasa, and Waze. WikiLeaks, which wants to create “an improved model of journalism,” is also a mediator, whether reluctantly or not. In the diplomatic cables affair, it deliberately chose to release certain materials and exclude others from the public domain. Who can assure us it is an honest mediator? Nobody can answer that question — neither Assange nor Schmidt.

Despite their conflicting views on various issues, Julian Assange and Eric Schmidt share a blind adoration of technology and the belief that technological solutions will cure society of its ills and woes, of rampant inequality in its various contexts and of the brutal denial of rights. Technology has an important role to play, but it is people who turn it into a space for economic growth or into a dangerous one.

[This is a translation of the prologue to the Hebrew edition of Julian Assange’s book When Google Met WikiLeaks. The article was originally posted on Karine’s “Network Gatekeeper” blog and was also featured on The Huffington Post.]

      -Contributed by ,  Associate Professor in the School of Information, University of Washington-


      Archive [draft] [#digitalkeywords] Sep 16, 2014

      “In the digital age, we attempt to create archives of a particular moment, the entirety of a medium, the mutability of language, all knowledge. … The archive as a fractured, incalculable moment is an attempt to hold close all that happens at once in the world. But this concept has become incredibly problematic with the rush of information around us.”

       
      The following is a draft of an essay, eventually for publication as part of the Digital Keywords project (Ben Peters, ed). This and other drafts will be circulated on Culture Digitally, and we invite anyone to provide comment, criticism, or suggestion in the comment space below. We ask that you please do honor that it is being offered in draft form — both in your comments, which we hope will be constructive in tone, and in any use of the document: you may share the link to this essay as widely as you like, but please do not quote from this draft without the author’s permission. (TLG)

       

      Archive — Katherine D. Harris, San Jose State University

      In Archive Fever, Derrida suggests that the moments of archivization are infinite throughout the life of the artifact: “The archivization produces as much as it records the event” (17). Archiving occurs at the moment that the previous representation is overwritten by a new “saved” document. Traces of the old document exist, but cannot be differentiated from the new. At the moment an archivist sits down to actively preserve and store and catalogue the objects, the archiving is once again contaminated with a process. This, according to Derrida, “produces more archive, and that is why the archive is never closed. It opens out of the future” (Derrida 68). Literary works become archives not only in their bibliographic and linguistic codes,[1] but also in their social interactions yet to occur. It is the re-engagement with the work that adds to an archive and that continues the archiving itself beyond the physical object.

I crafted my keyword perambulations around this burning desire to return to origins, intermixed with the desire to hold everything at once in the mind’s eye. In literature, this of course causes the protagonist to faint, go mad, isolate herself, create alternate realities — all in the name of either escaping or explaining what cannot be known. My Gothic Novel students pointed out just this week that the narrator of a short story, most specifically in Lovecraft, attempts to focus on a few actions in the busy-ness of the world, to focus the reader on what is calculable and knowable but ultimately unheimlich.

In the digital age, we attempt to create archives of a particular moment (The September 11 Digital Archive), the entirety of a medium (The Internet Archive), the mutability of language (The Oxford English Dictionary), all knowledge (Wikipedia). More than any of the others, the crowd-sourced information of Wikipedia attempts to capture knowledge as well as the creation of that knowledge — the history or Talk page of each Wikipedia entry unveils an evolving community of supposedly disinterested[2] users who argue, contribute, and create each entry. Wikipedia entries represent the digital version of an archive in the twenty-first century. The archive as a fractured, incalculable moment is an attempt to hold close all that happens at once in the world. But this concept has become incredibly problematic with the rush of information around us – a topic that I broach in my entry for “Archive” in The Johns Hopkins Guide to Digital Media.

      Kenneth Price begins my discussion about “archive” by offering a traditional definition of the term:

      Traditionally, an archive has referred to a repository holding material artifacts rather than digital surrogates. An archive in this traditional sense may well be described in finding aids but its materials are rarely, if ever, meticulously edited and annotated as a whole. In an electronic environment, archive has gradually come to mean a purposeful collection of digital surrogates. (para. 3)

Later in this article, Price veers into discussing the role of the archivist in shaping the archive, similar to what Derrida proposes above but with less dramatic flair. Price’s article responds to questions about the authority of a digital scholarly edition and its editors in the face of traditional print editions. Always, for Price, there is an organizing principle to archiving and, subsequently, editing. However, what we’re concerned with for this particular gathering is the inherent messiness of the archive as it pertains to cultural records, both physical and digital. What gets placed into the archive, and by whom, becomes part of that record. What’s missing, then, becomes equally important. Martha Nell Smith proposes that digital archives are free from the constraints of a traditional print critical edition; more importantly, the contents and architecture of a digital archive can be developed in full view of the public, with the intention of incorporating the messiness of humanity.

In “Googling the Victorians,” Patrick Leary writes that all sorts of digital archives of Victorian literature are springing up, archives that are not peer-reviewed per se but that offer an intriguing and sanguine view of the wealth of nineteenth-century materials. Leary concludes his essay by asserting that whatever does not end up in a digital archive, represented as cyber/hypertext, will not, in the future, be studied, remembered, valorized, and canonized. Though this statement reflects some hysteria about the loss of the print book, it is also revealing in its recognition that digital representations have become common and widespread, regardless of professional standards. Whatever is not on the Web will not be remembered, says Leary. Does this mean that the literary canon will shift to accommodate all of those wild archives and editions? Or does it mean that the mega-projects devoted to canonical authors will survive while disenfranchised and non-canonical literary materials fall further into obscurity?

Raymond Williams posits that “vulgar misuse” allows for entry into the cultural record (Williams, Keywords 21), though those in library science object to a normalization of “archive” that moves away from their professional standards for a vault of the record of humanity. But the construction of a digital archive in literary studies conflates literature, digital humanities, history, computer programming, social sciences, and a host of other cross-pollinated disciplines. The archive, more than anything else right now in literary studies, demonstrates what Williams calls “networks of usage” (23), with “an emphasis on historical origins [as well as] on the present – present meanings, implications, relationships – as history” (23). Community, radical change, discontinuity, and conflict are all part of the continuum in the creation of meaning according to Williams, seemingly similar to Borges’ “Library of Babel” and Derrida’s “archive fever.” While archivists insist on a conscious choice in the use of “archive” (noun or verb), perhaps as part of a professional tradition, I seek to look at the messiness of the word as a representation of the messiness of past, present, and future.

The issue with formal digital archives is where to stop collecting, to account for scope, duration, and shelf space. In digital archives, sustainability is key; but the digital archive is vastly more capable of accumulating everything and then allowing its liberal, even promiscuous, remixing by users with whatever tools are available. The primary argument seems to be over who controls the inventorying, organization, tagging, and coding of the data within an archive (user, curator, editor, architect?). And what digital tools are best employed in sorting the information? Even a tool offers a preliminary critical perspective.

Kathleen Burnett, borrowing from Deleuze and Guattari’s concept of the rhizome, notes in “Toward a Theory of Hypertextual Design” that the archive is less about the artifact and more about the user:

[e]ach user’s path of connection through a database is as valid as any other. New paths can be grafted onto the old, providing fresh alternatives. The map orients the user within the context of the database as a whole, but always from the perspective of the user. In hierarchical systems, the user map generally shows the user’s progress, but it does so out of context. A typical search history displays only the user’s queries and the system’s responses. It does not show the system’s path through the database. It does not display rejected terms, only matches. It does not record the user’s psychological responses to what the system presents. . . . The map does not reproduce an unconscious closed in upon itself; it constructs the unconscious. (25)

The digital archive, some argue, is the culmination of Don McKenzie’s “social text,” and the database, and to some extent hyperlinks, allow users to chase down any reference. In essence, users become ergodic and radial readers. McGann, in The Textual Condition, defines radial reading as the activity by which reading regularly transcends its own ocular, physical bases, meaning that readers leave the book in order to acquire more information about it (looking up a word in a dictionary, for instance, or a footnote in the back). This allows the reader to interact with the book, text, story, etc., through this acquisition of knowledge. The reader makes and re-makes the knowledge produced by the text through this continual knowledge acquisition, yet the reader never actually leaves the text. It stays with her even while she consults other knowledge. This gives the text a plasticity that is unique to each reader (119).

But the archive, a metaphor once again, is always and forever contaminated, according to David Greetham in The Pleasures of Contamination. An archive is less about the text of a printed word and can be about all facets of materiality, form, and its subsequent encoding – even the reader herself. Scott Rettberg notes that, under this idea of ergodic reading, the act of reading prioritizes the experience over the object itself:

The process of reading any configurative or “ergodic” form of literature invites the reader to first explore the ludic challenges and pleasures of operating and traversing the text in a hyperattentive and experimental fashion before reading more deeply. The reader of Julio Cortazar’s Hopscotch must decide which of the two recommended reading orders to pursue, and whether or not to consider the chapters which the author labels “expendable.” The reader of Milorad Pavic’s Dictionary of the Khazars must devise a strategy for moving through the cross-referenced web of encyclopedic fragments. The reader of David Markson’s Wittgenstein’s Mistress or Reader’s Block must straddle between competing desires to attend to the nuggets of trivia of which those two books are largely composed or to concentrate on the leitmotifs which weave them into a tapestry of coherent psychological narrative. In each of these print novels, the reader must first puzzle over the rules of operation of the text itself, negotiate the formal “novelty” of the novel, play with the various pieces, and fiddle with the switches, before arriving at an impression of how the jigsaw puzzle might fit together, how the text-machine may run. Only after this exploratory stage is the type of contemplative or interpretive reading we associate with deep attention possible. (para. 13 – emphasis added)

      As our understanding of digital interruptions into an otherwise humanistic world expands and becomes both resistant and welcoming, the definition of an “archive” expands as well.


      Bibliography

      Bornstein, George. “How to Read a Page: Modernism and Material Textuality.” Studies in the Literary Imagination 32:1 (Spring 1999): 29-58.

Burnett, Kathleen. “Toward a Theory of Hypertextual Design.” Postmodern Culture 3:2 (January 1993): 1-28.

Greetham, David. The Pleasures of Contamination: Evidence, Text, and Voice in Textual Studies. Bloomington: Indiana UP, 2010.

      Harris, Katherine. “Archive.” The Johns Hopkins Guide to Digital Media. Eds. Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson. Baltimore: Johns Hopkins UP, 2014.

      Leary, Patrick. “Googling the Victorians.” Journal of Victorian Culture 10:1 (Spring 2005): 72-86.

      McGann, Jerome. “How to Read a Book.” The Textual Condition. Princeton: Princeton UP, 1991. 119.

      Price, Kenneth. “Electronic Scholarly Editions.” A Companion to Digital Literary Studies. Eds. Susan Schreibman and Ray Siemens. Oxford: Blackwell, 2008.

      Rettberg, Scott. “Communitizing Electronic Literature.” Digital Humanities Quarterly 3:2 (Spring 2009).

Smith, Martha Nell. “The Human Touch: Software of the Highest Order.” Textual Cultures 2:1 (Spring 2007): 1-15.

Williams, Raymond. Keywords: A Vocabulary of Culture and Society. New York: Oxford UP, 1983.


      Endnotes

      [1] The bibliographic code is distinguished from the content or the semantic construction of language within a text (linguistic code) by the following elements, as George Bornstein describes: “[F]eatures of a page layout, book design, ink and paper, and typeface . . . publisher, print run, price or audience. . . . [Bibliographic codes] might also include the other contents of the book or periodical in which the work appears, as well as prefaces, notes, or dedications that affect the reception and interpretation of the work” (30, 31). Linguistic codes are specifically the words. Also within the book are paratextual elements that do not necessarily fall under the bibliographic or linguistic codes. 

      [2] See Matthew Arnold on disinterestedness in Essays on Criticism.

      -Contributed by ,  Associate Professor of English & Comparative Literature, San Jose State University-

