Spam, and the Challenge of Chasing Shadows

This dialogue was inspired by Kevin Driscoll’s insightful review, in the L.A. Review of Books, of Finn Brunton’s superb new book, Spam: A Shadow History of the Internet. I asked Kevin if he would use a bit of his review to begin a dialogue with Finn; the conversation moved quickly to the methodological challenges of studying shadows.

Tarleton Gillespie, co-organizer of Culture Digitally

Kevin Driscoll

It was a pleasure to read your new book, enough so that it moved me to map, in the L.A. Review of Books, the history it tells and the argument you make with that history. I said there:

The book’s subtitle reads, “A Shadow History of the Internet,” but the story we encounter inside may be more accurately described as a history of the internet’s shadow. At each turn, Brunton sets aside the areas of the net illuminated by publicity and venture capital to explore the ignored cracks and corners where spam proliferates… As Brunton demonstrates, the history of spam — by definition, that which is undesirable and annoying — can help inform our ongoing quest to determine what the internet is and can be… The history of what we didn’t want is as important as what we did, and to remember it helps us to understand how this transnational assemblage we call “the internet” came to be.

But as compelling as the sociological argument is, the book is built as a history — and a history that, it strikes me, must not have been the easiest one to uncover and snap together. I didn’t think I could spend much time in the book review on methodological and historiographic issues, but here’s where I hinted at what I thought might be a particular challenge:

Whereas other historians of technology have been able to rely on old system manuals, institutional archives, and interviews with surviving users and engineers, Brunton’s task is more difficult: formal preservation of spam-related materials is very rare, for the simple reason that spam is usually deleted as quickly and permanently as possible. As a result, Brunton finds himself chasing the spam story across an array of unusual, and in many cases unstable, primary sources. Usenet FAQs, internet RFCs, and other vernacular policy documents provide some of the most important clues about the characteristic conflicts of their particular times and technologies, and Brunton’s enthusiasm for his ephemeral subject matter is evident in the care that he takes in reconstructing these contexts for readers who may not have personal experience with a time-sharing terminal, a dial-up internet service, or an encrypted grey-market chat room.

So I was hoping our dialogue might be a chance to ask more about that, about the difficulty not only of telling the history of spam, but of the historical and archaeological work necessary to get beyond “the areas of the net illuminated by publicity and venture capital.” So, two questions to open that up:

1. One aspect of the book that I really enjoyed was the effort you put into reconstructing the user experience of early time-sharing systems. Very few internet users today remember reading email over a compressed 9600 bps connection, never mind an error-prone 300 bps connection with a tiny buffer (a rough simulation of that pace follows these questions). Wrangling this level of technical detail into something that will resonate with a variety of readers seems like a serious writing challenge. How did you tackle this problem? Did you get feedback from less technical readers? Were there any particularly troubling passages that required a lot of revision?

2. Many of your primary sources were not found in traditional archives. The “Man in the Wilderness” leak is a particularly compelling example of an unusual sort of source that played a crucial role in your analysis. In a sense, you are compelled to wear archivist and preservationist hats at the same time as you develop your own historiography. Can you say more about the techniques you employed to identify and manage these artifacts? Did you find yourself at any notable dead-ends? Are there key pieces of the spam story that seem to be “lost”?
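To give a rough feel for the constraint mentioned in the first question, here is a minimal Python sketch (not from the book, and purely illustrative) that replays text at the pace of an old serial line. It assumes 8-N-1 framing, i.e. one start bit, eight data bits, and one stop bit per character, so roughly 30 characters per second at 300 baud.

```python
import sys
import time

def teletype(text: str, baud: int = 300) -> None:
    """Print text at the pace of an old serial line.

    Assumes 8-N-1 framing: 1 start + 8 data + 1 stop bit per
    character, i.e. 10 bits/char, so 300 baud is ~30 chars/second.
    """
    delay = 10.0 / baud  # seconds per character
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()  # show each character as it "arrives"
        time.sleep(delay)

if __name__ == "__main__":
    # Watching even a short message crawl by suggests why early
    # users were so sensitive to every wasted byte on the wire.
    teletype("Subject: MAKE MONEY FAST\n\nDear friend...\n")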

 

Finn Brunton

At the time that I was writing the book, I was deeply immersed in the world of the early networks — reading their specifications and technical documentation and policy materials, reconstructing their conversations, studying the early users, their hardware, and their milieu. However, in the off-hours and the evenings I was reading a lot in a genre that I suppose could be considered encounters with young technologies. I wanted to understand how the machines and utilities felt to the people, the role they played in their understanding of what was happening in the world: the affect, as well as the effect — D’Annunzio on planes; Gertrude Stein on cars; Hazlitt on a globalizing imperial space; Mayakovsky on electric light; C.S. Peirce thinking through the Marquand binary logic machine at Princeton; Daphne Oram encountering tape recorders and oscillators; Soviet teenagers sitting down with the Elektronika BK; Siegfried Giedion on glass, iron and concrete in architecture; Benjamin and Brecht listening to and recording for the radio; and especially Humphrey Jennings’s under-appreciated masterpiece Pandaemonium, a massive collage of the firsthand accounts of industrial machinery (and new ways of seeing, like ballooning and microscopy) from the 1600s to the late 19th century (limited largely to England, unfortunately).

Just between you and me and this public forum: Understanding the constraints that were part of the experience of the early network is, of course, very important to understanding how spam developed, both as a concept and a practice — but I had a bigger ambition. I assert that my arguments in the book about spam and the history and dynamics of the Internet are both accurate and useful, but I wanted the book to speak to people who had no particular interest in those arguments, by providing them with an account of how different eras in the history of networked computing felt. My authorial fantasy is that three hundred years from now, when the entirety of the legacy Internet comes shipped for free in some minute corner of every crystalline quantum computer sold, someone curious to explore it can pull up my book (along with many others, of course) to reconstruct the experience and the imaginary of one aspect of the technology.

As for the practicalities of writing it, a lot of the work was helped by reading those earlier accounts I’ve mentioned above. Something about reading the splendor and shock of electric light helps the terminal come into its own as an object to be described. (All this is completely aside from the superlative editing I received from the ST&S manuscript reading group at the University of Michigan, and from my reviewers and editor at MIT Press, without which the book would never have worked.)

As for “the techniques you employed to identify and manage these artifacts,” finding them was largely a brute-force operation: I just read everything vaguely related to the history of spam, on the wild off-chance it would be pertinent. I read the Arpanet News, and the papers that Leo Kuvayev wrote about teaching an AI to play Hearts while he was still at MIT, before he became a spammer, and all the tangentially related oral histories and publications. I learned an appalling amount about the phone sex business, money laundering and credit card fraud, the ins and outs of setting up an early commercial ISP, tape drives, the weird grey market that’s always swirling around weightlifters and bodybuilders, long-distance calling rates, the history of support vector machines, and so on — much of which never made it into the book, but these materials would often point me in the direction of the archival materials on which I ended up relying. Every scholar knows the magic of that experience, where one thing leads to another. And, yes, I would instantly grab everything I could, whether with a wget command, by saving a PDF, or, in the case of some fascinating criminal IRC channels with bizarre, colorful iconography, with screenshots. It was at once amazingly easy to lay hands on materials, and amazingly easy to lose them if I didn’t preserve them myself. The Usenet archive that passed from Deja News to Google Groups, for instance, seemed to grow progressively more broken with time; spam blogs and spam comment threads were (understandably) deleted by their host services if they were noticed. I would capture it all, just in case. As it stands, most Internet and digital media researchers must be their own archivists.
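As a small illustration of that be-your-own-archivist reflex, here is a minimal Python sketch, assuming nothing beyond the standard library, that saves a timestamped local snapshot of a page before it disappears. The function name and file-naming scheme are hypothetical, not a reconstruction of Brunton’s actual workflow.

```python
import datetime
import pathlib
import urllib.request

def snapshot(url: str, dest: str = "archive") -> pathlib.Path:
    """Save a timestamped copy of a page into a local archive directory."""
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
    safe_name = url.replace("://", "_").replace("/", "_")
    dest_dir = pathlib.Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / f"{stamp}_{safe_name}.html"
    with urllib.request.urlopen(url) as resp:  # fetch the page as-is
        target.write_bytes(resp.read())
    return target

if __name__ == "__main__":
    # Grab a copy now; the live page may be purged or broken tomorrow.
    print(snapshot("http://example.com/"))
```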

The areas of spam which I found the most opaque and frustrating were those involving significant people whom I couldn’t track down or contact. Part of why I was so obsessed with collecting firsthand traces was to reconstruct, as best I could, the activities, decisions, and experiences of significant actors who were either in hiding, dead, or exceedingly uninterested in discussing their work.

 

Kevin Driscoll

I strongly sympathize with the desire to preserve the feeling of computer use in various times and places. Not only does it help to illuminate the uses and values associated with computing but, ideally, it should also stimulate design thinking and suggest alternative forms that networked computing might take in the future. I wonder what other resources these future-people might have to accompany your book. Will they also attempt to emulate the software and simulate the network? Maybe 3-D print a replica ASR-33 Teletype? It feels like some of this work is already happening outside of the academy. (Retro gamers collect CRTs to play their aging consoles, and ham radio operators maintain vintage gear to participate in the annual “Straight Key Night.”) I wonder how the practice of writing history will change as computers are used to model and simulate the past. Will it be less important to describe the features of a technology than to examine the reactions of contemporary user-witnesses? Publishing will likely be affected as well — we don’t see many books published with bonus CD-ROMs these days, but even quite obscure books will have a web complement. (An alternative way into this question might be: what new sorts of apocrypha become possible when the past is repeatedly re-modeled? Will high-tech re-enactors set up cell phone beacons the way that Civil War re-enactors set up camps?)

In terms of tracking down relevant materials, it sounds like researching spam presents a kind of double whammy. The stewardship of Usenet and email archives has been inconsistent in general, but if any effort is undertaken to clean or organize the materials, it is almost certain to begin with a purge of all spam messages. I imagine that this problem is only compounded as spam-fighting is automated and services are centralized. How do you think Usenet’s decentralized architecture shaped its preservation?

One hunch would be that the inherent redundancy of Usenet’s store-and-forward protocol would leave behind lots of partial collections gathering dust on university servers. But this plenitude could also contribute to a false sense that the materials are ubiquitous, secure, and unlikely to disappear. The opposite would be true for a highly centralized service like America Online. Presumably, only the company itself was in a position to archive its forums on a mass scale, but it also had greater resources and motivation to do so. Today, we have access to a big chunk of Usenet but almost none of AOL. What does this suggest for scholars who will attempt to reconstruct the experience of Facebook or Wikipedia?

You’ve also really piqued my curiosity about the sorts of people who are “exceedingly uninterested” in talking to a researcher. Given how recent these events are, there must be countless people ready and willing to claim various “firsts.” Making contact with reticent interviewees seems like a valuable corrective. I suppose that having spam as your organizing theme presented another kind of double whammy here, because many of the people you would want to contact were either acting pseudonymously or are remembered for rather dubious achievements.

Writing the review for the LARB, I was struck by Bruce Sterling’s rather vitriolic blurb on the dust jacket — “Brunton has done mankind a service with this coldly objective analysis of a great human evil.” Did you have to manage a lot of lingering animosity as you started to talk to “survivors” of spam’s past?

 

Finn Brunton

There are so many interesting questions here, but I think they circle around a common theme — the peculiarities of doing digital history, especially with a technical orientation: access to archives, thinking through artifacts, working with participants, all as modified by the technology. This is, of course, a gigantic issue, too large for this immediate dialogue! But I have some thoughts connected to your questions that can help us get at it.

Both by temperament and by methodology, I have a soft spot for lost causes and paths not taken. I’ll bullet these, since there are a couple of subthoughts here:

• Part of this is a heritage from “strong programme” science studies: to understand the work of science, especially in historical retrospect, you need to be able to see the space of options in which science takes place, rather than using social explanations for the theories that turned out to be incorrect and leaving the correct theories to “truth.” That one group was right, as we now know, doesn’t help us understand how they went about producing truth, any more than it would to retrospectively marvel at Napoleon’s inability to anticipate bad weather and the Prussian army at Waterloo. (I love returning periodically to Stendhal’s description of Fabrizio stumbling through the battle in The Charterhouse of Parma: fog, mud, the dead, distant sounds, a plowed field, people fleeing, people charging, the wounded horse — “But is this a real battle?” he asks; “Something like,” replies the sergeant — because it’s such a great corrective to our sense of pellucid armchair quarterbacking, an evocation of history as blurred present.) Understanding the advent of the accurate theory involves understanding the whole space of the inaccurate, and giving it the weight it historically deserves; understanding TCP/IP also involves understanding OSI (for a great, journalistic look at the latter, see Andrew Russell’s piece “OSI: The Internet That Wasn’t”).

• And, while I’m not a full-time media archaeologist, I follow that field with immense interest, and take very seriously the commitment that Zielinski makes in the concept of variantology — that media history is the history of the vast space of unsuccessful alternatives and variants. This produces a richer, more meaningful historiographical record: to take Rick Altman’s phrases, “cinema-as-it-is” is in many crucial ways the product of categories of the moving image like “cinema-as-it-could-have-been” and “cinema-as-it-once-was-for-a-short-time-but-ceased-to-be.” However, variantology also reflects a deeper stake. Paying attention to these legacies keeps the game of history (in which the present is embedded) still in play; it expands the space of possibilities for action now. (It is closest in some ways to Benjamin’s “weak messianic power,” the appointment the past is waiting to keep with us, in all the failed tries, recalcitrant details, and silenced populations who can be brought into the sudden constellation of present action, for which stars of very different distances and times assume a single, coherent shape.)

• Finally, and related to the previous bullet, on a design level the space of the alternatives is an amazing reservoir of creativity, from the Cybersyn of Allende and Beer (chronicled in Eden Medina’s Cybernetic Revolutionaries) to the architecture of Cedric Price, Archigram, the Metabolists … the experimental politics of exploratory communalism, and initiatives like the Persimfans orchestra and Lysander Spooner’s American Letter Mail Company … and, getting back to the specific areas around computing that I work on, projects like T.O. Ellis’s amazing GRAIL programming language (1969!), Carl Hewitt’s Actor Model of computing, Jef Raskin’s astonishing interface and OS work (from the Canon Cat and The Humane Interface to Archy), and my beloved Plan 9 from Bell Labs.

For my purposes, following the history of spam was very much about exploring these forks in the road: of networking technologies and standards; of models of community, politics, online economics, and governance; of cultural conceptions of how networked computing should be, and what it should do. This meant keeping this space of alternatives, debate, and statements and counterstatements (legal, social, technological) in focus.

To your point, there are structural characteristics to digital technologies that make it possible to explore the space of alternatives in useful ways: in many cases, the people involved are still alive and happy to discuss (as in Belinda Barnet’s great recent book on hypertext, Memory Machines, which draws on conversations with Douglas Engelbart and Ted Nelson), and the hardware and software are, in various ways, available — whether through projects like the Media Archaeology Lab or JavaScript MESS. People are knocking together the firmware to have working oN-Line System-style chording keyboards, and emulators for specific microprocessors, like the legendary MOS 6502. We scholars of matters digital have an enormous advantage in this respect: we can, relatively easily, pull together that whole charged field of technical details and social contestation out of which a technology actually emerged. And, of course, this puts an ethical burden on us to keep as much of this available for the future as we can. (In this respect, we also have a great boon from the hobbyist community, who keep operating systems and old sites and platforms and machines running — every alliance we can form there will be very, very valuable — which speaks to your note about the material history of computing and networking happening outside the academy as well.)

But, of course, it can still be a bit feast-or-famine. Some resources, some huge chapters, are to a shocking extent simply gone; others are available in unbelievably rich, contextualized, timestamped detail that would be the envy of any historian working with paper documents. (For an instance of the former, I’ve been fascinated by the methods Megan Ankerson has described in her work on the commercial web design industry in the 1990s — things like finding images on screens visible over someone’s shoulder in a photograph in a printed brochure, and using boxes and manuals — because of the ways that vitally important material was simply not preserved.) Ultimately, for me, the history of digital media is a chapter in the greater history of technology, and that has long been a struggle to restore names and timelines and context to what Siegfried Giedion called “anonymous history” in Mechanization Takes Command — indeed, in 1948 he was already decrying the “amazing historical blindness” that “has prevented the preservation of important historical documents, of models, manufacturer’s records, catalogues, advertising leaflets, and so on.” And many of the worst villains were the manufacturers themselves, who rarely had contingency plans for the material and documentary legacies of their companies, or an archival vision, as you point out about AOL versus Usenet (and as will be the case, I would bet, for Facebook versus Wikipedia).

 

Kevin Driscoll

Patent trolling aside, the notion that “dead-end” technologies might serve as a common pool of unfinished ideas is really compelling. I’m curious to know more about the dead-ends that linger on, not just as sources of inspiration or objects of nostalgia, but as practical, everyday tools. For example, there are thousands of people working in actual Plan 9 environments today. In these cases, continued liveliness is often accompanied and supported by a deep affective commitment among remaining users. I am sure that there is someone out there writing true crime novels on a Canon Cat. I don’t think that the emotional heft of this loyalty can be adequately explained by simple economic models such as path dependence or the sunk cost fallacy, nor is it just a cultural preference for contrarianism. Instead, it feels like the early rumblings of a rather radical opposition to compulsory upgrades.

One intriguing case that I have been watching unfold recently is the Neo900 project. Neo900 is a proposed drop-in replacement motherboard for the Nokia N900 smartphone, one of only two products to ship with Nokia’s Maemo/Linux operating system. For years, enthusiasts have consumed a steady stream of N900 clones from China rather than “upgrade” to an Android phone. To date, supporters have raised over €50,000 for an alternative sort of upgrade that promises autonomy from Nokia. For these hardcore Maemo enthusiasts, the N900 is not just a curious “path not taken” but rather a path that they aggressively refuse to abandon. They are, in effect, “occupying” Maemo in a pragmatic protest against vendor lock-in and planned obsolescence — an ideological position made plain on the Neo900 site.

That Maemo was built on Linux gives the N900 enthusiasts an advantage over fans of particular Apple or Android phones. For the latter group, resistance is, more or less, futile. Whereas devices like the N900 (or the Canon Cat!) will continue to work as long as electricity comes out of the wall, an old Android phone will gradually cease to function properly if the user refuses to accept remote updates.

It’s interesting to link this rising resentment about forced upgrades back to the threats — real and perceived — that animate your “third epoch” in Spam. The recurring justification for Trusted Computing, centralized control, and mandatory remote updates is that they protect users and their information from malware. Maemo users are also shielded from malware, albeit by obscurity rather than any proactive defense. The ROI on N900 malware would be so minimal that malicious Maemo programs seem extremely unlikely. In this sense, deciding to use one of the lingering “dead-ends” from the niches of computing culture could be a strategic means of protecting oneself from snooping. (Maybe activists will start buying Palm Pres off eBay?)

The lingering usefulness of dead-end technologies makes it difficult to mark a hard boundary between past and present. And yet, it seems that there is an unmet desire for a history of computing in popular culture. Tech news sites regularly feature retrospective slideshows of companies in decline like BlackBerry and Nokia, the Steve Jobs biography was a bestseller, and “The Social Network” made it to the megaplex. As I was thinking about your last set of answers, I came across this diagram representing all of the known implementations of the Forth programming language (a favorite of both Raskin and Nelson). The “family tree” representation reminded me, on one hand, of “The Evolution of Video Game Controllers” infographic by Pop Chart Lab, and, on the other, of the “quasi-linear” diagrams that Pinch and Bijker included in their discussion of the Penny Farthing in “The Social Construction of Facts and Artefacts.” Is there something peculiar about the history of technology that lends itself to visual representation? Why would someone decorate their home with a genealogy of video game controllers? Did any graphical artifacts play an important role in your research leading up to Spam? Did you consider including any figures? (A network diagram of the mid-1990s spam industry would be a thing to behold!)

 

Finn Brunton

As it happens, in the interval between our last exchange I started re-reading Carlo Ginzburg and a few other members of the microstoria school, for whom I have a deep affection and respect, and their thinking and concerns might provide a way for us to go back up a few levels to the historiography of the digital more generally. (There’s a reason I spend so much time, in a book that’s nominally about spam, with a Natalie Zemon Davis book!) I’d like to try out the idea that there’s something particular to the nature of digital technology and its history that distinguishes the kind of work we and others in the space do, and raises a particular set of problems and questions broadly.

One of the essential problems of microhistory is “asking big questions in small spaces” — using the space of a single village, a single person, a single trial, to come to terms with what I will call, a little tongue in cheek, the general dynamics of an era, a populace, a system. You look for the “normal exception,” the everyday person who happens to have been recorded and documented in some detail, illuminating all the hordes of background extras who are normally out of focus in the distance behind the king, commander, vast Marxist historiographic waves, and so on.

However, the kind of digital work we and others do — and I mean that broadly, from (as Tarleton puts it) the work you and I do, on spammers and BBSers and assorted malcontents and mutants, to those studying ARPANET, big connectivity infrastructure, contemporary social network business plans, the spread of personal computing whether through games or spreadsheets — this work is of a piece because the “normal exceptions” really aren’t exceptional. Part of what makes the story of networked computing interesting is that little and big are so thoroughly mixed together. In my quick, back-of-the-envelope way, I’d like to distinguish this from the history of computing more broadly: in some of those cases, particularly in the early institutional/industrial history, there really is a noticeable distinction between the operation of the big iron and the lives of those outside the Eliot Noyes gleaming glass enclosures — but this points up how much big and little blend once networking enters the picture. Hacker culture (and to some degree phreaker culture), pro-am transitions by daydreamers, activists, and bored graduate students, the consequential inventiveness of literal and figurative small-timers: the outsize contributions of many scattered micro-actors are such a significant part of the story they leave no element untouched. To turn my previous comment around, understanding OSI — one of the most classically institutional, formalized parts of the history of networking — means understanding the other standards, which were extraordinarily open to groups of all sizes and degrees of influence.

We could go on in this line: open source, web standards and consortia, working groups, public mailing lists, user revolts and protests (to go along with user-generated content), the freely circulated RFCs, Zimmermann’s PGP code migrating around Usenet and beyond, students Xeroxing the Lions’ Commentary on UNIX 6th Edition, with Source Code, open governance and open data projects, IP piracy, edit wars on Wikipedia… It is inescapably a history in which little and big actors are tangled up together in a unique way, and have to be jointly accounted for — “unique” not in the sense that there has never been such an entanglement before, but that we have never before been able to see it so clearly. We don’t have to seek out the normal exception brought to (generally tragic) light by court proceedings or records of interrogation and punishment. As a rule, everyone involved generates almost relentless self-testimony: in blogs, commented code, mailing lists, comments generally, RFCs, memos, demos, tweets, and public talks.

Your discussion of Plan 9 epitomizes this: it’s hard to imagine a more old-school Big operation in contemporary computing than an initiative to build a successor to Unix, hosted and underwritten by Bell Labs — but of course, as you point out, the story of Plan 9 (for more than half its life, at this point) is the story of hobbyists and researchers of many stripes doggedly keeping it alive, forking it, reinventing it, and appropriating elements. And, of course, this quality is part of what fascinates me about spam: the collapse of the focal layers from the everyday interaction of someone with their email client, through the technicians, administrators, and architects of the network, to the scale of Interpol, Google, national jurisdictions, cyberwarfare — and the spammers seeking out their normal exception, that small but inevitably reliable group of recipients who will take the bait (and produce a testimony of their own). For the story to be clear and accurate, we need to be like Gregg Toland, filming in deep focus, keeping all the elements at different scales visible at once. As people working in a historical register on digital technologies, that’s our privilege and our challenge.