Media Scholarship Needs Updating: Iterative Article “Editions” for a Sped-Up World

The undergrads in my Social Media and the Self class laugh on cue whenever MySpace surfaces in the scholarly literature that we’re discussing. And MySpace appears all the time—as do Dodgeball, Flickr, and Orkut. Even the nods to Digg strike the students as hilarious—a whole fall-and-rise cycle later. I understand. It is a little ridiculous, so I often substitute magazine pieces by smart journalists or the rare public-facing essay by media academics like danah boyd or Nathan Jurgenson. The journal articles, even those published in the last year or two, very often report on data a half-decade old.

This is a problem, and not because of undergraduate snickering. By the time we share our research with each other, it’s already out of date. If scholarship is a cooperative enterprise—something like an ongoing conversation—then the multi-year delay from research to published article makes for a stilted exchange. The reality that we’ve labored to understand is already history.

The obvious culprit is the traditional academic publishing system. Even in the age of Manuscript Central, the submission upload feels like loading the stagecoach. Every step in the process—peer review most obviously, but also copy-editing, proofs, and then the publishing-queue purgatory—has its own idiosyncratic duration. For book chapters and monographs, publishers’ seasonal list plans, blurb-begging, and conference roll-outs claim additional months.

For media scholars these delays have always grated, if only because our research is aimed at a moving target: the media system. The key shift is that the pace of change has accelerated so rapidly that the standard delays—once an annoyance—are now crippling. Driven by venture capital and hyper-caffeinated coders, the media industries are running laps around our attempts to make sense of them. In the 1920s the American sociologist William Ogburn fretted that a gap—he called it a “cultural lag”—had opened up between the country’s technological development and the rest of the culture. We’re experiencing something similar, a scholarly lag that’s getting worse with every refresh of the Techmeme feed.

It’s true that recency is a superficial measure of scholarship. The dynamics of online self-presentation, for example, were not recast by Facebook’s toppling of MySpace. And just because a study looked at a now-dead social network doesn’t mean that its analysis is thereby vacated. Concepts generally outlive the data that they interpret, and the resonant ones can circulate for decades.

Nevertheless, recency matters—especially to our field. Important features of the media landscape are getting regularly made over, and we’re too late to comment. Silicon Valley bloviators like to boast about “breaking shit.” They’ve definitely broken our scholarly model.


One solution is to bypass the academic publishing system altogether. A growing number of media scholars are taking their analyses straight to the reading public—think of Zeynep Tufekci’s topical essays or Galen Panger’s astonishingly rich piece on the Facebook research kerfuffle. In the recent past we had to compete for the scarce real estate of opinion journals like The Atlantic or The New Republic; even if you landed a piece, you had to discipline your prose into citationless brevity. Now it’s easy to embed citations in unobtrusive links to published scholarship, and trimming an article’s length is nothing but a decision about lucidity and the reader’s attention. You can publish your analysis in a range of public-facing outlets, some new—like JSTOR Daily, The Conversation (“academic rigor, journalistic flair”) and the platform/editorial hybrid Medium—or even the websites of the once-space-stingy New Republics of the world. One ancillary benefit is that your work can joust with smart non-academics like Clive Thompson or Nicholas Carr. It’s true that all the new platforms add to the already crippling problem of published abundance. But well-written public pieces really do find an audience, through tweeted endorsements and wisdom-of-the-discerning-crowd flags like Pocket’s “Best Of” or Nuzzel. At any rate there’s little risk, with a piece of popular writing, that your thoughts will get suspended in unread amber behind Elsevier’s paywall.

It’s obviously not realistic, for practical reasons alone, to expect scholars to reroute their scholarship to The New Inquiry or the Times’ Opinionator. Tenure evaluation committees discount popular writing, and professors who publish for the educated public—and not for their colleagues too—pay a (diminishing?) reputational price for their choice of audience. There’s surely some boundary-work at play here, but the queasiness reflects justified fears that public writing is an end-run around the review process. Peer review may be slow and aggravating, but this—our organized skepticism—is the least-worst way to validate knowledge. It’s the only epistemological bulwark we have.

Why not speed up the peer-review process? This would indeed help, alongside an accelerated publishing schedule. I’m convinced that lots of us would move faster if publication delay were more transparently linked to review turnaround time. It would also help to know that our unpaid labor wasn’t fattening Springer’s profit margin. Open-access megajournals in the natural sciences, like PeerJ and PLOS ONE, prioritize rapid peer review—and the soon-to-launch Open Library of Humanities promises to replicate the experience for social scientists and humanists.

Also worth exploring are the various post-publication peer-review experiments cropping up—again, mostly in the natural sciences. PubMed Commons, PubPeer, and the Faculty of 1000 are three high-profile examples. The idea is that two or three anonymous (and probably tardy) colleagues shouldn’t hold the only set of keys to knowledge certification. Why not lower barriers to publication, the reasoning goes, and push back the reviewing scrutiny to informed readers? Ratings and comments from qualified academics, named or anonymous depending on the platform, would get the wheat/chaff sifting done better—and faster. And the publishing would happen first.

Why wait for publication at all? Can’t the formal trappings of the journal article—the formatting, pagination, and logo—be added later? This is how the cool kids (physicists and mathematicians) do it, by depositing their pre-publication papers (“preprints”) in arXiv. Yes, the papers find their way into print eventually, but in the meantime they’re getting read and cited by fellow physicists. For arXivists the official publication is more like an afterthought—a museum display case for tenure committees and historians of science.


Still, I’m not convinced that any of these publishing models addresses the media scholar’s dilemma. ArXiv works for physicists because it gets papers distributed fast, so they can be read and cited in the next paper, which in turn will furnish data referenced in a third. And so on. The thing that’s rapidly changing, for the physicists, is the body of research itself. ArXiv permits tightly bound, problem-specific clusters of physical scientists to share (and argue over) findings at the pace of their research.

But for us it’s not the literature that’s rapidly changing; it’s the reality itself. The problem isn’t, say, delay in getting a far-flung colleague’s data on quarks. The problem is that young adults are abandoning Match.com’s algorithms for the swiped gratifications of Tinder right now—meaning that your dating study from three months ago is already out of date. The fundamental nature of the quark has not changed like this.

Another way of getting at the point is to invoke a contrast drawn by sociologists of scholarly life. In Academic Tribes and Territories, Tony Becher and Paul Trowler distinguish between rural and urban patterns of disciplinary research. Urban disciplines like physics have lots of people working on specific, well-defined topics. Rural fields like media studies are far more spread out, with porous subfield borders and plenty of wandering vagabonds.

It’s obvious why urbanists, with their high people-to-problem ratio, need a fast system of sharing information. All that collaboration and competition make preprint sharing à la arXiv an occupational necessity. The point is to keep scientists always already informed about one another’s work. Living the rural life, we’re not quite so pressed to stay in constant touch with one another. Our problem, ironically, is not the urbanist one of communication. Our problem, instead, is that the objects of our research won’t sit still.


The way out of this dilemma is to make sure that our research doesn’t sit still either. At least some of our published output, in other words, should get regularly updated. In place of the current (and centuries-old) one-off, version-of-record practice, we ought to issue multiple versions over time—weaving in new data, secondary literature, and even current-events framing. If the existing model is inert, typeset one-off publication, what I’m suggesting here is something dynamic and breathable.

There are plenty of analogs that we might borrow from. One voguish model is the “Vox Card Stack,” the topical explainers on persistent news stories (think “Israel-Palestine” or “Obamacare”) that receive regular updates on Vox.com. They’re Ezra Klein’s attempt to supplement the recent-slice-of-reality norm in reporting with context and backstory. The cards’ key feature, for us, is their double character: they have stable topics but dynamic content. Cards are revised, in other words, to reflect new developments in their “story.” Wikipedia operates this way too, even with breaking-news entries.

The problem with models like Vox’s or Wikipedia’s is that updates are hard to track. For good and venal reasons, scholars live by citations. We need stable, discrete referents to cite, even if we give up the notion of a single version of record. If there are lots of versions, these should be ordered, labeled, and easily accessible. We would also want some indication of the revisions made, perhaps directly in the text alongside bullet-point summaries in a kind of change log.

Software, of course, is the obvious model for this kind of methodical tracking. Applications tend to get iteratively updated, with corresponding version numbers and release notes. Software versioning is now an everyday linguistic trope, invoked by tech-industry blowhards and even fellow scholars (“Media Studies 2.0”). It’s easy to imagine the 2.2 version of an article getting “released”—with earlier versions accessible in a reverse-chronological archive.
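To make the mechanics concrete, here is a minimal sketch, in Python, of what edition metadata might look like: an article as a stable topic carrying an ordered list of citable editions, each with a version number and release notes. Everything here—the Edition and Article classes, the release method, the citation format—is hypothetical, my own illustration rather than any existing system’s API.

```python
# A minimal sketch of article "editioning"; all names are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Edition:
    """One citable version of an article, with its release notes."""
    version: str        # e.g. "2.2"
    released: date
    changes: list[str]  # bullet-point summary for the change log


@dataclass
class Article:
    """An article as a stable topic carrying an ordered list of editions."""
    title: str
    author: str
    editions: list[Edition] = field(default_factory=list)

    def release(self, version: str, changes: list[str]) -> Edition:
        """Release a new edition, software-style, with its notes."""
        edition = Edition(version, date.today(), changes)
        self.editions.append(edition)
        return edition

    def archive(self) -> str:
        """Render the reverse-chronological archive of editions."""
        lines = []
        for e in sorted(self.editions, key=lambda ed: ed.released, reverse=True):
            lines.append(f"v{e.version} ({e.released.isoformat()})")
            lines.extend(f"  - {change}" for change in e.changes)
        return "\n".join(lines)

    def cite(self, version: str) -> str:
        """A stable, discrete referent: cite one edition, not the topic."""
        e = next(ed for ed in self.editions if ed.version == version)
        return f'{self.author}, "{self.title}" (ed. {e.version}, {e.released.year})'


# Usage: release two editions, then cite the current one.
paper = Article("Iterative Editions", "J. Scholar")
paper.release("1.0", ["Initial publication"])
paper.release("1.1", ["New interview data", "Updated secondary literature"])
print(paper.archive())
print(paper.cite("1.1"))
```

The point of the sketch is the citation method: a reference to “ed. 1.1” points at a frozen text, while the archive preserves the full version history with its release notes.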

The software analogy, however, brings with it lots of latent cultural baggage. (The fact that most engineers would be puzzled by this claim is exactly the point.) The thought of fresh young scholars posting clever, Markdown-formatted notes for their latest “point release” brings on an involuntary shudder. The self-congratulatory “entrepreneurial” culture of the venture-funded startup doesn’t, or shouldn’t, look like academia—especially for media scholars, whose task it is to scrutinize Silicon Valley. If we agree that media scholarship needs more built-in revise-and-revise dynamism, we will surely need new publication and citation conventions, and even GitHub-like “revision control” to manage it all. But we should be wary of the metaphors we work by.
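Wariness aside, the plumbing itself could be modest. Here is a hypothetical sketch of what GitHub-style revision control might look like in practice, with one annotated git tag per edition; the edition-* naming scheme is my own invention, not an established convention.

```python
# Hypothetical sketch: manage article editions as annotated git tags,
# so every version stays ordered, labeled, and retrievable.
import subprocess


def tag_edition(version: str, notes: str) -> None:
    """Create an annotated tag (e.g. edition-2.2) carrying release notes."""
    subprocess.run(
        ["git", "tag", "-a", f"edition-{version}", "-m", notes],
        check=True,
    )


def list_editions() -> str:
    """List edition tags newest-first, each with its note's first line."""
    result = subprocess.run(
        ["git", "tag", "--list", "edition-*", "--sort=-creatordate",
         "--format=%(refname:short): %(contents:subject)"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout
```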

My own preference is for the language of “editions”—a word with literary and scholarly resonances. This is the way that publishers already label successive textbook updates, of course. The point would be to extend the descriptor to all publications with iterative updates.

The book/article distinction is increasingly arbitrary anyway, a Procrustean inheritance of publishing convention. In its place we’re already seeing a more flexible word-count sliding scale, with room for the novella-length scholarly “single”, for instance. There’s no reason that the benefits of editioning should kick in at 50,000 words.

I’m not proposing that we abandon the one-off, publish-then-perish model altogether. Discrete, event-based studies probably deserve to be frozen in time. Even “editioned” publications would be retired at some point; no one is signing on for indefinite updates.

There are all kinds of practical challenges for an “editioning” scheme like this, too, from the mechanics of citation practice through to publishing workflows retrofitted for iterative updates. Take peer review: is it practical to submit every minor revision for blinded critique? Would a reviewer remain on retainer for the life of an editioned publication? These and a host of other thorny issues would need to be ironed out.

What’s plain is that the current system doesn’t work, and that it’s failing media scholars in particular. If we want lively scholarly debate—if we want to join the public conversation—we need to pick up the pace. We can’t simply publish more work (nor should we). Instead, let’s publish our work more often.
