Are Surveillance Capitalists Behaviorists? No. Does It Matter? Maybe.

The release of the documentary The Social Dilemma has understandably irritated scholars who study the social dimensions of science and technology. Lisa Messeri’s Twitter thread has an excellent summary of all that’s wrong with the documentary (all of which I agree with).

But the documentary’s starting point — that the technical mechanisms these companies have created to cognitively direct user attention (a.k.a. algorithms that make you doom-scroll) have deleterious consequences and are our biggest problem today — is something that at least some scholars agree with. I want to highlight one problem with this account (that keeps recurring in technology criticism broadly): the presumption that surveillance capitalists and Silicon Valley engineers are behaviorists — intellectual descendants of B. F. Skinner — whose guiding principle is to build elaborate technologies of stimulus designed to provoke particular responses in their users.

In this post I want to pose and answer two questions: is that correct? And does it matter? Short answer: surveillance capitalists are not behaviorists, but behavioralists. Behavioralists are okay with guiding individual-level behavior as long as it leads to higher-order system behavior that they think is useful; in other words, they have a different theory of freedom than behaviorists do. Painting Silicon Valley engineers as behaviorists is no doubt politically useful (on which more below), but will it be persuasive when push comes to shove in the battle to regulate the digital economy? I try to untangle some of these contradictions below.

The argument that Silicon Valley engineers are behaviorists is made most explicitly in Chapter 12 of Shoshana Zuboff’s The Age of Surveillance Capitalism. (It also occurs, in some form, in other works of tech criticism that I love: Audrey Watters’ writing on the history of ed-tech and in Yarden Katz’ critique of AI).

In Zuboff’s telling, the flagship companies of “surveillance capitalism” — your Googles and Facebooks — accumulate fine-grained data about their users’ (i.e. our) actions, extract knowledge about their users from this data, and then bring this knowledge to bear by actively trying to shape their users’ attention and behavior in ways that reap more profits. This style of value extraction, Zuboff argues, can only end in the destruction of basic human freedom. As more data is collected and more predictive knowledge is extracted from it, self and society will become increasingly automated, with all of us users doing the bidding of our new masters, the surveillance capitalists. The book describes in great detail many of the technical-legal decisions that had to be taken in order to establish surveillance capitalism.

Zuboff connects this impulse to extract value from data in order to shape behavior to a strain of social thought that she calls “instrumentarianism,” a close cousin of totalitarianism. “Totalitarianism operated through the means of violence,” she argues, “but instrumentarian power operates through the means of behavioral modification.” According to Zuboff, instrumentarianism is an approach to human action that does not put much premium on the insides of human beings; it cares only about what they do, especially what can be quantified, so that this behavior can be modified over and over. She argues that the origins of instrumentarianism lie in behaviorist research, with its focus on stimulus-and-response, operant conditioning, and the like. But if the means of instrumentarianism are different from those of totalitarianism, the end is roughly similar: the inhibition of human freedom and autonomy.

In her emphasis on human freedom, Zuboff has something in common with Tristan Harris, the ex-Googler and prominent talking head in The Social Dilemma, who had a crisis of conscience after a decade of working for Google, and now argues that “the tech industry[‘s …] design techniques to keep people hooked to the screen for as long and as frequently as possible” are “hijacking […] the human mind. [S]ystems […] are better and better at steering what people are paying attention to, and better and better at steering what people do with their time than ever before.” If Zuboff sounds like Max Weber lamenting the iron cage, Harris has a whiff of an old-fashioned moral crusader who is comfortable in a TED talk; one gets the feeling that in a different time he would be equally at home fulminating against alcohol or TV because they kept the human mind captive.


Who were the behaviorists? Coming to the fore in the early 20th century, behaviorists were a group of psychologists who argued that human action was best understood as responses to external stimuli (complicated actions could arise as a result of chained stimuli: a stimulus that leads to a response that leads to a different stimulus, and so on). In making these claims, behaviorists were guided by two impulses. Psychology in the late 19th century had chosen introspection as a way to understand the human mind. Behaviorists found this method too unscientific, too reliant on the researcher’s subjectivity. In an era of quantification, they sought to build a psychology that was a science, open to quantitative measurement and refutation, i.e. built around what Peter Galison and Lorraine Daston have called “mechanical objectivity.” Behaviorism’s rise was also predicated on the social changes of its time: the rise of advertising and bureaucratic organizations raised the possibility of applying psychological insights to stimulate consumption, work, and efficiency.

If behaviorism was the dominant school of thought in psychology in the early 20th century, a backlash began to set in by the end of the 1930s. It gathered steam in the 1950s and reached its climax in Noam Chomsky’s famous takedown of B. F. Skinner’s Verbal Behavior. By the 1960s, behaviorism had been replaced by cognitivism (or if you prefer, cognitive science). If behaviorists famously restricted themselves to thinking about stimuli and responses, and dismissed the mind as irrelevant, cognitivists conceived of the human mind as an information processing system. They argued that, when conceived this way, the mind could be studied scientifically. Indeed, cognitivists believed that the computer program — the artifact and the concept — offered a rigorous way of studying what the mind does: you could build computer programs to simulate the mind, and indeed, you could also see the mind as a computer program itself, as an entity that engages in “planning” and execution.

[Image: Marine anti-aircraft gun, Tulagi, circa 1942. CC-BY-2.0.]

Where did this revolt against behaviorism come from? Historians like David Mindell, Paul Edwards, Katherine Hayles, and Jamie Cohen-Cole locate it at the intersection of two trends. First, electrical engineers working on difficult technical problems of servomechanisms, radar, amplifiers, and anti-aircraft guns had been forced to conceive of these operator-controlled technical mechanisms (see the figure above) as “systems” that responded to “feedback” from their environments by adapting themselves. Once these human-machine assemblages started being understood as systems that adapted themselves by exchanging “information” and messages with their environments, it was only a matter of time before human beings (and human minds), electricity grids, bureaucracies, organizations, corporations, and even societies were all re-interpreted as “systems.” The new techniques of linear programming, operations research, and computer programming became the tools — conceptual and practical — through which such systems could be managed and manipulated.

Second, and equally important, the behaviorist take on human nature was simply incompatible with the politics of the cold war in the United States. The behaviorist argument that people were shaped by their environments might be applicable to citizens living in totalitarian states but simply would not do for citizens of a free society like the US. Cognitive scientists argued that the human mind was always, in potentiality, an “open mind” (unless it had been corrupted by authoritarian states) that strives to process information in a non-ideological way. This was both a technical and a normative move: it suggested a way of studying the mind as well as a claim about the way a mind should be. Amusingly, the fight between behaviorism and cognitivism got quite personal: personality psychologists sought to show, through their psychological survey instruments, that behaviorist psychologists themselves exhibited authoritarian tendencies (rather than an open mind).


It was in this ferment of the cognitive revolution that the world of Artificial Intelligence (AI), the precursor to today’s surveillance capitalism, was born. As with other flag-bearers of the cognitive revolution — linguists, psychologists, anthropologists, neuroscientists, philosophers, communication engineers — AI researchers too saw themselves as part of a revolt against behaviorism and were committed to a model of the mind as an information processor.

But wait, you might say, today’s AI research is very different from the AI research of its first few decades. That is indeed correct. The early decades of AI research were premised on the notion that when human beings did putatively intelligent things, they were enacting a plan that they had worked out in their heads. The folk notion of “planning” was wedded to a highly specific technical machinery of state-space search and utility maximization. Proving a theorem, playing chess, and diagnosing medical patients, according to AI researchers, all involved some sort of planning on the part of human beings (and therefore could be modeled using computer programs).
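
To make that machinery concrete, here is a minimal, hypothetical sketch of planning as state-space search: a breadth-first search over a toy puzzle (reach a target number from a starting number using only “+1” and “*2” moves) invented purely for illustration. Early AI programs applied the same basic pattern, at far greater sophistication, to theorem proving, chess, and diagnosis.

```python
# A toy "planner": breadth-first search over a state space.
# The puzzle (reach `goal` from `start` using +1 and *2 moves) is a stand-in
# invented for this sketch, not an example from the AI literature.
from collections import deque

def plan(start: int, goal: int) -> list:
    """Return a shortest sequence of moves that transforms start into goal."""
    frontier = deque([(start, [])])   # (state, moves taken so far)
    seen = {start}
    while frontier:
        state, moves = frontier.popleft()
        if state == goal:
            return moves              # the "plan" the agent would then execute
        for name, next_state in (("+1", state + 1), ("*2", state * 2)):
            if next_state <= goal and next_state not in seen:
                seen.add(next_state)
                frontier.append((next_state, moves + [name]))
    return []                         # no plan found

print(plan(1, 10))  # ['+1', '*2', '+1', '*2']  i.e. 1 -> 2 -> 4 -> 5 -> 10
```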

The kind of AI that is done today is usually referred to as “machine learning.” Rather than understanding intelligence as an expression of linguistically-rendered rules or planning, machine learning researchers build “classifiers”: functions that are computed using “training data.” Want to know if an image contains a bridge? Then come up with a bunch of images with bridges in them and, typically, a bunch without (i.e. training data), and use them to train a statistical classifier. Once trained, the classifier can (with widely varying levels of confidence) tell you which images contain bridges. The key here is to not start out with any particular model of what a bridge is but to leave it to the training data and the learning algorithm (both human choices, it should be noted).
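
As a minimal sketch of that recipe (assuming scikit-learn, and using random arrays as stand-ins for labeled photographs; every name and number here is invented for illustration): notice that no definition of a bridge appears anywhere in the code; it lives entirely in the labeled examples.

```python
# A toy version of the "training data + classifier" recipe. The "images"
# below are random arrays standing in for real photographs, and the labels
# stand in for human annotations of whether a bridge is present.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image is a 32x32 grayscale picture flattened into 1024 numbers.
n_images = 1000
images = rng.normal(size=(n_images, 32 * 32))
has_bridge = rng.integers(0, 2, size=n_images)   # 1 = bridge, 0 = no bridge

X_train, X_test, y_train, y_test = train_test_split(
    images, has_bridge, test_size=0.2, random_state=0
)

# "Training" means fitting a function from pixels to labels; what counts as
# a bridge is implicit in the labeled examples, not written down anywhere.
classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, y_train)

# Once trained, the classifier gives a confidence score for unseen images.
bridge_probability = classifier.predict_proba(X_test)[:, 1]
print("Predicted bridge probability for the first test image:",
      round(float(bridge_probability[0]), 3))
```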

Or take the topic with which this article began: the ubiquitous recommendation algorithm that puts stuff on your screen. How is that accomplished? Well, to figure out what to put on a user’s timeline, Twitter engineers try to collect data on which tweets a user reads and how much time she spends on them, then build a classifier that will produce a score for how likely that user is to read a given tweet. They then deploy this classifier — which is itself massive and takes in hundreds of inputs to produce its output — in Twitter’s enormous software infrastructure, such that all possible tweets this user might receive (say, from all the accounts that she follows) are run through it, and only those with a high score are recommended to the user. Of course, the user’s response to the recommended tweet just becomes more training data for the algorithm, and on and on.
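
As a rough sketch of that loop (with invented names like score_model, rank_timeline, and engagement_log, and none of the scale or sophistication of a real system): score every candidate tweet, surface the highest scorers, and log the user’s response as future training data.

```python
# A toy version of the ranking-and-feedback loop described above. The model
# here is a fixed weighted sum standing in for a trained classifier; a real
# system would use hundreds of input signals and constant retraining.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Tweet:
    tweet_id: int
    features: List[float]   # e.g. author affinity, recency, topic match

def score_model(features: List[float]) -> float:
    # Stand-in for the trained classifier: a fixed weighted sum of features.
    weights = [0.5, 0.3, 0.2]
    return sum(w * f for w, f in zip(weights, features))

def rank_timeline(candidates: List[Tweet], top_k: int = 3) -> List[Tweet]:
    # Run every candidate through the model and keep only the highest scorers.
    return sorted(candidates, key=lambda t: score_model(t.features), reverse=True)[:top_k]

# All the tweets this user might possibly see (here, fabricated examples).
candidates = [Tweet(i, [0.1 * i, 0.05 * i, 0.02 * i]) for i in range(10)]
timeline = rank_timeline(candidates)
print("Shown tweets:", [t.tweet_id for t in timeline])

# The user's response to what was shown becomes more training data,
# closing the feedback loop.
engagement_log: List[Tuple[int, float]] = [(timeline[0].tweet_id, 12.5)]  # (tweet_id, seconds read)
```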

Well, that sounds pretty behaviorist though, doesn’t it? Isn’t the algorithm offering up a stimulus, gauging your response, and then switching the stimulus again, all to get you to act in a certain way? On the surface, it certainly seems that way. But there are a few complicating factors.

First, the designers of recommendation algorithms seem to be motivated less by behaviorism proper than by behavioral economics — an approach to institution design that came out of social psychology and economics. Thus Nir Eyal, the author of Hooked: How to Build Habit-Forming Products, begins his book by describing where he found the insights that he then tried to implement: “I looked for insights from academia: drawing upon consumer psychology, human-computer interaction, and behavioral economics research.” What did Eyal find there? He says: “The field of behavioral economics, as studied by luminaries such as Nobel Prize winner Daniel Kahneman, exposed exceptions to the rational model of human behavior.”

This notion, that human beings are not the most rational of decision-makers and need a robust and well-designed “choice architecture” to help them do the things they want to do, is not so much a behaviorist tenet as it is a part of the cognitive revolution. The historian Hunter Heyck argues that starting in the middle of the 20th century, American social scientists (including psychologists) reformatted the object of their inquiry: rather than the human being who chose from different options, they started to study the process of choosing at the level of “systems,” not just at the level of human beings or individuals but also of animals, machines, organizations, and even societies — all systems. Social scientists could thus elevate decision-making into the highest democratic good while simultaneously showing that human beings were limited decision-makers: satisficing agents, according to Herbert Simon, or systematically irrational, according to Daniel Kahneman, both Nobel Prize winners.

Now, to be fair, Eyal does draw on B. F. Skinner when he invokes the concept of “variable rewards,” which he argues well-designed habit-forming apps must give their users by rewarding their sense of “tribe, hunt, and self.” But one can argue that “reward” here is mostly synonymous with “feedback” (and feedback at multiple system levels), and that the analysis is carried out more in the spirit of designing a choice architecture than of building stimuli. Can these habit-forming techniques lead to bad results? Absolutely, says Eyal. But he quotes Thaler and Sunstein to argue that these techniques should be “used to help nudge people to make better choices (as judged by themselves).” Designers, according to Eyal, should “build products to help people do the things they already want to do but, for lack of a solution, don’t do” (my emphasis).

What do actual Silicon Valley engineers think as they go about building their algorithms? The evidence is mixed, but it suggests that many, if not most, engineers see themselves not as behaviorists but as choice architects. In his study of engineers designing algorithmic music recommendation systems, the anthropologist Nick Seaver finds that engineers do think their users need to be “hooked,” but hooking is merely the first step on a journey that has a whole lot of paths and destinations. Music recommendation engineers think about their relationship to music listeners in a variety of ways: as guides, educators, and service providers. In my own ethnographic research with algorithm designers working in the world of Massive Open Online Courses (MOOCs), I saw the same approach: engineers saw themselves as empowering learners by building for them choice architectures (of resources, problem sets, instructional material) through which they could learn better. And Canay Ozden-Schilling, in her ethnographic work on electricity grid designers, finds them using similar tropes as they go about their project of turning passive electricity consumers into active users.

On the other hand, Natasha Schull’s ethnographic research on Las Vegas casinos tells a very different story. Casino designers and slot machine engineers do not seem to have any exalted notions of their relationship to their users beyond “hooking” them. And the “hook,” as one addiction counselor tells Schull ominously, is simply “the drive-in to the zone” — the zone being that area of consciousness in which nothing exists for the habitual gambler but the machine and the game. No choice architecture here: just the hook and then the zone. (Some of the “dark patterns” stuff would fall into this category as well.)

To recapitulate: most, if not all, surveillance capitalists and Silicon Valley engineers do not see what they do as necessarily in conflict with individual autonomy, because at a broader conceptual level everything in the cognitivist apparatus is modeled as decision-making at different levels of abstraction (machine, individual, organization, society). In such a scheme, manipulating the informational parameters of individual decision-making to make higher-level “decisions” more optimal is no abridgment of individual autonomy.


Does any of this matter? It is here that we reach a double-bind. On the one hand, engineers themselves do not see what they do as inhibiting individual freedom and autonomy. On the other, it is by drawing on tropes of individual freedom and autonomy — to show how they are restricted by designed algorithmic systems — that scholars and activists have often succeeded in drawing public attention to questions of algorithmic governance. (For now, I will conveniently ignore the other value that has helped create public awareness of algorithmic systems: that they need to be fair and non-discriminatory.)

Take the two biggest controversies around Facebook: its emotional contagion study and Cambridge Analytica. The ways in which these played out in public discourse mirror some of the early fights between the cognitivists and the behaviorists. Cognitivists argued that behaviorism was illiberal (and so were behaviorists) because it explicitly violated the autonomy of individuals. Similarly, it is the illiberality of Facebook — its workings described using behaviorist tropes — that Surveillance Capitalism and The Social Dilemma highlight.

Are we forever doomed to arguing about autonomy in a world of algorithmic systems through the lens of behaviorism? More important, will the argument that social media algorithms are behaviorist/illiberal actually help us win the public debate around how social media should be regulated?

The first question is hard to answer. The second is the more pressing one. I hope that invoking the specter of behaviorism helps us win public support in the battle to regulate social media, but I worry on two counts.

First, arguing that Facebook is addictive because of its recommendation algorithms, or that it gives political campaigns the ability to uncannily target persuadable voters, ends up hyping — even if unintentionally — the personalization algorithms of Facebook and Google and YouTube. Many scholars have argued that the problem with Zuboff and The Social Dilemma is that they end up amplifying the self-serving narratives of these companies about their latest magic trick, be it the persuasive power of their political advertising algorithms or the awesomeness of their artificial intelligences. This has been most evident in the case of Cambridge Analytica: rather than being treated as evidence of Facebook’s sloppiness in dealing with third-party app developers, the controversy turned into a question of whether CA had “manipulated” citizens into voting for Donald Trump — which was exactly CA’s pitch to various campaigns.

Second, surveillance capitalists have the intellectual resources at their disposal to counter the charge that their apps are sites of Pavlovian manipulation of users. In the years since cognitivism ousted behaviorism, cold war researchers and computer programmers created a new ideology of freedom. The historian Fred Turner has argued that even as cold war computer labs built technologies that were “large, complex, [and] centralized,” the labs themselves were sites of “flourishing […] non-hierarchical interdisciplinary collaboration” (p. 18); these cold war labs helped “perpetuate an extraordinarily flexible, entrepreneurial, and, for its participants, often deeply satisfying style of research” (p. 17). Cultural entrepreneurs like Stewart Brand and Howard Rheingold connected these free-wheeling, non-hierarchical practices of collaboration within cold war computer labs to the emerging counter-cultural movements and their desire to free themselves from the military-industrial-bureaucratic system, creating an ideology that might be called “cyberutopianism.” These entrepreneurs imbued the digital computer, otherwise a symbol of the hated system and the government, with the promise of liberation; the digital computer thus came to be seen as the means through which the counterculture could free itself from the forms of bureaucracy (corporate or governmental) that it despised.

As the anthropologist Chris Kelty argues, cyberutopians believe that their apps make users more free, not less. Theirs is a theory of “positive liberty,” which means that, for them, there is no contradiction in shaping human behavior in order to make humans even more autonomous and free. As Kelty puts it:


If there is something to be concerned about in Silicon Valley’s approach to liberty, it is not that it is overly libertarian, but that it is a kind of positive liberty imposed not through government action, but through the creation and dissemination of technologies [… that have] been designed to liberate (or coerce) the individual into being a freer, and more individual, individual.

As an example of cyberutopianism, look no further than Mark Zuckerberg’s post-election manifesto from 2017, titled “Building Global Community.” The word “community” appears in it more than 100 times, and Zuckerberg argues that Facebook is essentially a tool that people the world over draw on to build communities. Facebook’s goal, he argues, is to create the social infrastructure that will help people fulfill their potential: “the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.”

In the fight to regulate algorithmic systems, will people believe Zuckerberg, or will they believe Zuboff or Harris? I hope it’s the latter, but if the fight over California’s Prop 22 is any indication, I worry it’s the former. And if so, it raises a different question: in the fight over regulating digital platforms, how might we think of countering cyberutopianism? Empirical research (e.g. by Morgan Ames on OLPC, Christo Sims on digital schools, and Lilly Irani on design) points to some ways, but that’s a post for another time.

[Cross-posted on Medium and the TQE newsletter.]