The last tweet you got may have been from a robot. A socialbot, to be exact.
Like some neo-Asimovian science fiction story, networks of socialbots are beginning to spread across social media, liking, tweeting, and friending unsuspecting humans, biding their time, gathering information, and subtly shaping our online lives. These ‘bots are fascinating, and a little frightening, because they are meant to appear human. In other words, unlike a spambot or a Web-crawling bot, socialbots are social media profiles that look and behave just like human-operated profiles.
Examples of socialbots include James M. Titus, who won the 2011 Socialbot Challenge by gathering the most followers on Twitter, and the Air Force’s “Persona Management” software, which is used to network in war zones and gain intelligence on who’s who in combat areas by creating believable social media profiles that can automatically interact with targeted individuals.
I’ve taken an interest in these ‘bots because I think they can tell us something about the architectures of social media. Because my research concerns the genealogies of social media and the larger histories of computationalism, I’ve taken socialbots as a chance to link a very new technological practice to a now decades-old one: programming computers to fool humans. I’m talking, of course, about the famous Turing Test of intelligence. So let me digress a bit and go back to the 1930s.
Alan Turing was initially interested in the limits of computable numbers, a question bound up with the Entscheidungsproblem (the “decision problem”) in mathematics. [NB: I’m not qualified to talk about that at all!] What’s intriguing is that, as he worked through this mathematical quandary, Turing constructed a mental machine, a thought-experiment he called the “universal machine.” We now recognize this as a digital computer. This machine is capable of imitating any other machine, so long as that machine’s steps can be broken down into discrete states – that is, into 1s and 0s, or bits. This is the basic conception of computer simulation, something we now engage in routinely with virtual machines and simulated systems.
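To make the “imitating any other machine” idea concrete, here is a toy sketch in Python, entirely my own illustration: one general-purpose function that can run *any* machine whose behavior is given as a table of discrete states. The function and machine names are invented for this example.

```python
# A toy illustration of Turing's insight: a single general-purpose
# program can imitate any machine described by a table of discrete
# states. All names here are my own invention.

def run_machine(transitions, tape, state="start", pos=0, max_steps=1000):
    """Simulate a single-tape machine given by a transition table.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). The machine halts when it
    reaches a (state, symbol) pair with no entry in the table.
    """
    tape = list(tape)
    for _ in range(max_steps):
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        if (state, symbol) not in transitions:
            break  # no rule for this situation: halt
        state, write, move = transitions[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        pos += move
    return "".join(tape)

# One particular machine: flip every bit, moving left to right.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_machine(flipper, "10110"))  # prints "01001"
```

The point is that `run_machine` never changes: to imitate a different machine, you feed it a different table. That table-driven generality is the “universal” part.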
Here’s what I find compelling about Turing: his thought-experiment machine, the universal machine, was meant to be a way to bracket off a metaphysical conception of intelligence in the process of working with computable numbers. But in doing that, Turing started to think about human thought as it could be imitated by the universal machine, just as any other machine could be imitated.
How could this work? Consider the phrase “states of mind” as an entry-point into this idea. If we can explicate our “states of mind,” the mind’s functions can be textually encoded and replicated by the universal machine, just as the letters in this article can be converted to 1s and 0s. If we think of the mind as a discrete state machine that processes such “states of mind,” it becomes transparent: its inner workings can be replicated by Turing’s universal machine.
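That claim about letters becoming 1s and 0s is quite literal, and a two-line sketch shows it (the function name is my own):

```python
# Every character has a numeric code, and every number a binary form,
# so any text reduces to a string of bits. Function name is my own.

def to_bits(text):
    """Encode a string as the binary digits of its UTF-8 bytes."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("mind"))  # prints "01101101 01101001 01101110 01100100"
```

Anything expressible in text, including an explicated “state of mind,” is therefore already in the universal machine’s native vocabulary.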
In a sense, then, by the time Turing wrote the famous 1950 paper on the Turing Test, he was conceiving of the human mind as a digital machine, a metaphor that is still very prevalent today. And if the mind can be replicated by a computer, then we have little choice but to argue that the machine is intelligent. (To do otherwise would be to posit some metaphysical property of the mind – precisely what Turing sought to avoid.)
Turing, of course, predicted that by today there would be machines capable of appearing human in a text-based conversation. Programmers have been attempting this very difficult task ever since, and failing left and right. Apparently it’s much harder than even Turing thought.
Until now, it seems. We might say that the programmers behind James M. Titus succeeded where others have failed: James M. Titus has seemingly conquered the Turing Test. By gaining so many followers, should we say Titus is intelligent, at least by the standards of the Turing Test?
But I think it’s not so simple. This is where I think socialbots reveal something about social media architectures. It’s not that the programmers of James M. Titus are so skilled; rather, it seems to me, the answer is that we are asked to act quite machine-like when we use social media.
I argue that what’s happening in Facebook and Twitter is the social production of patterns of discrete states of mind. That is, when we tweet, fill in a profile, Like something, or comment, we’re contributing to aggregated datasets. We each do our own Facebooking and tweeting, of course, and we experience it as individual. It can be quite complex. But it’s always delimited: you have 140 characters, your profile picture must be less than 250K, you put your status update here, you select your gender and interests from these drop-down boxes.
These limitations, coupled with the aggregated actions of millions of social media users, create a highly useful discrete-state machine: a machine I call the “social media confessional machine.” The patterns that emerge from this machine are what socialbots imitate. As one white paper on socialbots puts it, “digitization makes botification possible.”
One way to think about this is by considering the practices of marketing. For marketers, our online activities are segmented into examples of specific types: you’re a Sports Enthusiast. You’re a Rural Single Mother. I’m a Technology Enthusiast. This sort of typification is essential to making marketing appear individual, but of course it is really a massifying practice. And this segmentation arises from patterns of online activity.
In a similar manner, it is these types – rather than specific individuals – that socialbots imitate. For example, James M. Titus, the winner of the 2011 Socialbot Challenge, played the role of innocent and charming Twitter newbie. He kept a blog about “kitteh fashun” and tweeted innocuous questions like “Three places you’d like to go?” and “What experience has changed your outlook on life?” In this way, he was not unlike Joseph Weizenbaum’s famous ELIZA program: he provided a non-threatening sounding board. This is perfect for a form of media that reduces emotional exchanges to quantities like “likes” and “retweets.”
Less innocent but no less based in stereotype, consider the practice of using a picture of a “hot girl” for the profile image of a socialbot, which is a recommended move in the socialbot literature. The pattern here is simple, but effective: straight men are more likely to friend or follow a hot girl. As one socialbot white paper puts it, socialbot engineers don’t even need to figure out who’s hot: they can just use pre-rated pictures from Hotornot.com, a site famous for allowing users to rate profile pictures on a scale of 1 to 10. Again, this emergent pattern can feed the socialbot machine.
While these patterns are somewhat crude, I argue that socialbot engineering will get more complex. We’re already seeing techniques like Social Network Analysis used to produce the patterns socialbots require to fool us online. One such pattern is the triadic closure principle: if you and I are friends, you’re more likely to become friends with one of my friends, thus closing the triangle. This technique was used to great effect by a research team from the University of British Columbia, who were able to gather 250GB of personal data from Facebook by using automated socialbots.
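To see how mechanical this exploitation can be, here is a sketch under my own toy model: once a bot’s friend request is accepted by one person, that person’s friends become promising next targets, because shared mutual friends make the next request look trustworthy. Everything here – the function, the network, the names – is illustrative, not the UBC team’s actual code.

```python
# A toy model of exploiting triadic closure: rank a bot's next friend
# requests by mutual-friend count. All names are illustrative.

def next_targets(friends, bot):
    """Rank non-friends of `bot` by how many mutual friends they share.

    `friends` maps each user to the set of that user's friends.
    """
    scores = {}
    for accepted in friends.get(bot, set()):
        for candidate in friends.get(accepted, set()):
            if candidate != bot and candidate not in friends.get(bot, set()):
                scores[candidate] = scores.get(candidate, 0) + 1
    # Highest mutual-friend count first: these triads are easiest to close.
    return sorted(scores, key=scores.get, reverse=True)

network = {
    "bot":   {"alice"},
    "alice": {"bot", "bea", "carol"},
    "bea":   {"alice", "carol"},
    "carol": {"alice", "bea"},
}

print(next_targets(network, "bot"))  # bea and carol, both reachable via alice
```

The unsettling part is how little the bot needs to know: no psychology, no content, just the shape of the graph.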
We’re only at the beginning of big data analysis, and thus, I would argue, we’re only at the beginning of socialbot research.
But for now, perhaps there’s a lesson to be gleaned from this new field: as complex as our interactions in social media are, there’s something fundamentally disturbing about the ease with which ‘bots are able to gain our friendship. And now, of course, we have to wonder what exactly they’ll do with it. I will leave that for another post.