Technology writers and media philosophers alike have spent time considering a rather candid remark by Google’s former CEO Eric Schmidt, made in an August 2010 interview. Discussing the future of the search giant, Schmidt said the following to the Wall Street Journal:
We’re trying to figure out what the future of search is […] I mean that in a positive way. We’re still happy to be in search, believe me. But one idea is that more and more searches are done on your behalf without you needing to type. […] I actually think most people don’t want Google to answer their questions […] they want Google to tell them what they should be doing next.
Via critical theorists like Andrew Feenberg, the philosophy of technology tells us that there are always implicit forms of world-representation and subjectivity that come bundled with our machines and systems. Schmidt certainly makes this clear in his vision for Google: there will be a particular model of subjectivity latent in its designs and in the developing technologies of industrial social computing more generally. To be more specific, there will be an embedded account of how information search intersects with algorithmic technique to produce some kind of representation of human need. It’s worth understanding the terms under which such techniques have been justified.
In terms of the materiality of computers, an algorithm has a fairly precise mathematical definition: it is a finite, effective procedure, formalized in the theory of recursive (computable) functions. Algorithms are sets of rules for symbols which, taken together, produce deductive systems that can account for their own iterability. In other words, software structures logically operate on themselves, building up vast complex movements–rational infinities, one might say–from just a few simple rules. In terms of their historical development through mathematics and formal logic, algorithms were designed so that thinking could literally become mechanical. As many writers on this blog have already noted, in this day and age the development of such processes deserves greater critical scrutiny. We need to be asking: under what assumptions and commitments can algorithms represent something as seemingly ineffable as human need?
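The idea of vast complexity arising from a few simple symbolic rules can be made concrete. As an illustrative sketch (not drawn from any particular system discussed here), addition and multiplication can be defined by recursion from nothing but zero and a successor operation–the kind of construction the theory of recursive functions formalizes:

```python
def succ(n):
    # The single primitive operation: the successor of a natural number.
    return n + 1

def add(m, n):
    # Addition by recursion on n: add(m, 0) = m; add(m, n+1) = succ(add(m, n)).
    return m if n == 0 else succ(add(m, n - 1))

def mult(m, n):
    # Multiplication by recursion on n, reusing add: a rule built from rules.
    return 0 if n == 0 else add(mult(m, n - 1), m)
```

From one primitive and two recursion schemes, the whole of elementary arithmetic becomes mechanical–exactly the sense in which such systems "account for their own iterability."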
Every theory of need is also a theory of desire. So another way to pose the question is to ask, borrowing a concept from the philosopher Gilles Deleuze: according to what assumptions do algorithms produce a conjunctural AND…AND…AND… –a model of succession that, through its adoption, modulates desire on the border between subjects and objects in representation? From their initial application to the retrieval of documents, how have algorithms progressively developed to support Schmidt’s more deeply open question of ‘…what one should do next?’ Three main strategies concerning the representation of need developed over time in information systems theory bear mention:
A focus on instrumental need for a specific document, by following a simple ‘best-match’ engineering principle. This was a long-standing principle within computer science, where interfaces were designed to satisfy precisely formulated queries, with an overall theoretical framework of formal-semantic correspondence between subject and object. You typed the name of the document you sought exactly right, or you didn’t find it.
A focus on a cognitive need for obtaining knowledge. Thanks to more sophisticated theoretical frameworks developed in the library and information sciences, designs became more capable of modeling the socially contextualized ‘problem situation’ of an inquirer, one resolved by the provision of “transformative” information that satisfied a need. Systems became more attuned to the epistemic correspondence between subject and object: one could move through the conceptual stages of need more dialectically, from vagueness to precision, guided by expert suggestion.
A focus on the intersubjective, cognitive-existential need for perpetually obtaining and communicating knowledge. In the contemporary moment of industrial social computing, the ‘transmission’ and ‘reception’ of knowledge are equally important. Modern web interfaces now allow for the posing and satisfying of ongoing, socially contextualized ‘problem situations’ with so-called transformative information, by connecting users with similar needs to one another. The focus is on collective sense-making, and for better or worse, its theoretical framework is one of utilitarian-economic correspondence between subject and object, otherwise known as rational choice theory.
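The distance between the first and third strategies can be sketched in a few lines of code. The following toy example (all names and data are hypothetical, not drawn from any actual system) contrasts ‘best-match’ exact retrieval with a minimal user-based collaborative filter, where the needs of similar others shape what one is shown:

```python
# Strategy 1: 'best-match' retrieval -- an exact query either hits or misses.
documents = {"annual_report_2009.pdf": "report text", "meeting_minutes.txt": "minutes text"}

def best_match(query):
    # You typed the name exactly right, or you didn't find it.
    return documents.get(query)

# Strategy 3: intersubjective need -- users with overlapping histories
# shape one another's results (a toy user-based collaborative filter).
histories = {
    "alice": {"doc_a", "doc_b", "doc_c"},
    "bob":   {"doc_b", "doc_c", "doc_d"},
}

def recommend(user):
    # Find the other user with the greatest Jaccard overlap in history,
    # then suggest what they have seen and this user has not.
    mine = histories[user]
    score, best = max(
        (len(mine & h) / len(mine | h), h)
        for u, h in histories.items() if u != user
    )
    return best - mine
```

In the first function the subject must already know the object; in the second, the ‘needs’ of other subjects are folded into the result–a crude image of the flock-like dynamics described below.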
In terms of algorithmic iterability, past informational needs now encounter, shape and resolve the needs of the present, in ways that come to resemble flocks and markets more than any planned relationship between librarian and patron. Following Schmidt’s account, informational need is being reframed as the perpetual daily energy of intentionality itself, rendered as a computable unit of displacement: clicks, comments, and ‘likes’ metabolizing, displacing and refining overall epistemic context for both companies and their users as they try to make sense of new ideas and events expressed online.
It’s at this point that critical concerns about Schmidt’s predictions come into relief. For once informational need and the existential realization of need per se–achieving goals in the world, including public communication–merge together online, experiential significance gets subtly redefined through algorithmic media as an information channel. The particular expression of significance gets reduced to a behavioural side-effect of private decision-making and goals, through the economic logic of rational choice theory. Jodi Dean calls this communicative capitalism; elsewhere I have described the situation, through Habermas, as the rise of a semantic ‘steering medium’–a formalized system like money, but nominally premised upon mutual consensus achieved in discourse.
Ultimately, when Schmidt talks about searches done on your behalf, or Google telling you ‘what to do next’, he is arguing for the user to delegate their rationality to a Google-driven device. Via contemporary forms of algorithmic iterability, the effect is for social information systems to apply a subtle but pervasive form of what Habermas called functionalist reason to all electronic discourse. At the level of critique, the risk is that experiential significance and expression fall prey to an intensified level of bureaucratization. Time will tell if social computing can step beyond this framework, to become critically responsive in its designs to still-deeper accounts of existential need and desire.
-Contributed by Neal Thomas, University of North Carolina at Chapel Hill-