Affordances, technical agency, and the politics of technologies of cultural production

a dialogue between Gina Neff, Tim Jordan, and Joshua McVeigh-Schultz

 

(This is the first of Culture Digitally’s “dialogues.” Spurred first by comments by Gina Neff at the March 2011 workshop, and then by one of her blogposts, I asked if we could use an excerpt of that post as the opening salvo in a dialogue about how we should (re)theorize the politics of technological systems, the value of the concept of ‘affordances,’ and new directions for thinking about ‘technical agency.’ I asked Tim Jordan to be her dialogue partner, in part because of the draft paper he had shared with the group. As you will see, a timely comment from Joshua McVeigh-Schultz spurred us to add him as a sideways challenge to Gina and Tim’s emergent discussion. TLG)

 

Gina Neff

In two cases that I’ve studied, Building Information Modeling in commercial design and construction (Neff, Fiore-Silfvast, and Dossick 2010), and consumer biosensors (forthcoming), advocates for these new tools tout them as “revolutionary,” with the ability to enact sweeping organizational and social changes in the systems where they are implemented. The technologies are designed to change, to quote Charles Lemert (forthcoming), “social things.” In both cases, technological solutions are imagined, designed, and created to solve social problems. At the same time, the introduction of these tools often forces users to contort themselves and their relationships around the tools, modifying and hacking their existing systems to fit the requirements of the new technology.

Perhaps this is expected. Tool designers are not organizational theorists, and they do not necessarily think through the social, organizational, and institutional contexts into which technology is introduced. But time and again, the lessons we know from the fields of science and technology studies and communication theory seem forgotten, ignored, and rejected by the people who work in technology. Tools don’t make change, we argue in our seminar rooms; people (or networks of “people-tools”) make change. Meanwhile, blissfully unaware of our certainty, programmers type away, setting loose into the wild tools they hope will change how we think, act, and organize.

Within the social studies of technology, technological determinism is dead. By that I mean that the kind of logic that motivates technologists carries absolutely no theoretical purchase in contemporary scholarship. It is not simply that we academics and the technologists come from, speak, and work within different cultures. It is almost as if we have no way of translating across this gap.

My problem is that we as academics of technology don’t yet have the theoretical language and tools to talk about these systems. We have rightly corrected technologically deterministic theories to better account for user agency and the social construction of tools. However, I am beginning to think that we may have “overcorrected,” the pendulum having swung too far in the direction of human power, ignoring the serious questions that remain about how tools are designed, how they function socially, and how aware users are of their positions and power.

Kevin Kelly, a founder of Wired and a tech guru, has recently published a compelling book arguing that technology “wants,” “drives,” does – an example of a book that is “wrong” by the current theories of the field, but popular nonetheless. While the “technium,” in his formulation, does include culture, his point is that this monstrous assemblage takes on a life and an evolutionary logic of its own once put into place. Kelly is a prime example of the influential folks in Silicon Valley who would fail any graduate course in Science and Technology Studies for this kind of commitment to technological determinism. In that worldview, technology is not only capable of making social relationships and institutions function more effectively; the purpose behind the design of tools is often expressed as a desire to make society work differently through new tools.

The users of the systems in architecture and healthcare that I’ve been observing also feel that they lack control and power within those systems. The openness of systems to modification by users isn’t always so apparent within the moment or within the system. And these tools and systems can often work seemingly fine without us. What got Kelly’s What Technology Wants onto the bestseller list is that it resonated with a fear that many of us non-experts have: we can’t fully control the machines around us.

I’m not yet willing to cede full-blown agency to technological tools in this case, but I suggest that we should begin talking about “technical agency” – technical in two senses: as the agency made possible by systems of technology, and as a limiting description of that agency or latent agency, a not-quite, but “technically,” agency. Doing so will allow us to confront both the industrial drives pushing technologies to act and the ways in which users can feel trapped within technical systems they can’t quite construct their way out of.

 

Tim Jordan

I think there’s a lot of value in focusing on the experience users have with technologies, and on how hard it can be for us to analyse technology in such contexts without a concept of technological determinism. When looking at hacking, I found it very interesting how both crackers and free and open source software programmers actively seek out being technologically determined. For example, a cracker will spend some time working out what hardware and software they are confronting in a machine they want to break into, so that they can deploy the tools that fit that situation. Each programmer has to know what hardware they are using, what programming language they are using, what bug or extension they are working on, and so on. In both these kinds of cases, the human actor seeks out certain technical agencies in order to determine (and/or produce?) their further actions. These further actions then impact straight back on the technologies they have solicited, altering them for cultural/social reasons, whether their motives are political (pushing for greater freedom of information), financial (stealing credit card databases), or professional (promoting free/open software).

Embarrassingly, I once argued this in a paper and some people came up to me afterwards and said the polite academic equivalent of ‘Ahem. Affordances, duh’. I think affordance theory (Gibson 1977; Norman 1988) does a good job of encapsulating this kind of situation, which is probably why affordance has become a popular term. Affordance theory asserts a number of things, but I’d focus on its claim that technologies produce fields of action (including unexpected actions) but that not all actions are possible. The ‘not all actions’ indicates something irreducible in each technological artefact that lets us say that in specific situations technologies do determine (as long as “determine” is understood as fields of action and not one single action).

The obvious and immediate problem is that this looks like yoking together two contradictory positions – technology and society determining each other – without resolving the contradiction, and I find theories of affordances tend to ignore the question of what ‘irreducible’ means here, even though this is the claim that much of the social studies of technology criticised. I’m tempted to suggest this resolves into different layers: technological determinism is an everyday phenomenon, whereas cultural/social ‘construction’ operates at a meta-social level. The problem here is that, if you go back to the examples I gave, the redetermination of technologies for social/cultural interests is not done at a collective/meta-social level but at the micro level. The programmer who recodes according to Unix philosophy is re-determining technologies for cultural reasons; the hacktivist who DDoS’s Mastercard to show support for Wikileaks tries to alter the information landscape for political reasons; and, crucially, they do so in their everyday actions.

One alternative framework is actor-network kinds of approaches, in which nothing is irreducible and everything can be followed as an actor. One expression of this (alongside Law’s “mess,” or Haraway’s use of “becoming-with”) is Latour’s idea of the “factish.” I’ll crudely summarise this as the collective practice that emerges when the question ‘is it real or is it constructed?’ is refused. All the various actants or actors in the making of a world are mingled together, meaning that human and non-human must all be seen together in their various actions and connections. It is in this inter-mingled sense of collective creation that the factish emerges and entirely reorders what we might have thought was meant by what is real or independent. The factish refuses the opposition between being the thing that “really acts” or being “just part of a construction.” In response to these oppositions, Latour argues: “The factish suggests an entirely different move: it is because it is constructed that it is so very real, so autonomous, so independent of our own hands. As we have seen over and over, attachments do not decrease autonomy, but foster it. Until we understand that the terms “construction” and “autonomous reality” are synonyms, we will misconstrue the factish as yet another form of social constructivism rather than seeing it as the modification of the entire theory of what it means to construct.” (Latour 1999, 275)

I take this to mean that we have to follow the actants/actors and that we will find technical agency, because nothing stops a particular actant from having agency, whether it’s technical or human. But the nature of agency will only be known from within the particular story that we can tell, the case study that opens up the world to our gaze. What I find problematic here is that we have great ways of doing case studies, and when you’re as good as Latour, Law, Mol, and Haraway at telling stories then they are pretty convincing, but I’m never sure what each study means in itself. The only really significant conclusion is that ‘it’s all traceable’, which doesn’t get us far in thinking about technological agency across different instances of technologies.

So, as a first response, I can’t help thinking from within the major theoretical frameworks I’ve been relying on and criticising. I agree with Gina that we don’t have languages that work for understanding technical agency, even given the enormous amount of work over the last 30 (at least) years in social studies of technology. But given where looking at affordance and the factish gets me, I tend to think we have to define technological agency knowing that it will be traceable to social/cultural contexts, and that in doing that tracing we need to be able to theorise something across particular technological artefacts.

 

Gina Neff

At the risk of saying, essentially, “Yup, everything that Tim just said,” I want to add a bit to this discussion.

First, I think you’re spot on about levels of analysis from the macro to the micro. Affordance is often, although not exclusively, used when we are talking about how users on a micro level intervene in technical systems. I would add that we are also often talking about mixed time scales – of openness and affordances in the very long run or the short run, but not really considering the interplay in between. John Maynard Keynes, opposing the idea that markets self-regulate in the long run, once said that in the long run, we’re all dead. The same might be said, I would argue, for our ability to socially construct technologies. By the time socio-technical systems move or change, we, the users, might as well be dead.

Within communication technology studies, I think these notions of time and scale are major blind spots. We are so wedded to the idea of human agency in human communication, and to technical systems’ ultimate surrender to their social construction, that we forget how things look from within the systems and to people who work inside, with, around, or through such technical systems. The material matters. Our theoretical language has centered on the role of human communication in these systems, when what we say or do through them may be the least interesting things that occur.

For example, from a labor perspective, contemporary socio-technical systems can be brutal. Simon Head, in his prescient book The New Ruthless Economy, shows how surveillance, monitoring, and a “New Taylorism” pervade the design of ICTs, from innocuous email and web interfaces to Enterprise Resource Planning systems. And these systems have an almost taken-for-granted-ness… the always on, always observed, always working.

My point is not that such technical systems have independent, fully formed agency. Of course the power of tools is traceable within a network of actors and actants and agencies, and for this much I think you and I share a debt to the work in Science and Technology Studies that has gotten us to this point. But the language of affordances and constraints, as scholars use it, leaves room for us to say something meaningful about the agency of technical actants.

This is what I’d like to develop as a technical agency – technical both in the sense of not quite fully agentic, and also in the sense of in practice, technically, appearing, seeming, emerging with agency. Of course the artifacts have politics, to echo Langdon Winner’s (1980) famous essay. These politics come from having embedded in the technologies a series of choices whose implications and ramifications far outlive the original design meetings in which those choices were made. To hackers and experts, systems look more crackable, more full of potential and possibility, than they do to the rest of us, to whom they appear given and relatively fixed. For the rest of us, our practices around technical systems can only place these systems in the middle and twist, bend, and reconfigure the social around them, not the other way around.

This is one way to read Kevin Kelly’s concept of the power of the technium: technology wants, drives, does, demands. Once in place, these systems seemingly take on a life of their own, at least from the point of view of the people who work within them. Our email can drive our agenda for the day. From my observation of work practices in a recently completed study of architectural design and commercial construction, these tools and systems can drive meetings, restructure organizations, and set agendas. As Stephen Barley (1986) has pointed out, tools become the occasion for restructuring. In short, we begin to take de facto orders from materialized instructions that we don’t fully understand technically, much less politically. Once in place, such systems limit and drive us.

I’m with you on this one: affordances and constraints get us part of the way there. But in a moment when so much of our data trail and traces are blackboxed, when the increasing drive for “smart” systems begins to look like Norbert Wiener’s fantasies of cybernetic organisms, and when work is organized through these systems so as to make submission to them a de facto requirement (ever tried completely unplugging from your email?), then I begin to think that the techno-determinists of Silicon Valley are on to something.

 

Tim Jordan

A question popped into my head as I was reading your comments: Why wouldn’t a technology be fully agentic, even if we can trace it to networks of actors/actants? If I were to trace my own agency, it would refer off to all kinds of things – work contexts, family history, and so on – which would in turn include all kinds of actors/actants. This could lead to thinking about agency a bit more critically, and to the symmetrical, ANT-like accounts in which the stories of actors/actants need to be followed to see what each actant’s agency consists of. But I often still want to distinguish human from technological agency, even while accepting that each has agency within particular contexts.

One of the criticisms here, coming from human/animal studies, is of the tendency toward human exceptionalism, in which the human becomes the presumed measure of all things, so that animals and technologies only gain status as actors in relation to a human-defined standard. I think someone like Haraway is pretty good at tracing this kind of thing: the politics that results from human exceptionalism, and why it has been a bad thing. When reading her most recent book about companion species (which for her is primarily a dog she partners with in agility trials), I started to wonder about technologies as companions and then as species: Are avatars in online games a species? Are technologies species? So while I buy into being careful about human exceptionalism (especially because, in its history, the ‘human’ often turns out to be a particular political project), I’m not sure I want to then flatten all agency or refuse to distinguish between different types of agency.

So I like the idea of a ‘technical agency’ as a way of neither reducing technologies to what humans designed them to do nor claiming for them an agency that is the same as that of humans and animals. But I suppose I also wondered if there’s still some of the ambivalence that we started with in your idea of technical agency as “technical both in the sense of not quite fully agentic, and also in the sense of in practice, technically, appearing, seeming, emerging with agency.” I would like to see this explained a bit more, as it’s enticing in opening up a specific sense of technical agency, but it leaves me with two worries.

First, it seems to suggest something less than full agency, which implies a question: What defines full agency? Or is an ‘ideal’ sense of agency being implied? I’m probably over-interpreting, but I’d like to offer for criticism the idea that there are different types of agency about, and that it might be useful to draw them out.

Second, in connecting to a notion of practice, would that require defining a technical agency based on a view of how a technology actually functions? Can we only know what a technology is from the way it is used? The view of technology as use often makes sense, given unexpected uses, but I wonder if it decouples technical agency from designers too much, particularly in the face of the kinds of brutal examples of labour control that you mention, which are intentional in ways that connect importantly to exploitation.

So I like the direction of starting to think about moving past symmetrical accounts in which all things in the story are actors/actants, because I think this flattens the accounts, tends to obscure the politics of different types of actors, and leads to a kind of case study approach that either is or verges on empiricism. I’d also like, for a while, to leave behind the contradictions of affordances. But maybe you can open up a path forward by provoking us to think about what types of agency are associated with which actors.

 

(At this point, a provocative post to our Culture Digitally site by Josh McVeigh-Schultz seemed to weigh in on this discussion. I asked Josh to reformat his post for inclusion into this dialogue, as a kind of sideways provocation. TLG)

 

Joshua McVeigh-Schultz

Game designers must make difficult decisions about how game procedures model the real world or, in other cases, decisions to present a particular editorial perspective. Indeed, this process of translating a particular perspective on the world into procedural rhetoric is precisely what Ian Bogost (2007, 2010) intends to convey when he says that games such as his “newsgames” have the potential to be persuasive. But the process of choosing particular models over others can also reify the curatorial logic of traditional journalism. Like journalists, game designers are gatekeepers, except that designers constrain the possibilities of procedural logic rather than discursive content. Admittedly, by focusing attention on system mechanics, newsgames can lead players to a more nuanced understanding of a real-world problem as a balance of competing mechanisms. But Bogost’s framework suggests that translating a real-world issue into procedural rhetoric somehow precludes the possibility that assumptions about systemic relationships might themselves be ideologically inflected.

Political videogames… are characterized by procedural rhetorics that expose the logic of a political order, thereby offering a possibility for its support, interrogation, or disruption. Procedural rhetorics articulate the way political structures organize their daily practice; they describe the way a system “thinks” before it thinks about anything in particular. (Bogost 2007, 90)

By erasing human agency from the way a “system ‘thinks,’” Bogost seems to be recasting what Nick Couldry (2003) has described as the “myth of the mediated center.” His claim that procedural rhetorics expose the logic of a political order also seems to imply that players’ in-game decision-making and learning processes inevitably help them unpack the implicit claims that a particular game system makes. But I am not convinced that players invariably have access to this kind of procedural logic; certainly game designers do, but players may not always understand the design decisions that undergird their play experience.

Players do have access to those mechanics that are made the explicit subject of in-game decisions: decisions about role-play alternatives, resource selection, responses to obstacles, and targets or objectives, along with observations of outcomes, all rise to the level of explicit player attention. Likewise, players can compare a game to other games and make observations about genre constraints or particular affordances. These sorts of features tend to be explicitly underscored as objects of attention, and their boundaries become well known through play. Highly reflective players may even be able to critique the ideological assumptions behind a game’s semiotic properties, as Mary Flanagan’s notion of ‘critical play’ implies (Flanagan 2009).

But compared to players, designers have access to more implicit features of a game’s procedural mechanics that players may have difficulty grasping. A player of the popular children’s game Connect Four, for example, may develop an intuitive sense of features like grid size, turn-taking, and the requirement to connect four units in a row as opposed to three or five. But a game designer can understand these mechanics more deeply by calibrating them over multiple prototype iterations: changing the grid size, shifting the unit requirement, and altering the choices available in a turn. This process of tweaking specific mechanics and witnessing the outcome is a key part of what game designers do, and it is a transformational experience for novice game designers learning the craft for the first time. This is an essential difference between seeing a game as a player and seeing it as a game designer.

These more fundamental features of a game’s mechanics may be accessible to players only in an intuitive sense (or not at all). These are the features that become naturalized, because the player has no frame of reference from which to imagine an alternative. We can think of this distinction as similar to the way that linguistic anthropologists differentiate between grammatical structures that rise to the level of meta-linguistic awareness and those that are inaccessible to explicit reflection (Jakobson, 1960; Silverstein, 1981). This theoretical position is rooted in Benjamin Whorf’s (1956) distinction between ‘cryptotypic’ or covert structures that lie beneath the surface of a native speaker’s awareness and ‘phenotypic’ or overt structures that rise to the surface. Can we similarly make a distinction between ‘covert’ and ‘overt’ game mechanics or, by the same token, ‘covert’ and ‘overt’ procedural rhetoric? For example, the fact that only one move is allowed per turn in Connect Four, or that gravity in the game only goes in one direction, might be candidates for covert mechanics. Likewise, covert procedural rhetoric could refer to how models of the real world get translated into game mechanics in such a way that alternatives don’t seem possible or obvious. Along these lines, a game about the economy might embed covert procedural claims about either lower taxes or public stimulus triggering job growth, but unless the properties of these models were made explicit there is no guarantee that players would have access to these design decisions as rhetorical positions.
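To make that distinction concrete in code: below is a purely hypothetical sketch, mine and drawn from no actual implementation (the class, field names, and defaults are all invented for illustration), of Connect Four’s rule set as explicit design parameters. The ‘overt’ parameters are the ones players encounter as objects of attention; the ‘covert’ ones are those they have no occasion to imagine otherwise.

    # Hypothetical sketch: Connect Four's rules as explicit design parameters.
    # Nothing here comes from a real codebase; names and defaults are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ConnectFourRules:
        # "Overt" mechanics: players meet these as explicit objects of attention.
        columns: int = 7         # grid width
        rows: int = 6            # grid height
        win_length: int = 4      # connect four, as opposed to three or five

        # "Covert" mechanics: naturalized, rarely imagined otherwise by players.
        moves_per_turn: int = 1  # only one move is allowed per turn
        gravity: str = "down"    # pieces fall in one direction only

    # The designer's privileged access: recalibrate and witness the outcome.
    # A player only ever meets one fixed configuration.
    prototype_a = ConnectFourRules()
    prototype_b = ConnectFourRules(columns=9, win_length=5, moves_per_turn=2)

The designer’s iteration between prototype_a and prototype_b is exactly the access to covert mechanics that players lack.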

Interestingly, Bogost points to the “covertness” of verbal rhetoric, which “require[s] coherent and methodical movement between causal pairs,” thereby “cover[ing] over the network of relations that contribute to final outcomes” (98). Where verbal argument reduces complexity to causal pairs, procedural rhetoric, by contrast, accommodates a much wider field of dynamic interrelationships. In this sense the expressive and explanatory power of games seems to dwarf that of linear argumentation. But the constraint of sequential causal pairs also means that verbal arguments can be segmented more transparently, and their weaknesses probed. By contrast, just because game systems can handle greater complexity does not guarantee that a game’s procedural rhetoric will be more subject to explicit reflection. Procedural rhetoric is not an explicit “claim” in this sense, because it makes its claim in a way that is not always transparently available as an overt object of commentary for players. In cases where the procedural logic is available as an object of commentary, we do find vibrant public debate – we see this in the way that MMOG players often collaborate to publicize complaints about game mechanics to developers.

Despite these challenges, from the perspective of Thomas Malaby’s (2007, 2009) distinction between rituals and games, procedural rhetoric in games has the potential to be a more contingent form of argument than its verbal counterpart, and this is the aspect of games that excites me most. Games that model the real world give players an opportunity to test a hypothesis or disrupt a narrative framework, and they can “fail” on their own terms in a way that verbal argument never can. Bogost embraces this kind of contingency by pointing to fantasy sports leagues as a model of how game systems can test predictions about the world. Likewise, Play the News adapts this model of participatory prediction as a way of getting players to make educated guesses about future events. In this way, different models of reality get tested in a public forum. Where two players make different predictions, they have an opportunity to locate their disagreement in granular differences of opinion about how they model the world.

 

Tim Jordan

Covert and overt game mechanics are social as well as technical. I agree that something like the implementation of physics (gravity etc.) in a game is a good candidate for what you’re calling a covert mechanic (which might also be a form of technical agency). Similarly, there are social or cultural conventions such as keybinding, mouse-turning, and so on. I was reminded of the latter because a good friend of mine has recently taken up playing a tank for the first time in an MMOG (Rift, though it’s more or less the same as World of Warcraft for this example) after years of playing a caster. He’s always been proud of key-turning (for non-gamers, that is turning by pressing keys rather than the classic position of one hand pressing buttons and the other on the mouse, using the mouse to turn the avatar), but now I find he’s not as good as I’d expect, as he can’t react quickly enough (in my opinion; he blames me). A certain requirement for efficiency and speed that a covert mechanic or a technical agency creates is met by a cultural arrangement of technical agencies in the mouse, keyboard and fingers.

Affordance theory remains, at least to me, an obvious fit for this kind of analysis because it has the flexibility to see both that agency is constrained and enabled by technologies and that non-technological agencies can form technologies. However, it does so by fusing two kinds of causation together rather than theorising them. To put it as bluntly as I can: affordance theory is based on a contradiction that it assumes rather than confronts.

I think the problem comes from presuming what it is that needs explaining. The presumption is that there is some kind of distinction between matter and discourse, and that this matter is outside of social or cultural or discursive constraints. It’s the common-sense view that there is ‘something’ that is not social or cultural but is instead matter or physical or real. Gibson says:

An important fact about the affordances of the environment is that they are in a sense objective, real, and physical, unlike values and meanings, which are often supposed to be subjective, phenomenal, and mental. But, actually, an affordance is neither an objective property nor a subjective property; or it is both if you like. An affordance cuts across the dichotomy of subjective-objective and helps us to understand its inadequacy. It is equally a fact of the environment and a fact of behaviour. It is both physical and psychical, yet neither. An affordance points both ways, to the environment and the observer. (Gibson 1986, 129)

Though Gibson tries to say ‘neither/both’, he maintains two kinds of facts and imposes, or assumes, a dualism; the environment always imposes some kind of reality. Hutchby builds on this: “Does this reference to capabilities not mean that there are, after all, the kinds of determinate properties to technologies which social constructivists argue against? In a way it does. To focus on affordances in the way I suggest is to accept that there are features of artefacts that are not constructed through accounts.” (Hutchby 2001, 29). What is going on here, I’d argue, is a presumption of a matter/discourse divide, when that divide is exactly what needs explaining when we analyse techno-social inter-relations.

I think this intuition is held more widely than affordance theory. Coming from an entirely different direction, Karen Barad makes the same presumption, almost because it is common sense: “It is difficult to imagine how psychic and socio-historical forces alone would account for the production of matter. Surely it is the case – even when the focus is restricted to the materiality of “human” bodies (and how can we stop there?) – that there are “natural,” not merely “social,” forces that matter.” (Barad 2007, 66). She seems to base her view on the sense that there must be something out there as matter that is beyond our discourse, but this assumption builds in the very distinction she is investigating.

Rather, I’d argue that we should think about how we produce such distinctions and how we operate them. Barad has some really interesting ideas, drawn from Bohr, about how things become objective when they are ‘cut’ or measured or fixed by some kind of instrument. If, instead of assuming there is a matter/discourse divide, we look to such moments when things become fixed, then each such moment is the creation of a particular matter/discourse divide, which we can then point to as some kind of reality. I’d argue that a lot of empirical science studies shows these kinds of moments, when scientists do the flip and stop arguing about the nature of something and start to presume it, while erasing the evidence of the discursive/cultural work that went into creating the fact.

Instead of looking for affordances, then, we would be analysing the ways in which reality is generated through various types of agencies and forms of action. The cost is giving up on a notion of an outside to discourse and the social, which a lot of people don’t want to do for fear of a new idealism in which we get accused of presuming we can invent whatever reality we want.

 

Joshua McVeigh-Schultz

I’m interested in your (and Gibson’s) point about affordance moving between discourse and matter, and in the idea that we actively construct this dichotomy as a way of demarcating objects.

This has really interesting implications in a game design context, where iteration between thinking about narrative metaphors and core mechanics is always a crucial part of the process. Often in this context the term ‘affordance’, or its verb form ‘afford’, is deployed as a way of flagging different discursive strategies in a brainstorm session (and inviting different habits of mind). For example, a team member might say something like: “What core mechanics does the metaphor of a fish in a fish bowl afford?” That is: What does the fish want? How might water as a dwindling resource set up particular game objectives? And how does the constrained volume of the fish bowl limit how much of this resource can be collected? These sorts of “affordances” of the metaphor of a fishbowl set up particular system objectives, constraints, resources, and so on. But the relationship between these two levels (mechanics and metaphor) is not always transparent and has to be actively constructed throughout the process. Game designers who work in teams will shift between abductive thinking about metaphors and more systematic crafting of mechanics. And the process of iteration on a design often implies a continual struggle over the lines between what counts as mechanics and what counts as metaphor until (ideally) the two seem inseparable.
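As a purely hypothetical illustration (no actual design document is being quoted, and every name and number below is invented), the move from metaphor to mechanics in such a brainstorm might be pinned down like this:

    # Hypothetical sketch: translating the fishbowl metaphor into candidate
    # mechanics. All names and values are invented for illustration.

    # The metaphor, as it surfaces in a brainstorm session:
    fishbowl_metaphor = {
        "actor": "fish",                                    # what does the fish want?
        "resource": "water",                                # a dwindling resource
        "constraint": "bowl volume caps what can be held",  # the constrained volume
    }

    # One possible mechanical reading of those metaphorical affordances:
    candidate_mechanics = {
        "water_level": 100,        # resource pool the player must manage
        "depletion_per_turn": 5,   # the dwindling that sets up the objective
        "bowl_capacity": 120,      # the volume that limits collection
        "lose_condition": "water_level <= 0",
    }

Each design iteration would redraw the line between the two structures – deciding, say, whether the bowl’s volume is decorative metaphor or a hard cap – which is precisely the struggle over what counts as mechanics and what counts as metaphor.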

The connection to Barad is an interesting one to think about here. I’ve been reading Anne Balsamo’s book Designing Culture: The Technological Imagination at Work, and she synthesizes Barad’s notion of intra-actions in a way I find helpful:

Intra-actions are iterative; they build on one another. She argues (2003: 815) that “it is through specific agential intra-actions that particular embodied concepts become meaningful,” and further that “the material and the discursive are mutually implicated in the dynamics of intra-activity” (822), and “outside of particular agential intra-actions, ‘words’ and ‘things’ are indeterminate” (820). It is through specific intra-actions that the distinction between words and matter is constituted and continually reproduced through subsequent intra-actions. (Balsamo 2011: 34)

Relating this concept to the role of metaphors and models in games, we could argue that games tend to structure mechanics by jostling one metaphor against another – for instance, in the example I described earlier, the metaphors of the “fish” and the “bowl” each constrain and afford fields of possibility for the other. And it is through this jostling (or what Barad calls intra-action) that we start to understand the potential mechanics of a game as a system.

Returning to the notion of covert mechanics in this context, I wonder if part of what is potentially problematic about the unconditional celebration of systems thinking in games is that different sorts of intra-actions (those set in motion by the design team vs. those perceived by the players, for example) may not be visible to one another. To borrow Balsamo’s framework, the distinction between words and matter (or perhaps metaphors and mechanics) is continually reproduced through Barad’s notion of intra-action. But baptismal intra-actions (those of the designer) constrain subsequent intra-actions in ways that are not always available for commentary (and here I’m thinking also of the way that Latour talks about the genealogy of the door-closer). Games thus have the potential to make ‘matters of concern’ seem like ‘matters of fact,’ and it’s the possibility of this trajectory that troubles me.

 

Tim Jordan

Translating ‘concerns’ into ‘facts’ seems to me an inevitable part of any game design. Consider two anecdotal examples from my own gaming past. First was a game I played which tried to model the post-glasnost USSR, in which you fiddled with various budget settings and policy priorities and the game then unveiled the direction the USSR/Russia went in. Funnily enough, if you simply liberalised the economy gradually, it all turned out fine! There were some neo-liberal assumptions all too obviously built in. Second was the first time I showed Sim City to a friend who worked in housing: their response was ‘I don’t agree with zonal policy for managing cities’, and they then refused to play it. In both these anecdotes, certain politics are hardwired into a game that then represents itself back to the player as the world.

This doesn’t mean to me that game design that models the real world is inevitably a failure – that would be to presume that any conceptualisation could capture what ‘really’ goes on in the world. My objection to Barad, whose work I also really like, is that in presuming the matter/discourse divide she inevitably finds it, whereas the point is to ask what kinds of mechanisms we go through to produce that divide – knowing that those mechanisms will be discursive (in the sense both of representations and semiotics and of the social and cultural practices needed for such representations). I think empirical social studies of science does a lot of this work, showing us the ways scientists cut at a certain point and produce a fact, an essential part of which is removing all the signs of discourse.

 

Gina Neff

It occurs to me through this discussion that bringing in a notion of agency that does not necessarily include consciousness – one that is instead latent or emergent – helps to bridge several of the conundrums that we find ourselves considering. Splitting agency into a world of actants with and without consciousness emphasizes questions of action: who can act, when, how, and why. Why not adopt a new language for agency in a digital age? Doing so will let us set aside a debate in Science and Technology Studies that has prevented us from recognizing both the limits to human and user action and the extent of action within technical systems oftentimes several layers removed from oversight or design. Talking about designs, affordances, and constraints still places that action centrally with some imagined set of users or producers. As Tim and Josh have pointed out, there are times and virtual places when and where we are not fully in control of our machinescapes. It is time to bring in concepts that let us describe and understand those moments, rather than continue to rely on an idealized view of the agency of users and producers.

 

 

References

Balsamo, Anne (2011) Designing Culture: The Technological Imagination at Work. Durham, NC: Duke University Press.

Barad, Karen (2007) Meeting the Universe Halfway: Quantum physics and the entanglement of matter and meaning. Durham, NC: Duke University Press.

Barley, Stephen (1986) “Technology as an Occasion for Structuring: Evidence from Observations of CT Scanners and the Social Order of Radiology Departments.” Administrative Science Quarterly 31(1): 78-108.

Bogost, Ian (2007) Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.

Bogost, Ian, Ferrari, Simon, and Schweizer, Bobby (2010) Newsgames: Journalism at play. Cambridge, MA: MIT Press.

Couldry, Nick (2003) Media Rituals: A Critical Approach. London: Routledge.

Flanagan, Mary (2009) Critical Play: Radical Game Design. Cambridge, MA: MIT Press.

Gibson, James (1977) “The Theory of Affordances,” in Robert Shaw and John Bransford, eds., Perceiving, Acting, and Knowing: Toward an Ecological Psychology. Hillsdale, NJ: Lawrence Erlbaum Associates: 62-82.

Gibson, James (1986) The Ecological Approach to Visual Perception. Hillsdale, NJ: Lawrence Erlbaum Associates.

Haraway, Donna (2008) When Species Meet. Minneapolis: University of Minnesota Press.

Head, Simon (2005) The New Ruthless Economy: Work and power in the digital age. Oxford: Oxford University Press.

Hutchby, Ian (2001) Conversation and Technology: From the telephone to the internet. Cambridge, UK: Polity Press.

Jakobson, Roman (1960) “Closing Statement: Linguistics and Poetics,” in Thomas Sebeok, ed., Style in Language. Cambridge, MA: MIT Press: 350-377.

Jordan, Tim (2008) Hacking: Digital media and technological determinism. Cambridge, UK: Polity.

Kelly, Kevin (2011) What Technology Wants. New York: Penguin Group.

Latour, Bruno (a.k.a. Jim Johnson) (1988) “Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer,” Social Problems 35: 298-310.

Latour, Bruno (1999) Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press.

Law, John (2004) After Method: Mess in Social Science Research. London: Routledge.

Lemert, Charles (forthcoming, 2012) Social Things. Lanham, MD: Rowman & Littlefield.

Malaby, Thomas (2007) “Beyond Play: A New Approach to Games,” Games and Culture, 2(2): 95-113.

Malaby, Thomas (2009) “Anthropology and Play: The Contours of Playful Experience,” New Literary History 40(1): 205-218.

Neff, Gina, Fiore-Silfvast, Brittany, and Dossick, Carrie (2010) “A Case Study of the Failure of Digital Media to Cross Knowledge Boundaries in Virtual Construction,” Information, Communication & Society 13(4): 556-573.

Norman, Donald (1988) The Psychology of Everyday Things. New York: Basic Books.

Silverstein, Michael (1981) “The Limits of Awareness,” Sociolinguistic Working Paper 84. Austin, TX: Southwest Educational Development Laboratory.

Whorf, Benjamin (1956) Language, Thought, and Reality. Cambridge, MA: MIT Press.

Winner, Langdon (1980) “Do Artifacts Have Politics?” Daedalus 109(1): 121-136.

 
