The Ethnomathematics of Algorithms

Inspired by recent postings by Tarleton Gillespie and C.W. Anderson, I wanted to chime in with my own take on the material and political aspects of algorithms. I’ve been mulling this over in my work with sensor enthusiasts, both in the Quantified Self movement and in open data communities like Cosm (formerly Pachube). This is part of a much larger collaborative project with Richard Beckwith and others to rethink how data might be circulated, exchanged and valued in ways that are different from traditional “big data” approaches to aggregation.

I am interested in these communities because they confront the materiality of data in ways that most of us do not (unless, of course, you’ve recently been given a shockingly high pulse measurement by a mobile phone and needed to question whether the sensor was working, and what it means if it was). These communities also share the concerns expressed here and here about the political implications of closed algorithms. Algorithms are not just at work in search or on social media; they are also at work in sensors. They clean data at the point of capture, and reduce it yet again in business models that assume people only want pre-digested, easily interpreted data. There is plenty of grumbling in these communities about the inadequacy of this assumption.
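
To make “cleaning at the point of capture” concrete, here is a minimal sketch in Python. The jump threshold and smoothing window are my own illustrative assumptions, not any vendor’s actual firmware; the point is only that the number a user finally sees has already been interpreted on their behalf.

```python
# A minimal sketch of point-of-capture "cleaning." The jump threshold
# and smoothing window are illustrative assumptions, not any vendor's
# actual firmware.

def clean_pulse(raw_bpm, max_jump=30, window=5):
    """Drop implausible jumps, then smooth with a moving average."""
    kept = []
    for bpm in raw_bpm:
        # A reading that leaps too far from the last kept value is
        # treated as noise and silently discarded.
        if kept and abs(bpm - kept[-1]) > max_jump:
            continue
        kept.append(bpm)
    # Each output value averages a short window of inputs, so what the
    # user finally sees is already an interpretation of the signal.
    smoothed = []
    for i in range(len(kept)):
        chunk = kept[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [72, 74, 73, 190, 75, 76, 74]   # is 190 noise, or a racing heart?
print(clean_pulse(raw))               # the spike never reaches the screen
```

Whether that 190 was electrical noise or a genuinely racing heart is exactly the question the cleaning step answers for you, silently.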

One of the most important things we learned is that when data is situated within someone’s lived material world, the workings of the algorithm become both more and less mysterious. Why this both/and quality? I’ll give examples in a second, but it is worth a side note that this notion is in part a shameless borrowing from the anthropology of numbers. Verran (2001), Guyer (2004), Eglash (1999), and Lave (1988) are a few examples of this exciting body of work, though “ethnomathematics” is not a professional identity they would all share. I hope that as interest in algorithms grows, more of us engage with this literature. I find it helpful for situating big data’s numbers, even though the social worlds this literature engages sometimes have little to do with the technologies that preoccupy scholars of digital culture.

In my own case, I recently came across Verran’s reworking of Peirce’s icon/index/symbol distinction so that it applies to numbers (Verran 2012). The “index” element could not describe sensor design better. When computer scientists use pulse sensors to create predictive models of stress and calm, they are trying to make an index. They turn to heart rate because heart rate always indicates more than just the pace of a beating heart. In the most straightforwardly Peircean way, it is the smoke to the “fire” of the phenomenon being sensed. It is not the phenomenon itself, but linked to it at some distance through a material assemblage of transducers, algorithms and screens.
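
The indexical leap is easy to see in code. Below is a toy sketch, not any real system: the features, weights and resting rate are all invented, and what matters is only the shape of the move, in which heart-rate “smoke” goes in and a claim about “fire” comes out.

```python
# A toy illustration, not a real model: heart-rate features go in, and a
# claim about something else entirely -- "stress" -- comes out. The
# features, weights and resting rate are invented for illustration.
import statistics

def stress_score(bpm_samples, resting_bpm=65):
    """Map heart-rate 'smoke' to a stress 'fire' score between 0 and 1."""
    mean_bpm = statistics.mean(bpm_samples)
    variability = statistics.pstdev(bpm_samples)
    # The model decides that an elevated, low-variability heart rate
    # means stress. That inference is the index: it points past the
    # data to a phenomenon the sensor never touches.
    elevation = max(0.0, (mean_bpm - resting_bpm) / resting_bpm)
    return max(0.0, min(1.0, elevation * 2 - variability / 100))

print(stress_score([88, 90, 87, 91]))  # the number asserts "stress"
```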

A pulse isn’t the worst place to go looking for emotion, but it will also never fully capture whatever that “emotion” actually is (or is constructed to be). No matter how many sensors you pile on, they will collectively always stop short, because they are designed to work indexically. More and more smoke does not make fire (though there are alternative designs, like this wonderful critical HCI piece). In this way, most sensors create an excess and a shortage of meaning at the same time. I’ve been playing around with calling this phenomenon “little big data.”

We can see this more-than-and-yet-less-than quality at work in home energy sensing. Many of the people we talked to were using sensors to monitor their home energy use. Like a heart rate, home energy data left all sorts of unintended traces, such as whether someone was at home. Yet at the same time it indicated much less than its users ascribed to it. People told us stories about how they would clamp a sensor on electricity mains thinking that what they were measuring was “home energy use” or “energy efficiency.” This brought them rather quickly to the understanding that there is in fact no such thing as efficiency “out there” that you could just put a sensor on and trace. It only raised more questions about where else energy use was happening (oil? gas?), what was really “high” or “low” (per person? for an old, creaky house?), and so on. The sensors asked people to confront just how complex constructions like “energy efficiency” are. If they were not inclined to do that, the number fell into disuse.
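
A small worked example makes the point; every figure in it is invented. The same sensor reading supports quite different verdicts depending on what you divide it by, and no division reaches the oil or the gas at all.

```python
# A worked example of why "high" or "low" is not in the reading itself.
# Every figure below is invented for illustration.
daily_kwh = 24.0        # what the clamp-on sensor reports for one day
occupants = 4
floor_area_m2 = 180     # an old, creaky house

per_person = daily_kwh / occupants     # 6.0 kWh per person per day
per_m2 = daily_kwh / floor_area_m2     # ~0.13 kWh per square meter

# The same reading looks profligate as a household total, unremarkable
# per person, and says nothing at all about the gas boiler or oil furnace.
print(f"household: {daily_kwh:.1f} kWh/day")
print(f"per person: {per_person:.1f} kWh/day")
print(f"per square meter: {per_m2:.2f} kWh/day")
```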

For some, these numbers served as a generative guidance system for literally feeling one’s way through the house (how else to find the leaks in an HVAC system if not with sensor-enabled divining rods and hands?). For most, however, escaping the flatness of the data was a real problem. There was just not a lot with which to have a dialogue, even if people didn’t take the sensor readings at face value.

People had started using Cosm because they wanted to get beyond just displaying a number “trapped” on the wall, which most folks eventually stop looking at. Yet the work of making that data relate to anybody else, or anything else, remained considerable even in Cosm’s remarkably open environment. For example, some interviewees were surprised that we had compared their indoor thermostat temperature with outdoor temperature, which we pulled from the Internet. This was a straightforward data fusion, and a clear example of the “new knowledge” that open datasets are supposed to make possible. It should not have been a surprising move for open data enthusiasts, but it was. The imagination that there could be a connection between different kinds of data has outpaced the considerable work it takes to “clot” numbers together into social routines (Verran’s term again), such that it would be an “obvious” move. Data’s nascent excesses and betrayals make it seem as if there is always value in moving and recombining it, as if there is always more to be mined. This view then sets up the corresponding experience of being somehow cut short when the labor it takes to build the necessary routines, paths and material interoperability becomes visible. The indexicality makes these two sides of the same coin.
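
For what it is worth, the fusion we performed can be sketched in a few lines, which is part of why the surprise was telling. The file names and CSV columns below are hypothetical stand-ins, and Cosm’s actual feed API is not reproduced here.

```python
# A sketch of the indoor/outdoor comparison described above. The file
# names and CSV columns are hypothetical stand-ins; Cosm's actual feed
# API is not reproduced here.
import csv
from datetime import datetime

def load_hourly(path):
    """Read (timestamp, celsius) rows and key them by the hour."""
    with open(path, newline="") as f:
        return {
            datetime.fromisoformat(row["timestamp"]).replace(
                minute=0, second=0, microsecond=0
            ): float(row["celsius"])
            for row in csv.DictReader(f)
        }

indoor = load_hourly("thermostat.csv")   # the interviewee's own feed
outdoor = load_hourly("weather.csv")     # pulled from a weather service

# The "fusion" is just a join on the hour -- trivial as code, yet it was
# a connection interviewees had not imagined making.
for hour in sorted(indoor.keys() & outdoor.keys()):
    delta = indoor[hour] - outdoor[hour]
    print(hour, "indoor:", indoor[hour], "outdoor:", outdoor[hour],
          "difference:", round(delta, 1))
```

The code is the easy part; assembling the feeds, units and routines that make the comparison worth running is where the real work sat.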

Where does this leave our discussion of the materiality and politics of the algorithm, then? I would add that while we should continue to interrogate the political consequences of closed algorithms, part of this might include people’s struggles to make meaning even where the material constraints are considerably looser. Data is vexing even for people who have good access to it, who have sharp statistical skills, and whose houses and racing hearts situate numbers in meaningful ways. This perspective could put us in a better position to ask what else has to happen for data to really matter for different kinds of people, such that they might see compelling reasons to resist an overwhelmingly closed system. “You used 77 kWh today” does not open up a path to sustained questioning. Looking in these sorts of contexts for algorithms’ entanglements with practice (#5 on Gillespie’s list) also situates algorithms on the much longer paths along which numbers travel. In these more heterogeneous economies of meaning-making, it could be worth examining numbers that make lateral movements: movements that stand away from, if not fully resist, the dominant “hoover it all up” model of data aggregation. What kind of resistance this might be, and how it fits into the broader picture of other aligned movements, remains to be seen.

 

References

Eglash, Ron. 1999. African Fractals: Modern Computing and Indigenous Design. Rutgers University Press.

Guyer, Jane I. 2004. Marginal Gains: Monetary Transactions in Atlantic Africa. University of Chicago Press.

Lave, Jean. 1988. Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. Cambridge University Press.

Verran, Helen. 2001. Science and an African Logic. University of Chicago Press.

Verran, Helen. 2012. “Number.” In Inventive Methods: The Happening of the Social, edited by Celia Lury and Nina Wakeford. Routledge.

 
