Interaction, Agency and Ants


This week my post-seminar musings circled back to our discussion about what we expect of our computers and how we understand and imagine them.  I found thinking about exactly what we mean by “interaction” pretty interesting. I’m going to duck the whole question about how good or bad Brenda Laurel is on Aristotle and focus instead on the issue that Janine raised when we were talking about agency and computers.

There’s much that resonates with me in Brenda Laurel’s definition of agents as “entities that can initiate and perform actions” (p. 569).  Thinking about my computer, or my iPad, or my iPhone, I definitely see a potential there for performing actions, a potential that is realized countless times over the course of any given day. Initiation is a bit more complex, but it seems to me that when I tell Siri to send a text to Alan, “she” initiates the action by executing the program that calls up the text window and then “asks” me what I want to say in that text.  I don’t think I have a theory of mind about Siri. I do expect “her” to interact with me so that we can successfully accomplish something I couldn’t do by myself. And at some level it does feel like I’m engaging a cognitive entity when I use my phone. But because I know that Siri is a suite of programs and technologies that can’t make associative leaps independently of what her programmers gave her, I understand that her limits are absolute – she cannot be “trained” to quit confusing “Alan” with “Ellen.”  She does know that Alan is my brother because she was programmed to ask “what is your brother’s name” the first time I said “send my brother a text.”  But when I asked her to send a text to my mother, she asked what her name was, and when I told her she replied: “there is no Bonnie in your contacts.”  I’m pretty sure that the next time Siri gets an upgrade there will be an association between “mother” and “mom” somewhere in her code, but this is not something that Siri can develop (initiate) on her own.  At the end of the day, she is the creation of her programmers and designers.  In some sense of the word she is “organic” – that is, complete and more than the sum of her inter-related parts.  But she is not unique.  My Siri is just like your Siri and every other Siri out there, even if she does call me “Amy.”

But you can interact with her.  I liked Janine’s assertion that computers are technologies or tools that help humans accomplish specific tasks, but not entities with which we interact. We both thought about how the concept of “interaction” squared with what we think about humans’ use of other technologies.  I suggested cars, skis, and a cello, and Janine proposed a broom. I agree that brooms do not have agency. But you might be able to make a case for agency and interaction with skis, and certainly with a cello.

After class I also thought about how we understand our interactions with some animals (where the “theory of mind” issue is often invoked to deny animal agency). Dogs, for example, can certainly initiate and perform actions. They do things for us that are beyond our solo capabilities (herding sheep, finding a lost child).  And mine have never confused Alan with Ellen or not known who “mom” was. They continue to learn over the course of their lifetime, without a software upgrade. They are also unique individuals, a claim that can be made about cellos (and flutes) as well.  All flutes might have the same components, but each has its own feel and sound. Musicians make music with their instruments.  Through breath and/or touch they animate the flute to create something exquisite and unique. The performer might initiate the breath or the touch, but it is the synergy between the breath, the fingers, and the flute that creates the sound we recognize as music. Of course instruments are technologies in some ways, as are some animals at least some of the time.

I feel like I should write something about human-computer interaction in terms of ANT (Actor-Network Theory) but am going to end with SimAnt as a reminder of the connections between cognition, play and agency – as well as the generational differential we’ve talked about before in terms of how we respond to emerging digital technologies.  Here’s Will Wright’s description of the development of SimAnt and the game’s connections to animal culture:

“The next game I did was called SimAnt; it was actually based on the work of Edward O. Wilson, who is the premier myrmecologist in the world. He had just published this very large book called The Ants that won the Pulitzer Prize that year. Ants have always fascinated me because of their emergent behavior. Any single ant is really stupid, and you sit there and try to understand what makes it tick. If you put a bunch of these little stupid components together, you get a colony-level intelligence that’s remarkable, rivaling that of a dog or something. It’s really remarkable, and it’s like an intelligence that you can deconstruct. Ten- and fifteen-year-olds really got into SimAnt; it was really successful with that group. Most adults didn’t play it long enough to realize the depth of ant behavior and mistook it for a game about battling ants.”