surgeons, Linda ends up making a compromise: the Furby is both biological and mechanical. She tells her friends, “The Furby is kind of real but just a toy.” She elaborates that “[the Furby] is real because it is talking and moving and going to sleep. It’s kind of like a human and a pet.” It is a toy because “you had to put in batteries and stuff, and it could stop talking.”
So hybridity can offer comfort. If you focus on the Furby’s mechanical side, you can enjoy some of the pleasures of companionship without the risks of attachment to a pet or a person. With practice, says nine-year-old Lara, reflecting on her Furby, “you can get it to like you. But it won’t die or run away. That is good.” But hybridity also brings new anxieties. If you grant the Furby a bit of life, how do you treat it so that it doesn’t get hurt or killed? An object on the boundaries of life, as we’ve seen, suggests the possibility of real pain.
AN ETHICAL LANDSCAPE
When a mechanism breaks, we may feel regretful, inconvenienced, or angry. We debate whether it is worth getting it fixed. When a doll cries, children know that they are themselves creating the tears. But a robot with a body can get “hurt,” as we saw in the improvised Furby surgical theater. Sociable robotics exploits the idea of a robotic body to move people to relate to machines as subjects, as creatures in pain rather than broken objects. That even the most primitive Tamagotchi can inspire these feelings demonstrates that objects cross the line from broken thing to suffering creature not because of their sophistication but because of the feelings of attachment they evoke. The Furby, even more than the Tamagotchi, is alive enough to suggest a body in pain as well as a troubled mind. Furbies whine and moan, leaving it to their users to discover what might help. And what are we to make of the moment when an upside-down Furby says, “Me scared!”?
Freedom Baird takes this question very seriously.9 A recent graduate of the MIT Media Lab, she finds herself engaged with her Furby as both a creature and a machine. But how seriously does she take the idea of the Furby as a creature? To find out, she proposes an exercise in the spirit of the Turing test.
In the original Turing test, proposed in a 1950 paper, the mathematician Alan Turing, whose universal machine provided the theoretical blueprint for the general-purpose computer, asked under what conditions people would consider a computer intelligent. In the end, he settled on a test in which the computer would be declared intelligent if it could convince people it was not a machine. Turing was working with computers built from vacuum tubes and accessed through Teletype terminals. He suggested that if participants couldn’t tell, as they worked at their Teletypes, whether they were talking to a person or a computer, that computer would be deemed “intelligent.”10
A half century later, Baird asks under what conditions a creature is deemed alive enough for people to experience an ethical dilemma if it is distressed. She designs a Turing test not for the head but for the heart and calls it the “upside-down test.” A person is asked to hold three “creatures” upside down: a Barbie doll, a Furby, and a biological gerbil. Baird’s question is simple: “How long can you hold the object upside down before your emotions make you turn it back?” Baird’s experiment assumes that a sociable robot makes new ethical demands. Why? The robot performs a psychology; many experience this as evidence of an inner life, no matter how primitive. Even those who do not think a Furby has a mind—and this, on a conscious level, includes most people—find themselves in a new place with an upside-down Furby that is whining and telling them it is scared. They feel themselves, often despite themselves, in a situation that calls for an ethical response. This usually happens at the moment when they identify with the “creature” before them, all the while knowing that it is “only a machine.”
This simultaneity of vision gives Baird the predictable results