Rodney Brooks gave the first presentation at Solid Conference, talking about the integration of software and hardware in robotics. For decades, industrial robots were programmatically simple, performing the same action over and over, not all that distinct from the machines that preceded them.
Brooks, though, is interested in how software allows hardware to change its behavior, to become more effective, and, in a way, smarter. He showed the robot Baxter, demonstrating how it picks up items to pack them. On Baxter's second pass, Brooks took the object from Baxter midway through its movement. An older robot would continue through the packing motion regardless. Baxter, however, realized the object was missing and halted its motion to go get the next object instead.
Brooks commented that many of his customers find this disturbing: they expect the machine to behave like a machine. However, Baxter is now exerting something like agency.
This is challenging for people because we assume that anything reacting with agency is alive. Machines are things we use for a purpose, typically a singular purpose. However, if software allows that machine to appear smart, to behave in unpredictable but savvy ways, people no longer perceive it as just a machine. Even if it doesn't look like a person or animal, we still treat it as alive (think of how people typically name their Roombas).
The challenge for design is to appropriately set expectations. I wrote an earlier post about the role of purpose in product design: people use an app to fulfill a purpose, and if that app changes (usually by adding purposes), people often reject the change. As we start designing for wearables (smart watches, glasses, clothes, etc.) and robots, we have to recognize that people bring a preconception of purpose stability: I use a watch to track time; I wear glasses to improve my eyesight. Making these things smart crosses a cognitive chasm, where the person no longer perceives the object as an object, but as a living thing.