The mechanical engineers building today’s robots from titanium sinew are working in parallel — if not tandem — with artificial intelligence researchers building the logic engines of the future and the interfaces to go with them. One of the more popular genres of interface to study, and the one likely to affect the robot market within this century, is the artificial personality. Anyone with an iPhone has probably already been on the receiving end of Siri’s sarcasm, but no one has ever bought an iPhone to listen to Siri’s jokes. When will that experience matter to consumers as much as the phone’s functionality? When will automation become a popularity contest?
Anthropomorphizing something used to be a largely linguistic trick. We named our boats and cars; we used descriptors like “depressed” and “optimistic” for markets; we talked to our microwaves. Now, there’s an engineering process that can be described as “anthropomorphizing” artificial intelligence. This is the process of making the UI approximate human-to-human interaction. The interesting thing about this process is that it cuts both ways.
Is a waiter-bot that messes up orders but makes excellent small talk more valuable than a waiter-bot that gets orders right, but exhibits no traits whatsoever? Probably not. Artificial personalities make it easy for us to ascribe intentionality to the behavior of complex systems. When a microwave breaks, we don’t consider its motivation, only its malfunction. That won’t be true if the microwave has just been lending a sympathetic ear to our work complaints. Conversely, we’re more likely to be happy with an interaction if we get what we want in an enjoyable, slightly unpredictable way. The personality is a big plus if systems work and a potential hazard if they don’t.
That said, robot physiology is remarkably simple compared to human anatomy. Robots won’t have a sour affect because they missed lunch. They will offer a consistency humans cannot, and they will also mechanize character. In other words: We’ll be able to strip out traits we dislike and upgrade personalities the way we upgrade operating systems, which is where this conversation gets considerably more concrete.
This past spring, Google was awarded a new patent that could make it possible for a robot to change personalities on the fly in response to changing environmental conditions and user information — all through a simple download from the cloud. Basically, any robot that’s connected to the cloud would be able to move back and forth between an almost infinite number of programmable personalities. The question is whether this technology will arrive before or after the advent of functionally perfect robots. If it arrives before, robot purchases will be entirely dictated by robot function because personalities will be interchangeable. If it arrives after, personalities will differentiate the products available on the market.
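The mechanism the patent describes — interchangeable personalities fetched on demand — can be sketched in a few lines. Everything below is illustrative: the `Robot` class, the trait fields, and the in-memory “cloud” catalog are assumptions for the sake of the example, not the patent’s actual design.

```python
# Hypothetical sketch of "personality as a downloadable profile."
# The catalog stands in for a cloud service; in practice this would be
# a network fetch, not a dictionary lookup.
CLOUD_PERSONALITIES = {
    "butler": {"formality": 0.9, "humor": 0.2, "greeting": "Good evening."},
    "buddy": {"formality": 0.2, "humor": 0.9, "greeting": "Hey, what's up?"},
}

class Robot:
    def __init__(self):
        self.personality = None  # ships with function, not character

    def download_personality(self, name, catalog=CLOUD_PERSONALITIES):
        """Swap in a new personality profile 'from the cloud'."""
        # Copy so local tweaks don't leak back into the shared catalog.
        self.personality = dict(catalog[name])

    def greet(self):
        if self.personality is None:
            return "..."  # no personality loaded
        return self.personality["greeting"]

robot = Robot()
robot.download_personality("butler")
print(robot.greet())  # Good evening.
robot.download_personality("buddy")  # same hardware, different character
print(robot.greet())  # Hey, what's up?
```

The point of the design is that nothing about the robot’s hardware or core function changes between the two calls — only the profile does, which is exactly why interchangeable personalities would stop being a purchasing differentiator.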
Given the Google patent and the relative progress of research into artificial intelligence and robot engineering, it seems likely that the cloud of personalities will precipitate, watering the growth of physical specimens. Personality won’t shape the market.
Still, that’s a relatively simplistic conclusion, because A.I. systems will be — and already are, to an extent — capable of developing new personalities. “Tabula rasa” technology is still in its infancy, but many research groups are working toward systems that start from scratch and learn about the world like a newborn baby. This links function with personality and, in a sense, worldview. The utility of the robot is tied to a worldview rather than pure schematics. If this becomes the norm, wiping robot personalities — cleaning the “tabula,” if you will — will be an act of disconcerting destruction. That said, personalities constructed “organically” in this manner could still be mass marketed.
The race between the personality and function is on, and though personality has all the seeming advantages of the hare, betting against the tortoise is always unwise. They say that character is destiny, but that’s not always true.