How customer service and Starbucks are killing conversation

One of the defining activities of human beings is conversation. We like to talk to one another, and do it often.

In our interactions with companies, our conversations have become increasingly, and insidiously, scripted. When we call “customer service”, we’re put in contact with someone who has been told how to talk to us, and is discouraged from veering off-script. And when we try to have a human conversation with them, we get what are clearly canned responses (especially if we’re expressing dissatisfaction with a product or service).

Actually, though, we’re now happy to get a human at all, even a scripted one, because the increasingly typical first line of response for a phone call is Interactive Voice Response (IVR), where we’re expected to talk our way through a series of menus. Such systems became famous for pissing off customers, and so they were programmed to respond to profanity by routing you to a human. At the outset this was, in a small way, joyous, because it actually felt like we were being heard. But recently I’ve realized I will swear or yell at an IVR right away, because I know it will trigger the system to connect me with a person. Which means I’ve been programmed by IVRs in how to behave. The IVRs have co-opted us.
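For the curious, the escalation rule is simple enough to sketch in a few lines of Python. This is purely illustrative (the keyword list, the menu names, and the routing function are all made up, not any vendor's actual system), but it shows how trivially "swear to get a human" can be encoded:

```python
# Illustrative sketch of an IVR routing rule: detect frustration
# (profanity, repeated failures) and escalate to a human agent.
# All names here are hypothetical; real IVR platforms differ.

FRUSTRATION_WORDS = {"damn", "hell", "agent", "representative", "operator"}

def route(utterance: str, failed_attempts: int) -> str:
    """Return the next step for a caller's utterance."""
    words = set(utterance.lower().split())
    if words & FRUSTRATION_WORDS or failed_attempts >= 2:
        return "connect_to_human"   # the escape hatch callers learn to trigger
    if "balance" in words:
        return "account_balance_menu"
    if "billing" in words:
        return "billing_menu"
    return "repeat_main_menu"

print(route("just give me a damn person", failed_attempts=0))  # connect_to_human
```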

Many conversational spaces have been shaped in this way. The one that probably irks me most is the Starbucks ordering process, which was broken down by the folks at Dubberly Design. That article lauds this approach as one that works for “beginners” as well as “aficionados” of Starbucks. I find it diabolical, because Starbucks essentially dehumanizes the conversation between the customer and barista, turning it into a programmatic code. For a company that claims it’s all about the customer’s experience, it’s disheartening that the primary interaction between humans is reduced to one that could take place between robots.

Designers are taught to shape environments and tools to support their users’ behaviors and desires, but oftentimes this tips into over-specifying in an attempt to “optimize” an experience. The result is static, stagey, and ultimately unfulfilling engagement, where we realize we are expected to play a role and cannot just be ourselves. The challenge for experience designers is to specify just enough to support a good interaction between customer and company, while still allowing for the emergent and irreplaceable spark that can occur between people.

Book Review: The Most Human Human (in short: read it!)

After seeing an interview with the author on The Daily Show, and reading a glowing notice in The New Yorker, I made it a priority to finish The Most Human Human before my family leave ended.

It’s a delightful and discursive book, wending its way through cognitive science, philosophy, poetry, artificial intelligence, embodied experience, and more. The author, Brian Christian, writes with a deft touch, in an episodic and occasionally meandering style that feels like you’re taking part in a good conversation.

Which makes sense, considering the book’s supposed raison d’être is the author’s preparation to serve as a confederate (a human participant) in the Loebner Prize, in which judges of a Turing test hold conversations with computers and humans to determine both The Most Human Computer and The Most Human Human.

As part of this training, Brian, who has B.A.s in philosophy and computer science (from Brown, natch), and an MFA in poetry, endeavors to better understand just what makes humans human. In doing so, he runs across what he calls “The Sentence,” which every discipline that studies humans (anthropology, psychology, sociology, etc.) has some version of, and goes something like, “The human being is the only animal that ____________”. Except that the items that have filled in that blank (“uses tools”, “has language,” “feels remorse”, “thinks”, etc. etc.) have been taken down one by one. Perhaps the best fill-in is, “obsesses about its own uniqueness,” because, really, what does it matter if humans aren’t wholly unique (except, perhaps, in our agglomeration of traits), and yet why do we seem to get so worked up about being distinct from all other creatures? But I digress.

This book came at a particularly opportune time, given the theme of my recent writing on business in the Connected Age — that it needs to embrace our humanity. In the context of computation and automation, Brian addresses the world of work, and how many activities once done by people are now done by machines, computers, and robots. He astutely points out that replacing people with machines isn’t the problem; the problem is what happens before then, when people’s work tasks become so rote and repetitive that you’ve essentially turned people into machines. You can’t have IVR (interactive voice response) until you’ve already turned customer service representatives into automatons by requiring them to closely follow a pre-defined script.

The book also digs into our collective left-brain bias, with a quote from Oliver Sacks: “The entire history of neurology and neuropsychology can be seen as a history of the investigation of the left hemisphere.” It’s becoming clear, though, that we favor the left brain over the right at our own peril — those with strokes affecting right-brain function can find it impossible to make decisions, because it turns out decisions are rooted in emotion, not rational analysis. There’s even a mention of user experience, and how its ascent demonstrates a shift away from a left-brained “rational” desire for more features and functions, toward a whole-brained understanding of how people behave, to support, as my colleague Jesse calls it, “human engagement.”

Anyway, I could go on, but I simply don’t have the time. In short, get the book, read it, engage with it, talk to it, take notes as you think about it, and enjoy it.

The “Connected” Meme Flourishes

At the beginning of March, I gave a talk in which I posited that we are in a “Connected Age” and that business must alter its practices accordingly. Shortly after, I found out that Dave Gray had recently written a blog post about the Connected Company, which then turned into its own blog and Google Group.

And now I hear about Tiffany Shlain’s new film, “Connected”, a documentary that is tag-lined “an autobiography about love, death, and technology,” and which seems to hit on many of the themes I’ve been mulling around the Connected Age.

(And in learning about the film, I also found out about the book Living Networks, about “leading your company, customers, and partners in the hyper-connected economy.”)

A Beneficial Reconsideration: The Original Ending of ELECTION

Over ten years ago, I blogged my thoughts about the film ELECTION (scroll down to July 30, 1999 — this was when I maintained the site by hand!), which was easily among my favorite films of the 1990s. I haven’t seen the movie since then, so I don’t know how it holds up. What I have seen, though, is the original ending (embedded below) to the film, which some lucky guy found in a box of videotapes (remember those?) he purchased at a flea market. If you’re a fan of the film, it’s definitely worth watching. It’s also clear why they shot a different ending — given the sardonic edge of the movie, what you see here feels like it was made for a Lifetime Original or something.

“Scientism”, rationality, and practicing design

Even though it was written by someone from frog, I have to give props to Ben McAlister’s “The Science of Good Design: A Dangerous Idea.” At some point soon-ish I plan on writing about some of the follies of how design consulting is bought and sold, and Ben hits on a key aspect — the need for certainty from the design process. The reason designers will cite research in a scientistic fashion is that clients seek dependability and predictability. This is one of many holdovers from Industrial Age bureaucratic thinking, and it will be among the most difficult to root out.

The challenge for practicing designers, though, is how not to come across as simply arbitrary. Design warrants a rationale, but should not be stifled by obsessively conforming to rationality.

Thoughts on CAVE OF FORGOTTEN DREAMS

Thanks to paid family leave, Stacy and I were able to duck out to a matinee of Cave of Forgotten Dreams, in 3D… with an 8-week-old in tow!

The subject matter of the film, the prehistoric art on the walls of the Chauvet Cave, is heart-stoppingly powerful. Bearing witness to creative and communicative output that is over 30,000 years old was emotionally overwhelming — more than once I nearly sobbed as I took in the imagery. This art connects us not with “a people” from over 30,000 years ago (a span of 1,500 generations), but with specific individuals. The quality of the art is stunning, and it becomes evident that we’re not so different from those Cro-Magnons who roamed Europe wearing reindeer hides to survive the ice age.

A perhaps more quotidian thought occurred to me, related to my recent writing on the Connected Age. The primary theme running through my latest work is that business must embrace humanity, human values, human ideals. And this movie makes evident that the creative impulse, manifested in visual art and music, is key to human-ness. The cave art seems to have two purposes — to help its makers understand and process the world around them, and to communicate what they experienced to others. Those activities of understanding and communicating are foundational in the Connected Age.

I’m of the opinion that Cave of Forgotten Dreams should be required viewing for, well, everyone. It would be hard to find a more universal film, even one told through the idiosyncratic perspective of Werner Herzog. The 3D is definitely a “feature” — though occasionally head-hurting, it’s essential for appreciating the undulations within the caves and how the artists used the topography in their work.

A billion dollar idea I’d have no idea how to make happen

While pursuing ideas related to the Connected Age, I’ve read some stuff on collaborative consumption and the sharing economy (think Zipcar, AirBNB, Freecycle).

The Industrial Age was defined, in large part, by ownership. In the Connected Age, we’re seeing a move to access.

This led me to think about “the cloud”, where our data is stored, and how Amazon’s EC2 exemplifies this shift to access. You don’t need racks and racks of your own computers — you just rent access to their service.

But then I thought about their major outage a few weeks back, and how the sharing economy could actually do EC2 one better. The problem with EC2 is that there’s still a singular point of engagement, and if it goes down, you’re hosed.

So, here’s the billion dollar idea I have no idea how to make happen. Provide Amazon EC2-like services, but with a SETI@home-like distributed computing model. There must be lots of spare cycles and storage out there, accessible through high-bandwidth connections. And with a distributed model, you could avoid the single point of failure that seemed to bring down EC2.
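To make the redundancy part of the idea concrete, here's a minimal, hand-wavy sketch in Python. Everything in it (the VolunteerNode class, the reliability numbers, the job names) is invented for illustration; it just shows how replicating a job across several volunteer machines sidesteps any single point of failure.

```python
import random

# Toy sketch of the idea: farm each job out to several volunteer nodes at
# once, SETI@home-style, so that no single machine (or data center) is a
# single point of failure. Every class and name here is made up.

class VolunteerNode:
    """Someone's spare computer, donating cycles over a broadband link."""

    def __init__(self, name, reliability):
        self.name = name
        self.reliability = reliability  # chance the node stays online for the job

    def run(self, job):
        if random.random() < self.reliability:
            return f"'{job}' completed on {self.name}"
        return None  # node dropped offline mid-job


def run_with_redundancy(job, nodes, copies=3):
    """Dispatch the job to several nodes at once; use the first result that returns."""
    while True:
        for node in random.sample(nodes, copies):
            result = node.run(job)
            if result is not None:
                return result
        # every replica dropped offline; pick a fresh set and try again


nodes = [VolunteerNode(f"node-{i}", reliability=0.7) for i in range(20)]
print(run_with_redundancy("render frame 42", nodes))
```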

I’m sure this would be an enormous technical challenge, but the upside seems just as big.

Skype and Microsoft Could Work Very Well…. or not

I’ve read a fair bit of head-scratching over Microsoft’s US$8.5 billion acquisition of Skype, particularly after the challenges Skype had after it was acquired by, and then released from, eBay.

Skype never made sense as part of eBay. eBay is about commerce. Skype is about communication. (Paypal, which is about money, fit with eBay perfectly, and now is possibly a more valuable asset than eBay.)

For Microsoft, the lion’s share of revenue is still generated by Windows and Office. But you cannot sleep on Xbox or Windows Phone 7. The Xbox 360, with Xbox Live and Kinect, is perhaps the leading entrant in the living-room-colonization race. In my household, we dropped DirecTV (“cut the cord”), purchased an Xbox 360 with Kinect, and signed up for Xbox Live. Thanks to streaming Netflix and ESPN (as well as an 8-week-old and a two-and-a-half-year-old, which means we’re home a lot and can’t really do much besides watch TV), our TV experience is about 95% Xbox-mediated.

And have you played with the voice controls for Netflix via Kinect? Sweeeeet.

And while Windows Phone 7 hasn’t made an appreciable dent in the smartphone race, it has a lot going for it — Microsoft’s giant piles of cash, Nokia’s commitment to using it, and the fact that it’s a cleverly designed smartphone OS that didn’t just try to mimic Apple (ahem, Android), but instead aims for something distinct and potentially meaningful.

Now, what does Skype have? According to the reports, 170 million active users. And around 30 million concurrent users online, and as you can see from this Skype fanatic’s recent blog post, that number has been rising faster and faster.

Skype is also positioned very well for the technological advances we know are coming, particularly 4G. 4G will make mobile video calls a reality, and Skype is far ahead of all competitors in terms of the quality and scale of its service. Additionally, as more and more televisions become connected to the internet (either directly or through some type of set-top box: cable/satellite, game console, or Roku-like device), and as those televisions become equipped with input devices (microphones, as we see with Kinect; cameras, as we’re starting to see with some TVs), Skype is ideally suited to be the tool that ties all this together.

It’s also worth noting that for voice and video calls online, Skype is the pre-eminent brand. This Wired article mentions how Windows Live Messenger offers the same functionality and has a much larger user base… But how many people are using Messenger’s voice and video capabilities? I would guess that the overwhelming majority stick with instant messaging (as they do with iChat, AIM, and Yahoo! Messenger).

So, Skype has the leading mindshare and technological base for an activity that, while currently popular, will simply explode over the next five or so years as 4G, broadband, and cameras and microphones become basic internet plumbing. Skype will make Microsoft’s current product lines more desirable, and its potential is too great to really fathom (it feels a bit like when Google bought YouTube).

Now, by no means is this a slam dunk. The biggest challenge Microsoft will face in making the most of Skype is organizational. If you read about the acquisition of Danger and the misery that was the Kin, you know that Microsoft can be something of a shit show, particularly when there’s internal competition. One hopes that they’ve learned from that horrid experience, but I also know that organizational change is really freakin’ hard. Microsoft would probably be best served by simply leaving Skype alone (as much as possible). Skype needs to be free to build for all platforms, but there are definitely opportunities where Skype can integrate with other Microsoft components to create something special and new.

We’ll see.