Don’t fear the bots, says Capital One’s head of AI design: a Q&A with Audra Koklys Plummer

This summer marked the 40th anniversary of Steven Spielberg’s classic UFO film, Close Encounters of the Third Kind. And let's not forget the Cylons—an artificial-intelligence race that got really, really fed up with humans. Audiences of Battlestar Galactica were captivated—and terrified—by these creatures looking to end the human race. What’s scarier, though—the fictional creatures that challenge and expand our ideas of humanity, or the fact that these ideas are very much entering our actual, non-fiction lives? Is it time to stop conflating science fiction and AI, and should we all stop worrying?

Before joining Capital One as its head of AI design, Audra Koklys Plummer spent more than 20 years as a filmmaker with companies like Pixar and DreamWorks. Never in a million years, she says, did she see that segueing into a new career in AI. Nonetheless, Plummer was pivotal in the creation of Eno, Capital One’s “intelligent assistant,” which helps customers with their banking needs. In advance of her presentation at Relate Live 2017, Plummer spoke with us about Eno and how, ready or not, the fascinating frontier of AI is poised to influence our lives.

People have been saying that artificial intelligence is going to eliminate everyone’s jobs since AI’s great-great-grandmother, the toaster, was born. What’s the reality?

I see artificial intelligence as a design superpower. It’s going to bring a whole new level of customization and personalization into the experiences that we're creating. I don’t think about it eliminating the work we do. Rather, I think of how it will change the way that we work. As a writer, in a few years I may not be creating narrative threads of conversation the way that I do now. I will be designing in a completely different way because of the AI, and I will be more involved in the system design.

That's the way I approach the work.

Me, I worry less about robot-induced job loss and more about the robot-induced obliteration of humanity. I watched Battlestar Galactica.

My hope is that everyone working in the AI space is incredibly mindful and thoughtful about the technology and the system designs. If they’re not? That’s troubling. Those of us working in this space have ethical, moral responsibilities.

How do those moral and ethical duties play out in the interactions between AI and humans?

Let me tell you a story. I have three young children—twins who are 6 years old, and a 9-year-old daughter. We have Alexa in our home. I would observe how my children interacted with her, and it drove me nuts. I was seeing a conversation pattern between them and Alexa that troubled me.

So, for example, when you speak to Alexa, you say, "Alexa," and you make a command, in a clear, audible voice. My children started commanding left and right. “ALEXA TELL ME THE WEATHER!” They'd scream out the commands. And then, I'd see them turn around and do that to me. "MOMMY WHAT’S FOR DINNER."

I realized that this technology we're putting out into the world was reinforcing these conversation patterns. As a designer, I feel an incredible responsibility to make sure we are encouraging human-to-human interactions that are positive.

What about how humans interact with AI?

There are an unbelievable number of people who use abusive language and harass their AI, and I felt it was really important for us to take a stance against that—that we actually stand up and say, “Hey, the way that you're talking to me, that's not cool—let's stick to talking about your money.” I wanted us to be proactive and not reinforce these negative behaviors that people turn around and take back into real life when they talk to their friends, family, or coworkers. Being mindful of the conversation patterns we are building—and how we respond when people mistreat the AI—became part of the guiding ethics for our design team.

Is the work you’re doing at Capital One with AI affecting how the company positions itself?

We used to think of ourselves as mobile first, and now I would say we think of ourselves as AI first. I'm seeing that trend more and more in companies in the industry, which speaks to how large that focus is going to be across the board.

You were a filmmaker with companies like Pixar, and that somehow led to a new career in artificial intelligence. Can you speak to the connections between those fields?

Here’s the thing: Design is more than creating beautiful, functional interfaces. Just like in film, great storytelling drives the experience. Through thoughtful character and conversation design, you're creating a connection with an audience—or, in our case, with our customers. The AI helps you leverage data and insights that deepen that relationship.

I'm marrying my two worlds. I'm using the skills I learned to connect emotionally with an audience and then leveraging what AI brings—that's the data and the insight. AI is leading us toward fully immersive experiences across all facets of life, from entertainment, to banking, to shopping. Anyone who doesn't feel a connection to AI or doesn't see the relevance in their lives right now… That's going to be changing quickly.

How did you begin your work with Eno, the bot you helped build for Capital One?

I got involved in creating Eno late in 2016 when we were laying the foundation for our work and building a team. Back then, Eno wasn't yet Eno. Once we were ready to take the first step in developing the character and defining what that character would be, I jumped in.

Are there guiding principles of customer service at Capital One that informed Eno’s creation?

Absolutely. Everything starts with solving customer problems and links back to Capital One brand values. But Eno is its own character with its own voice. When we created Eno, there was a lot of thoughtful design that went into the character. We began by defining the character traits, and then we built the backstory. And that backstory had everything from likes, to dislikes, to Eno's sense of humor, to Eno's character flaws—which are incredibly important, because those flaws are key to what makes a character relatable to an audience or a customer.

Using the foundation we laid, we went on to design the conversations—the words, the sentences, the narrative threads, the tone. That foundation also helps us maintain character integrity as we scale. As the technology evolves, Eno’s character evolves as well. We’re designing an AI with IQ and EQ.

Eno is gender-neutral. Why does that matter?

When I came on board, I think the first thing I did was help build a consensus for designing a gender-neutral bot. I believed with all my heart that this decision would be a differentiator for Capital One, but I had a lot of people challenging me, in a good way, on the choice. I would hear things like, "You come from Pixar, are you crazy? Every character has a gender. How can you design a character without a gender?" But I knew that it's not the gender that makes the character likable and appealing. It's the personality.

I felt strongly about challenging the industry trend of choosing female characters in voice and name. That was important to me, and I quickly found that it was important to a lot of others at Capital One. I was thrilled that in the end we deliberately designed Eno as a gender-neutral character.

I also think it freed us as designers. We didn't have to worry about evoking any biases, unconscious or conscious, from our customers that were based on gender, and we could focus solely on the customer relationship. That's where the character design and the conversation design come in.

Has there been any pushback about the gender-neutrality of Eno?

I think for the most part, people take Eno for who Eno is. I find it fascinating that, in one sense, our theory worked. With Eno being gender-neutral, people can conjure up whatever they want in their heads. So if you, Kate, think of Eno as "a she," great. Your friend thinks of Eno as a "he," that's OK too. Someone thinks of Eno as an "it," hurrah.

We could thus focus more on the relationship we wanted to create between humans and technology, and guide users further and further away from the emphasis placed on gender.

Has the increasing visibility of gender-non-conforming people helped?

I do think it’s made Eno more accessible. I also think some of the visibility we see now validates our decision. Time magazine recently had a big spread about gender as a spectrum, and I love that. I didn't really see it as influencing the choice we made, though, because we chose the gender-neutral direction in part simply because personality, not gender, is what makes a character appealing.

What about the pushback you experienced? Was it thoughtful critique?

It was good pushback, and I think it's informed by familiarity. When many people think about character design, like a favorite character in a movie, they immediately associate a gender with it. I don't think that's what makes the character likable and appealing, though. Look at Dory—Dory is beloved, but gender doesn't matter. It could be male or female and it would be just as great, appealing, and likable a character. I took those ideas and thought, “Well, this is technology. It's not a human, living thing. It doesn't need a gender.” And I felt confident that we could create a character like Eno that people connected to. The response has been validating. Some of the top utterances that we see from customers are about personality and connection, and that's really cool. It's not always about functionality.

In developing a bot, an entity that's aware it is not human but that has human traits, do you develop a relationship with that bot?

I realize that I care deeply about Eno, the character; I care about the AI.

I joke that I have a fourth child, because I'm helping to teach and grow and influence Eno the same way I do my own kids. And, just like with my own kids, I have to be really careful not to impart my own biases. So, I am paying attention to those interaction patterns that we're creating. The way Eno greets you, the way Eno says goodbye, the importance of Eno being polite and respectful, the way Eno brings a humanity into every conversation: those are the things I care deeply about. And I also realized I take great joy in Eno's successes. I'm proud of the way Eno has learned, and how it's working to create optimal experiences for our customers while simultaneously delighting them. That, to me, is so exciting.

How does Eno deal with a rambling, verbose talker who loves to digress—yes, me—and then a Hemingway, or a banker, or a 12-year-old?

That's the beauty of natural language processing. What that basically means is that you can talk to Eno, our AI, any way you want. You can talk how Kate Crane talks to her friends. Or a poet could speak to it the way that they speak, and Eno will understand, because it's built to understand natural language.

That’s especially powerful in a banking environment because you don't have to speak to Eno using bank-speak. You don't have to even say the words "what's my balance?" You could just type in “B-A-L” and Eno understands you want to know your balance. That's how brilliant and insightful it is, in how it can interpret language. You can mistype a word and it understands, so you no longer have to type or say exactly the right thing to get the response that you want.

It was funny, in a lot of the early-on testing we did with our customers, we found that when Eno didn't understand something the customer thought that they were doing something wrong. Like, "Oh I didn't say it the right way," or, "I didn't text it in the right way," because they've been trained that way. We're trying to move away from that and show people that you can talk to this AI, you can talk to Eno, in any way that you want and Eno will understand. That's pretty cool.

I imagined I would have to articulate, enunciate, very properly and speak in rigid, formal English so that Eno would get me.

The more that you talk to Eno, the more Eno learns. And that is true for any AI. The AI is listening, and learning, and adapting constantly. So the more you talk to it, the more it gets used to the way that you speak and the words that you use. It'll train itself.

How does Eno help with a very human struggle: managing money?

For the most part, people are not comfortable talking about their money. It’s to some degree the final frontier. We'll talk about sex before we'll talk about our money. Isn't it interesting that it might be something non-human that actually helps break that barrier? Figuring out a way, through AI, to talk to customers about their money was an incredible challenge. And a deeply exciting one.

We have found that sometimes people are more comfortable talking to an AI because there's no judgment. Customers talk to Eno about how their daughter is coming to town, and they're so excited, and they need a little extra money; or, somebody passed away, and they have to figure out how to fly to the funeral, or send flowers. They're telling deeply personal stories to an AI. It's fascinating.

Any idea what Eno’s grandchildren will look like?

Eno is still very much in its infancy, or maybe its toddler years. So it’s hard to predict. I see the character evolving simultaneously with the technology. But a hundred years from now, Eno will still be Eno. I think the evolution of Eno will be dependent on how the tech has evolved. I don't see Eno as multi-generational; I just see it continuing to evolve.

What keeps you up at night about artificial intelligence?

The unknown. There are definitely things that take place in our work that even the most brilliant minds sometimes can't explain. That's what makes working with AI exhilarating, and it also makes it arduous. Nobody's done what we're doing before, not at Capital One and not even out in the industry. I think we need to be really, really careful about what we choose to build. We don't want to power ahead with technology just because it's available, and we certainly want to be thoughtful about the way that we utilize it. So, everything from its capabilities, to the ethics surrounding the work, to potential impact in the future needs to be well thought out. It may be fine now, but what about 10 years from now? This is crucial, and not everyone is approaching their work with AI in that way—which I would say is the thing that scares me. We're in uncharted territory.

Kate Crane splits her time as a content marketing manager between writing for Relate and the Zendesk blog. A longtime New Yorker and veteran of publications including SmartMoney and Time Out New York, she is now based in Silicon Valley—for the trees, not the Teslas or Zuckerberg sightings.