Rise of the bots: Leading experts discuss the latest in chatbot technology
Conversational bot technology, like Intercom's Operator, helps businesses go beyond human limits to connect with more prospects and customers.
When someone comes to your site, Operator can launch task bots that do everything from qualifying leads and booking meetings to sharing product and content recommendations. Operator also powers Answer Bot (now even more powerful and called Resolution Bot), an intelligent bot that automatically answers your customers’ common questions, improving your team’s time to first response and freeing them up to handle the more complex issues that only humans can tackle.
A lot of our thinking on automation and chatbots has been informed by conversations we’ve had with experts in the field over the last few years. In today’s episode, we’re featuring the best bits of those conversations. You’ll hear from:
- Microsoft’s VP of AI and Research, Lili Cheng
- Conversational design expert and author, Erika Hall
- Intercom’s machine learning expert, Fergal Reid
- Close founder, Steli Efti
To hear each of these conversations in full, check out episodes of our podcast. You can also subscribe to the show on iTunes, follow us on Spotify or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.
Lili Cheng: Will chatbots replace humans?
You might have come across headlines proclaiming the death of jobs as evil robot workers replace humans. We think this is a false tradeoff. We think conversational automation will augment support and sales jobs, not replace them. It will help these teams scale their expertise and focus their time where it matters most. Lili Cheng explains how she sees chatbots working alongside humans.
Adam Risman: Looking at AI in general, I think there’s a conception that this technology is supposed to replace something. But perhaps technology and people are supposed to work hand in hand where a single option maybe isn’t the most efficient answer (for example, when a customer is doing some light investigation into a product and there are a lot of quick, repetitive answers that aren’t the best use of a human employee’s time). How do you see humans and bots coexisting?
Lili Cheng: I see them as one and the same system. One of the most common things we have people add to a bot is what we call “person in the loop,” which means that when you build an AI system, especially for a company, you’re often trying to do something specific for that company. Unlike Microsoft or Amazon or Google, you might not have tons and tons of data around your customers’ interactions, because you’re just trying to sell an insurance policy or help somebody with their medical process.
It’s important when you’re building a conversational experience not to let your technology limit what a user does. What I mean by that is if you launch an AI service and it can only do one thing and nothing else at all, you might not learn what your customers really want. You might be teaching your customers that you only do one limited thing. Typically, people will ask for a wide variety of things. We encourage companies to assess what their systems can do – and if a system can’t do something, hand it off to a person who can make sure that you don’t lose a customer in that experience.
If your system can’t do something, hand it off to a person who can make sure that you don’t lose a customer in that experience.
People are great at learning new things, handling ambiguity, and tackling complex problems. The idea is that it’s important to pair bots, which can do repetitive tasks and solve a lot of the simple problems people have, with employees who can do more complex and interesting things.
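As a rough illustration of that “person in the loop” pattern, here’s a minimal sketch in Python. Everything in it is hypothetical – the intent classifier, the canned answers, the handoff function – and it’s a sketch of the general idea, not Intercom’s or Microsoft’s actual implementation.

```python
# A minimal, hypothetical sketch of the "person in the loop" pattern:
# the bot answers only when it is confident it understood the request,
# and defers everything else to a human agent.

CONFIDENCE_THRESHOLD = 0.75  # below this, the bot hands off to a person

CANNED_ANSWERS = {
    "reset_password": "You can reset your password under Settings > Security.",
    "billing_question": "Our billing FAQ covers the most common charges.",
}

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for a real intent classifier; returns (intent, confidence)."""
    if "password" in message.lower():
        return "reset_password", 0.92
    return "unknown", 0.30

def route_to_human(message: str) -> str:
    # In a real system this would open the conversation with a support agent.
    return "Let me connect you with a teammate who can help with that."

def handle_message(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_ANSWERS:
        return CANNED_ANSWERS[intent]  # the repetitive case the bot can handle
    return route_to_human(message)     # ambiguous or novel: don't lose the customer
```

The key design choice is the explicit threshold: the bot never bluffs, so the worst case is a handoff rather than a lost customer.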
Adam: Thinking back to a few years ago, chatbots got a lot of criticism – maybe they fell flat, or maybe they were too general purpose. Have you seen these applications become more people-focused?
Lili: It’s interesting. If you go all the way back to 1995, one of the things we learned was that we were just early. Consumers weren’t used to chatting. They barely had email accounts, and the internet was pretty slow back then. So the people who were chatting online were a small segment of the total number of people communicating. That’s been one of the biggest changes. Today, pretty much anyone who has a phone gets text messages. People are used to feeds, email, and instant messaging. You can’t imagine life without these tools today. Although they’re popular, people aren’t necessarily used to communicating with businesses in these tools.
But sometimes there’s an experience that changes your mind, where you say, “Wow, that was just awesome. That totally saved me time, or that experience was so much better.” That change encouraged you to try others and use them more and more. I think you’re going to see that a lot with conversational experiences.
Erika Hall: How should businesses design chatbot interactions?
It’s pretty clear how chatbots can free up your team’s time. But where chatbots have fallen short in the past is the poor user experience they’ve provided. We’ve all seen our share of poorly designed bots that can’t hold up their end of the conversation and turn out to be more frustrating than helpful. The efficiency gains from chatbots are useless if they end up costing you customers and users.
One discipline that has spent a lot of time thinking about how chatbot interactions can be improved is content design. Content designers are responsible for ensuring that your product’s interface language helps your users use it effectively. When it comes to chatbot interactions, content designers think deeply about something they call conversational design – design that mimics human conversation.
Erika Hall, co-founder of the design studio Mule Design, is one of the pioneering thinkers on conversational design for chatbots. She shares what effective chatbot interactions look like.
Adam Risman: When we say conversational design, we’re not just talking about what happens within a messenger. What falls under that umbrella to you? Because I think it’s a wider definition.
Erika Hall: I’m taking a deeper look at the mechanics and principles that make human conversation possible, and extending those as a way of thinking about interaction design and interface design – to make it more device-independent and more natural for people, in a way that doesn’t always involve talking to your computer or having a chat with your computer.
I’m looking at human conversation as a model for all interactions with digital systems, because right now digital systems are inserting themselves into every realm of human activity. Every relationship, every transaction can now be mediated through a digital system. So we should look at why interacting with people works as well as it does, and apply those principles so that interactions feel human and humane, not like a bad exchange with a machine.
I’m looking at human conversation as a model for all interactions with digital systems
Adam: One of the things I really enjoyed about your book Conversational Design is its principle that conversation is actually the original interface, which makes total sense. What is it about conversation that you feel is being lost today when you’re interacting with the digital experience? What are the core principles that maybe we’ve lost sight of over time?
Erika: One of the key principles is the idea of having a shared goal, because that’s one of the things that makes conversation work between or among people. It’s a miracle when you think about it: people are intelligent systems walking around, and you can’t directly see what’s in somebody else’s mind, but as long as you speak the same language you can very quickly exchange information. If you’re in a strange city and you walk up to somebody on the street, you can ask them for directions, and there’s a protocol that makes that possible.
There are conventional phrases we use. There’s a tacit agreement that it’s okay to make that request. If you walked up to somebody on a street corner in New York and asked them how to get to the Empire State Building, I don’t think anybody would be appalled or think it was strange for you to do that. It would be, “Oh, that’s a totally okay thing.” And you think: well, what makes it work? What makes it okay to walk up to any stranger and ask that question, but perhaps not another question – not a personal question? There are all of these unspoken rules.
If we look at what’s beneath those rules, we can ask: how do we build a system that makes it very clear what it allows you to do, what’s okay to do, and what won’t work? It’s about how you establish that sense of a shared goal, because that’s what makes conversation work.
If you were to ask somebody for directions and they were to spin off on a tangent about architectural history, that would be strange and antisocial – you would never expect somebody to do that. It would almost be a hostile act. If you said, “I need to get to my friend’s office in the Empire State Building. Can you point me in that direction?” and that person wasted your time, you’d think it was some sort of violation, that it was actually kind of rude.
There are so many digital systems that do that, right? You go to the system with an intent, and the system diverts you – with advertising, with irrelevant information, or by withholding the basic information you need to have a successful interaction. It’s really about looking at why it can be so comfortable to interact with people and so much less comfortable to interact with computers, and how we can make the latter more like a good interaction with another person, because now we’re interacting with computers for things we used to rely on people for, even ordering a pizza.
Adam: I think the directions example is interesting because there’s something you didn’t explicitly say there, but it’s heavily implied: trust. You’re trusting that this person you’re making eye contact with and asking for help will guide you in the right direction. I think that’s particularly relevant today, when we work with digital systems for our finances, our healthcare, and all the things that are incredibly sensitive.
Erika: Absolutely. There are a lot of systems that violate those principles – not even intentionally, but because the designers, developers, and writers don’t think about it like that. Even to this day, with all this talk about human-centered design, we’re still designing in a very device-centered way.
We still think screens first. Even when we think about having voice interactions, we’re still thinking about interacting with the device first rather than saying “let’s set aside whatever hardware, whatever software, and just think about what kind of exchange is going to happen between the system and the individual person, customer, user or human.”
Adam: Say five years from now, what are you hoping people will do or think about differently as a result of reading this book?
Erika: I would say: think less about the interface and more about where the actual value is in the system. Don’t think “I’m making a chatbot.” Don’t think “I’m making a voice interface.” Don’t think “I’m making a mobile app.” Think “I’m creating a system that provides real value to people, that can be expressed in words, and that’s as easy as or easier to interact with than having a friendly human being there ready to do your bidding.”
Fergal Reid: Has machine learning finally reached its potential?
Advances in machine learning technology are making chatbots more versatile and capable of handling different user scenarios. These improvements are causing businesses to take a second look at chatbots and consider how they can improve efficiency while preserving a positive, consistent customer experience.
Late last year Intercom’s cofounder Des Traynor spoke with machine learning expert Fergal Reid about the progress he’s seeing in the field, as well as the gaps that still need to be closed.
Des Traynor: When it comes to machine learning, you’ve said there are some things that we’re surprisingly good at now and there are problems that are more solvable than they were in, for example, the year 2000.
Fergal Reid: That’s true. It’s very real and exciting. One great example of this is computer vision. For decades, people were coding algorithms almost by hand, manually coding things to detect features of an image. They tried to detect straight lines and edges in a very manually coded way to recognize a bicycle or a bird in a picture. The success was never quite what we wanted. It was always easy to produce a compelling demo, but hard to produce a system that worked – one you could ship and put out in the wild.
In the last five or so years, we’ve really crossed a threshold in computer vision. We now have acceptable accuracy. You can ship Google Photos with a built-in object recognizer to 100 million smartphones, and most of the time it works. There are hiccups and problems, but it’s hit this acceptable error bar for the end user. That’s obviously been one huge success story.
Other big success stories have been in audio recognition and natural language translation. What all these success stories have in common is that we’re much better at understanding unstructured data – data where things aren’t nicely labeled and classified, data that looks like a big image full of pixels or a big sound file full of bits and bytes. We’re much better at taking this unstructured data and turning it into structure than we were five years ago.
This is because of something called deep learning, which is a breakthrough machine learning technology. You could also say it’s an old machine learning technology that’s finally come good. We finally have enough computation power and good techniques to realize its potential.
Des: Is there now like a prototypical example of a problem where we’re still struggling? Like if image recognition or vision is going well, is there a corresponding area where we have yet to really make a dent?
Fergal: There are a lot of domains we haven’t yet cracked. It’s one thing to look at unstructured data, where you have 100 million photos and over time you’ve learned to recognize the objects in them, but there’s a huge amount of things we’re not even close to yet.
For example, consider talking to a chatbot, where a chatbot generates fully natural responses like a human. We’re definitely not at the stage where we have a system that’s intelligent and can hold the context of a conversation.
Des: The distinction you’re drawing there is something that generates responses on its own versus something that can make selections from a pre-configured answer bank, right? You’re saying we’re not at a stage where we’ve built a chatbot that can actually create, conceive and return an answer that’s appropriate.
Fergal: Exactly. We’re not yet at the level where we have anything that requires a general understanding of the domain. We have very powerful techniques for taking unstructured data and compressing it down to a simple representation that we can then use to say, “This looks like a cow, or this looks like a dog, or this looks like the word ‘hello.’” That’s a very limited, constrained task – unlike something that requires contextual understanding.
Basically, there’s a small number of problems for which we have figured out good solutions, and a much, much larger number of problems that we’re not anywhere close to solving.
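To make the “answer bank” distinction Des draws above concrete, here’s a minimal Python sketch of a retrieval-style bot: it selects the closest curated answer rather than generating one. The TF-IDF matching and the sample Q&A pairs are purely illustrative assumptions, not how any particular product works.

```python
# A sketch of selecting from a pre-configured answer bank, rather than
# generating free-form text. Real systems use far better representations,
# but the architecture is the same: embed, compare, pick the best match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Curated question/answer pairs written by the support team (illustrative).
BANK = [
    ("How do I reset my password?", "Go to Settings > Security and click Reset."),
    ("How do I cancel my subscription?", "You can cancel any time from Billing."),
    ("Do you offer a free trial?", "Yes, every plan starts with a free trial."),
]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform([q for q, _ in BANK])

def answer(query: str, min_similarity: float = 0.3) -> str | None:
    """Return the curated answer for the closest known question, or None."""
    scores = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
    best = scores.argmax()
    # Only answer when the match is strong; otherwise stay silent and let
    # a human pick the conversation up.
    return BANK[best][1] if scores[best] >= min_similarity else None

print(answer("I forgot my password"))  # -> the password-reset answer
```

Because the bot can only ever return an answer a human wrote, the failure mode is “no answer” rather than a confidently wrong one.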
Des: There are some people who might say, “If this works 10% of the time, that’s a win.” There are other cases where you might see some AI get it right 51% of the time, and be blind to the fact that 49% of your customers are now having a horrible experience. There’s a certain point where it’s cost effective for the business to release the AI into the wild; however, I worry that those two bars might be quite far apart in some sense.
Fergal: There’s a product development tactics question here: what products should you choose to ship? If you’re trying to ship a machine learning product, you really want to ship one where there’s a good tolerance for occasionally getting things wrong.
For example, Google recently shipped smart replies for Gmail. They unobtrusively provide suggested replies at the bottom of your email. If one of the replies isn’t very good, it doesn’t matter. If one of the replies is good, the user clicks on it and it saves some time. That’s a really nice way to deploy a machine learning product. Rather than saying, “It’s going to respond on your behalf,” it simply suggests options.
Des: It suggests things I should say, and worst case, I won’t use the suggestions.
Fergal: Exactly. A successful machine learning product picks its battles carefully. It’s about choosing to ship something that has a high tolerance for occasional errors baked into the nature of the product. Even if you want to ship something that acts on the user’s behalf, getting manual approval is a sound approach.
What’s the bar for success? It depends on the product. A good product manager has to be very thoughtful about shipping pieces that have that affordance and the robustness to withstand occasional bad behavior.
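Here’s a small, purely illustrative sketch of that “suggest, don’t act” pattern: the model proposes a few replies, but nothing is sent until the user explicitly picks one. The suggestion function is a stand-in for a real model.

```python
# Sketch of suggestion with manual approval: a bad suggestion costs nothing,
# because the user can always ignore it and write their own reply.

def suggest_replies(incoming_email: str) -> list[str]:
    """Stand-in for a trained model; returns a few candidate replies."""
    return [
        "Sounds good, thanks!",
        "Let me check and get back to you.",
        "Can we talk tomorrow?",
    ]

def compose_reply(incoming_email: str) -> str:
    suggestions = suggest_replies(incoming_email)
    for i, text in enumerate(suggestions, start=1):
        print(f"[{i}] {text}")
    choice = input("Pick a suggestion by number, or type your own reply: ")
    # The model never sends anything on its own; the user's explicit
    # choice is the manual approval step.
    if choice.isdigit() and 1 <= int(choice) <= len(suggestions):
        return suggestions[int(choice) - 1]
    return choice
```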
Des: So when we’re looking for a product feature that’s really well positioned to make use of these technologies, a simple requirement would be that the AI should augment, but not replace, anything that exists today. If you can make things easier for the user – simplify things, reduce things to a click – but don’t click on their behalf, that’s a good start.
AI should augment, but not replace, anything that exists today
Fergal: That’s a fair summary, but it depends on the domain. Take self-driving cars. People speculate that there’s a cliff: if self-driving cars are good but not perfect, we’re actually worse off than when we started.
Steli Efti: Conversational automation in practice
So how do chatbots work in practice? Earlier this year, we spoke with Steli Efti, who’s the founder of the popular inside sales CRM software, Close. We got his take on how chatbots can be used to qualify sales leads and how businesses can evaluate their impact.
Adam Risman: When it comes to qualification and that idea of listening deeper, we’re also seeing chatbot experiences come into play, with messengers on sites delivering a higher volume of leads. Automation is great and can make it easier to get in touch, but at the same time there are a lot of human aspects that simply can’t be replaced. What’s your advice on how to best incorporate these technologies without becoming overly reliant on them?
Steli Efti: My biggest recommendation is that people should try it. They should try having chat technology on their website or in their app. They should try automation, but they should really focus not just on tracking the numbers but on treating these things as experiments that need to be evaluated from 360 degrees.
Let’s say I have a website that has a lot of traffic, and I have a form people can fill out to request a demo or ask some questions. At the same time, I introduce a chat window and maybe we can get a qualification process going where a chatbot is asking a few questions and then prompting the user toward a demo. When you try this out, it’s really important to track the numbers but also to check in with the sales team a month later and ask: “How are the leads we send you through the chatbot different than the leads that come through the form? Have you seen any kind of quality issues? Has that interrupted your workflow because of the way that we send them to you?” It’s crucial to understand how the sales team feels about this and what kind of stories they have to tell.
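As a purely illustrative sketch of the experiment Steli describes, here’s what a scripted qualification flow might look like in Python, with each lead tagged by source so you can later put chatbot leads and form leads side by side. All the questions and field names are hypothetical.

```python
# A scripted qualification flow for a chat widget, plus the funnel
# comparison that makes it an experiment rather than just a feature.

QUALIFICATION_QUESTIONS = [
    ("team_size", "How many people are on your sales team?"),
    ("use_case", "What are you hoping to improve?"),
    ("timeline", "When are you looking to get started?"),
]

def qualify_via_chat(ask) -> dict:
    """Run the scripted flow; `ask` is whatever prompts the visitor in chat."""
    lead = {"source": "chatbot"}  # tag the source so the comparison is possible
    for field, question in QUALIFICATION_QUESTIONS:
        lead[field] = ask(question)
    wants_demo = ask("Want to book a demo? (yes/no)").strip().lower() == "yes"
    lead["next_step"] = "demo_booked" if wants_demo else "nurture"
    return lead

def conversion_rate(leads: list[dict], source: str) -> float:
    """Share of leads from a given source that booked a demo."""
    subset = [lead for lead in leads if lead["source"] == source]
    booked = [lead for lead in subset if lead.get("next_step") == "demo_booked"]
    return len(booked) / len(subset) if subset else 0.0

# A month later: compare the numbers, then pair them with the sales
# team's qualitative feedback on lead quality, as Steli suggests.
# conversion_rate(all_leads, "chatbot") vs. conversion_rate(all_leads, "form")
```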
You also have to come at it from a visitor angle; you should actually survey people who visit and exit your website about their experience with chat. I’ve heard many, many times about people going to a site, interacting with a chat widget, and not being happy with it. I had this experience myself, and to me, it’s not the chat window, or the bot necessarily; it’s the way it’s implemented.
A lot of times, we as an industry get overly excited about a new technology but aren’t mindful about how we implement it. “Nobody is converting on our website. Let’s just use an A/B testing tool, and all our problems will be solved.” That’s not actually true: if you don’t have your value proposition or your ideal customer figured out, and your traffic is really poor, then no matter how many A/B tests you run, you still have fundamental issues.
Attaching AI to anything in SaaS is the new thing that everybody thinks is going to solve their problems. No, it’s not. It’s a tool, and if you use a tool in the right context it might help tremendously. But it also might not make a big difference. You have to test to figure it out. I see too many chat apps – too many bots – implemented in a way that’s not thoughtful, generating results that aren’t successful. Those tools are awesome, and hopefully they help increase customer intimacy, which is something I care deeply about. But I would also warn people about getting overly excited about the tool itself. A missing tool is never the reason something isn’t working. A tool can advance or improve something that’s already working, but it usually won’t fix something that’s broken.