Lessons From Air Canada’s Chatbot Fail
Air Canada tried to throw its chatbot under the AI bus.
It didn’t work.
A Canadian court recently ruled Air Canada must compensate a customer who bought a full-price ticket after receiving inaccurate information from the airline’s chatbot.
Air Canada had argued its chatbot made up the answer, so it shouldn’t be liable. As Pepper Brooks from the movie Dodgeball might say, “That’s a bold strategy, Cotton. Let’s see if it pays off for ’em.”
But what does that chatbot mistake mean for you as your brand adds these conversational tools to its website? And what does it mean for the future of search, and for you, when consumers use tools like Google’s Gemini and OpenAI’s ChatGPT to research your brand?
AI disrupts Air Canada
AI seems like the only topic of conversation these days. Clients expect their agencies to use it, as long as that use comes with a big discount on their services. “It’s so easy,” they say. “You must be so happy.”
Boards at startup companies pressure their management teams about it. “Where are we on an AI strategy?” they ask. “It’s so easy. Everybody is doing it.” Even Hollywood artists are hedging their bets by looking at the newest generative AI developments and saying, “Hmmm … do we really want to invest more in humans?”
Let’s all take a breath. Humans are not going anywhere. Let me be super clear: AI is NOT a strategy. It’s an innovation looking for a strategy. Last week’s Air Canada decision may be the first real-world demonstration of that.
The story starts with a man asking Air Canada’s chatbot if he could get a retroactive refund for a bereavement fare as long as he provided the proper paperwork. The chatbot encouraged him to book his flight to his grandmother’s funeral and then request a refund for the difference between the full-price and bereavement fares within 90 days. The passenger did what the chatbot suggested.
Air Canada refused to give a refund, citing its policy, which explicitly states it will not provide bereavement refunds after the flight is booked.
When the passenger sued, Air Canada’s refusal to pay got more interesting. It argued that the chatbot was a “separate legal entity” and, therefore, Air Canada shouldn’t be responsible for its actions.
I remember a similar defense in childhood: “I’m not responsible. My friends made me do it.” To which my mom would respond, “Well, if they told you to jump off a bridge, would you?”
My favorite part of the case was when a member of the tribunal said what my mom would have said: “Air Canada does not explain why it believes … why its webpage titled ‘bereavement travel’ was inherently more trustworthy than its chatbot.”
The BIG mistake in human thinking about AI
That is the interesting thing about this AI challenge of the moment. Companies mistake AI for a strategy to deploy rather than an innovation that serves a strategy they have already deployed. AI is not the answer for your content strategy. AI is simply a way to make an existing strategy better.
Generative AI is only as good as the content — the data and the training — fed to it. Generative AI is fantastic at recognizing patterns and predicting the probable next word. But it’s not doing any critical thinking. It cannot discern what is real and what is fiction.
Think for a moment about your website as a learning model, a brain of sorts. How accurately could it answer questions about the current state of your company? Think about all the help documents, manuals, and educational and training content. If you put all of that, and only that, into an artificial brain, only then could you even begin to trust the answers.
Even then, your chatbot likely would deliver some great results and some bad answers. Air Canada’s case involved a minuscule challenge. But imagine when it’s not a small mistake. And what about the impact of unintended content? What if the AI tool picked up that stray folder in your customer help repository, the one with all the snarky answers and idiotic responses? Or what if it found the archive that details everything wrong with your product or its safety? AI might not know you don’t want it to use that content.
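To make that risk concrete, here is a minimal sketch in Python of the kind of ingestion script that feeds a chatbot’s knowledge base from a help-content repository. Everything in it (the customer-help folder, the snarky-answers folder, the exclusion list) is hypothetical; the point is that unless a human spells out which folders are off-limits, the script treats every document as equally trustworthy:

```python
from pathlib import Path

# Hypothetical layout: polished help articles live alongside folders the
# chatbot should never learn from. All names here are made up.
HELP_ROOT = Path("customer-help")
EXCLUDED_DIRS = {"snarky-answers", "known-defects-archive"}

def collect_training_docs(root: Path) -> list[Path]:
    """Gather every markdown document except those inside excluded folders."""
    docs = []
    for path in root.rglob("*.md"):
        # Remove this check and the bot happily ingests the snark, too;
        # the model itself has no way to know which folders are off-limits.
        if EXCLUDED_DIRS.isdisjoint(parent.name for parent in path.parents):
            docs.append(path)
    return docs

if __name__ == "__main__":
    for doc in collect_training_docs(HELP_ROOT):
        print(f"Would ingest: {doc}")  # stand-in for loading a knowledge base
```

The exclusion list is an editorial judgment only a human can make; the AI will never add the snarky folder to it on its own.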
ChatGPT, Gemini, and others present brand challenges, too
Publicly available generative AI solutions may create the biggest challenges.
I tested the potential for problems. I asked ChatGPT to give me the pricing for two of the best-known CRM systems. (I’ll let you guess which two.) Then I asked it to compare the pricing and features of the two similar packages and tell me which one might be more appropriate.
First, it told me it couldn’t provide pricing for either of them, yet it included the pricing page for each in a footnote. I pressed further and asked it to compare the two named packages. For one, it gave me a price 30% too high, failing to note the package was now discounted. For the other, it still couldn’t provide a price, saying the company did not disclose pricing, yet it again footnoted the pricing page where the cost is clearly shown.
In another test, I asked ChatGPT, “What’s so great about the digital asset management (DAM) solution from [name of tech company]?” I know this company doesn’t offer a DAM system, but ChatGPT didn’t.
It returned an answer explaining that this company’s DAM solution was a wonderful, single source of truth for digital assets and a great system. It didn’t tell me it had paraphrased the answer from content on the company’s webpage highlighting its ability to integrate with a third-party provider’s DAM system.
Now, these differences are small. I get it. I also should be clear that I got good answers to some of my harder questions in my brief testing. But that’s what’s so insidious. If users expected answers that were always a little wrong, they would check their veracity. But when the answers seem right and impressive, even though they are completely wrong or accurate only by accident, users trust the whole system.
That’s the lesson from Air Canada, and a preview of the challenges coming down the road.
AI is a tool, not a strategy
Remember, AI is not your content strategy. You still need to audit your content. Just as you’ve done for more than 20 years, you must ensure the entirety of your digital properties reflects the current values, integrity, accuracy, and trust you want to instill.
AI will not do this for you. It cannot know the value of those things unless you give it the value of those things. Think of AI as a way to innovate your human-centered content strategy. It can express your human story in different and possibly faster ways to all your stakeholders.
But only you can know if it’s your story. You have to create it, value it, and manage it, and then perhaps AI can help you tell it well.
Cover image by Joseph Kalinowski/Content Marketing Institute