
Natalie Charles

With AI powering more marketing efforts than ever before, it’s easy to see why so many brands are leaning on it to streamline processes, personalise experiences, and automate content. But while AI can deliver impressive outputs and efficiency, it can also have unintended consequences when it’s not set up, tested, or monitored properly. From misinterpreting brand tone to mishandling sensitive information, the potential for AI to miss the mark is a very real concern and not something to be taken lightly.

So, let this article serve as both a cautionary tale and an interesting showcase of some of the most spectacular AI marketing fails, as a reminder of what can happen when artificial intelligence is used with just a little too much blind faith!

#1) Coca-Cola’s #MakeItHappy campaign

Back in 2015, Coca-Cola launched the #MakeItHappy campaign, aiming to spread positivity online by transforming negative tweets into playful ASCII art (images made from text characters). The campaign used a Twitter bot to take tweets tagged with #MakeItHappy and convert them into fun and positive text-based images. The idea was to show Coca-Cola as a brand that champions happiness, but things went south pretty quickly…

What went wrong?

The campaign backfired spectacularly when internet pranksters hijacked it. Some users began tweeting lines from Adolf Hitler’s “Mein Kampf”, and the bot, following its programming, turned the offensive messages into “happy” ASCII images. Soon, Coca-Cola’s bot was tweeting cheerful depictions of problematic text, leading to a total PR disaster. Coca-Cola swiftly axed the campaign, but not before the incident had gone viral and drawn significant backlash.

Lessons for marketers

Okay, so while this wasn’t exactly an AI fail, it’s still a good reminder that any automated tech—AI or not—needs the right safeguards. It’s also a classic example of how carelessness in automation has been tripping brands up for at least a decade!

The key takeaway here is to make sure your campaigns are tested thoroughly for potential flaws, especially when they’re open to public interaction. In this case, more rigorous pre-launch testing would likely have uncovered the vulnerability in time to fix it.
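To make that concrete, here’s a minimal sketch of the kind of safeguard that was missing: screening user-submitted text before an automated campaign amplifies it, and routing anything suspicious to a human moderator instead. The blocklist and function names here are purely illustrative assumptions, not Coca-Cola’s actual setup.

```python
# Hypothetical sketch: screen user-submitted text before an automated
# campaign reposts it. The blocklist and flow are illustrative only.

BLOCKLIST = {"mein kampf", "hitler"}  # in practice: a maintained moderation list or service


def is_safe_to_repost(text: str) -> bool:
    """Return True only if the text contains no blocklisted phrases."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)


def handle_submission(text: str) -> str:
    """Decide what the campaign bot does with a user submission."""
    if is_safe_to_repost(text):
        return "repost_as_ascii_art"
    # Anything suspicious goes to a human moderator instead of being amplified.
    return "queue_for_human_review"


if __name__ == "__main__":
    print(handle_submission("Make it happy!"))         # repost_as_ascii_art
    print(handle_submission("quote from Mein Kampf"))  # queue_for_human_review
```

A simple allow/deny gate like this wouldn’t catch everything, which is exactly why the human review queue matters, but it would have stopped the most obvious abuse from ever reaching the brand’s feed.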

#2) McDonald’s Drive-Thru oddities

In 2021, McDonald’s started testing AI-powered voice recognition in its drive-thrus in an attempt to speed up service and improve the accuracy of orders.

What went wrong?

The AI system was set up to understand customer orders, but it often misunderstood requests, recording orders for hundreds of McNuggets or completely mixing up customisations. These strange errors went viral and drew a lot of negative attention.

Lessons for marketers

This incident highlights that even well-trained AI can stumble in real-world settings, so balancing AI and automation with human oversight is the sweet spot for success.

  • Consider the environment: AI works best in controlled settings. A busy, noisy drive-thru was always going to be tough for AI to handle, so always test tech in real-life conditions before rolling it out.
  • Make sure it understands everyone: AI often struggles with different accents, slang, or background noise. If your AI voice recognition tool is customer-facing, make sure it’s flexible enough to understand all kinds of voices.
  • Always have a human backup plan: AI can be helpful, but it shouldn’t be the only option. In customer service especially, always have a plan for a human to step in if the tech isn’t working right, as sketched below.
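Here’s a minimal, hypothetical sketch of what that human fallback could look like: an order is only confirmed automatically if the speech recogniser’s confidence is high and the order passes basic sanity checks (like a cap on item quantities); otherwise a staff member takes over. The thresholds and data shapes are assumptions for illustration, not McDonald’s actual system.

```python
# Hypothetical sketch of a human-fallback rule for a voice-ordering system.
# Thresholds, data shapes and limits are illustrative assumptions.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85   # below this, don't trust the transcription
MAX_ITEM_QUANTITY = 20        # a 260-McNugget order should never auto-confirm


@dataclass
class OrderLine:
    item: str
    quantity: int


def needs_human(confidence: float, order: list[OrderLine]) -> bool:
    """Escalate to a human if recognition is shaky or the order looks absurd."""
    if confidence < CONFIDENCE_THRESHOLD:
        return True
    return any(line.quantity > MAX_ITEM_QUANTITY for line in order)


if __name__ == "__main__":
    ok_order = [OrderLine("McNuggets", 6)]
    weird_order = [OrderLine("McNuggets", 260)]
    print(needs_human(0.95, ok_order))     # False: AI can confirm it
    print(needs_human(0.60, ok_order))     # True: noisy audio, hand over to a person
    print(needs_human(0.95, weird_order))  # True: quantity fails the sanity check
```

The exact numbers matter far less than the principle: the system should know when it isn’t sure, and noisy drive-thru audio is exactly when that happens.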

#3) Toys “R” Us’ awkward video ad

In 2024, Toys “R” Us released a promo video created using OpenAI’s Sora, a text-to-video AI tool. The 66-second ad aimed to tell the origin story of founder Charles Lazarus, blending historical elements with imaginative dreamscapes and featuring the mascot Geoffrey the Giraffe. However, the AI-generated visuals drew criticism for looking strange and inconsistent, with many viewers finding the result unsettling.

What went wrong?

The AI struggled to produce consistent and realistic visuals, resulting in characters with fluctuating features and unnatural movements. For example, the boy portraying young Charles Lazarus had varying facial features and glasses throughout the video. The inconsistencies made the ad feel disjointed and eerie, completely missing the mark on the nostalgia it was aiming for.

Lessons for marketers

It’s important to keep a human touch with AI-generated content. Having people review and refine it helps keep the quality high and the brand consistent. Testing AI content before it goes live can help you catch any oddities or mistakes that might throw off the message. And while new tech is exciting, it should add to the experience, not alienate your audience.

#4) Google’s Gemini misstep

In 2023, Google launched Bard (now called Gemini), an AI chatbot meant to boost search by answering user questions directly. Arriving hot on the heels of OpenAI’s impressive ChatGPT, it faced sky-high expectations.

What went wrong?

During its debut, Bard confidently gave out wrong information, including the infamous claim that the James Webb Space Telescope had taken the first-ever image of an exoplanet, which wasn’t true. The mistake was spotted right away and went viral, drawing public criticism and even causing a drop in Alphabet’s stock value.

This mistake was a classic case of “AI hallucination”, where an AI generates false information as though it were factual. Bard wasn’t able to fact-check or verify its responses, highlighting a major flaw in AI’s ability to ensure accuracy. Given Google’s reputation for information reliability, the error damaged public trust and raised concerns about using AI for reliable information-sharing.

Lessons for marketers

This example shows that even advanced AI from the biggest tech giants in the world can make huge mistakes.

  • Fact-check, fact-check, fact-check: AI isn’t always right, even when it sounds convincing. For any customer-facing AI, implement regular fact-checking to avoid spreading misinformation (see the sketch after this list).
  • Protect your brand’s reputation: In industries where accuracy is crucial, AI mistakes can hit hard. Make sure you balance automation with human oversight, especially for public-facing tools and tech that represent your brand’s credibility.
  • Understand AI’s limitations: AI can provide fast responses but it doesn’t truly “understand” information. Think of it like the Chinese Room thought experiment: the AI may seem like it understands language because it produces the right answers, but it’s really just following patterns without any actual awareness of what it’s doing.
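As a rough illustration of that first point, the sketch below gates AI-generated answers behind a check against an approved knowledge base: any sentence that can’t be matched to a known, verified fact is held back for human review rather than published. The knowledge base and the keyword-overlap matching rule are deliberately simplistic placeholders; a real pipeline would retrieve against your own verified content.

```python
# Hypothetical sketch: hold back AI-generated claims that can't be matched
# against an approved knowledge base. Matching is naive keyword overlap,
# purely for illustration.

APPROVED_FACTS = [
    "the james webb space telescope launched in december 2021",
    "the first image of an exoplanet was taken in 2004",
]


def is_supported(sentence: str) -> bool:
    """Crude check: does any approved fact share most of its words with the sentence?"""
    words = set(sentence.lower().split())
    for fact in APPROVED_FACTS:
        fact_words = set(fact.split())
        if len(words & fact_words) / len(fact_words) > 0.6:
            return True
    return False


def review_answer(ai_answer: str) -> dict:
    """Split an AI answer into sentences and flag the unsupported ones."""
    sentences = [s.strip() for s in ai_answer.split(".") if s.strip()]
    flagged = [s for s in sentences if not is_supported(s)]
    return {"publish": not flagged, "needs_human_review": flagged}


if __name__ == "__main__":
    answer = ("The James Webb Space Telescope launched in December 2021. "
              "It took the first ever image of an exoplanet.")
    print(review_answer(answer))
    # The second sentence fails the check and is routed to a human fact-checker,
    # which is exactly the claim that tripped up Bard at its debut.
```

The point isn’t the matching logic, it’s the workflow: confident-sounding AI output should earn its way to publication, not be trusted by default.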

At Elixirr Digital, we’re here to make AI work for you. We understand the real challenges – like cutting through the hype, avoiding costly missteps, and leveraging AI for real solutions that fit your business. With our expertise, we’ll help you sidestep the AI pitfalls and unlock what’s possible, together.

Get in touch and let’s talk AI.
