Insight

Artificial intelligence in marketing: 5 mistakes to avoid

July 17, 2025

Artificial intelligence (AI) is rapidly transforming the marketing world, helping businesses become more efficient—whether it's through content creation, audience segmentation, or personalized customer experiences. But when misused, AI can also lead to embarrassing errors, poorly targeted messages, or ethical issues with lasting consequences.

In an environment where every interaction shapes how your brand is perceived, AI must be used with care to avoid outcomes that are counterproductive or even harmful to your business. A U.S. study even found that 70% of people abandon a brand after just two negative interactions. In short, the margin for error is slim.

Here are 5 mistakes to avoid if you want to make the most of AI without putting your credibility or customer trust at risk.

1. Letting AI interact without human oversight

It can be tempting to hand everything over to the algorithm when integrating AI into automated processes like customer service, notifications, or chat interfaces. But without human supervision, AI can generate messages that are absurd, inappropriate, or simply false, which can damage your brand's credibility. Code editor Cursor learned this the hard way when its AI assistant falsely claimed that users were prohibited from installing the software on multiple devices, leading to a wave of cancellations.

Similar issues can arise in editorial contexts. In 2025, Google pulled a Super Bowl ad after its AI claimed gouda cheese made up 50% of the world's cheese production, a completely fabricated statistic. The Chicago Sun-Times also published a summer reading guide filled with fictional books and fake experts.

Whether content is delivered by a chatbot or published on a website, the risk is the same: without human review, AI can lead to misunderstandings, inconsistent decisions, or factual errors that may harm your organization. Human vigilance and judgment are therefore essential.

Did you know?

A 2025 study reveals that GPT-4.5, the latest version of ChatGPT, still "hallucinates" in nearly 15% of its responses, producing inaccurate or fabricated factual claims. Although it represents a significant improvement over previous versions, about one in seven answers may still contain invented content. Fact-checking therefore remains essential, despite advancements in AI.

Best practices to adopt

  • Submit all AI-generated content for human review
  • Use verification tools to check data, figures, and references
  • Keep a clear record of the sources used
  • Set up a fast correction mechanism in case of errors (erratum, removal, explanatory message)
  • Define blacklists of terms or dates to exclude from automated workflows
  • Run red teaming tests before deployment
  • Include a human checkpoint when AI goes beyond its intended scope.
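For teams automating these checks, the blacklist and human-review steps above could be sketched roughly as follows. This is a minimal illustration, not a production tool; the blocked terms and the rule that any figure or date triggers review are assumptions you would adapt to your own workflow.

```python
# Minimal sketch of a pre-publication gate for AI-generated copy.
# BLOCKED_TERMS and the "flag any figures" rule are illustrative assumptions.
import re

BLOCKED_TERMS = {"guaranteed cure", "risk-free"}  # example blocklist entries

def review_gate(text: str) -> dict:
    """Flag AI-generated copy that needs human review before publishing."""
    issues = []
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            issues.append(f"blocked term: {term!r}")
    # Treat any percentage or four-digit year as a factual claim to verify.
    if re.search(r"\d+(\.\d+)?\s*%|\b\d{4}\b", text):
        issues.append("contains figures/dates: verify against sources")
    return {"needs_human_review": bool(issues), "issues": issues}
```

A gate like this does not replace the human checkpoint; it simply routes risky content to one.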

In short, artificial intelligence can speed up your processes, but it should never replace human judgment. A convincing message isn't always accurate, and a smooth answer isn't always appropriate.

2. Targeting without accounting for bias and ethical concerns

While AI enables highly precise audience segmentation, it can also reproduce or even amplify discriminatory biases. Poorly managed systems may end up excluding certain groups, disproportionately targeting others, or reinforcing existing stereotypes. The result: not only is the brand exposed to criticism or legal consequences, but its reputation may also take a hit.

In 2019, Meta (formerly Facebook) had to revise its housing ad delivery system after it was accused of excluding protected groups. The company came under fire again in February 2025 following a complaint that ads for for-profit universities had been heavily targeted toward Black users, raising serious concerns about the algorithm's criteria.

These examples highlight a recurring issue: AI trained on historical data can inherit existing biases and, without clear ethical safeguards, make decisions that are socially unacceptable.

Best practices to adopt

  • Regularly audit training data and ad targeting outcomes, for example by testing how ads are distributed across different groups
  • Set clear limits from the start on the use of sensitive criteria (age, gender, origin, family status, etc.)
  • Have your systems reviewed by an independent third party
  • Make your AI governance policies public
  • Ensure diversity within the teams that design, test, and oversee these systems.
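The first bullet above, testing how ads are distributed across groups, can be approximated with a simple share-of-impressions audit. The sketch below is illustrative only: the 1.25 disparity threshold and group labels are assumptions, not a legal or regulatory standard.

```python
# Illustrative audit: compare each group's share of ad impressions
# to its share of the eligible audience. Threshold is an assumption.
def audit_delivery(impressions: dict, audience: dict, max_ratio: float = 1.25) -> dict:
    """Return groups whose impression share deviates from their audience share."""
    total_impr = sum(impressions.values())
    total_aud = sum(audience.values())
    flagged = {}
    for group in audience:
        impr_share = impressions.get(group, 0) / total_impr
        aud_share = audience[group] / total_aud
        ratio = impr_share / aud_share if aud_share else float("inf")
        if ratio > max_ratio or ratio < 1 / max_ratio:
            flagged[group] = round(ratio, 2)  # over- or under-served
    return flagged
```

A non-empty result is a signal to investigate the targeting criteria, not proof of discrimination on its own.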

In short, a high-performing AI is not necessarily fair or equitable. It's up to humans to establish an ethical framework, exercise independent judgment, and remain vigilant in how AI is used. Strict oversight is essential to prevent your targeting efforts from becoming a source of exclusion or negative publicity.

3. Personalizing content without considering context or emotional impact

Personalization is one of AI's greatest strengths in marketing. But if poorly calibrated, it can become intrusive, upsetting, or off-putting. An AI relying on incomplete or misinterpreted signals can easily trigger discomfort or rejection.

In early 2025, Meta launched and quickly pulled a series of AI-generated personas, including a Black, queer mother of two. Intended to humanize interactions, the personas were instead seen as artificial, stereotyped, misleading, and disturbing. The initiative was widely criticized for lacking nuance, showing how clumsy personalization can quickly backfire.

This kind of misstep goes far beyond avatars. AI can unintentionally reopen emotional wounds when used without context or explicit consent. For example, sending ads for maternity products to someone who experienced perinatal loss, or suggesting holiday gift ideas to someone who opted out of all seasonal messaging.

Best practices to adopt

  • Check the relevance and source of data before triggering automated actions
  • Provide opt-out options for sensitive topics (pregnancy, grief, holidays, etc.)
  • Test campaigns with diverse panels to identify potentially intrusive or inappropriate elements
  • Limit personalization to signals the user has willingly shared (explicit opt-in)
  • Include human review for segments with a high emotional risk.
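The opt-out and explicit opt-in points above boil down to filtering recipients before a campaign ever goes out. Here is a minimal sketch; the topic list and the `opted_out` field name are hypothetical, not a specific platform's schema.

```python
# Sketch of honouring per-user opt-outs before sending a campaign.
# SENSITIVE_TOPICS and the "opted_out" field are illustrative assumptions.
SENSITIVE_TOPICS = {"pregnancy", "grief", "holidays"}

def eligible_recipients(users: list, campaign_topics: set) -> list:
    """Drop users who opted out of any sensitive topic the campaign touches."""
    sensitive = campaign_topics & SENSITIVE_TOPICS
    return [
        u for u in users
        if not (sensitive & set(u.get("opted_out", [])))
    ]
```

The point of the design is that exclusion happens by default, before any message is generated, rather than relying on the creative stage to catch a painful mismatch.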

In short, personalization should never override human sensitivity. It also needs to consider timing, intent, and how the person on the receiving end might feel.

4. Using AI at the expense of brand identity

Artificial intelligence can speed up content creation and inspire creative ideas, but it also risks harming your brand identity through off-tone messaging, impersonal visuals, generic slogans, or poorly managed variations.

In other words, generative tools can produce content quickly, but not necessarily in a way that reflects your brand's essence. Some companies have learned this the hard way. In 2023, Levi's faced backlash after announcing it would use AI-generated models to promote diversity. The initiative was seen as superficial and quickly labeled “diversity-washing.”

Problems can also arise when AI generates content without proper validation, resulting in formats or tones that clash with your brand. Messages that feel bland or misaligned with your values can weaken the emotional connection with your audience. That's why it's essential to set clear guidelines from the start.

Best practices to adopt

  • Define the core elements of your brand identity (tone, style, vocabulary, approved visuals) before using AI
  • Have content reviewed by people responsible for brand consistency
  • Test messages with your target audience to catch off-brand results
  • Avoid simulating social commitments if they aren't backed by real action
  • Strike a balance between human creativity and automation to maintain an authentic voice.

Always remember, AI is a tool, not an identity. A strong brand uses it with intention, without losing what makes it genuine and credible.

5. Using AI without transparency

Artificial intelligence can lead to a sense of deception if it's used without being clearly disclosed. (To lead by example: this article was written with the help of generative AI.) Hiding the use of artificial intelligence in content, services, or interactions can erode trust, spark public backlash, and draw the attention of regulatory bodies.

That's exactly what happened to Quebec company Trévi in 2025, when it aired a commercial featuring a jingle fully generated by AI, without disclosing it. After facing public criticism, the company re-recorded the ad with real human voices. It's also worth noting that music created through AI services like Suno or Udio raises serious legal concerns around copyright. Caution is essential, as the legal responsibility for such content falls on your organization, not on the AI tool that created it.

Klarna also faced complaints when it was revealed that its AI-powered virtual assistant handled about 65% of support requests without informing users or offering an option to speak with a real person. The platform has since adjusted its approach and made human assistance more accessible.

These examples show that transparency isn't a minor detail, but a key requirement for maintaining customer trust. People want to know when they're interacting with a machine and how the content they see was created. Any lack of clarity, even if unintentional, can be perceived as manipulative.

Best practices to adopt

  • Clearly disclose the use of artificial intelligence wherever it applies
  • Keep an internal record of prompts, sources, and licenses used
  • Offer a clear option from the start to speak with a real person in automated services
  • Publish a transparency policy on AI outlining your intentions, quality standards, and recourse mechanisms
  • Act quickly when something goes wrong by acknowledging the issue, explaining what happened, and correcting it.
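The disclosure and record-keeping bullets above can be combined into one step in a publishing pipeline: log how the content was produced, then attach a visible label. The sketch below is an assumption about format, not a legal template.

```python
# Sketch: keep an internal record of AI use and attach a disclosure label.
# The log structure and label wording are illustrative assumptions.
import datetime

def record_and_label(content: str, prompt: str, sources: list, log: list) -> str:
    """Append an audit entry and return the content with a visible disclosure."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "sources": sources,
    })
    return content + "\n\n[This content was produced with the help of generative AI.]"
```

Keeping the log and the label in the same function makes it harder to publish AI-assisted content without either the internal record or the public disclosure.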

In short, artificial intelligence can enhance your customer experience, but its effectiveness depends on trust, and trust starts with being open about how you use it.

Harness AI, one step at a time

Artificial intelligence is a powerful and efficient tool when you take the time to understand it and use it wisely. Building an AI practice that is effective, responsible, and aligned with your brand identity requires a clear vision, a solid strategy, and well-defined processes.

Rely on our AI expertise to save time, optimize your resources, and move forward with confidence—without ever compromising what makes you unique. We'll guide you through every step of the process, from planning and integration to team training and the development of AI tools tailored to your needs.

Let's talk about your project and turn AI into a sustainable, high-performing asset that serves your business goals.

Contact us