AI in Marketing: What You Should Know


AI technology has revolutionized the field of marketing, but not without significant risks. Tools like chatbots and image generators have increased efficiency, but some marketers have learned the hard way that overreliance on AI won’t save a bad campaign (or product).

The Scotland Willy Wonka Fiasco


A comparison of the AI used to advertise the event with the actual event. Germain, 2024. House of Illuminati / Stuart Sinclair

An event in Glasgow, Scotland, advertised as “Willy’s Chocolate Experience,” was such a letdown that it led to crying kids and police being called. The organizers, House of Illuminati, used AI-generated images to advertise their magical getaway. Unfortunately, when families arrived at the site, they were greeted by what was described as an empty warehouse sparsely populated with cheap decorations. The parents and kids were given half a cup of lemonade along with two jelly beans each. In addition to using misleading AI images, the advertisements featured typos that matched the kinds of mistakes AI is known to make. An actor hired for the event reported that the organization used AI to write scripts, which would explain the strange new characters and nonsensical phrases that greeted guests. Tickets cost $44 for parents, and House of Illuminati has since said it would issue refunds (Zhou 2024).

This is one example of marketers relying too heavily on AI to lighten their workload, but marketers should be aware that there are other risks that may not seem so obvious.

Protecting Data

In order to safeguard their own data as well as that of their clients, marketers must follow some basic ground rules. If a marketer wants to use AI to synthesize data for a campaign, for example, they must know what kind of data can be given to a free chatbot and what should be withheld.

  • Do NOT tell chatbots:

    • Personally Identifiable Information (PII) like Social Security numbers, birthdates, or addresses.

    • Financial data like credit card details, bank account numbers, and sensitive business records.

    • Corporate information like internal reports and business strategies.

    • Login credentials.

  • You may tell chatbots:

    • Publicly available data such as free news articles and press releases.

    • Anonymized data.

It is important to remember that even seemingly innocuous pieces of data can be combined to identify individuals and further malicious goals. Conscientious users can also use temporary chat modes or delete chats once they are finished.
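The anonymization step above can be sketched in code. The following is a minimal, hypothetical Python example that masks a few common PII patterns before text is shared with a chatbot; the pattern list and placeholder names are illustrative, and a real workflow would rely on a dedicated PII-detection tool rather than three regexes.

```python
import re

# Hypothetical helper: mask common PII patterns before pasting text
# into a third-party chatbot. Patterns are illustrative, not exhaustive.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),      # 13-16 digit card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(text: str) -> str:
    """Replace each detected PII match with a placeholder tag."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach Jane at jane.doe@example.com, SSN 123-45-6789."))
```

The redacted text can then be given to a chatbot in place of the original, keeping the useful context while stripping the identifying details.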


ChatGPT’s “Temporary Chat” mode, which allows users to have conversations which are not remembered or used in training data.

Avoiding Copyright Violations

In order to avoid legal complications and to protect intellectual property, brands must be aware of copyright rules when using AI in marketing.

  • The US Copyright Office recently published a report on the copyrightability of AI-generated work. For a work to be copyrightable, it must be authored by a human. AI may be used to assist human creation, but it must not replace human creativity (Allison 2025).

  • Brands should only use LLMs trained on data that is properly licensed or in the public domain. Failure to do so could lead to copyright infringement claims from the authors of copyrighted material, similar to what happened when Meta used millions of books to train its AI (Creamer 2025).

  • Brands should stay updated on developments concerning AI copyright laws since they are constantly evolving.

  • It is also a good idea to create and maintain evidence of human authorship when AI is used in the process of creation. This can serve as insurance in case a claim is made against a brand.
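The recordkeeping tip above can be made concrete. Here is a hypothetical Python sketch of a provenance log that records how AI was used on each asset; the field names, the `provenance.jsonl` filename, and the `log_asset` helper are all assumptions for illustration, not an established standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance log: record how AI was used on each asset so a
# brand can later point to evidence of human authorship if a claim arises.
def log_asset(path: str, human_author: str, ai_tool: str, ai_role: str) -> dict:
    """Build one provenance record and append it to a JSON-lines audit file."""
    record = {
        "asset": path,
        "human_author": human_author,
        "ai_tool": ai_tool,
        "ai_role": ai_role,  # e.g. "brainstorming only", "first draft"
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open("provenance.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_asset("campaign/hero-image.png", "J. Smith", "image generator", "initial concepts only")
```

Even a simple append-only log like this, kept alongside drafts and revision history, documents the human contribution that copyright protection depends on.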


Graphic for The Society of Authors’ Meta Protest. Courtesy of The Society of Authors, 2025.

Making Sure AI Doesn’t Lie

Unfortunately for users, AI chatbots don’t always tell the truth. Sometimes, chatbots “hallucinate,” meaning they output false or misleading information due to errors with training data and reasoning processes. AI may confidently assert falsehoods, or even cite nonexistent books and articles. In the face of this, marketers must stay vigilant to ensure they don’t repeat AI’s mistakes.

  • AI-assisted work, just like any other work, should be reviewed by humans before publication. AI can be used to gather information, but specific claims should be verified independently.

  • To help prevent AI hallucinations, make prompts detailed and clear. The more context AI is given, the fewer gaps it has to fill with assumptions.

  • Provide templates for AI responses so that AI is less likely to generate irrelevant or false information.

By giving AI precise directions and incorporating human oversight, the chance of AI hallucinating is naturally reduced.
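The prompting tips above can be sketched as a small helper. This hypothetical Python function wraps a marketer’s task in a fixed response template that demands sources and allows an explicit “I don’t know”; it only builds the prompt string, so it works with any chatbot, and the template wording is an illustrative assumption.

```python
# Hypothetical prompt builder implementing the tips above: give the model
# context, a fixed response template, and permission to admit uncertainty
# instead of inventing facts.
TEMPLATE = """You are assisting a marketing team.

Context:
{context}

Task:
{task}

Respond using exactly this structure:
- Answer: <one paragraph>
- Sources: <list each source used, or "none">
- Confidence: <high / medium / low>

If the context does not contain the answer, reply "I don't know"
rather than guessing."""

def build_prompt(context: str, task: str) -> str:
    """Fill the template; the result is sent to whichever chatbot is used."""
    return TEMPLATE.format(context=context.strip(), task=task.strip())

prompt = build_prompt(
    context="Q3 campaign brief: launch of an eco-friendly water bottle.",
    task="Draft three taglines and note any claims needing verification.",
)
print(prompt)
```

A human reviewer can then check the “Sources” section of every response before anything is published.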

AI is a powerful tool that must be used correctly, and marketers are no exception. For a marketer, AI has the potential to make life better, but only if it is used appropriately.


Works Cited

Germain, Thomas. “Lying AI Ads for a Willy Wonka Attraction Led to Crying Children and a Visit from the Cops.” Quartz, 28 Feb. 2024, qz.com/willy-wonka-chocolate-experience-ai-ads-1851292670.

Zhou, Li. “The Less-than-Magical Willy Wonka Event, Briefly Explained.” Vox, 29 Feb. 2024, www.vox.com/technology/2024/2/28/24086217/willy-wonka-glasgow-scotland.

Allison, Annie. “U.S. Copyright Office Issues Highly Anticipated Report on Copyrightability of AI-Generated Works.” Reuters, 2 Apr. 2025, www.reuters.com/legal/legalindustry/us-copyright-office-issues-highly-anticipated-report-copyrightability-ai-2025-04-02/.

Creamer, Ella. “‘Meta Has Stolen Books’: Authors to Protest in London against AI Trained Using ‘Shadow Library.’” The Guardian, 3 Apr. 2025, www.theguardian.com/books/2025/apr/03/meta-has-stolen-books-authors-to-protest-in-london-against-ai-trained-using-shadow-library.

“Meta Protest.” The Society of Authors, 2025, societyofauthors.org/event/meta-protest/.

DigitalDefynd Team. “10 Ways to Prevent AI Hallucinations [2025].” DigitalDefynd, 10 Mar. 2025, digitaldefynd.com/IQ/ways-to-prevent-ai-hallucinations/.
