The AI-Authorship Trap: How AI-Generated Content Erodes Consumer Trust and Brand Loyalty 

Artificial intelligence (AI) is transforming marketing communications at an unprecedented pace.

Businesses are increasingly leveraging large language models (LLMs) to generate emails, advertisements, chatbots and customer engagement messages, aiming to improve efficiency and reduce costs.

However, research by Kirk & Givi (forthcoming) highlights a critical flaw in this strategy: AI-authored content, particularly emotional messaging, significantly diminishes positive word of mouth (PWOM) and customer loyalty compared to human-written content.

Furthermore, AI's well-documented tendency to fabricate false or misleading information—commonly referred to as hallucination—further undermines confidence in factual messaging. In short, people can't trust an AI to tell the truth, and they don't like hearing an AI talk about emotions.

The implications are clear. AI-generated content is an inferior product for companies looking to build trust, foster loyalty, and cultivate consumer advocacy. The primary reason a business might adopt AI-authored marketing content is cost-cutting, often at the expense of human employees. However, given the cost of developing and hosting AI systems, is sacrificing long-term brand equity for short-term and potentially minimal financial savings truly a wise move? Let's explore why AI-generated messages fail both emotionally and factually, and how their adoption signals a company's prioritization of budget over brand trust.

Why AI-Generated Emotional Messages Backfire 

One of the most striking findings from Kirk & Givi's research is the "AI-authorship effect"—a phenomenon in which consumers react negatively to AI-generated emotional marketing messages because they perceive them as inauthentic. Across multiple studies, AI-generated content consistently led to lower PWOM, decreased customer loyalty, and higher moral disgust. In other words, there is evidence that AI-authored content imposes a measurable cost on the business, one that must be weighed against any savings from reduced human wages.

Emotional marketing relies on trust and genuine connection. When a brand expresses gratitude, sympathy, or pride, consumers expect sincerity. The problem?

AI does not experience emotions—it merely assembles words based on probability.

Even when AI-generated messages were structurally identical to human-written ones, Kirk & Givi’s Study 1 found that consumers still reacted negatively once they discovered the message was AI-generated.

Feature-based marketing, by contrast, relies on providing clear, factual information about the product. Yet LLMs hallucinate and can misreport facts, especially in longer conversations such as those in chatbot settings.

Study 5 further confirmed that this poor emotional response to AI is driven by perceptions of authenticity and moral disgust. Consumers feel deceived when they realize an emotional message was crafted by an AI rather than a human, leading to a breakdown in trust. Since AI cannot generate the genuine, internally-driven expressions consumers expect from brands, AI-authored emotional messages are inherently flawed. 

The Unreliability of AI-Generated Factual Content 

Beyond its struggles with emotional authenticity, AI also falls short in factual messaging. Large language models like ChatGPT do not store or understand facts the way humans do. Instead, they generate text based on probabilities, which often leads to hallucinations—false or misleading statements presented with unwarranted confidence. 

The fundamental issues with AI-generated factual content include: 

  • Lack of true understanding – AI does not "know" facts; it only predicts words based on past data. 

  • High probability of errors – Even when trained on reliable sources, AI can fabricate statistics, misinterpret data, or cite nonexistent studies. 

  • Inability to self-verify – Unlike human writers who fact-check their work, AI lacks the capability to assess the accuracy of its statements. 

A company that relies on AI for factual messaging risks damaging its credibility. If customers suspect that product descriptions, marketing emails, or support messages are AI-generated, they will naturally question their accuracy. As trust erodes, engagement declines, and customers become less likely to recommend the brand to others. Moreover, without constant monitoring, AI may provide information that is factually incorrect or outside brand guidelines, at real cost to the company—as seen when Chris Bakke posted a conversation in which a dealership chatbot agreed to sell him a car for $1.

Kirk & Givi’s Study 6 found that while consumers distrust AI-generated content, they view it slightly more favorably in cases where they already expect inauthenticity—such as repurposed or copied content. However, this does not indicate trust in AI; rather, it reflects a diminished expectation of authenticity overall. In short, AI’s best-case scenario is being seen as "less deceptive than a dishonest human," which is hardly a strong endorsement for AI-authored content. 


AI-Generated Content: A Cost-Cutting Gamble with Long-Term Risks 

If AI-authored content is demonstrably inferior for both emotional and factual messaging, why do companies use it? The answer is simple: cost savings and the valuation boost associated with promoting AI adoption.

By implementing AI-generated content, businesses can reduce payroll expenses, often eliminating marketing staff in favor of automation. They may also see their market valuation rise during periods of AI hype. While this might appear financially prudent in the short term, the long-term risks far outweigh the benefits:

  • Decline in consumer advocacy – With reduced PWOM, fewer customers voluntarily promote the brand. 

  • Erosion of brand trust – AI-driven communications can feel impersonal, making customers more likely to disengage. 

  • Reputational damage – Companies that replace human interactions with AI lose the warmth and sincerity that distinguish trusted brands. 

Rather than enhancing marketing effectiveness, AI-generated messaging often signals a company’s retreat from quality, trust, and human connection. Businesses that embrace AI-authored communications purely for cost-cutting may find themselves facing a weakened brand reputation, lower customer loyalty, and diminished engagement. 

The Verdict: Human-Authored Content Wins 

The research is clear: AI-authored content is an inferior alternative to human-created messaging across both emotional and factual domains. Consumers reject AI-generated emotional messages because they lack authenticity, leading to moral disgust, lower PWOM, and reduced loyalty. Meanwhile, AI-generated factual content suffers from hallucinations, making it unreliable and damaging brand credibility. 

The takeaway is simple: When it comes to marketing communications, human authenticity isn’t just preferable—it’s essential. 