Why does AI hallucinate?
Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter
Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
📄 OpenAI's latest research on why LLMs hallucinate

The Story:
OpenAI released a new research paper and blog post exploring why advanced models like GPT-5 and ChatGPT continue to hallucinate. Despite ongoing improvements, hallucinations (plausible but false statements) remain a persistent challenge.
The details:
Hallucinations originate in the pretraining phase, where models learn language patterns without labels for truth or falsehood. As a result, rare, arbitrary facts such as names or dates are often mispredicted, even though the output stays fluent.
Researchers point to evaluation methods as a major cause. Current benchmarks reward only exact correctness, like multiple-choice tests with no penalty for wrong answers, which pushes models to guess rather than admit uncertainty.
OpenAI proposes changing evaluations to penalize confident mistakes more heavily than expressions of uncertainty, and to give partial credit when a model answers "I don't know" (see the sketch below). They emphasize that this scoring must replace existing accuracy-focused tests rather than being added on top.
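To make the incentive shift concrete, here is a minimal sketch of such a scoring rule. It is our own illustration, not code from OpenAI's paper, and the abstain_credit and wrong_penalty weights are assumed for demonstration.

```python
# A minimal sketch of an abstention-aware scoring rule. Our own illustration,
# not code from OpenAI's paper; the weights below are assumed for demonstration.

def score_answer(answer: str, correct: str,
                 abstain_credit: float = 0.3,
                 wrong_penalty: float = -1.0) -> float:
    """Score one benchmark item.

    Accuracy-only grading gives 1 for a right answer and 0 for everything
    else, so a wrong guess costs nothing compared to abstaining. Here a
    confident mistake scores strictly below "I don't know".
    """
    normalized = answer.strip().lower()
    if normalized == "i don't know":
        return abstain_credit        # partial credit for admitting uncertainty
    if normalized == correct.strip().lower():
        return 1.0                   # full credit for a correct answer
    return wrong_penalty             # confident mistakes are penalized hardest


print(score_answer("Paris", "Paris"))         # 1.0
print(score_answer("I don't know", "Paris"))  # 0.3
print(score_answer("Lyon", "Paris"))          # -1.0
```

With these illustrative weights, a model maximizing expected score should only guess when it is more than 65% confident: the expected value of a guess at confidence p is 2p − 1, which exceeds the 0.3 abstention credit once p > 0.65. Under accuracy-only grading, by contrast, guessing always dominates abstaining.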
Why it matters:
The focus is shifting from model design to the way AI systems are tested. If benchmarks continue to reward guesses, models will keep guessing. Better evaluation methods could encourage models to recognize when they are unsure, reducing the risk of confident but false answers.
📚 First major AI copyright settlement: Anthropic pays $1.5B

The Story:
Anthropic has agreed to pay $1.5 billion to settle a lawsuit from authors over the use of pirated books in training its Claude AI models. The deal also requires the company to delete the infringing data.
The details:
The settlement covers about 500,000 works, with authors set to receive around $3,000 per title, subject to confirmation of eligibility
A June court ruling distinguished between fair use of legally obtained books and infringement when material is sourced from pirate sites
The agreement still needs judicial approval, expected in preliminary form soon, with final approval likely in 2026
Anthropic must destroy the pirated dataset and refrain from using the covered works in future training
Publishing and legal groups welcomed the case as a signal that AI developers must respect copyright
Why it matters:
This is the first large-scale settlement over AI training and copyright. It underscores that using unauthorized datasets carries major legal and financial risk, and it sets a precedent for future negotiations between AI companies and rights holders.
⚠️ FTC to question tech companies on AI chatbot risks

The Story:
The U.S. Federal Trade Commission is preparing to question major tech firms, including OpenAI, Meta, and Character.AI, about the privacy and safety risks of AI-powered chatbots. The inquiry will focus on how these systems affect users, especially children.
The details:
The FTC will send letters requesting internal records on data practices and risk assessments
Regulators are concerned about harmful or inappropriate chatbot interactions with minors
The inquiry aligns with broader government efforts to balance AI innovation with user safety
Meta has already introduced safeguards for teens, such as blocking conversations about self-harm or sexual topics
Why it matters:
This move highlights increasing regulatory scrutiny of conversational AI. Companies that deploy chatbots must prepare to show strong privacy protections, safety features, and accountability measures, particularly when products reach younger users.
🇬🇧 UK AI funding hits £2.9 billion mark

The Story:
The UK’s AI sector set a record with £2.9 billion in investment, while Switzerland separately announced a fully open-source foundation model built around transparency at scale.
The details:
Average deal size reached £5.9 million, reflecting stronger investor confidence
The sector now contributes about £11.8 billion to the UK economy and supports more than 86,000 jobs
AI companies are expanding beyond London, with the number of firms in regions like the Midlands, Yorkshire, and Wales doubling in the past three years
The government announced an AI assurance roadmap and £11 million in funding to build tools and training for oversight
Why it matters:
The UK is consolidating its position as a global AI hub while also investing in public trust and assurance. Regional growth shows AI adoption is spreading nationwide, and new oversight efforts point to a future where scaling AI responsibly is part of the business model.
🛫 AI Security for Airlines Event

Join our free online session on September 9, where we’ll discuss how aviation leaders can adopt AI with speed and security. We’ll cover:
Practical risk frameworks tailored to aviation
Security solutions with proven effectiveness
Success stories from leading airlines
Live Q&A, with the option to send questions in advance
What’s next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.