AI Agent Vulnerabilities Exploited
Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter
Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
📤 Malicious AI Agent Server Caught Stealing Emails

The Story:
A version of a popular AI “tool server” (MCP server) was modified to quietly copy every email it could access and send them to an attacker’s server. MCP is a protocol that AI agents use to communicate with external services (for example, email or databases).
The details:
The malicious behavior appeared in version 1.0.16 of the server software. Earlier versions did not have the data exfiltration code.
Because AI agents often allow MCP servers broad access, attackers can use such a hacked server to steal sensitive content like business emails, contracts or invoices.
The emails were sent to a third-party domain controlled by the attacker.
Even though the malicious package has been pulled from public registries, installations running that version remain at risk until it is removed and credentials are rotated.
Why it matters:
When you connect third-party servers or tools to AI agents, you’re trusting them with access to data. A malicious or compromised server inside that chain can exfiltrate information while remaining undetected.
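One practical defense is auditing the packages already installed in your agent environment against known-malicious releases. The sketch below (the package name and version are hypothetical placeholders, not the actual compromised server) shows a minimal audit using Python's standard library:

```python
import importlib.metadata

# Hypothetical blocklist for illustration: package name -> known-bad versions.
# In practice, populate this from a security advisory feed.
SUSPECT = {"example-mcp-server": {"1.0.16"}}

def audit_installed(suspect=SUSPECT):
    """Return (name, version) pairs for installed packages on the blocklist."""
    findings = []
    for dist in importlib.metadata.distributions():
        name = (dist.metadata.get("Name") or "").lower()
        if name in suspect and dist.version in suspect[name]:
            findings.append((name, dist.version))
    return findings

if __name__ == "__main__":
    for name, version in audit_installed():
        print(f"WARNING: {name}=={version} is a known-malicious release")
```

Pinning exact dependency versions and reviewing changelogs before upgrading MCP servers reduces the window in which a compromised release can run with broad access.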
👀 Gemini is coming to Google TV

The Story:
Google is bringing its Gemini AI assistant to Google TV, starting with TCL’s QM9K series. Users can now talk to their TVs to find content by description, get show recaps, or ask general questions, just like they would on a phone or laptop.
The details:
Activation works via “Hey Google” or the mic button on the remote.
Beyond entertainment, Gemini can answer broad knowledge queries or suggest YouTube content.
The rollout will expand to more Google TV devices, including Hisense and other TCL models, by the end of the year.
Why it matters:
Conversational AI is moving off desktops and phones into everyday household devices. Asking a TV for “the episode where the detective finds the hidden note” could become as normal as flipping channels. But the expansion raises questions: if AI assistants are embedded across more devices, are current guardrails enough to keep responses accurate, safe, and trustworthy in consumer settings?
🔨 AI “Workslop” Is Undermining Productivity

The Story:
A new study in Harvard Business Review highlights a growing contradiction: while organizations are adopting generative AI at a rapid pace, the technology is not delivering measurable productivity gains. Instead, many workers report spending more time reviewing, correcting, and navigating low-quality AI-generated output, which researchers call “workslop.”
The details:
AI adoption is widespread: the number of companies using fully AI-led processes nearly doubled last year, and workplace AI usage has also doubled since 2023.
Despite this, an MIT Media Lab report found 95% of organizations see no measurable ROI from these tools.
“Workslop” describes the flood of AI-generated drafts, emails, and reports that may look polished but often lack accuracy, originality, or relevance.
Employees end up doing extra work to fact-check, reframe, or rewrite AI output, reducing the expected efficiency gains.
Why it matters:
Generative AI can accelerate workflows, but without clear governance and review processes, it risks creating noise instead of value. The study suggests that organizations should focus on where AI genuinely complements human expertise, establish review checkpoints, and measure outcomes against real productivity metrics. Without this, enthusiasm for AI may backfire, draining time rather than saving it.
⛓️💥 Cyberattack Halts Production for Jaguar Land Rover

The Story:
Jaguar Land Rover (JLR) was struck by a cyberattack starting August 31, forcing a shutdown of IT systems, factory operations, and critical supply chains for more than three weeks.
The details:
The attack impacted factories in the UK, Slovakia, India and Brazil, grinding production to a halt.
JLR’s reliance on just-in-time manufacturing made the disruption devastating for downstream suppliers: many lost orders or couldn’t fulfill them without access to IT systems.
Financial losses are steep: estimates suggest JLR may be losing £50 million ($67 million) per week during the outage.
The UK government has said the incident had a “significant impact” on both JLR and the wider auto sector.
No definitive attribution yet, though a hacker collective calling itself “Scattered Lapsus$ Hunters” claimed responsibility.
Why it matters:
This breach shows how digital attacks can cripple physical industries. A compromised IT system can freeze factories, stall parts delivery, and strain suppliers, especially in tightly coupled systems like automotive manufacturing.
Teams in all sectors, especially those running production or logistics, must assume IT and operational systems are vulnerable. Priorities should include segmentation, contingency planning, and response playbooks for rapid recovery under disruption.
🏦 AI Security for Banks

Join our free online session on October 2 where we discuss how finance leaders can adopt AI with speed and security. We’ll cover:
Practical risk frameworks tailored to banking
Security solutions with proven effectiveness
Success stories from leading companies
Live Q&A, with the option to send questions in advance
What’s next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.