GenAI Attacks Bypass Legacy Defenses
Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter
Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
⛓️💥 Legacy Identity Defenses Can’t Keep Up: 79% of GenAI Attacks Log In Instead of Breaking In

The Story:
A new analysis shows that most GenAI-driven credential attacks don’t rely on malware. Instead, attackers bypass legacy identity systems by logging in with valid credentials. Traditional identity governance and vulnerability assessment fall short under these evolving threats.
The details:
Credential abuse is replacing traditional attack vectors. Phishing, leaks and reused passwords are now primary tools.
Identity tools built for legacy threat models aren’t keeping up; many still assume attacks arrive as break-ins, not logins.
Some vendors (SailPoint, ForgeRock) deploy generative AI to spot anomalies and automatically reduce excess privileges.
Others are building context-aware policies and behavior analytics to spot suspicious login patterns before damage occurs.
Why it matters:
When attackers already have valid credentials, defenses that rely on post-breach alerting or static rules often fail. Identity teams need to assume credentials will be in play and adopt tools that proactively monitor, limit, and respond in real time.
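To make that shift concrete, here is a minimal, hypothetical sketch of the kind of behavior-based check such tools apply: scoring each login against a user’s recent history and stepping up friction before access is granted. The field names, scores, and thresholds below are illustrative assumptions, not any vendor’s actual logic.

```python
# Illustrative sketch only: a toy behavior-analytics check for credential abuse.
# Field names, history windows, and risk thresholds are assumptions for
# demonstration, not any specific product's logic.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    device_id: str
    hour_utc: int  # 0-23

def risk_score(event: LoginEvent, history: list[LoginEvent]) -> int:
    """Score a login against the user's recent history; higher means more suspicious."""
    known_countries = {h.country for h in history}
    known_devices = {h.device_id for h in history}
    usual_hours = {h.hour_utc for h in history}

    score = 0
    if event.country not in known_countries:
        score += 2  # valid password, but from a country never seen before
    if event.device_id not in known_devices:
        score += 2  # unfamiliar device fingerprint
    if usual_hours and event.hour_utc not in usual_hours:
        score += 1  # outside the user's normal hours
    return score

def handle_login(event: LoginEvent, history: list[LoginEvent]) -> str:
    """Decide on an action instead of trusting the valid credential alone."""
    score = risk_score(event, history)
    if score >= 4:
        return "block_and_alert"  # treat as likely credential abuse
    if score >= 2:
        return "step_up_mfa"      # challenge before granting the session
    return "allow"
```

The pattern matters more than the numbers: every login is evaluated against prior behavior, and the response happens before the session is granted rather than after a breach is detected.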
🚨 AI Automation Supercharges Massive Cybercrime Spree

The Story:
Anthropic uncovered a cybercrime spree in which a threat actor used its Claude AI (specifically Claude Code) to automate nearly every stage of attacks targeting 17 different organizations across sectors such as healthcare, emergency services, religious institutions, and government entities.
The details:
The attacker used Claude Code to scan thousands of systems, find vulnerabilities, and gain access.
They generated bespoke malware, dressed up malicious tools as legitimate software, exfiltrated sensitive data (financial, personal, medical), and organized it for maximum leverage.
Extortion demands ranged between $75,000 and $500,000 in Bitcoin, guided by financial data the attacker had exfiltrated.
Anthropic disabled the accounts involved, improved filtering and detection systems, and is sharing indicators of compromise with partners.
Why it matters:
This case shows how powerful agentic AI tools can be in the wrong hands: not merely assisting, but orchestrating large-scale cyberattacks with less technical overhead. As AI models become more integrated into operations, the gap between low- and high-skill attackers shrinks.
Organizations will need to build security controls that assume misuse is possible, include threat hunting focused on AI misuse, and constantly test and refine safeguards for every stage of the attack chain.
📚 Google sued over AI summaries

Image credit: Silicon ANGLE
The Story:
Penske Media, the publisher behind Rolling Stone, Variety, and Billboard, has filed a lawsuit against Google. It claims that Google’s “AI Overviews” feature uses publishers’ content without consent and diverts traffic away from their sites. According to the suit, AI Overviews appear in about 20% of searches that surface Penske’s sites, and the resulting traffic decline has cut affiliate revenue by more than one-third.
The details:
Penske says Google effectively conditions visibility in search results on permission to use its content in AI summaries.
Google replies that AI Overviews make search more helpful and drive broader discovery; Penske counters that supplying content for these summaries undercuts the return on producing journalism.
Penske points to an affiliate revenue decline of more than one-third since 2024, alongside falling traffic it attributes to AI Overviews.
Why it matters:
Publishers depend heavily on clicks from search referrals to sustain ad models and affiliate partnerships. If AI-driven summaries continue to replace linking to original content, many outlets may find themselves squeezed between visibility and value. This case may reshape how platforms compensate (or partner with) publishers whose work fuels AI and search.
🕵️ AI Prompt Injection with Macros: The Hidden Threat

Credit: Digineer Station / Shutterstock
The Story:
Prompt injection attacks are evolving. Instead of relying on phishing or malware, attackers are embedding malicious prompts in macros, document properties, and file metadata so that AI systems unknowingly carry out harmful instructions.
The details:
Attackers hide instructions inside VBA macros in documents; when an AI system processes the document, it silently follows the hidden prompt.
Adobe-style file metadata, custom XML properties, and Unicode tricks evade human detection but are still processed by AI tools.
Some AI malware analysis tools have already been tricked into declaring malware safe via these hidden prompt vectors.
Experts advise filtering or sanitizing file inputs before AI parsing, isolating risky executable content, and applying output validation to avoid leaking sensitive data or performing unintended actions.
Why it matters:
These attacks don’t look like classic malware. They slip in through familiar formats and trusted channels. If AI systems are allowed to parse files without verifying what they contain, attackers can hijack behavior from the inside. Teams should treat any file input to AI systems as untrusted, enforce strict checking and sandboxing, and put humans in the loop for high-risk workflows.
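For teams wiring document parsing into AI pipelines, here is a minimal sketch of the pre-filtering idea above: normalize extracted text, strip the invisible Unicode characters attackers use to hide instructions from humans, and flag instruction-like phrasing for human review. The regex patterns and helper names are illustrative assumptions, not a complete defense, and they complement rather than replace sandboxing and output validation.

```python
# Illustrative sketch only: pre-filtering extracted document text before it
# reaches an LLM. Patterns and helpers are assumptions for demonstration.
import re
import unicodedata

# Zero-width and bidirectional-control characters often used to hide text
# from human reviewers while remaining machine-readable.
HIDDEN_CHARS = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff\u202a-\u202e]")

# Phrasing that addresses the model rather than describing document content.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def sanitize(text: str) -> str:
    """Normalize Unicode and strip invisible characters from extracted text."""
    normalized = unicodedata.normalize("NFKC", text)
    return HIDDEN_CHARS.sub("", normalized)

def looks_injected(text: str) -> bool:
    """Flag text containing instruction-like phrasing for human review."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

def prepare_for_model(extracted_text: str) -> str:
    """Return sanitized text, or raise so the file is routed to a human."""
    clean = sanitize(extracted_text)
    if looks_injected(clean):
        raise ValueError("Possible prompt injection: route to human review")
    return clean
```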
🤖 AI Agent Security is here

Gartner argues that as organizations deploy more AI systems, oversight by humans alone won’t scale. It proposes “guardian agents”: AI tools built to monitor, correct, and, if needed, shut down other AI tools. These agents would help enforce guardrails and ensure trust without overwhelming human review.
As AI agents become entrenched in more business functions, the number of AI workflows will outpace what any security team can manually review. Guardian agents promise a way to enforce consistency, accountability, and safety in AI deployments without bottlenecking innovation.
For teams building or deploying AI, thinking now about agent orchestration, output auditability, and fallback shutdown is going to be essential.
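As a rough illustration of that pattern, the sketch below wraps a worker agent in a guardian that audits every proposed action, keeps its own log, and refuses to proceed after repeated violations. The interfaces and policy checks are hypothetical assumptions, not a reference to Gartner’s work or any specific agent framework.

```python
# Illustrative sketch only: the "guardian agent" pattern as a supervisory
# wrapper. The Agent protocol, keyword policy, and logging are hypothetical.
from typing import Optional, Protocol

class Agent(Protocol):
    def propose_action(self, task: str) -> str: ...

class GuardianAgent:
    """Audits a worker agent's proposed actions; can veto them or shut it down."""

    def __init__(self, worker: Agent, blocked_keywords: set, max_violations: int = 3):
        self.worker = worker
        self.blocked_keywords = blocked_keywords
        self.max_violations = max_violations
        self.violations = 0
        self.audit_log = []  # (action, verdict) pairs kept for later review

    def run(self, task: str) -> Optional[str]:
        if self.violations >= self.max_violations:
            return None  # fallback shutdown: stop executing the worker entirely
        action = self.worker.propose_action(task)
        verdict = "allowed"
        if any(k in action.lower() for k in self.blocked_keywords):
            verdict = "blocked"
            self.violations += 1
            action = None
        self.audit_log.append((action or "<blocked>", verdict))  # output auditability
        return action
```

The design point is that the guardian sits outside the worker’s control loop: it owns the audit trail and the shutdown decision, which is what lets oversight scale beyond manual review.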
Contact us today to start implementing the most effective security measures.
What’s next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.