Agentic AI-Powered Breaches
Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter
Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
🤖 An Agentic AI Breach Is Likely by 2026

The Story:
Forrester warns that enterprises should expect a material security incident driven by agentic AI by 2026, as autonomous tools gain access to data and actions across business systems.
The details:
Attack paths include prompt injection that hijacks an agent’s tools, data-exfiltration through connected APIs, and supply-chain compromise of models or plugins.
Most organizations lack basic controls for agents: least-privilege scopes, human approval for high-risk actions, and auditable logs tied to identities.
Gaps show up at integration points: unmanaged secrets, broad OAuth grants, unvetted third-party connectors, and no sandboxing of untrusted content.
Forrester’s guidance centers on operational guardrails: inventory all agents and tools, restrict capabilities by default, run red-team testing against tool use, and implement kill switches and continuous monitoring.
Why it matters:
As agents move from copilots to actors, the blast radius shifts from bad outputs to bad actions. Treat agents like production services: verify inputs, constrain what they can do, and record everything. The goal is clear accountability and fast containment when behavior goes wrong.
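To make "treat agents like production services" concrete, here is a minimal sketch of a guarded tool-call wrapper that applies the controls above: deny-by-default scopes, human approval for high-risk actions, and an identity-tied audit log. The GuardedToolRunner class, the tool names, and the approver hook are our own illustrations, not Forrester's framework or any vendor's API.

```python
import json
import time
import uuid

# Hypothetical guardrail wrapper: tool names, scopes, and the approval hook
# are illustrative, not a specific agent framework's API.
HIGH_RISK_ACTIONS = {"send_email", "delete_record", "transfer_funds"}

class GuardedToolRunner:
    def __init__(self, allowed_scopes, approver, audit_path="agent_audit.log"):
        self.allowed_scopes = set(allowed_scopes)   # least-privilege scopes granted to this agent
        self.approver = approver                    # callable that asks a human to approve or deny
        self.audit_path = audit_path

    def run(self, agent_id, tool_name, args):
        record = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent": agent_id,
            "tool": tool_name,
            "args": args,
            "allowed": False,
        }
        # 1) Deny by default: the tool must be inside the agent's scope.
        if tool_name not in self.allowed_scopes:
            record["reason"] = "out_of_scope"
            return self._finish(record, result=None)
        # 2) Human approval for high-risk actions.
        if tool_name in HIGH_RISK_ACTIONS and not self.approver(agent_id, tool_name, args):
            record["reason"] = "approval_denied"
            return self._finish(record, result=None)
        # 3) Execute and log, tied to the agent's identity.
        record["allowed"] = True
        result = TOOLS[tool_name](**args)           # TOOLS is the registry of real tool functions
        return self._finish(record, result=result)

    def _finish(self, record, result):
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")      # append-only audit trail
        return result

# Example registry and usage
TOOLS = {"search_docs": lambda query: f"results for {query!r}"}

runner = GuardedToolRunner(
    allowed_scopes={"search_docs"},
    approver=lambda agent, tool, args: False,       # stand-in: deny anything risky
)
print(runner.run("billing-agent", "search_docs", {"query": "refund policy"}))
print(runner.run("billing-agent", "send_email", {"to": "cfo@example.com"}))  # blocked: out of scope
```

In a real deployment, the approver hook would route to a ticketing or chat workflow and the audit log would feed a SIEM rather than a local file, but the shape of the control is the same: scope checks before execution, approvals on risk, and a record of every action tied to an identity.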
🧠 OpenAI’s Sora App Hits the App Store

The Story:
OpenAI released its generative video app, Sora, on iOS. Though Apple typically prohibits apps from linking out to external app stores, Sora is live on the App Store with limited WebView support that enables certain plugin installs.
The details:
Sora uses a hybrid approach: core functionality runs in the app, but users can install plugins via WebViews under certain conditions.
Apple’s policy generally blocks apps that try to route users to alternate stores via WebViews, but Sora operates under a narrow exemption for plugin installs.
OpenAI says Sora complies with Apple policy because plugin WebViews that open third-party content remain fully sandboxed.
This marks one of the first times OpenAI has released a first-party app that grants users more agency over plugin use on iOS.
Why it matters:
Sora’s approach balances platform policy with user flexibility. As AI assistants become extensible via plugins, developers will test store rules and sandbox boundaries to enable dynamic capabilities. For teams building AI apps, this case shows how plugin models must be tightly controlled, permissioned, and isolated to survive app store scrutiny.
⚖️ California’s New AI Safety Law

The Story:
California enacted SB 53, a disclosure-focused AI safety law that requires frontier model developers to publish how they test and govern their systems, report serious incidents, and protect whistleblowers. The measure reflects a compromise approach: mandate transparency and accountability without pausing R&D.
The details:
Scope: Targets large/“frontier” AI developers, shifting oversight to those training and deploying the most capable systems.
Public safety frameworks: Companies must document and publish their safety policies and testing approaches so customers, regulators, and researchers can assess risk posture.
Incident reporting: Serious safety or security events tied to model behavior must be reported, moving AI toward the same kind of accountability common in other regulated technologies.
Whistleblower protections: Employees who surface AI safety concerns receive explicit protections, encouraging earlier detection of problems.
Why it matters:
SB 53 sets expectations that major AI vendors will show their work—how they evaluate models, handle incidents, and govern risk. For enterprises, this should translate into clearer vendor documentation and stronger signals for due diligence and procurement. It also hints at where broader US rules may land: disclosure, monitoring, and accountability rather than blanket shutdowns.
🚨 AI Gives Phishing a New Edge

The Story:
A new Comcast report analyzing 34.6 billion cybersecurity events shows how attackers are reshaping phishing with AI, combining mass automation with stealth tactics that blur the line between legitimate and malicious activity.
The details:
Generative AI enables criminals to create convincing phishing lures and malware, lowering the skill barrier for entry.
Shadow AI—unsanctioned use of AI tools by employees—expands the attack surface and complicates identity security.
Attackers use “residential proxies” from compromised devices to mask traffic, making malicious activity appear legitimate.
Human fatigue remains a key vulnerability: overloaded staff and inattentive end-users increase the chance of successful compromise.
On defense, AI supports anomaly detection and faster response, but effective resilience still requires skilled teams to interpret signals and drive strategy.
Why it matters:
Phishing has always been a leading threat, but AI changes its scale and believability. Defenses now need to assume that both human and machine identities are being targeted, and that attacks may look indistinguishable from normal activity. The balance between automation and human oversight will be critical for enterprise resilience.
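For a sense of what "AI supports anomaly detection" can look like in practice, here is a minimal sketch that scores login events with scikit-learn's IsolationForest. The features (off-hours logins, failed attempts, a residential-proxy flag, API volume) and the synthetic data are our assumptions for illustration, not Comcast's methodology.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event: hour of day, failed attempts in the
# last hour, whether the source IP is a known residential-proxy exit, and the
# number of distinct APIs touched in the session. Feature choice is assumed.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.integers(8, 19, 500),          # business-hours logins
    rng.poisson(0.2, 500),             # few failed attempts
    np.zeros(500),                     # not behind a residential proxy
    rng.poisson(3, 500),               # modest API usage
])
suspicious = np.array([
    [3, 6, 1, 40],                     # 3 a.m., many failures, proxy, heavy API use
    [2, 0, 1, 55],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(suspicious)   # lower = more anomalous
labels = model.predict(suspicious)             # -1 flags an outlier

for event, score, label in zip(suspicious, scores, labels):
    print(event, round(float(score), 3), "FLAG" if label == -1 else "ok")
```

A model like this only surfaces candidates; as the report notes, skilled analysts still have to interpret the flags and decide what response the signal warrants.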
🏦 AI Security for Banks: Re-watch It Now

Our event on AI Security for Banks was a great success!
Thank you to everyone who joined; you can watch the recording here 👇
What's next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.