Introducing the Generative Application Firewall
Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter
Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
🛡️ NeuralTrust Introduces the Generative Application Firewall (GAF)

The Story:
NeuralTrust has published a new paper introducing the Generative Application Firewall (GAF), a security layer designed to protect GenAI applications at runtime. The paper was developed in collaboration with external experts from organizations like OWASP, MIT, and the Cloud Security Alliance (CSA), bringing a broader industry perspective into how these systems should be secured in production.
The details:
GAF is built to sit in the request and response path of GenAI applications, inspecting prompts, outputs, and tool calls in real time (a rough sketch of this inspection pattern appears after this list).
It focuses on the risks that show up once AI is connected to real users, data, and workflows, including prompt injection, jailbreaks, data leakage, and unsafe agent behavior.
The paper frames GAF as infrastructure, not a feature, meaning it is meant to be deployed consistently across multiple AI apps and teams.
It also highlights the need for observability, policy enforcement, and continuous evaluation as AI behavior shifts over time.
The concept reflects a broader shift in security thinking: from model-level guardrails to application-level and workflow-level protection.
Why it matters:
GenAI security is moving from ad hoc guardrails to a standard runtime layer. This paper helps define that shift and gives the industry a clearer blueprint for what “secure by default” should look like for AI apps, copilots, and agents.
🤖 Agentic AI Spending Keeps Growing But Lacks Controls

The Story:
Many companies are excited about agentic AI, but most projects are not making it past early pilots. Even so, enterprise investment keeps increasing, suggesting leaders still believe agents will become a real part of operations, just not as fast as expected.
The details:
A large share of agentic AI initiatives remain stuck in proof of concept, with teams struggling to move into production.
Common blockers include unclear ROI, integration complexity, and a lack of trust in how agents behave once they touch real systems.
Enterprises are still increasing budgets, often treating 2026 as the year to build the foundations rather than rush deployments.
The gap is not only model capability, but everything around it: data access, governance, monitoring, and safe execution.
Why it matters:
This is a familiar pattern in enterprise tech. The promise is real, but production requires control. Agentic AI is not just another chatbot. It is software that can take actions, trigger workflows, and create downstream impact. The companies that win will be the ones that invest early in reliability, visibility, and guardrails so pilots can become trusted systems, not stalled experiments.
🌍 AI Sovereignty Remains Elusive for Nations and Companies

The Story:
A new piece from MIT Technology Review argues that “AI sovereignty” has become a popular political and corporate goal, but true sovereignty is almost impossible in practice. Even countries investing heavily in local AI still depend on global supply chains, foreign infrastructure, and shared research ecosystems.
The details:
AI sovereignty is often framed as full control over models, data, compute, and deployment, but that requires owning far more of the stack than most governments or companies realistically can.
Training and running frontier AI systems depends on hardware supply chains that are concentrated in a small number of countries and companies.
Even if you host AI locally, you may still rely on external dependencies like cloud tooling, software libraries, model architectures, and specialized talent.
Many “sovereign” AI strategies end up being partial solutions, like hosting in-country, using local data policies, or supporting domestic model providers, while still relying on global components.
The article suggests the real question is not “Are we sovereign?” but “Where are our weakest dependencies, and what happens if they break?”
Why it matters:
AI sovereignty is becoming a major theme in national strategy, procurement, and regulation. But if the goal is framed as total independence, most initiatives will fail on day one. The more useful approach is to treat sovereignty as risk management: mapping critical dependencies, deciding what must be controlled locally, and building fallback plans when parts of the AI stack remain external.
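One lightweight way to apply that risk-management framing is to keep an explicit register of AI-stack dependencies and flag the external ones that have no fallback. The sketch below is a toy Python illustration; the Dependency fields and the example entries are our own assumptions, not taken from the MIT Technology Review piece.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dependency:
    name: str
    layer: str                # e.g. "compute", "model", "tooling", "data"
    external: bool            # lives outside our direct control?
    fallback: Optional[str]   # what we would switch to if it breaks

# Hypothetical example stack, for illustration only.
STACK = [
    Dependency("GPU capacity", "compute", external=True, fallback="reserved local cluster"),
    Dependency("Foundation model weights", "model", external=True, fallback="smaller local model"),
    Dependency("Cloud orchestration tooling", "tooling", external=True, fallback=None),
    Dependency("In-country data storage", "data", external=False, fallback=None),
]

# The "weakest dependencies" are the external ones with no fallback plan.
weakest = [d.name for d in STACK if d.external and d.fallback is None]
print("External, no fallback:", weakest)
```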
🚨 Why AI Keeps Falling for Prompt Injection Attacks

The Story:
Bruce Schneier argues that prompt injection is not a temporary bug in chatbots but a structural security problem. The core issue is simple: LLMs cannot reliably tell the difference between trusted instructions and untrusted content, especially when they are connected to tools, files, browsers, or email.
The details:
Prompt injection works because the model treats everything in its context as “input to follow,” even if the text came from an attacker hidden inside a document, website, or message.
This is not the same as classic software injection, where you can keep code and data separate with parameterized queries and strict parsing rules. LLMs interpret language, and language is flexible by design.
The risk grows when AI systems are given tool access, like sending emails, reading files, or updating records. A single malicious instruction can turn a helpful assistant into an execution engine.
Many current defenses focus on filters and “don’t follow malicious prompts” training, but attackers can often rephrase, split instructions across steps, or hide them in long context.
Schneier’s point is that the right fix is architectural: treat all external content as hostile, limit what the model can do, and enforce security outside the model itself (see the sketch after this list).
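Here is a minimal Python sketch of that architectural idea: permissions are enforced outside the model, so even a fully hijacked prompt cannot trigger an irreversible action on its own. The tool names, the allow-lists, and the run_tool placeholder are hypothetical, not Schneier's code or any specific product.

```python
# Gatekeeper that sits between the model's requested action and real systems.
READ_ONLY_TOOLS = {"search_docs", "read_calendar"}                   # low-risk, allowed freely
REQUIRES_APPROVAL = {"send_email", "delete_record", "make_payment"}  # need human sign-off

def run_tool(tool: str, args: dict) -> dict:
    # Placeholder for the real tool integration.
    return {"tool": tool, "args": args, "status": "executed"}

def execute_tool_call(tool: str, args: dict, approved_by_human: bool = False) -> dict:
    """Deny by default; the model never executes anything directly."""
    if tool in READ_ONLY_TOOLS:
        return run_tool(tool, args)
    if tool in REQUIRES_APPROVAL:
        if not approved_by_human:
            raise PermissionError(f"'{tool}' requires explicit human approval")
        return run_tool(tool, args)
    raise PermissionError(f"Unknown tool '{tool}' is denied by default")
```

The model can still be fooled into asking for a risky action, but the decision to execute it never belongs to the model.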
Why it matters:
Prompt injection is becoming one of the most common ways to break AI systems because it targets the weakest layer: trust boundaries. If an AI agent can read untrusted content and take actions, it needs the same kind of isolation and permissioning we expect in any security-sensitive system. The lesson is not “train the model harder,” but “design the system so the model cannot be tricked into doing something irreversible.”
🖥️ Using Circuit Breakers to Secure the Next Generation of AI Agents

The Story:
As AI agents move from “chatting” to taking real actions, teams need a reliable way to stop them when something goes wrong. This piece explains how circuit breakers can act as a safety layer that pauses, limits, or shuts down agent workflows before they cause damage.
The details:
Circuit breakers are simple by design: if the system detects abnormal behavior, it stops the workflow automatically instead of letting the agent keep going.
They are useful when agents have tool access like sending emails, calling APIs, editing documents, or triggering payments.
Good circuit breakers focus on measurable signals, like repeated failed actions, unusual data access, risky tool calls, or unexpected spikes in requests (a minimal sketch follows this list).
They can be implemented at different layers: per user, per agent, per tool, or across the whole application.
The goal is not to block innovation, but to make sure agents fail safely and predictably when reality does not match the plan.
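To make the mechanism concrete, here is a minimal, hypothetical circuit breaker in Python that trips after repeated failures. The threshold, the class name, and the email_breaker usage lines are illustrative assumptions, not taken from the original piece.

```python
class CircuitBreaker:
    """Wraps a risky agent action; after too many consecutive failures it refuses further calls."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, action, *args, **kwargs):
        if self.open:
            raise RuntimeError("Circuit open: agent action halted, manual review required")
        try:
            result = action(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True      # trip the breaker: stop the workflow automatically
            raise
        self.failures = 0             # a success resets the failure count
        return result

    def reset(self):
        """A human operator closes the breaker after reviewing what went wrong."""
        self.failures = 0
        self.open = False

# Usage: give each risky tool its own breaker, e.g.
# email_breaker = CircuitBreaker(max_failures=3)
# email_breaker.call(send_email, to="ops@example.com", body="...")
```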
Why it matters:
Most AI incidents are not caused by one dramatic mistake. They come from small errors that compound while the agent keeps acting. Circuit breakers help teams stay in control by turning “stop the agent” into an automatic default, not a manual emergency response.
What's next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.
