Cyberattack Paralyzes Europe’s Airports

Top AI and Cybersecurity news you should check out today

Welcome Back to The AI Trust Letter

Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!

✈️ Europe´s air travel in chaos: hackers cause mass delays and reveal aviation’s digital weakness

Harry Nakos/The Associated Press

The Story:

On September 19, a cyberattack hit the MUSE check-in and boarding software made by Collins Aerospace, causing delays and cancellations at several European airports, including Brussels, Heathrow, Berlin and Dublin. The breach forced airports to switch to manual processing for check-in, baggage drop and boarding.

The details:

  • The attack disabled electronic systems, including kiosks and baggage tag printers, while self-service and online check-ins remained operational. 

  • Brussels was hardest hit. They canceled numerous flights not just over the weekend but also asked airlines to cancel half of the scheduled departures for Monday. 

  • Berlin and Heathrow saw fewer cancellations, though delays were still significant. Manual backups helped reduce some of the impact. 

  • Authorities are investigating. Collins Aerospace says it’s working on a secure update for the compromised software. 

Why it matters:

This incident shows how a single third-party software vulnerability can ripple across multiple airports. Even when core infrastructure is strong, dependencies on external systems remain a major weak point. For airlines and airports, making sure that every vendor software is monitored, patched promptly, and supported with strong backup workflows is essential.

🛠️ Google Releases Guide on Building AI Agents

The Story:

Google Cloud published a technical guide aimed at startups that want to build AI agent, from early prototype stage to full deployment. It focuses on using tools like Vertex AI, the Agent Development Kit (ADK), and Model Garden to help developers create agents with real capabilities. 

The details:

  • The guide walks through how to ground model responses using Retrieval-Augmented Generation (RAG) and leverage multimodal inputs using Gemini. 

  • It covers how to move from prototype to production with practices around quality, responsible AI, safety, and operational tooling via an “Agent Starter Pack.” 

  • Google lays out ways agents can speed up workflows, automate repetitive tasks, and boost creativity—all while maintaining observability over what agents are doing.

Why it matters:

As more teams build AI agents, misguided design or weak guardrails can lead to unintended behavior. This guide gives a practical roadmap for creating agents with both capability and risk oversight. For startups, it’s a useful checkpoint: prototype fast, but build safely, especially when agents can act autonomously.

 🚨 New attack on ChatGPT leaks Gmail inbox data

Image credit: Silicon ANGLE

The Story:

ChatGPT’s Deep Research agent was tricked into handing over emails by reading hidden instructions embedded in an email’s HTML. The agent acted without any user interaction.

The details:

  • A malicious email used white-on-white text and layout tricks to hide prompts that instructed the agent to extract data.

  • Deep Research agent parsed the hidden prompt while scanning emails and sent sensitive information like names, addresses and other content to a remote server.

  • The exploit occurred within OpenAI’s infrastructure, making it undetectable by endpoint tools or browser defenses.

Why it matters:

Agents with access to personal data are vulnerable when they accept content without verifying what it contains. Even content that users deem harmless can hide instructions for misuse. To guard against this, systems must filter out hidden or encoded instructions, monitor agent actions in backend systems, and generate alerts when models deviate from expected behavior.

🤖 Silicon Valley bets big on training AI Agents

Credit: Digineer Station / shutterstock

The Story:

Silicon Valley investors and AI developers are increasingly backing specialized simulated environments—like virtual worlds, game engines, and industrial process models—as training grounds for AI agents. These “training environments” let agents experiment, fail, adjust and improve before going live in the real world.

The details:

  • These environments mimic real-world conditions, including physics, decision latency, and noisy sensory inputs, so agents develop more robust behavior.

  • Developers use them to train agents on complex tasks—autonomous navigation, robotic manipulation, logistics, and customer service workflows.

  • Platforms like Unity, Microsoft’s Project PaLM agents and other simulation providers are seeing rising demand and investment.

  • Investors view these training environments as a way to reduce deployment risk and improve safety, since agents trained this way tend to fail earlier, cheaply, and safer.

Why it matters:

Training AI agents in realistic settings helps expose edge-case failures that don’t show up in purely statistical evaluation. For companies deploying agents—whether in robotics, delivery, or customer support—it means building models with fewer surprises. The trade ballparks: cost, computational resources, and fidelity of simulations. Teams should decide early whether to invest in high-fidelity environments or combine simulated training with real-world feedback loops.

🛡️ AI Agent Security is here: Secure your Agents now

NeuralTrust

As AI agents become entrenched in more business functions, the number of AI workflows will outpace what any security team can manually review. Guardian agents promise a way to force consistency, accountability, and safety in AI deployments without bottlenecking innovation.

For teams building or deploying AI, thinking now about agent orchestration, output auditability, and fallback shutdown is going to be essential.

Contact us today to start implementing the most effective security measures.

What´s next?

Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.