The Start of Something Great - Issue 1

Top AI and Cybersecurity news you should check out today

What is The AI Trust Letter?

Once a week, we distill the five most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!

🛡️ AI-Powered Attacks on the Rise

The Story:
87% of security professionals say their organization faced an AI-driven attack in the past year, and 91% expect these threats to accelerate over the next three years.

The details:

  • Dark-web trade in deepfake tools jumped 223% from Q1 2023 to Q1 2024

  • Only 26% of experts feel highly confident in detecting AI attacks

  • 51% name AI-powered obfuscation as their top concern

  • 85% report more multichannel attacks blending email, SMS and social media

  • 55% lack full controls for risks from their own in-house AI tools

Why it matters:
AI is boosting both the scale and sophistication of cyber-attacks faster than most defenses can adapt. Until detection improves, and until teams enforce tighter controls and train employees, organizations will remain exposed to these fast-moving threats.

🛡️ AI-Generated Code: New Supply Chain Security Fears

The Story:
Security experts warn that AI-generated code may be unintentionally weakening the entire software supply chain.

The details:

  • LLMs can easily hallucinate insecure libraries or invent “fake” packages as dependencies

  • AI acceleration in coding increases developer velocity but also the risk of introducing silent vulnerabilities

  • Attackers are likely to abuse prompt injection or hallucination flaws, targeting downstream enterprises through open source and commercial software

Why it matters:
Rushed AI adoption in coding could create systemic risk: review processes and code auditing must evolve to spot AI-specific supply chain gaps before attackers do, starting with simple checks like the one sketched below.
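One place to start is a registry check before any AI-suggested dependency lands in a lockfile. The sketch below is illustrative rather than taken from any vendor tooling: it queries the public PyPI JSON API, and the file name and parsing rules are assumptions. A package that does exist can still be typosquatted, so treat this as a first filter, not a verdict.

```python
"""Illustrative sketch: flag requirements entries that PyPI has never heard of.

Hallucinated package names are a common symptom of AI-generated dependency
lists. Anything not found upstream deserves a human look before install.
"""
import sys
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if the package has a project page on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and friends: no such project


def audit_requirements(path: str) -> list[str]:
    """Return declared package names that PyPI does not know about."""
    suspicious = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep only the bare name: drop environment markers, extras, pins.
            name = line.split(";")[0].split("[")[0]
            for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
                name = name.split(sep)[0]
            name = name.strip()
            if name and not exists_on_pypi(name):
                suspicious.append(name)
    return suspicious


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
    for pkg in audit_requirements(target):
        print(f"WARNING: '{pkg}' not found on PyPI -- possible hallucinated dependency")
```

The same idea applies to npm, crates.io, or any other registry an AI assistant can invent names for.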

🛡️ Fake AI Platforms Deliver Disguised Malware

The Story:
A new campaign tricks creators and small businesses into downloading malware by posing as an AI video-processing service.

The details:

  • Users upload images or videos to a fraudulent site posing as “Luma DreamMachine,” believing they’ll get AI-enhanced content back

  • The final download is a ZIP containing a file named “Video Dream MachineAI.mp4 .exe”, where padding whitespace before the .exe hides the real executable extension

  • Running it triggers a multi-stage install of the novel Noodlophile infostealer and the XWorm remote-access trojan, both loaded in memory

  • Noodlophile steals browser credentials and crypto-wallet data, exfiltrates it via a Telegram bot, and can deploy additional remote-access payloads

  • The malware is sold as malware-as-a-service (MaaS) and may be spreading under other fake AI tool names

Why it matters:
As AI tools go mainstream, attackers use them as a lure. Verifying download sources, checking real file extensions, and sandboxing unknown files are now essential; even a basic filename check, like the one sketched below, would have flagged this sample.
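The filename trick at the heart of this campaign is easy to spot programmatically. The sketch below is a hedged illustration, not part of the reported analysis: the extension list and the padding heuristic are assumptions, and the sample string simply mirrors the lure described above.

```python
"""Illustrative sketch: flag downloads that dress an executable up as media."""
import pathlib

# Extensions that run code on Windows and deserve extra scrutiny (illustrative list).
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".js", ".vbs", ".msi", ".ps1"}


def is_deceptive(filename: str) -> bool:
    """True if the real extension is executable but earlier parts of the name
    suggest media, or whitespace padding pushes the extension out of view."""
    suffixes = [s.lower() for s in pathlib.PurePath(filename).suffixes]
    if not suffixes or suffixes[-1].strip() not in EXECUTABLE_EXTENSIONS:
        return False
    has_decoy_extension = len(suffixes) > 1                # e.g. ".mp4" before ".exe"
    has_padding = filename != " ".join(filename.split())   # runs of hidden spaces
    return has_decoy_extension or has_padding


if __name__ == "__main__":
    # Mirrors the lure described above; the amount of padding is illustrative.
    sample = "Video Dream MachineAI.mp4                                  .exe"
    verdict = "suspicious" if is_deceptive(sample) else "looks ok"
    print(f"{sample!r} -> {verdict}")
```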

🏛️ What are the risks of an internal AI copilot?

The Post:
Internal AI tools, from code-generation helpers to data-query copilots, supercharge teams, but they also create a sprawling new attack surface that can expose your most sensitive systems and data.

The details:

  • Data leakage: Overly permissive access to drives, databases, or knowledge bases can let the model ingest and later surface private info (PII, credentials, financials).

  • Internal abuse: Without LLM-layer RBAC, any user (or compromised credential) can query across departments and exfiltrate volumes of data (a minimal access check is sketched after this list).

  • Shadow AI: Frustrated employees may paste corporate secrets into public LLMs, creating unmonitored leaks outside your security perimeter.

  • Hallucinated actions: If your assistant is wired to take real actions (tickets, DB updates, emails), an innocent hallucination can trigger costly mistakes or fraud.

  • Code injection: Attackers can craft prompts that alter generated logic or commands, potentially deleting data or running unauthorized scripts.
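For the internal-abuse point above, the key control is an authorization check at the copilot layer itself, before any retrieval happens. A minimal sketch, with made-up roles and data source names that do not reflect any specific product:

```python
"""Minimal sketch of LLM-layer RBAC: deny-by-default access to data sources.

Roles, sources, and the request shape are all illustrative assumptions.
"""
from dataclasses import dataclass

# Which internal sources each role may reach through the copilot.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "engineer": {"code_search", "runbooks"},
    "finance": {"erp_reports"},
    "support": {"ticket_history", "runbooks"},
}


@dataclass
class CopilotRequest:
    user_id: str
    role: str
    source: str      # data source the retrieval step wants to query
    question: str


def authorize(request: CopilotRequest) -> bool:
    """Deny by default: a role must be explicitly granted a source."""
    return request.source in ROLE_PERMISSIONS.get(request.role, set())


if __name__ == "__main__":
    req = CopilotRequest("u123", "support", "erp_reports", "Q3 revenue by region?")
    if not authorize(req):
        # Refuse and log instead of silently querying across departments.
        print(f"Blocked: role '{req.role}' may not query '{req.source}'")
```

The point is that the check sits in front of the model's retrieval step, so a stolen credential is limited to its own role's data.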

Why it matters:
Left unchecked, internal AI becomes a single point of failure—capable of mass data leaks, compliance breaches (GDPR, HIPAA, SOC 2), operational chaos, and credential exposure. To adopt AI safely, you need:

  • Strict access controls at both data sources and the LLM interface

  • Semantic filtering and real-time data masking

  • Role-based guards and human-in-the-loop checks for privileged actions

  • Sandboxed execution and prompt sanitization for any generated code

Without these guardrails, your AI assistant stops being an asset and turns into a liability. A minimal sketch of two of these checks, data masking and a human approval gate, follows below.
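As a concrete illustration of two of the guardrails above, here is a hedged sketch of regex-based data masking before a prompt reaches the model, plus a human approval gate for privileged actions. The patterns, action names, and function shapes are all assumptions made for the example; a production setup would use a proper PII/secret classifier and a real approval workflow.

```python
"""Illustrative sketch: mask obvious PII before the LLM sees it, and hold
privileged actions for human approval. Patterns and action names are made up."""
import re
from typing import Optional

# Crude patterns for emails and card-like numbers; a real deployment would
# rely on a dedicated PII/secret classifier rather than two regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Actions the assistant may only perform after a named human signs off.
PRIVILEGED_ACTIONS = {"delete_record", "send_email", "update_database"}


def mask_pii(text: str) -> str:
    """Replace likely PII with typed placeholders before prompting the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}_redacted>", text)
    return text


def execute_action(action: str, payload: dict, approved_by: Optional[str] = None) -> str:
    """Refuse privileged actions unless a human reviewer has approved them."""
    if action in PRIVILEGED_ACTIONS and approved_by is None:
        return f"'{action}' queued for human review before execution"
    return f"'{action}' executed with payload {payload}"


if __name__ == "__main__":
    prompt = "Refund order 4481 for jane.doe@example.com, card 4111 1111 1111 1111"
    print(mask_pii(prompt))                                     # PII never reaches the model
    print(execute_action("update_database", {"order": 4481}))  # held for human approval
```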

📶 Check out our latest Benchmark

In our latest deep-dive, we pitted three leading jailbreak-detection firewalls (Amazon Bedrock, Azure, and NeuralTrust) against a real-world private dataset of simple, “everyday” attack prompts and widely adopted public benchmarks.

  • Private Dataset Performance shows NeuralTrust dramatically outperforms Amazon Bedrock and Azure on both accuracy (0.908) and F1-score (0.897), compared to roughly 0.62 accuracy and 0.32 F1 for the others.

  • Public Dataset Performance highlights that all models improve on the public set. NeuralTrust still leads in F1-score (0.631) and accuracy (0.625), followed by Azure and Bedrock.

  • Average Execution Time reveals NeuralTrust is over 3× faster (0.077 s) than the alternatives (~0.27 s), making it ideal for real-time deployments.

NeuralTrust Leaps Ahead

  • Catches ~9/10 real-world jailbreaks vs. ~6/10 for others

  • Handles sophisticated attacks with ease

  • Processes each prompt in under 0.1 s—3× faster than competitors

Ready to see the full breakdown and learn how to lock down your LLMs for good? Dive into the complete benchmark report and discover why real-world testing is the only way to build truly secure AI.

😂 Our meme of the week

Don’t be this person:

What’s next?

Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.