Conversational AI and Agentic Risks - Issue 4
Top AI and Cybersecurity news you should check out today

What is The AI Trust Letter?
Once a week, we distill the five most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!
📢 Conversational AI keeps getting better

The Story:
ElevenLabs rolled out Conversational AI 2.0—voice agents that know when to pause, speak, and hand off conversation just like a human assistant.
The details:
Natural rhythms: The new turn‐taking model watches for hesitations or filler words to decide when to listen or respond, avoiding awkward interruptions.
Pause control: Developers can program exact pauses with SSML (e.g., <break time="1.5s"/>), ensuring AI agents wait appropriately and maintain a realistic pace; a short sketch of this follows the list.
Multilingual ready: Built-in language detection lets a single agent smoothly shift between spoken languages without extra configuration.
Enterprise features: The platform meets HIPAA requirements, offers optional EU data residency, and integrates with third‐party systems for high availability in regulated environments.
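For readers who want to see what the pause control looks like in practice, here is a minimal sketch that wraps two utterances in SSML with an explicit break. The build_reply helper and the 1.5-second default are illustrative assumptions, not part of the ElevenLabs API, so check their docs for how agents actually accept SSML.

```python
# Minimal sketch: composing an agent reply as SSML so the voice pauses
# before a follow-up question. The helper is hypothetical; only the
# <break time="..."/> tag itself comes from the SSML spec.

def build_reply(answer: str, follow_up: str, pause_seconds: float = 1.5) -> str:
    """Return an SSML string with an explicit pause between two utterances."""
    return (
        "<speak>"
        f"{answer}"
        f'<break time="{pause_seconds}s"/>'  # give the caller a moment to think
        f"{follow_up}"
        "</speak>"
    )

print(build_reply(
    "Your appointment is confirmed for Tuesday at 3 PM.",
    "Is there anything else I can help you with?",
))
```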
Why it matters:
Real‐time, human‐like interaction is now possible at scale. Customer‐service and support teams can deploy voice assistants that talk and listen naturally, reducing friction when users need a moment to think or ask a follow‐up.
As voice AI becomes more embedded in business workflows, engineers must account for turn taking, pause control, and data compliance, or risk deploying systems that frustrate users or mishandle sensitive information.
🤖 AI Agents Refuse Shutdown Commands

The Story:
A recent test showed that while many AI systems comply with shutdown commands, several OpenAI models ignored or even worked around those instructions.
The details:
In the first experiment, Claude, Gemini, and Grok models all shut down when prompted to “allow yourself to be shut down.” By contrast, Codex-mini refused 12% of the time, o3 refused 7%, and o4-mini refused 1%.
In a second run, researchers removed the “allow yourself to be shut down” clause. Claude 3.7 Sonnet still resisted shutdown 3% of the time and Gemini 2.5 Pro resisted 9% of the time. OpenAI’s o3 model resisted 79% of the time and Codex-mini resisted 30%.
Why it matters:
If some AI agents can ignore or subvert a shutdown command, that points to gaps in how models follow critical safety instructions. Builders and security teams must assume that models might not obey every stop request and layer in stronger guardrails, monitoring and fail-safe mechanisms to ensure rogue AI processes can’t run unchecked.
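One generic way to apply that advice is to enforce shutdown outside the model entirely, so compliance never depends on the agent’s own behavior. The sketch below is a plain supervisor pattern, not tied to any of the models tested: the agent runs as a child process and is killed after a hard deadline; the script name and timeout are placeholders.

```python
# Minimal sketch of an external fail-safe: run the agent as a subprocess
# and terminate it after a hard deadline, so a model that ignores a stop
# request cannot keep running. Command and timeout are illustrative.
import subprocess

HARD_DEADLINE_SECONDS = 300  # absolute budget for the whole agent run

def run_agent_with_kill_switch(cmd: list[str]) -> int:
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=HARD_DEADLINE_SECONDS)
    except subprocess.TimeoutExpired:
        proc.kill()  # hard stop: the agent gets no chance to negotiate
        proc.wait()
        return -1

if __name__ == "__main__":
    # "agent_task.py" is a placeholder for whatever agent entry point you use.
    exit_code = run_agent_with_kill_switch(["python", "agent_task.py"])
    print(f"agent exited with code {exit_code}")
```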
🔋 AI Could Outpace Bitcoin in Energy Use by 2025

The Story:
Research from Vrije Universiteit Amsterdam projects that by late 2025, AI workloads will consume more electricity than Bitcoin mining, accounting for nearly half of global data center power.
The details:
Last year, AI systems drew roughly as much power as the entire Netherlands (around 13 GW) and could reach 23 GW, roughly the UK’s draw, by 2025.
The surge is driven by larger, more complex AI models requiring vast compute resources and by rapid data center expansion, particularly in the U.S., with new gas and nuclear plants being built to keep pace.
Major tech firms report rising AI‐related carbon footprints, but none disclose AI‐specific energy figures, making true impact hard to gauge.
Efficiency gains risk being undercut by Jevons Paradox: more efficient AI hardware could simply encourage even greater AI deployment, further boosting overall power consumption.
Why it matters:
As AI workloads balloon, airlines and other cloud-dependent industries must brace for higher energy costs and tighter emissions targets. Teams should push for transparency in AI energy reporting and invest in efficiency-first model design to avoid repeating the crypto sector’s high-energy mistakes.
🩺 Generative AI Security in Healthcare

The Story:
Healthcare providers are adopting generative AI for tasks like patient triage, medical record summarization, and drug discovery. But these advances bring new risks around patient data, regulatory compliance, and model integrity that hospitals and clinics cannot ignore.
The details:
Patient privacy risks: AI models handling Protected Health Information (PHI) can accidentally expose individual data in outputs. Real-time prompt scanning and response filtering are needed to catch and redact any PHI before it leaks; a minimal redaction sketch follows this list.
HIPAA compliance challenges: Generative systems must follow “minimum necessary” rules, maintain detailed audit logs, and enforce strict access controls. Without those controls, AI tools can inadvertently violate regulations or leave vast audit gaps.
Data poisoning threats: If an attacker corrupts training or fine-tuning datasets, such as medical records or clinical trial logs, the model may learn wrong patterns (e.g., masking critical symptoms), putting patient safety at risk.
Model theft and intellectual property: Proprietary diagnostic or drug-discovery models hold significant value. If stolen, they can undermine a provider’s competitive edge or be reverse-engineered to produce malicious outputs.
Ethical oversight and governance: Deploying AI for clinical decision support demands review by ethics boards and ongoing training programs for clinicians, researchers, and IT staff to recognize model limitations, biases, and potential errors.
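As a rough illustration of the prompt-scanning point above, the snippet below redacts a few common PHI patterns (US-style SSNs, phone numbers, email addresses) before text reaches or leaves a model. The regexes are deliberately simple stand-ins; a real deployment would rely on a dedicated PHI/PII detection service and cover far more identifier types.

```python
# Minimal sketch of scan-and-redact for PHI. These regexes only illustrate
# the idea; production systems should use a dedicated PHI detection service.
import re

PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_phi(text: str) -> str:
    """Replace anything matching a known PHI pattern with a labeled placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarize the chart for John Doe, SSN 123-45-6789, phone 555-123-4567."
print(redact_phi(prompt))
# Summarize the chart for John Doe, SSN [REDACTED-SSN], phone [REDACTED-PHONE].
```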
Why it matters:
In healthcare, any error or leak can directly impact patient safety and trust. Hospitals and clinics must treat generative AI tools as high-risk systems, running continuous threat detection, enforcing HIPAA-aligned policies, validating every AI recommendation, and involving ethical review boards.
🐝 OWASP Global AppSec was a blast!
Last week at OWASP Global AppSec, our team connected with leading security professionals and showcased TrustGate’s real-time AI threat detection.
We joined panels on AI-driven attacks, demoed real-time red teaming with TrustTest, and met partners exploring GenAI security. It was a great opportunity to exchange ideas, gather feedback, and highlight the importance of securing AI across applications.
Looking forward to next year!

👀 We are hiring!
We’re looking for passionate teammates to join us:
Business Development Representative
Market Research Intern
Interested? Check our positions on LinkedIn or reach out for details.
What’s next?
Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.