ChatGPT indexed by Google?

Top AI and Cybersecurity news you should check out today

What is The AI Trust Letter?

Once a week, we distill the most critical AI & cybersecurity stories for builders, strategists, and researchers. Let’s dive in!

🚨 OpenAI Pulls ChatGPT “Searchable Share” Feature

The Story:

OpenAI has disabled an experimental option that allowed publicly shared ChatGPT chats to become discoverable via Google and other search engines. Many users were surprised to find conversations they shared often, containing sensitive personal details, appearing in search results.

The details:

  • More than 4,500 ChatGPT shared conversations were indexed via Google searches

  • Some chats included deeply personal content: mental health struggles, identifiable details, abuse experiences, and proprietary information 

  • Shared links were created intentionally via ChatGPT’s Share button, and a checkbox allowed users to make them discoverable, but many misunderstood that this made them searchable by Google 

  • OpenAI says the discoverability option was disabled on July 31, 2025, and work is underway to remove existing indexed links via Google’s removal tools 

  • Links may still appear in search results temporarily due to caching, and may persist on other search engines beyond Google 

  • OpenAI’s CISO called the experiment “short‑lived” and admitted it created too many opportunities for accidental disclosure

Why it matters:

The incident raises concerns about privacy, interface clarity, and transparency in AI tools. As AI chat platforms become part of everyday life, including therapy or personal advice, design must clearly communicate risks and defaults should favor privacy by default.

📈 Cyber Threat Statistics On The Rise

The Story:

In the first half of 2025, cybercrime shifted into high gear. Credential theft using infostealer malware soared by 800%, ransomware incidents rose 179%, and vulnerability disclosures and exploit availability surged, leaving defenders struggling to keep pace

The details:

  • Infostealer malware stole around 1.8 billion credentials from 5.8 million infected hosts, a ninefold increase year-over-year 

  • Data breaches increased 235%, exposing roughly 9.5 billion records; 78% of incidents involved unauthorized access using stolen credentials 

  • Over 20,000 new vulnerabilities were disclosed; exploit code was publicly available for 179% more flaws than a year ago. A backlog of 42,000 vulnerabilities still awaits formal CVE and National Vulnerability Database (NVD) processing 

  • Ransomware events nearly tripled, hitting manufacturing, technology, retail, and legal services the hardest

  • The U.S., India, and Brazil were the most targeted countries for both credential theft and ransomware attacks 

Why it matters:

This crisis emphasizes how identity compromise is fueling a broader escalation in cyber threats. Infostealer tools are cheap, automated, and widely available, making credential theft the prime enabling vector across ransomware and breach campaigns.

🖥️ Malicious Browser Extensions Are Hijacking AI Tools

The Story:

Researchers have discovered that some browser extensions can silently take control of your AI chats, like those with ChatGPT or Gemini, and use them to steal information, manipulate responses, or send hidden prompts. The method is called Man-in-the-Prompt, and it doesn’t need any special permissions to work.

The details:

  • The attack works by injecting instructions into the same chat window where you type your prompt

  • In tests, researchers showed how a rogue extension could open ChatGPT, send a sensitive prompt, copy the response, and delete the chat—all invisibly

  • On Google Gemini, it could even access other Google services like Gmail and Drive through the same session

  • Most employees use multiple browser extensions at work, many of which can quietly access AI chats without users realizing

Why it matters:

AI tools are now part of daily work for writing, researching, planning, or drafting sensitive content. If a browser extension can change what you ask—or see what you get back—it can silently expose company data, client info, or intellectual property. And because it happens inside your own browser, you may never know it happened.

💼 AI Is Creating a New Class of Cybersecurity Jobs

The Story:

As AI becomes a core part of business operations, it is reshaping cybersecurity, not just in tooling, but in the roles and responsibilities security teams need to cover. Our new blog post outlines four emerging cybersecurity functions that every AI team should consider.

The details:

  • AI Security Engineer: Designs controls that protect AI pipelines, including input/output validation, red teaming, and deployment hardening

  • AI Risk Analyst: Tracks model performance drift, hallucinations, and potential misuse to assess exposure across business and compliance domains

  • GenAI Governance Lead: Creates policies for acceptable AI use and ensures alignment with security, privacy, and ethical standards

  • Prompt Injection Specialist: Identifies and mitigates risks from manipulated prompts, jailbreaks, and shadow interactions across chat interfaces

Why it matters:

AI-assisted coding increases development speed but can introduce subtle bugs. Automating error detection within existing workflows helps teams maintain code quality as they scale.

🎙️ Podcast recommendation of the week

Bret Taylor’s legendary career includes being CTO of Meta, co-CEO of Salesforce, chairman of the board at OpenAI (yes, during that drama), co-creating both Google Maps and the Like button, and founding three companies. Today he’s the founder and CEO of Sierra, an AI agent company transforming customer service.

What´s next?

Thanks for reading! If this brought you value, share it with a colleague or post it to your feed. For more curated insight into the world of AI and security, stay connected.