Two weeks ago, the ChatGPT “shared conversations” leak exposed thousands of user chats — some containing names, addresses, and other personally identifiable information — in Google search results. The links, originally created through ChatGPT’s sharing feature, were publicly visible and indexed by search engines, making them accessible to anyone with the right search terms.
OpenAI reacted by retiring the sharing feature altogether and working with search engines to remove the indexed content.
While these actions closed the immediate vulnerability, security experts warn that focusing solely on this incident risks ignoring the much wider problem: systemic security weaknesses across the AI sector.
Security gaps among market leaders
A recent cybersecurity analysis by the Business Digital Index (BDI) team examined 10 leading large language model providers.
Although half earned an “A” cybersecurity rating, the other half performed significantly worse. OpenAI received a D, while Inflection AI scored an F.
The study found:
- Half of the leading AI providers had experienced documented breaches.
- All providers had SSL/TLS configuration weaknesses.
- Most had hosting infrastructure vulnerabilities — only AI21 Labs and Anthropic avoided major issues.
- Credential reuse was widespread, with 35% of Perplexity AI employees and 33% of EleutherAI employees using previously breached passwords.
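The report does not describe how it detected breached-password reuse, so the following is purely illustrative: a minimal Python sketch of how such a check can be run against the public Have I Been Pwned range API. The API uses k-anonymity, so only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how often a password appears in the Have I Been Pwned corpus.

    Uses the k-anonymity range API: only the first five hex characters of
    the SHA-1 hash are sent, never the password or the full hash.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-reuse-audit-sketch"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A non-zero count means the password has appeared in known breaches
    # and should not be reused for work accounts.
    print(breach_count("hunter2"))
```

A screening step like this, run at password-change time, is one low-effort way organizations catch the kind of credential reuse the study measured.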
AI web tools shouldn’t be trusted
A separate Business Digital Index investigation into 52 popular AI web tools also revealed concerning trends.
These tools, which span productivity, research, and creative uses, are often adopted in the workplace without IT approval, and most show serious weaknesses in their externally visible security.
Key findings:
- 84% of the analyzed AI web tools had experienced at least one data breach.
- 51% had corporate credentials stolen.
- 93% had SSL/TLS misconfigurations.
- 91% had hosting vulnerabilities linked to weak cloud security or outdated servers.
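The index does not publish its scanning methodology, but the kind of SSL/TLS misconfiguration it flags can be spotted with very little code. As a minimal illustration only (the hostname is a placeholder), the Python sketch below reports the negotiated TLS version, cipher, and certificate expiry for a given host; anything below TLS 1.2, or a certificate near expiry, would warrant a closer look.

```python
import socket
import ssl

def tls_report(host: str, port: int = 443) -> dict:
    """Connect to host:port and report the negotiated TLS parameters."""
    context = ssl.create_default_context()  # modern defaults, certificate verification on
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "host": host,
                "protocol": tls.version(),       # e.g. 'TLSv1.2' or 'TLSv1.3'
                "cipher": tls.cipher()[0],       # negotiated cipher suite name
                "cert_expires": cert.get("notAfter"),
            }

if __name__ == "__main__":
    # "example.com" is a placeholder; point this at the vendor domains you audit.
    print(tls_report("example.com"))
```

Commercial scanners go much further, but even a basic check like this surfaces servers that still accept obsolete protocols or serve stale certificates.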
Žilvinas Girėnas, head of product at nexos.ai, says the real danger lies in how fast AI tools are being deployed without governance:
“This isn’t just about one tool slipping through. Adoption is outpacing governance, and that’s creating a freeway for breaches to escalate. Without enterprise-wide visibility, your security team can’t lock down access, trace prompt histories, or enforce guardrails. It’s like handing the keys to the kingdom to every team, freelancer, and experiment. A tool might seem harmless until you discover it’s leaking customer PII or confidential strategy flows. We’re not just talking theory — studies show 96% of organizations see AI agents as security threats, while barely half can say they have full visibility into agent behaviors.”
Around 75% of employees use AI for work tasks, yet only 14% of organizations have formal AI policies.
Nearly half of sensitive prompts are entered via personal accounts, bypassing company oversight entirely, and a significant portion of users actively conceal their AI use from management.
The weakest link: productivity tools
Within this broader sample, productivity-focused AI platforms were the least secure. These include note-taking, scheduling, and content generation tools widely integrated into daily workflows. Every productivity AI tool in the sample showed hosting and encryption flaws.
Cybernews’ cybersecurity researcher Aras Nazarovas cautions:
“A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. The ChatGPT leak is a reminder of how quickly these weaknesses can become public.”
Recommendations for businesses and users
Cybersecurity experts at Cybernews recommend taking these steps to reduce risk:
- Establish and enforce AI usage policies across all departments.
- Audit all AI vendors and tools for enterprise-grade security compliance.
- Prohibit personal accounts for work-related AI interactions.
- Monitor and revoke all shared or public AI content (a minimal link-monitoring sketch follows this list).
- Educate employees on the risks of unsecured AI tools and credential reuse.
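Revocation workflows vary by vendor, but the monitoring side of that recommendation can start as simply as keeping an inventory of share links and periodically re-checking which ones are still publicly reachable. The sketch below is illustrative only: the URLs are placeholders, and a real inventory would come from vendor admin exports or data-loss-prevention logs.

```python
import urllib.error
import urllib.request

def still_public(url: str) -> bool:
    """Return True if the URL is still reachable without authentication."""
    req = urllib.request.Request(
        url, headers={"User-Agent": "shared-link-audit-sketch"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401/403/404/410 all mean the link is no longer openly accessible.
        return False
    except urllib.error.URLError:
        return False

if __name__ == "__main__":
    # Placeholder inventory; replace with your organization's exported share links.
    shared_links = [
        "https://example.com/share/abc123",
        "https://example.com/share/def456",
    ]
    for link in shared_links:
        if still_public(link):
            print(f"STILL PUBLIC, revoke: {link}")
```

Anything flagged as still public should be revoked at the source and, where possible, submitted to search engines for de-indexing, as happened after the ChatGPT incident.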
Conclusion: not an isolated event
The ChatGPT leak gained attention because it was visible, searchable, and easy for the public to understand. But similar vulnerabilities — combined with lax governance and unsafe usage habits — are widespread across the AI landscape. Many breaches remain undisclosed or unnoticed, even when they cause more damage than the ChatGPT case.
Without decisive measures, the next breach could be broader, deeper, and far harder to contain. The ChatGPT incident should be viewed not as an outlier but as a warning sign of the risks inherent in today’s rapidly expanding AI ecosystem.