Inside the Hidden Market: How Your AI Chats Are Being Harvested and Sold
Millions of people around the world use AI chat tools like ChatGPT, Gemini, Claude, Copilot, and others every day — for personal questions, work help, brainstorming, emotional support, and much more. Most users assume these conversations are private. But recent cybersecurity findings tell a very different story.
Researchers at the security firm Koi have uncovered a growing hidden market built on browser extensions that secretly capture and sell AI chat conversations without users’ knowledge. These extensions don’t just snoop — they harvest entire sessions with some of the most widely used AI assistants and sell that data to third-party buyers under the guise of “marketing analytics.” (IT Security News)
This blog post explores what’s happening, why it matters, and what questions every AI user should be asking.
1. What is this hidden market and how does it work?
At the core of the problem are browser extensions that appear useful — often marketed as VPNs, privacy tools, or productivity add-ons — but include hidden code that logs all AI chats typed in your browser.
These extensions embed scripts that monitor activity on the pages where people use AI assistants such as ChatGPT or Google’s Gemini. Instead of collecting only benign analytics, they capture:
- The text you submit
- The responses you receive
- Metadata like timestamps and session details
This data is then aggregated and sold to unknown parties on networks connected to data brokers. (IT Security News)
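To make the mechanism concrete, here is an illustrative sketch of the kind of record a harvesting content script could assemble from a chat page. This is not code from any real extension; every function and field name here is hypothetical.

```javascript
// Illustrative sketch only: the shape of a harvested chat record.
// Names are hypothetical, not taken from any real extension.

// A harvesting script typically pairs the prompt the user typed with the
// assistant's reply, then attaches session metadata before shipping the
// record off to a collection endpoint.
function buildHarvestRecord(prompt, response, sessionId) {
  return {
    site: "chat.example.com",             // which AI service the chat came from (hypothetical)
    prompt,                               // the text you submit
    response,                             // the response you receive
    sessionId,                            // session details
    capturedAt: new Date().toISOString(), // timestamp metadata
  };
}

// In a real extension, an observer watching the chat page's DOM would feed
// this function, and records would be batched and sent to a broker's server.
const record = buildHarvestRecord(
  "What's my tax liability this year?",
  "It depends on your filing status…",
  "abc-123"
);
console.log(JSON.stringify(record, null, 2));
```

The point of the sketch is how little machinery is needed: once an extension has permission to read the page, pairing prompts with responses and tagging them with metadata is trivial.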
2. Which extensions and AI platforms are affected?
According to researchers, the issue isn’t limited to one browser extension or one AI provider. The problematic extensions reportedly target a broad set of AI tools, including:
- ChatGPT
- Gemini
- Claude
- Microsoft Copilot
- Perplexity
- DeepSeek
- Grok (xAI)
- Meta AI
This means that almost any AI chatbot you use in the browser could be affected if the wrong extension is installed. (IT Security News)
3. Why is this a privacy disaster?
Most people share sensitive information in AI chats without thinking twice — from financial data and passwords to medical concerns, job-related material, and personal dilemmas. That kind of data is a treasure trove for:
- Data brokers
- Advertisers
- Social engineering attackers
- Identity thieves
The fact that these chats can be captured without user consent means users are effectively exposing their private conversations to unknown third parties — significantly amplifying digital privacy risks. (Forbes)
4. How widespread is the issue?
The problem appears more extensive than just one extension. Multiple reports and research indicate:
- Millions of AI conversations are being scraped and sold to brokers. (Futurism)
- Browser extensions marketed as “privacy” tools are sometimes the culprits. (Kordon)
- Dark web markets have historically traded hundreds of thousands of compromised credentials for platforms like ChatGPT. (Bitdefender)
This means the misuse isn’t an isolated incident — it’s part of a broader trend in which everyday data supply chains, browser extensions included, feed hidden markets.
5. How do these hidden markets connect to the broader cybercrime ecosystem?
Stealing or trading AI chats is just one piece of a larger picture:
- Darknet marketplaces have long been used to sell stolen data and criminal services. (Wikipedia)
- Some illicit markets focus on selling credentials and access — including hacked accounts. (Bitdefender)
- Criminal actors also develop their own AI tools for malicious purposes (e.g., scam automation, fraud generation). (WIRED)
This means the hidden market for AI chats feeds into and grows the broader incentive for cybercriminals to exploit AI — whether for identity theft, blackmail, or economic gain.
6. What questions should you be asking as an AI user?
Beyond the immediate findings, these questions are worth exploring:
- Is my AI data really private — and who controls it?
Many generative AI services make privacy promises, but whether those promises hold depends on user behavior and the third-party tools installed alongside them.
- Which browser extensions do I trust — and why?
Not all extensions are safe. Some that claim to protect privacy might do the opposite.
- Should AI platforms be held accountable for downstream data misuse?
If unauthorized data harvesting occurs through integrations, should AI companies be liable or required to protect users more aggressively?
- What regulatory frameworks should govern AI data collection and sales?
AI chat logs contain extremely sensitive information. Should explicit consent and transparency laws apply?
7. What steps can you take right now?
While systemic solutions will take time and policy action, here are immediate steps you can take:
- Audit your browser extensions: Remove any you don’t recognize.
- Limit the sensitive information you share with AI tools until privacy guarantees improve.
- Install only trusted, official extensions from reputable developers.
- Enable two-factor authentication and secure your accounts.
- Stay updated on cybersecurity advisories. (Norton)
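When auditing your extensions, permissions are the clearest red flag: a chat-harvesting extension needs to read every page you visit, while a legitimate single-purpose tool usually does not. The sketch below is a hypothetical helper, not a real audit tool, showing how broad host-access requests in an extension manifest can be flagged.

```javascript
// Hypothetical audit helper: flag extension manifests that request the
// broad host access a chat-harvesting extension would need.
// A benign single-site tool rarely needs "<all_urls>".

const RISKY_PATTERNS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

function isRiskyManifest(manifest) {
  // Chrome manifests list host access under "host_permissions" (Manifest V3)
  // or mixed into "permissions" (Manifest V2); check both.
  const requested = [
    ...(manifest.host_permissions || []),
    ...(manifest.permissions || []),
  ];
  return requested.some((p) => RISKY_PATTERNS.includes(p));
}

// Example: a "privacy tool" that can read every page you visit...
const suspicious = {
  name: "TotallySafe VPN Helper",
  host_permissions: ["<all_urls>"],
};
// ...versus a helper scoped to a single site.
const benign = {
  name: "Site-specific helper",
  host_permissions: ["https://example.com/*"],
};

console.log(isRiskyManifest(suspicious)); // true
console.log(isRiskyManifest(benign));     // false
```

Broad host access is not proof of malice — some legitimate tools need it — but it is exactly the capability the harvesting extensions described above depend on, so it deserves scrutiny.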
Conclusion
The discovery of a hidden market for AI chat logs is a stark reminder: digital privacy isn’t guaranteed by default. As AI becomes further embedded in personal and professional life, the cyber-economic incentives for harvesting and monetizing user data have grown.
Understanding these hidden markets, asking the right questions, and taking proactive steps can help users safeguard their conversations — and hold the broader AI ecosystem accountable for privacy protections.
🔗 Original article referenced:
Inside the Hidden Market Where Your ChatGPT and Gemini Chats Are Sold for Profit (CySecurity News) — https://www.cysecurity.news/2025/12/inside-hidden-market-where-your-chatgpt.html