Using ChatGPT as your therapist? Your personal data could be at risk.
A recent Harvard Business Review survey revealed something unexpected: The number one use of generative AI right now?
Therapy and companionship.
Really. Not writing code. Not summarising reports. But... someone to talk to.
It makes sense, though. We live in a world where we carry loneliness, anxiety, and stress in our pockets—so when a chatbot offers a listening ear 24/7, people are going to use it.
But if you're using AI this way (and many of us are), it's worth pausing for a moment.
Because there are real privacy implications.
You're handing over incredibly personal information—your thoughts, habits, relationships, maybe even your trauma—to companies whose main goal is to maximise shareholder returns.
Yes, most of them let you opt out of having your data used to train their models.
But your data is still being stored. Why? Because they need that data to make the AI seem more “personalised” to you.
So, what can you do if you care about privacy?
Here are your options:
Option 1: Run an Open Source LLM Locally
This is the most secure and private route.
You download a model like Mistral or Llama 3 onto your own machine and run it locally.
Pros:
- Your data never leaves your device.
- No tracking, no storage on remote servers.
Cons:
- Requires some tech knowledge.
- Still a bit fiddly, especially on mobile.
But if privacy matters most, this is the gold standard.
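To make that concrete, here's a minimal sketch of a local chat loop in Python. It assumes you've installed Ollama (one popular way to run open source models on your own hardware) and pulled a model such as Llama 3; the model name and prompt are just placeholders.

```python
# Minimal local chat sketch using the ollama Python package.
# Assumes Ollama is installed and running, and a model has been pulled,
# e.g. with `ollama pull llama3`. Nothing here leaves your machine.
import ollama

history = []  # the conversation lives only in local memory

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = ollama.chat(model="llama3", messages=history)
    reply = response["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat("I've had a stressful week. Can you help me unpack it?"))
```

Because the model runs on your own hardware, the conversation never touches a remote server.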
Option 2: Use a Hosted Open Source Model
If local hosting feels like too much, some services offer hosted versions of open source LLMs.
You don’t have to manage the infrastructure yourself, and you still avoid the big centralised AI providers like OpenAI and Anthropic.
Pros:
- Easier to set up than local.
- Often built with privacy-first users in mind.
Cons:
- Still hosted on someone else's servers, so there is some trust involved.
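If you go the hosted route, the setup is usually trivial. Many privacy-focused hosts expose an OpenAI-compatible API, so a sketch like the one below works with the standard openai Python client; the base URL, model name, and API key are placeholders rather than a recommendation of any particular provider.

```python
# Sketch of calling a hosted open source model over an OpenAI-compatible API.
# The endpoint, model name, and key below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-llm-host.com/v1",  # hypothetical provider endpoint
    api_key="YOUR_PROVIDER_API_KEY",
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # whichever open model the host offers
    messages=[{"role": "user", "content": "Help me think through a worry."}],
)
print(response.choices[0].message.content)
```

The trade-off is exactly the one above: convenient, but your words still pass through someone else's server.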
Option 3: Enterprise AI (Like Azure OpenAI)
Big companies that deal with sensitive data (banks, hospitals, law firms) use enterprise-level LLMs via services like Microsoft Azure OpenAI.
These versions offer strong privacy guarantees—your data isn’t used for training or shared externally.
Pros:
- High-level data protection.
- Well-documented compliance and governance.
Cons:
- Expensive.
- Designed for large organisations, not individuals.
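For completeness, here's roughly what talking to an enterprise deployment looks like, using the AzureOpenAI client from the openai Python library. The endpoint, key, deployment name, and API version are placeholders; in practice they come from your organisation's Azure resource.

```python
# Sketch of a call to an Azure OpenAI deployment.
# Endpoint, key, deployment name, and API version are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource-name.openai.azure.com",
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="your-gpt-deployment",  # the deployment name, not the raw model name
    messages=[{"role": "user", "content": "Summarise these session notes."}],
)
print(response.choices[0].message.content)
```

The code is almost identical to the consumer APIs; what changes is the contract behind it.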
What About Journaling Apps, Note-Takers, and Meeting Tools?
Most of these are just wrappers around the big-name AIs. They might look different, have a nicer UI, or offer features like summaries and mood tracking, but under the hood they're still sending your data to OpenAI's GPT models, Anthropic's Claude, or xAI's Grok.
That means you’re still handing over your private thoughts to the same centralised systems.
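To see why this matters, here's a hypothetical sketch of what such a wrapper typically does when you save an entry. It's illustrative only, not the code of any particular app; the model name and prompt are made up.

```python
# Hypothetical sketch of an "AI journaling" feature: the app forwards your
# entry to a big provider's API to generate the summary and mood label.
# Illustrative only; not the code of any real app.
from openai import OpenAI

client = OpenAI(api_key="THE_APP_DEVELOPERS_KEY")

def summarise_entry(journal_entry: str) -> str:
    # Your private journal text leaves your device at this point.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarise the user's journal entry and note their mood."},
            {"role": "user", "content": journal_entry},
        ],
    )
    return response.choices[0].message.content
```

The nicer interface doesn't change where the text ends up.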
So What’s the Takeaway?
- If you use popular AI services like ChatGPT or Claude, your data is being stored, even if it's not being used to train the models.
- You do have more private alternatives, like open source LLMs run locally, but they take a bit more effort.
This will get better. Privacy-focused AI is improving fast. But for now, awareness is your best defence.
If you're using AI for personal journaling or emotional support, just know where your data is going—and what your options are.