ChatGPT Is Not Privacy-Friendly – Moxie Marlinspike Offers a Solution
2026-01-19
The rapid adoption of AI chatbots has reshaped how people search for information, write content, and even think through personal problems.
However, this convenience comes with a growing concern: privacy. While platforms like ChatGPT offer impressive capabilities, questions about data collection, conversation storage, and user profiling have become harder to ignore.
As debates intensify around AI ethics and surveillance, one prominent figure from the privacy world is stepping in with an alternative.
Moxie Marlinspike, the creator of Signal, believes ChatGPT is fundamentally not privacy-friendly, and he offers a different path forward through a new AI tool called Confer AI.
Key Takeaways
- ChatGPT prioritizes performance over privacy. ChatGPT delivers powerful AI capabilities, but its centralized design requires user conversations to be processed and potentially stored on remote servers, making it not privacy-friendly for sensitive or confidential use cases.
- Moxie Marlinspike offers a privacy-first AI alternative. Through Confer AI, the Signal founder introduces an AI assistant built on end-to-end encryption and verifiable secure execution, ensuring that user conversations remain private—even from the service provider.
- Privacy-preserving AI is possible, but requires trade-offs. Confer AI demonstrates that AI tools can function without mass data collection, though this approach may limit rapid model improvement or personalization, highlighting an important shift toward ethical and user-controlled AI systems.
ChatGPT Is Not Privacy-Friendly
ChatGPT, like most mainstream AI assistants, operates on a centralized infrastructure. User prompts are sent to remote servers, processed, logged, and potentially retained for quality improvement, safety monitoring, or future model training.
Even when companies promise not to “store personal data,” the technical reality is that conversations must be processed in plaintext at some point to generate responses.
From a privacy perspective, this creates several issues. First, users have limited visibility into how long their data is stored and who can access it.
Second, AI conversations are often more personal than search queries; people ask about health, finances, relationships, and sensitive work matters.
Treating these interactions as disposable data points exposes users to unnecessary risk. Third, centralized AI systems are attractive targets for breaches, subpoenas, or misuse by insiders.
This does not mean ChatGPT is malicious. It means its architecture prioritizes performance and scale over privacy by design.
In that sense, the claim that ChatGPT is not privacy-friendly is not an accusation; it is a structural critique of how most AI systems are built today.
Moxie Marlinspike Offers Confer AI as an Alternative
The concern around AI privacy is exactly what motivated Moxie Marlinspike, widely known as the Signal founder, to build a new kind of assistant.
His solution is Confer AI, a privacy-first chatbot designed to minimize trust in the service operator itself.
Confer AI is built on a radically different philosophy: the AI provider should not be able to read your conversations.
Instead of collecting and storing user prompts, Confer uses end-to-end encryption and modern cryptographic techniques to ensure that only the user can access the content of their chats.
Even the company running the service cannot see what users are asking or how the AI responds.
Technically, Confer relies on secure computing environments and encrypted execution. Computation still happens on servers because advanced AI models require serious processing power, but those servers are locked down using cryptographic attestation.
This allows users to verify that the code running the AI has not been modified to spy on them. In simple terms, the system is designed so that trust is minimized and verification is maximized.
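The attestation flow described above can be sketched in miniature. This is not Confer AI's actual protocol, which is not public in detail; it is a toy illustration of the general idea, with an HMAC over a code measurement standing in for the vendor-signed attestation report a real trusted execution environment would produce. All names and values are illustrative.

```python
import hashlib
import hmac
import os

# The measurement of the audited inference code the user expects to be running.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-inference-code-v1").hexdigest()


def make_attestation(code_blob: bytes, signing_key: bytes) -> dict:
    """Server side: measure the running code and sign the measurement.
    In real systems the signature comes from hardware, not a shared key."""
    measurement = hashlib.sha256(code_blob).hexdigest()
    signature = hmac.new(signing_key, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}


def client_verifies(report: dict, signing_key: bytes) -> bool:
    """Client side: check the signature is genuine AND that the measured
    code matches what the user expects, before sending any prompt."""
    expected_sig = hmac.new(signing_key, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # report was forged or tampered with in transit
    return report["measurement"] == EXPECTED_MEASUREMENT


key = os.urandom(32)
good = make_attestation(b"audited-inference-code-v1", key)
bad = make_attestation(b"audited-inference-code-v1-with-logging", key)
print(client_verifies(good, key))  # True: code is unmodified, safe to send
print(client_verifies(bad, key))   # False: code was changed, refuse to send
```

The design point is that the client refuses to send anything until verification succeeds, so trust shifts from the operator's promises to a check the user's own software performs.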
This positions Confer AI as Moxie Marlinspike's alternative to ChatGPT. It does not aim to compete on flashy features or viral prompts. Instead, it competes on values: privacy, user control, and transparency.
Is There Any Privacy in the AI Era?

The rise of AI has forced a difficult question: Is privacy even realistic anymore? Many users have accepted data collection as the price of convenience.
However, history suggests that this trade-off is not inevitable. Secure messaging once faced the same skepticism, yet today, end-to-end encryption is widely accepted thanks to tools like Signal.
AI may be at a similar crossroads. Most current systems treat user data as fuel.
Privacy-focused alternatives like Confer challenge that assumption by proving that useful AI does not require mass data extraction. The implication is significant: AI tools can exist without becoming surveillance engines.
That said, privacy-first AI also comes with trade-offs. Models may improve more slowly without access to user conversations.
Features that rely on personalization may be limited. For some users, this is an acceptable cost. For others, such as journalists, researchers, activists, developers, or anyone handling sensitive information, it may be essential.
The broader issue is choice. Without alternatives, users are forced into ecosystems that do not align with their values.
Confer AI expands that choice and sets a benchmark for what ethical AI infrastructure could look like.
Final Note
The claim that ChatGPT is not privacy-friendly reflects a deeper truth about how most AI systems operate today. Centralized data collection, opaque retention policies, and unavoidable trust in providers are built into their design.
Moxie Marlinspike offers a solution not by tweaking policies, but by rethinking the architecture of AI itself.
Through Confer AI, he introduces a model where conversations remain private, verifiable, and user-controlled, much like secure messaging transformed communication a decade ago.
Whether Confer becomes mainstream or remains a niche tool, its existence sends a powerful message: privacy in the AI era is not impossible; it just requires different priorities.
As AI continues to shape daily life, tools like Confer challenge the industry to move beyond convenience alone and toward systems that respect the fundamental right to private thought.
FAQ
Is ChatGPT safe to use for private conversations?
ChatGPT is designed to be helpful and secure, but it is not built for end-to-end privacy. User conversations may be processed, stored, or reviewed for quality and safety purposes, which makes it unsuitable for highly sensitive or confidential discussions.
Why do people say ChatGPT is not privacy-friendly?
People consider ChatGPT not privacy-friendly because it operates on centralized servers where user inputs must be readable by the system to generate responses. This means conversations are not end-to-end encrypted and may be retained under certain conditions.
What is Confer AI and how does it work?
Confer AI is a privacy-focused AI assistant developed by Moxie Marlinspike, the Signal founder. It uses end-to-end encryption and secure execution environments so that only the user can access chat content—even the service operator cannot read the conversations.
How is Confer AI different from ChatGPT?
The key difference is architecture. ChatGPT prioritizes scale and performance, while Confer AI prioritizes privacy by design. Confer does not store readable chat logs and uses cryptographic verification to ensure conversations remain private.
Is there a truly private AI chatbot available today?
While no AI system is entirely risk-free, Confer AI is currently one of the most serious attempts at a truly private chatbot, as it minimizes data retention, limits operator access, and allows users to verify how the system runs.
Disclaimer: The views expressed belong exclusively to the author and do not reflect the views of this platform. This platform and its affiliates disclaim any responsibility for the accuracy or suitability of the information provided. It is for informational purposes only and not intended as financial or investment advice.