The Most PRIVATE Ways To Use AI

By Naomi Brockwell, Founder and Director of NBTV

AI is transforming the world at an unprecedented pace, becoming an essential part of our daily lives, even if we don’t fully realize it. From productivity boosters to personalized assistance, chatbots like ChatGPT, Bard, Perplexity, and Claude are giving us abilities that once felt out of reach: they help us generate content, write code, get instant advice, and much more.

The Double-Edged Sword

As amazing as these tools are, there are also privacy considerations we can’t ignore:

  • Data use for model training:
    Many chatbots use the data we provide for training, and that data becomes deeply integrated into their systems. Even though companies like OpenAI and Google claim to anonymize data, it doesn’t take much, just a couple of unique details, for you to become easily identifiable. And once your data is integrated into the model, there’s no simple way to remove it.

  • Data collection and storage:
    Companies store large datasets of user inputs, making these repositories tempting targets for hackers, data brokers, and law enforcement. These centralized databases are gold mines of personal information, available to anyone with the power or means to access them. The main risks include:

    • Hacking and data breaches

    • Third-party data sharing

    • Government access and subpoenas

    Do we really want our interactions with AI tools to be stored forever, vulnerable to misuse?

Solutions for Using AI Privately

The good news is that embracing AI tools doesn’t have to mean sacrificing privacy.

Locally-Hosted LLMs

The most private way to use an AI chatbot is to host it locally. Running models on your machine ensures that no data leaves your device, keeping your queries private and preventing them from being used for further training. In two weeks, we’ll release a tutorial on how to set this up—if it’s something you want to explore.
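If you want a sense of what that looks like before the tutorial drops, here’s a minimal sketch. It assumes you’ve installed Ollama (one popular local runner among several), pulled a model, and have it running locally; the model name and prompt below are placeholders.

    # Minimal sketch: query a locally hosted model through Ollama's local HTTP API.
    # Assumes Ollama is installed and running ("ollama serve") and that a model has
    # been pulled (e.g. "ollama pull llama3.2"). The request goes to localhost only,
    # so nothing leaves your machine.
    import requests

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",  # placeholder: use whichever model you pulled
            "prompt": "Explain what a reverse proxy does, in one paragraph.",
            "stream": False,      # return the full answer as a single JSON object
        },
        timeout=120,
    )
    print(response.json()["response"])  # the model's reply text

Because the endpoint is localhost, you can confirm with a network monitor that your prompts never touch the internet.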

Privacy-Focused Platforms

Brave’s Leo:

Leo AI is integrated into the Brave browser, allowing users to interact with an AI chatbot without installing extra apps or extensions. Leo not only provides real-time answers and content generation but also prioritizes privacy at every step.

  • No logging: Brave does not store or retain user data.

  • Reverse proxy server: Queries are passed through a proxy that strips IP addresses, ensuring the service cannot trace interactions back to you—even when using external models like those from Anthropic.

  • Local LLM compatibility: Brave allows users to connect Leo with locally-hosted models, letting you process queries on your own device while still benefiting from Leo’s AI capabilities during browsing. No data ever leaves your machine, giving you full control over your interactions.
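To make that last point concrete: Brave lets you point Leo at a model server running on your own machine (its “Bring your own model” option). The sketch below is an illustration under assumptions, not Brave documentation; it assumes Ollama is serving its OpenAI-compatible API locally and simply verifies that the endpoint responds before you configure Leo to use it.

    # Hypothetical check: confirm a local, OpenAI-compatible chat endpoint responds
    # before pointing a browser assistant like Leo at it. Assumes Ollama is running
    # locally and serving its OpenAI-compatible API under http://localhost:11434/v1.
    import requests

    endpoint = "http://localhost:11434/v1/chat/completions"
    payload = {
        "model": "llama3.2",  # placeholder: whichever local model you've pulled
        "messages": [{"role": "user", "content": "Say hello in five words."}],
    }
    reply = requests.post(endpoint, json=payload, timeout=120).json()
    print(reply["choices"][0]["message"]["content"])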

Venice.ai:

Venice.ai is an interface for using AI chatbots that emphasizes censorship resistance and decentralization. It provides users with access to uncensored AI models, allowing for real-time text, code, and image generation.

Think of Venice as the front-end user interface for accessing these tools: it acts as a passthrough service, keeping no logs on its servers and requiring no personal accounts or identifiers.

On the back end, Venice leverages GPU marketplaces like Akash to provide the computing power needed to run AI models. These marketplaces are like an Airbnb for GPUs: individuals rent out their hardware to process your queries. Keep in mind that because these backend providers control the machines hosting the models, they can see your prompts and make their own logging decisions. Your interactions remain anonymous through Venice’s proxy system, which strips your IP address (similar to Brave’s proxies), and no centralized service aggregates your prompts across sessions to build a profile on you. Still, if you want to use these models privately, be careful not to identify yourself within a session.
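To make the proxy idea behind both Brave’s and Venice’s approach concrete, here’s a toy relay in the same spirit: it forwards the prompt body to an upstream model API without copying across your IP address, cookies, or other identifying headers. It’s a sketch of the concept only, not either company’s implementation, and the upstream URL is a placeholder.

    # Toy illustration of an IP-stripping relay (not Brave's or Venice's actual code).
    # The client talks to the relay; the relay forwards only the prompt body upstream,
    # so the model provider sees the relay's address instead of the client's.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://model-provider.example/api/chat"  # placeholder upstream API

    class StrippingRelay(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            # Deliberately forward nothing except the JSON body: no client IP,
            # no cookies, no User-Agent, no account tied to the person asking.
            upstream_req = urllib.request.Request(
                UPSTREAM, data=body, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(upstream_req) as upstream_resp:
                answer = upstream_resp.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(answer)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), StrippingRelay).serve_forever()

The provider only ever sees the relay’s address, which is exactly why the one thing a proxy cannot protect is the content of the prompt itself; that part stays up to you.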

It’s worth noting that neither Brave nor Venice uses your prompts for training. Logging, however, still requires an element of trust: with Brave, you’re trusting its no-logging policy; with Venice, you’re trusting its no-logging policy plus the compute providers on the back end. Privacy for cloud-hosted AI isn’t perfect yet, because truly private inference remains an unsolved problem, but as far as cloud-based AI chatbots go, Brave and Venice are two of the best options available.

Balancing Privacy and Performance

If absolute privacy is your top priority, hosting models locally is your best option. However, this comes with trade-offs—locally-hosted models may lack the processing power of those available on cloud platforms. Using Brave and Venice allows access to more advanced models while still providing privacy protections.

Privacy Best Practices for AI Chatbots

If you decide you want to take advantage of these more powerful models on third-party servers but logging is a big concern for you, you can still use these systems comfortably; you just need to employ some best practices.

  • Local Models: Ask sensitive questions freely since all data stays on your machine.

  • Brave-Hosted Models: Brave’s no-logging policy provides strong privacy, so if you trust Brave, you can comfortably use their AI tools. Personally, I have high trust in them, but you’ll have to make your own decision.

  • Third-Party Models: When using models where you’re unsure who is processing your query, avoid sharing identifiers like names or addresses (a simple scrubbing helper is sketched after this list). If your IP is being stripped first, and your prompts across sessions aren’t being aggregated by a single entity, you can feel comfortable asking sensitive questions that don’t directly identify you.

  • Centralized Platforms (e.g., ChatGPT): Be cautious about what you share, as these platforms build detailed profiles that may be accessible or used in unexpected ways in the future.
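For the third-party case above, one simple habit is to scrub obvious identifiers before a prompt ever leaves your machine. The helper below is a hypothetical sketch using basic pattern matching; it catches mechanical identifiers like email addresses and phone numbers, but it won’t catch names, street addresses, or context that identifies you indirectly.

    # Hypothetical helper: redact obvious identifiers (emails, phone numbers)
    # from a prompt before sending it to a third-party model. Pattern matching
    # like this is a convenience, not a guarantee; it won't catch names,
    # addresses, or context that identifies you indirectly.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def scrub(prompt: str) -> str:
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} removed]", prompt)
        return prompt

    print(scrub("Email me at jane.doe@example.com or call +1 555-010-9999."))
    # -> "Email me at [email removed] or call [phone removed]."

Run something like this over anything you’re about to paste into a chatbot you don’t fully trust.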

The Power of Choice in the AI Era

In a world where data is often harvested by default, it’s easy to feel powerless. However, tools like Brave and Venice give us alternatives that prioritize privacy. For those willing to take it a step further, hosting models locally offers the highest level of control over personal data. We’ll dive into this in our upcoming video tutorial in two weeks, created in collaboration with The Hated One.

In this era of AI-driven data collection, it’s more important than ever to be thoughtful about what we share online.

Privacy doesn’t mean rejecting technology—it just means making smarter choices.

A version of this article first appeared in video form on NBTV. NBTV is a non-profit educational platform that teaches people how to reclaim control of their lives in the digital age. It gives people the tools they need to take back their privacy, money, and free online expression.

Learn more at NBTV.media
