privacy or bust: why you should move to lumo
the promise that slipped…again
When I first started using large language models, Claude was the only one that felt reliable enough to act like a pseudo-employee. I used it to code, write documentation, and handle repetitive operations. That one tool let me run multiple ventures alone. I built my workflows around it.
Anthropic was founded in 2021 by people who claimed they wanted something different. Dario Amodei had been Vice President of Research at OpenAI, where he helped create GPT-2 and GPT-3. He left because he believed AI was moving too fast without safety built in. The idea was to build AI that was steerable, interpretable, robust, and safe. This profile and this Wikipedia entry describe that origin story. Anthropic was even structured as a Public Benefit Corporation and funded by some of the largest names in technology. That mattered. It suggested they were accountable to the community that believed in them as much as to their investors.
That mission collapsed. The price hikes, hidden caps, forced migration to expensive tiers, and the terms-of-service rewrite that allowed Anthropic to read user chat histories are a betrayal of the community that supported them and a violation of the mission they claimed as a PBC. I wrote about that here, and the terms change itself is covered in this report. I had already been looking for alternatives since the pricing rug a month ago. Today, when the terms change went live, I cancelled my account.
If a Public Benefit Corporation can rug its users this openly, the model itself may be broken. It is time to ask whether the legal structure of “ethical AI” has any meaning when the mission can be discarded at will.
looking for an alternative
I spent the last month testing everything I could. Hosted open-source models looked promising, but most of them carried the same risks as Claude and ChatGPT: unclear privacy, shifting terms, and unpredictable costs. Running models like Llama or Mistral locally worked for experiments, but not for daily scale unless you are willing to manage hardware, updates, and uptime yourself.
I initially ignored Lumo. It was clunky and missing features like web search, so I assumed it was not ready. Then Proton sent another announcement email about it, and my father, who is deeply privacy focused, asked if I had tried it. That was enough to make me take a second look.
I was surprised. The user experience still has rough edges, and their cat mascot is funny, but the architecture is strong. Proton has earned my trust with Mail, VPN, Drive, and Pass. When Proton says end-to-end encryption, I believe it.
privacy that holds
Lumo stores every prompt, response, and file with zero-access encryption: conversations are encrypted on your device with keys that only you hold, so even Proton cannot read your saved chat history. No policy update can undo that.
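The principle is easy to illustrate. Here is a toy Python sketch of the client-side idea, and only the idea: the key is generated on your device and never transmitted, so the server only ever sees ciphertext. This is not Proton's code, and the XOR keystream below is a deliberately naive stand-in; real end-to-end encryption uses authenticated ciphers like AES-GCM with proper key management.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only. Real E2EE uses
    # authenticated encryption; the point here is the data flow,
    # not the algorithm.
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(k ^ b for k, b in zip(key, data))

# The key lives on *your* device and is never sent to the server.
key = secrets.token_bytes(64)
prompt = b"draft the Q3 report"

ciphertext = xor_cipher(key, prompt)       # all the server ever stores
recovered = xor_cipher(key, ciphertext)    # XOR is its own inverse

assert ciphertext != prompt   # the server sees only noise
assert recovered == prompt    # only the key holder can read it
```

Because decryption requires a key that never leaves the client, no later change to the server's terms of service can retroactively expose the stored data. That is the structural difference between a promise and a guarantee.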
Lumo's client code is open source. Anyone can audit it. Proton states that they do not log conversations, do not train on user data, and do not sell usage statistics. OpenAI and Anthropic do the opposite, retaining data and reserving the right to reuse it.
At this point promises are worthless. Proof is the only thing that matters. Proton delivers proof.
pricing you can trust
I began searching for alternatives when Anthropic’s pricing rug landed. Today’s terms change was the final straw. The speed of that shift shows how fragile trust becomes when pricing and policy can be dictated unilaterally.
Proton has been consistent for a decade. Proton Mail has never turned into an ad-driven inbox. Proton VPN has never pivoted to selling user activity. Proton Drive has never locked existing files behind a paywall. The company has held the same line since the beginning. That track record is why I trust Lumo’s subscription model.
If Proton ever betrayed that trust, the reputational damage would outweigh any revenue gain. They have more to lose by breaking their word than they could possibly earn. That pressure is what makes their pricing stable. Stability is about whether you believe the company will honor the deal in a year. With Proton, I do.
the ecosystem
The Proton ecosystem is what makes this work. Apple built its reputation on seamless integration across devices. Proton has done the same across privacy tools.
Mail, Pass, VPN, Drive, Calendar, and now Lumo connect under one account. When I generate code in Lumo, save it in Drive, and share notes over Mail, the entire flow happens inside a single encrypted environment. That improves convenience and reduces risk. There are fewer vendors, fewer policies to monitor, and fewer points of failure where data can leak.
The ecosystem matters because every tool shares the same design principle: privacy first.
open source is the real answer
The long-term answer is open-source AI. Models like Llama, Mistral, Falcon, Mixtral, and even DeepSeek show what is possible. Open-source models can be audited, forked, and improved by the community, and once the weights are published, no one can suddenly change the terms or rug you. That is the safeguard.
This is where Anthropic looks the worst. They have never open-sourced Claude. That would be one thing for a normal startup chasing profit. But Anthropic is the only one of these companies incorporated as a Public Benefit Corporation. They are the ones who put “safety” and “accountability” in their charter. Yet Meta, Mistral, TII, DeepSeek, and even OpenAI have all published models while Anthropic has kept everything locked away.
Dario Amodei left OpenAI claiming the moral high ground. The irony is that Sam Altman has now released an open-source model (GPT-OSS), and DeepSeek has opened theirs, while Anthropic, the so-called public-benefit company, has done nothing. If anyone in this story deserves to be called the bigger asshole, it is Dario.
Open-source AI will eventually win. In the meantime, running these models yourself is still out of reach for most people. It requires hardware, bandwidth, and constant upkeep. This is where Lumo fits. Proton has already open-sourced its client code, and they have the track record to prove they mean it. Lumo gives everyday users privacy and stability today, while the open-source ecosystem continues to mature.
where this leaves us
The Claude I and many others relied on is dead. OpenAI has ceilings that undermine trust. Other providers have shown, again and again, that they will choose profit over privacy and over their users. If we continue paying them, nothing changes.
The only way to shift this industry is to support services that prioritize privacy from the start. I have moved my workflows to Lumo and I support open-source models. If enough of us do the same, the industry may follow. At minimum, alternatives will have enough support to compete.
We need AI that cannot leak by design. And cannot rug by design. Lumo is that. At least for now…
Take a look and see for yourself. You can read more about it here.
contact
Feel free to send through a message; you can find my links here.
As always, 'twas nice to write for you, dear reader. Until next time.