Article: https://proton.me/blog/deepseek
Calls it “Deepsneak”, failing to make clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.
I can’t speak for Proton, but over the last couple of weeks some very clear biases have been coming out.
I don’t see how what they wrote is controversial, unless you’re a tankie.
Given that you can download Deepseek, customize it, and run it offline in your own secure environment, it is actually almost irrelevant how people feel about China. None of that data goes back to them.
That’s why I find all the “it comes from China, therefore it is a trap” rhetoric to be so annoying, and frankly dangerous for international relations.
Compare this to OpenAI, where your only option is the US-hosted version, under the jurisdiction of a president who has no care for privacy protection.
TBF you almost certainly can’t run R1 itself. The model is way too big and compute-intensive for a typical system. You can only run the distilled versions, which are definitely a bit worse in performance.
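For a rough sense of scale, here’s a back-of-envelope calculation, assuming the ~671B total parameter count from the R1 release and counting weights only (no KV cache or runtime overhead):

```python
# Memory needed just to hold R1's weights at various precisions.
# Assumes 671B total parameters; real usage is higher once you add
# KV cache and runtime overhead.
params = 671e9
for precision, bytes_per_param in [("FP16", 2), ("FP8", 1), ("4-bit", 0.5)]:
    print(f"{precision}: ~{params * bytes_per_param / 1e9:,.0f} GB")
# FP16: ~1,342 GB   FP8: ~671 GB   4-bit: ~336 GB
```

Even aggressively quantized, you need hundreds of gigabytes of memory, versus the 1.5B–70B distills that fit on a single machine.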
Lots of people (if not most people) are using the service hosted by DeepSeek themselves, as evidenced by DeepSeek’s ranking on both the iOS App Store and the Google Play Store.
Yeah, the article mostly makes legit points: if you’re contacting the chatbot hosted in China, it is harvesting your data. Just like if you contact OpenAI or Copilot or Claude or Gemini, they’re all collecting all of your data.
I do find it somewhat strange that they only talk about the DeepSeek-hosted models.
It’s absolutely trivial to just download the models and run them locally yourself, and then you’re not giving any data back to them. I would think Proton would be all over that for a privacy scenario.
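To illustrate, here’s a minimal sketch using Hugging Face transformers (tools like Ollama or llama.cpp make this even simpler). The model ID is one of the official R1 distills; after the one-time weight download from Hugging Face, inference runs entirely on your own hardware:

```python
# Minimal sketch: run a distilled DeepSeek-R1 model locally.
# No prompt or output ever leaves your machine; the weights are
# downloaded once from Hugging Face and cached locally.
# Requires: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # official distill
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the privacy benefits of running a model locally."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```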
It might be trivial to a tech-savvy audience, but considering how popular ChatGPT itself is and considering DeepSeek’s ranking on the Play and iOS App Stores, I’d honestly guess most people are using DeepSeek’s servers. Plus, you’d be surprised how many people naturally trust the service more after hearing that the company open-sourced the models. Accordingly, I don’t think it’s unreasonable for Proton to focus on the service rather than the local models here.
I’d also note that people who want the highest-quality responses aren’t using a local model, as anything you can run locally is a distilled version that is significantly smaller (at a small but non-trivial overall performance cost).