

The platform where bot farms are still effective
Spoken like they’re no longer effective on the other platforms.
Oh, no! Someone’s publishing open models better than our closed ones! How are we going to make profit now? Do something! Quick!!!
How does it compare to NixOS?
I’d be very skeptical of claims that Debian maintainers actually audit the code of each piece of software they package. Perhaps they do some cursory review, but actually scrutinizing every line for hidden backdoors is just not feasible.
Is that something new? As in, has WaPo not been willing to go after Meta in a similar manner before?
So, essentially, they wanted to enter the Chinese market so much that they were even willing to comply with the local rules and regulations!
This is such a big secret, we really needed a whistleblower to tell us that!
It works fine for me on Hyprland.
I don’t think any kind of “poisoning” actually works. It’s well known by now that data quality is more important than data quantity, so nobody just feeds training data in indiscriminately. At best it would hamper some FOSS AI researchers that don’t have the resources to curate a dataset.
What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts in the name mean. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason”; they do not have an internal monologue.
You got that backwards. They’re other models - qwen or llama - fine-tuned on synthetic data generated by Deepseek-R1. Specifically, reasoning data, so that they can learn some of its reasoning ability.
But the base model - and so the base capability there - is that of the corresponding qwen or llama model. Calling them “Deepseek-R1-something” doesn’t change what they fundamentally are, it’s just marketing.
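The distillation workflow described above can be sketched roughly like this: sample reasoning traces from the teacher, then build a supervised fine-tuning dataset for the student base model. This is only a toy illustration; the `teacher_generate` stub and the prompt/completion record format are my assumptions, not DeepSeek’s actual pipeline.

```python
import json

def teacher_generate(prompt: str) -> str:
    """Stand-in for sampling the teacher model (DeepSeek-R1 in this case).
    A real pipeline would call the model; this stub just mimics the shape
    of a reasoning trace followed by a final answer."""
    return f"<think>step-by-step reasoning about: {prompt}</think>\nfinal answer"

def build_sft_record(prompt: str) -> dict:
    """One supervised fine-tuning example: the student (a Qwen or Llama
    base model) is trained to imitate the teacher's reasoning trace."""
    return {"prompt": prompt, "completion": teacher_generate(prompt)}

prompts = ["What is 17 * 24?", "Why is the sky blue?"]
dataset = [build_sft_record(p) for p in prompts]

# Write the distilled dataset in the JSONL shape typical for SFT tooling.
with open("distill_sft.jsonl", "w") as f:
    for rec in dataset:
        f.write(json.dumps(rec) + "\n")
```

The key point the sketch makes: the student’s weights start from the qwen/llama base checkpoint, and only the fine-tuning data comes from R1.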
There are already other providers like Deepinfra offering DeepSeek. So while the average person (like me) couldn’t run it themselves, they do have alternative options.
A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and cost “only” ~$10k rather than $100k+…
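The reason memory bandwidth matters is that decode speed is roughly bandwidth divided by bytes read per token. A back-of-the-envelope sketch, where the ~37B active-parameter count (R1 is a mixture-of-experts model), 8-bit quantization, and 400 GB/s server memory bandwidth are all assumed round numbers for illustration:

```python
# Rough decode-speed estimate for CPU inference (all numbers are
# illustrative assumptions, not measured figures).
active_params = 37e9    # parameters touched per generated token (MoE)
bytes_per_param = 1     # 8-bit quantized weights
bandwidth = 400e9       # usable memory bandwidth in bytes/second

bytes_per_token = active_params * bytes_per_param
tokens_per_second = bandwidth / bytes_per_token
print(round(tokens_per_second, 1))  # ~10.8 tokens/s under these assumptions
```

So a many-memory-channel server board lands in the usable-but-slow range, which is why it’s the budget option.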
To be fair, most people can’t actually self-host Deepseek, but there already are other providers offering API access to it.
The point of it being open is that people can remove any censorship built into it.
The particular AI model this article is talking about is actually openly published for anyone to freely use or modify (fine-tune). There is a barrier in that it requires several hundred gigs of RAM to run, but it is public.
Now, if only the article explained how that killing was related to TikTok. The only relevant thing I saw was,
had its roots in a confrontation on social media.
It says “social media”, not “TikTok”, though.
I’m confused, isn’t Fedora atomic immutable? Shouldn’t that make it stateless automatically?
Wary reader, learn from my cautionary tale
I’m not sure what to learn exactly. I don’t get what went wrong or why, just that the files got deleted somehow…
Yes, almost like they intentionally waited until Trump’s election.
Was broken last I checked - as in, would regularly just crash.