

If only your distro would support more than one desktop.
I will say though, Gnome dropping support for other window managers does suck. That is them removing infrastructure that non- (or semi-) Gnome users depend on.
300i https://www.bilibili.com/video/BV15NKJzVEuU/
M4 https://github.com/itsmostafa/inference-speed-tests
It’s comparable to an M4: at most a single order of magnitude faster than a ~1000 euro 9960X, not multiple orders. And if we’re considering the option of buying used, since this is a brand-new product that’s less available in Western markets, a CPU-only option with an EPYC and more RAM will probably be a better local LLM computer for the cost of two of these plus a basic computer.
That’s still faster than your expensive RGB XMP gamer RAM DDR5 CPU-only system, and, depending on what you’re running, you can saturate the buses independently, doubling the speed and matching a 5060 or thereabouts. I disagree that you can categorise the speed as negating the capacity, as they’re different axes. You can run bigger models on this. Smaller models will run faster on a cheaper Nvidia. You aren’t getting 5080 performance and 6x the RAM for the same price, but I don’t think that’s a realistic ask either.
I’m not saying you can deploy these in place of Nvidia cards where the tooling is built with Nvidia in mind. I’m saying that if you’re writing the code yourself, you can do machine learning projects without CUDA, including training.
I agree with your conclusion, but these are LPDDR4X, not DDR4 SDRAM. It’s significantly faster. The lack of fans should also be seen as a positive: it would cost them very little to add visible active cooling to a 1000+ euro product, so leaving it out suggests they’re confident the cards aren’t going to melt.
You can run llama.cpp on CPU. LLM inference doesn’t need any features that only GPUs have; that’s why it’s possible to build even simpler NPUs that can still run the same models. GPUs just tend to be faster. If the GPU in question is not faster than an equally priced CPU, you should use the CPU (better OS support).
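For what it’s worth, a minimal CPU-only sketch using the llama-cpp-python bindings (my choice of wrapper here, not something from the thread; the model path is a placeholder for any GGUF file):

```python
# CPU-only LLM inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder; any GGUF-quantised model works
    n_gpu_layers=0,             # 0 = keep every layer on the CPU
    n_threads=8,                # tune to your physical core count
)

out = llm("Explain KV caching in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```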
Edit: I looked at a bunch of real-world prices and benchmarks, and read the manual from Huawei, and my new conclusion is that this is the best product on the market if you want to run a model at modest speed that doesn’t fit in 32GB but does in 96GB. Running multiple in parallel seems to range from unsupported to working poorly, so you should only expect to use one.
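To give a sense of which models land in that 32GB-to-96GB window, a rough back-of-the-envelope (illustrative numbers, ignoring KV cache and runtime overhead):

```python
# Approximate quantised model size: parameters × bits per weight ÷ 8 bytes.
# Illustrative only; real memory use adds KV cache and runtime overhead.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # the 1e9s for params and GB cancel

for params, bits in [(32, 4.5), (70, 4.5), (70, 8.0)]:
    print(f"{params}B at ~{bits} bpw: ~{approx_size_gb(params, bits):.0f} GB")
# A ~70B model at 4-5 bpw (~40 GB) misses 32 GB but fits easily in 96 GB.
```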
Original rest of the comment, written under the assumption that this was slower than it is but had better drivers:
The only benefit to this product over CPU is that you can slot multiple of them and they parallelise without needing to coordinate anything with the OS. It’s also a very linear cost increase as long as you have the PCIe lanes for it. For a home user with enough money for one or two of these, they would be much better served spending the money on a fast CPU and 256GB system RAM.
If not AI, then what use case do you think this serves better?
CUDA is not equivalent to AI training. Nvidia offers useful developer tools for using their hardware, but you don’t have to use them. You can train on any GPU or even CPU. The projects you’ve looked at (?) just chose to use CUDA because it was the best fit for what hardware they had on hand, and they were able to tolerate the vendor lock-in.
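As a toy demonstration that training runs fine without CUDA (a minimal PyTorch sketch on synthetic data, not any particular project’s code):

```python
# Toy training loop on CPU with PyTorch: no CUDA anywhere.
import torch
import torch.nn as nn

device = torch.device("cpu")  # the same loop works on ROCm, MPS, etc.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(256, 10, device=device)  # synthetic inputs
y = x.sum(dim=1, keepdim=True)           # synthetic regression target

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```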
What are Steam cards? It says in the article that debit cards are valid.
Top paragraph: “According to reports, debit cards are acceptable too.”
It says in the article (/the linked Steam page) that as long as you have ever made a purchase, you’re assumed to be over 18, since being over 18 is a requirement for having a card. The third-party verification is for sites that only ask for ID or a vibes-check photo of your face.
Could you buy premium games from Steam in the UK without a debit card or credit card before this change?
Does it have any sort of on-board NPU to make it AI-oriented?
Bruv, OP asks ‘What’s your stance on Genocide?’ and then refers to Ukraine and China rather than Israel, which is bad enough, but then the PieFed guy replies that ‘they ban speech which minimises atrocities committed by Hamas in Gaza’! Sheesh.
They’re probably tagged outside the image. If you rely on watermarks, then someone needs to spot them with their eyes.
Unalive started being widely used around 2020-2021.
Very good
Only because Google doesn’t index Chinese sites =P Deepseek had an access control bug when it first launched and Qwen is owned by Jack Ma.
Apple allows you to offer better deals on other platforms. Customers are allowed to buy on other platforms, but sellers aren’t allowed to give you any reason to. Even in cases where those other platforms take a smaller cut, the developer has to artificially raise the price elsewhere until Steam is the cheapest platform, if they want to remain on Steam.
GPUs being able to spy on you is a problem, and Linux and other OSs should work on ways to minimise this possibility. But specifically worrying about them ratting you out to the CPC is, in my opinion, much less serious than spying on you for commercial interests. Ultimately, it’s possible to avoid visiting China, including Chinese websites, if you aren’t Chinese.
You could watch two YouTube films at once. (No but seriously, 2Mb/s is too low even for just YouTube. YouTube recommends 20Mb/s, and that’s probably assuming 30 fps, so you probably actually want double that or more. https://support.google.com/youtube/answer/78358)
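The arithmetic behind that parenthetical, assuming the 20Mb/s figure is for 4K at 30 fps and that required bitrate scales roughly linearly with frame rate:

```python
# Rough bandwidth estimate. Assumptions: YouTube's 20 Mb/s recommendation
# covers 4K at 30 fps, and bitrate scales roughly linearly with frame rate.
base = 20                      # Mb/s, from the linked support page
per_stream_60fps = base * 2    # ~40 Mb/s for a single 60 fps stream
two_streams = per_stream_60fps * 2
print(per_stream_60fps, two_streams)  # 40 Mb/s per stream, 80 Mb/s for two
```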
Don’t you have that backwards? Without TSMC’s outstanding technology, the island’s value decreases, both for China and for the USA, which lowers tensions. Conventional wisdom is that reduced tensions also reduce the risk of war.