Garuda for me. The reasons are similar; just replace some optimization with some convenience. It’s a bit garish by default but pleasant to use.
Flatpak has its benefits, but there are tradeoffs as well. I think it makes a lot of sense for proprietary software.
For everything else I do prefer native packages since they have fewer issues with interop. The space efficiency isn’t even that important to me; even if space issues should arise, those are relatively easy to work around. But if your password manager can’t talk to your browser because the security model has no solution for safe arbitrary IPC, you’re SOL.
Or Garuda. Sure, the theme it applies to KDE by default is pretty garish but nothing keeps you from just going to System Settings and setting a different theme. Other than that it’s basically just Arch with a bunch of stuff preinstalled and some convenience scripts.
“You finished a computer game, Atticus.”
The truth was a burning green crack through my brain.
Credits scrolling by, a reminder of the talent behind a just-finished journey. The feeling of triumph, slowly replaced by the creeping grayness of ordinary life.
I had finished a computer game. Funny as hell, it was the most horrible thing I could think of.
Hoo boy, you weren’t kidding. I find it amazing how quickly this went from “the kernel team is enforcing sanctions” to an unfriendly abstract debate about the definition of liberalism. I shouldn’t be, really, but I still am.
Oh yeah, the equation completely changes for the cloud. I’m only familiar with local usage where you can’t easily scale out of your resource constraints (and into budgetary ones). It’s certainly easier to pivot to a different vendor/ecosystem locally.
By the way, AMD does have one additional edge locally: They tend to put more RAM into consumer GPUs at a comparable price point – for example, the 7900 XTX competes with the 4080 on price but has as much memory as a 4090. In systems with one or few GPUs (like a hobbyist mixed-use machine) those few extra gigabytes can make a real difference. Of course this leads to a trade-off between Nvidia’s superior speed and AMD’s superior capacity.
These days ROCm support is more common than a few years ago so you’re no longer entirely dependent on CUDA for machine learning. (Although I wish fewer tools required non-CUDA users to manually install Torch in their venv because the auto-installer assumes CUDA. At least take a parameter or something if you don’t want to implement autodetection.)
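For what it’s worth, even a minimal check would help; here’s a sketch, assuming a reasonably recent PyTorch build is already in the venv (the attribute names are standard torch metadata, the helper name is mine):

    # Minimal backend-detection sketch, assuming a recent PyTorch build is installed.
    # ROCm builds expose torch.version.hip; CUDA builds expose torch.version.cuda.
    import torch

    def installed_backend() -> str:
        if getattr(torch.version, "hip", None):
            return f"ROCm {torch.version.hip}"
        if getattr(torch.version, "cuda", None):
            return f"CUDA {torch.version.cuda}"
        return "CPU-only build"

    print(installed_backend())

A setup script could use something like this to notice that a ROCm build is already present instead of clobbering it with the CUDA one.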
Nvidia’s Linux drivers generally are a bit behind AMD’s; e.g. driver versions before 555 tended not to play well with Wayland.
Also, Nvidia’s drivers tend not to give any meaningful information in case of a problem. There’s typically just an error code for “the driver has crashed”, no matter what reason it crashed for.
Personal anecdote for the last one: I had a wonky 4080 and tracing the problem to the card took months because the log (both on Linux and Windows) didn’t contain error information beyond “something bad happened” and the behavior had dozens of possible causes, ranging from “the 4080 is unstable if you use XMP on some mainboards” through “some BIOS setting might need to be changed” and “sometimes the card doesn’t like a specific CPU/PSU/RAM/mainboard” to “it’s a manufacturing defect”.
Sure, manufacturing defects can happen to anyone; I can’t fault Nvidia for that. But the combination of useless logs and 4000-series cards having so many things they can possibly (but rarely) get hung up on made error diagnosis incredibly painful. I finally just bought a 7900 XTX instead. It’s slower but I like the driver better.
Speak for yourself. I’m going to migrate all of my 22-bit RSA keys to a longer key length. And not 24 bits, either, given that they’re probably working on a bigger quantum computer already. I gotta go so long that no computer can ever crack it.
64-bit RSA will surely be secure for the foreseeable future, cost be damned.
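(For anyone who wants to see just how “secure” that is, a quick sketch, assuming sympy is available; the primes are generated on the spot, not anyone’s actual key.)

    # Tongue-in-cheek demo: a 64-bit modulus built from two random 32-bit primes
    # factors essentially instantly on any laptop, no quantum computer required.
    import sympy

    p = sympy.randprime(2**31, 2**32)  # one random 32-bit prime
    q = sympy.randprime(2**31, 2**32)  # another one
    n = p * q                          # our "unbreakable" ~64-bit RSA modulus
    print(n, "=", sympy.factorint(n))  # recovers p and q in well under a second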
True, although that has happened with F/OSS as well (like with xz or the couple times people put Bitcoin miners into npm packages). In either case it’s a lot less likely than the software simply ceasing to be supported, becoming gradually incompatible with newer systems, and rotting away.
Except, of course, that I can pick up the decade-old corpse of an open source project and try to make it work on modern systems, despite how painful it is to try to get a JavaFX application written for Java 7 and an ancient version of Gradle to even compile with a recent JDK. (And then finally give up and just run the last Windows release with its bundled JRE in Wine. But in theory I could’ve made it work!)
Note that this specifically talks about proprietary platforms. Locally-run proprietary freeware has entirely different potential issues, mostly centered on the developer ceasing to maintain it. Locally-run F/OSS has similar issues, actually, but they’re lessened by the fact that someone might later pick up the project and continue it.
Admittedly, platforms are very common these days because the web is an easily accessible cross-platform GUI toolkit and SaaS is more easily monetized.
True. Just this weekend I spent far too much time trying to get a printer to work again on Windows after its IP address got changed. In the end Windows refused to talk to the printer unless I removed and then readded the device from the Settings app, which prompted a reinstallation of the device driver. No, just changing the IP address in the device settings wasn’t enough; Windows insisted on the driver being reinstalled.
Linux didn’t need reconfiguration; it just autodetected that the printer had moved.
I’m not saying that Linux is without issues, not by far. But Windows has never been terribly “it just works” for me either. The closest to “it just works” was (aptly) OS X somewhere around Snow Leopard.
Honestly, it’s still the F310 for me. I’ve had mine since the early 2010s and it’s still working perfectly. Those things are built like tanks, and between XInput and DirectInput they’re compatible with just about any PC game of the past few decades, no extra software required. Also, they’re dirt cheap.
Honorable mention to the F710, the wireless version. While Windows 10’s USB stack unfortunately broke compatibility with it (causing randomly dropped inputs), Linux does not have that problem.
In my experience rear-mounted sensors are the most accurate, closely followed by under-screen sensors. Side-mounted sensors are utter garbage.
Accuracy isn’t even that much of an issue; it’s that the side-mounted ones are far too easy to trigger accidentally just by handling the phone. I can’t count the number of times my last two phones told me I had three incorrect fingerprint attempts after I had just pulled them out of my pocket.
Then I got a Pixel, and now I have no such issues and virtually perfect accuracy. Same on a Samsung tablet. Same on an old phone where the rear-mounted power button doubled as a full-size sensor.
Basically, I’m perfectly happy with any front- or rear-mounted full-size sensor. Those tiny side-mounted ones suck.
Oh, right. Fast Boot. I forgot about that bundle of joy.
But that wasn’t the only instance of an NTFS volume suddenly being broken. Another favorite was when I shrank a volume on one disk from Linux (and then remembered that Windows would have done it better), rebooted to have it fixed, and Windows proceeded to repair a volume on a different disk.
NTFS feels rock solid if you use only Windows and extremely janky if you dual-boot. Linux currently can’t really fix NTFS volumes and thus won’t mount them if they’re inconsistent.
As it happens, they’re inconsistent all the time. I’ve had an NTFS volume become dirty after booting into Windows and then shutting down. Not a problem for Windows but Linux wouldn’t touch the volume until I’d booted into Windows at least once.
I finally decided to use a storage upgrade to move most drives to Btrfs, save for the Windows system volume and a shared data partition that’s now on exFAT because it’s good enough for that purpose.
I’m not sure about the SSD. Has QLC substantially improved since hitting the market? If not, I’d recommend going with something TLC-based.
I gotta be honest, I haven’t used a dedicated sound card since the Vista/7 era when EAX stopped being a thing and onboard sound could handle 5.1 output just fine. The last one I had was a SoundBlaster Audigy.
These days the main use for a dedicated sound interface is when you need something like XLR in/out, and then you’ll probably go with something USB.
Port 220.
IRQ 5, port 220h, DMA 1 was what I used for my SoundBlaster 2.
Later I used IRQ 5, port 220h, DMA 1, high DMA 5 for my SoundBlaster 16.
Mind you, the real winner is of course Android. It has a consistent, easy-to-learn interface and a wide range of applications that integrate nicely.
And we don’t need to speculate; it has already won and is the true face of Linux for the masses. Plenty of young people don’t even own traditional computers anymore and do everything on their smartphones or tablets.
And that’s why this entire discussion is really just a form of fan wank; we don’t need to find a unified UI for Linux because it has already been found and has a massive market share. You may not like it but this is what peak performance looks like.
Everything else can be as complicated, janky, or exotic as it wants because it doesn’t matter.
Ah, so they actually got that implemented. Nice.