

I feel like I did at one point, but I should probably try again
Yeah I’m not super surprised… It used to work well when I bought it back in '17 but it’s become worse and worse with updates.
I’m not a home theater power user, but this is good info to make sure my setup is future proof for when I finally get a new TV. All these different standards get really confusing.
Yeah, this tracks. I don’t understand why people recommend Debian so much, especially to new users. Distros that update more regularly, like Mint or Fedora (for non-Nvidia users), are much better options.
KDE 6 has been rock solid for me, I haven’t had any issues with it yet
Removing 3rd party kernel access will probably also make cheating harder. Kernel anticheat is considered necessary in large part because cheat software uses exploits in the 3rd party extension system to gain kernel privileges itself and evade user-mode anticheat.
Image display is an important feature for me. If konsole supported it, I’d just use that. If I’m on a gnome system I’ll pretty much always change the terminal because gnome terminal has a lot of issues with font rendering that I find annoying
I used to prefer GNOME before the KDE 6 update due to the rough edges in KDE. After KDE 6 came out I tried it again, and it’s incredible. The team has spent a lot of time on polish for this major release, and it allows KDE’s suite of more fully featured applications to shine. GNOME apps like gedit, nautilus, and gnome terminal tend to provide the minimum level of functionality, whereas KDE’s applications feel like they’re trying to work for power users; Kate goes as far as supporting LSP for code autocompletion. KDE’s desktop is much more customizable as well, so you don’t really need extensions to get the functionality you’d be looking for in GNOME, since things like the application launcher are built in. KDE Connect is a really useful application you can install on your phone to get file transfers, notification sharing, and more between your phone and computer while connected to the same local network. Performance-wise they seem pretty equal, even on older hardware, but KDE might have a bit of an edge in RAM usage; YMMV depending on how you customize the desktop. The one thing I miss about GNOME is their “start menu” experience; I haven’t found a way to replicate that in KDE, but I haven’t looked very hard either. Overall I wouldn’t hesitate to recommend KDE, and Plasma 6 makes me actually feel like the Linux desktop is ready for the mainstream.
The fediverse could pose a threat to the market dominance of the Facebook platform and Instagram, as there are applications that aim to be direct competitors (Friendica, Pleroma, Pixelfed) already in the fediverse. If the fediverse grows, there will be no reason for people to stay on Meta’s platforms unless Meta reduces advertising and improves user privacy, which is obviously not something they want to do.
I had no idea this was the case, in a sane legal system this should be an open and shut antitrust case.
It’s a fork of Vim, but the codebase has been cleaned up to remove complexity stemming from legacy hardware support. It allows the use of Lua for configuration and plugin implementation instead of VimScript, which lets plugins be written in a sanely designed, high-performance scripting language, so plugin developers can build more complex plugins more easily without dragging down editor performance (VimScript compatibility is maintained, though). It has a built-in LSP client. Plugins written in other languages can communicate with the editor over a msgpack RPC API, so support for other plugin languages doesn’t have to be decided at compile time.
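To give a flavor of that msgpack RPC API, here’s a minimal sketch using the third-party pynvim Python client. The socket path is just a placeholder for wherever your Neovim instance is listening (e.g. started with `nvim --listen /tmp/nvim.sock`):

```python
# Minimal sketch of driving Neovim over its msgpack-RPC API with the
# third-party pynvim package (pip install pynvim). The socket path below
# is an assumption -- point it at your own instance's listen address.
from pynvim import attach

nvim = attach("socket", path="/tmp/nvim.sock")

# Any editor command or API call can be issued from this external process.
nvim.command('echo "hello from an external plugin process"')
nvim.current.buffer[0] = "first line written over RPC"
print(nvim.eval("line('$')"))  # number of lines in the current buffer
```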
Hopefully articles like this get more companies contributing to steamos/proton
Are there any companies making discrete laptop graphics that don’t have proprietary drivers? I don’t think I’ve ever seen an AMD powered laptop unless it used an APU. I shudder to think of what proprietary Linux drivers from a company less resourced than Nvidia are like.
Because until you spend many hours getting used to it, it’s annoying as hell. I’m a longtime bash user, but if I have to do anything in PowerShell, it sucks. Bash is even less friendly to novice/casual users due to tools like awk and sed being totally obtuse. When you’re unfamiliar with the workflow, not being able to see everything you’re able to do at a glance is pretty frustrating.
Mullvad (and every other decent VPN) provides WireGuard and OpenVPN configurations that work on any distro through the network settings, without the need for additional software. It’s also pretty likely the Mullvad client will be in the software center of whatever distro you’re using.
NFS is generally the way network storage appliances are accessed on Linux. If there’s a machine whose files you know you’ll be accessing over the long term, it’s generally the way to go, since it’s a simple, robust, high-performance protocol used by pros and amateurs alike. SSHFS is an abuse of the SSH protocol that lets you mount a directory from any computer you can get an SSH connection to. You can think of it like VSCode remote editing, but it works with any editor or other program.
You should be able to set up NFS with write caching and similar options that bring its performance closer to a local filesystem. Note that you may not want write caching specifically if you’re going to suddenly disconnect your laptop from the network without unmounting the share first. Your actual performance might not match, especially for large transfers, depending on the throughput and quality of your network connection. In my experience SSHFS is kind of slow, especially when accessing many small files, and NFS is usually much faster.
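For reference, a client-side /etc/fstab entry along these lines is roughly what that looks like; the server name, export path, and mount point are placeholders, and the options are illustrative rather than a recommendation:

```
# example NFS client mount -- names and options are illustrative
# 'soft' avoids hangs if the server disappears, at some risk to in-flight writes;
# add 'sync' instead of the default async caching if abrupt disconnects worry you
nas.local:/export/data  /mnt/data  nfs  rw,noatime,rsize=1048576,wsize=1048576,soft  0  0
```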
If you’re on Linux I’d recommend using btrfs or bcachefs with snapshots. It’s basically like Time Machine on macOS. That way, if you accidentally delete something you can still recover it.
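If you don’t want to pull in a tool like snapper or btrbk, a periodic snapshot job can be a pretty small script. Here’s a rough sketch for btrfs; the paths are placeholders, it assumes /home is its own subvolume, and it needs to run as root from cron or a systemd timer:

```python
#!/usr/bin/env python3
# Rough sketch of a timestamped, read-only btrfs snapshot job (run as root).
# SOURCE and SNAPDIR are placeholders -- adjust them to your own subvolume layout.
# For real use you'd also want pruning of old snapshots (or just use snapper/btrbk).
import subprocess
from datetime import datetime
from pathlib import Path

SOURCE = Path("/home")             # subvolume to protect (assumption)
SNAPDIR = Path("/snapshots/home")  # must be on the same btrfs filesystem as SOURCE

SNAPDIR.mkdir(parents=True, exist_ok=True)
dest = SNAPDIR / datetime.now().strftime("%Y-%m-%d_%H%M%S")

# -r makes the snapshot read-only, so an accidental rm -rf can't touch it either
subprocess.run(
    ["btrfs", "subvolume", "snapshot", "-r", str(SOURCE), str(dest)],
    check=True,
)
print(f"created snapshot {dest}")
```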
Isn’t a huge part of the point of copyleft licences that an author can’t change the license without rewriting the code entirely?
A dedicated server is needed because something needs to keep a catalog of the smart devices available on your network and, ideally, be accessible to many people in one household. You could make a system that went phone -> device, but you would need to set up each device on each phone you wanted to use, which isn’t a great user experience. You could also run into issues where devices would need to handle conflicting commands arriving from different users at once, and since smart devices are usually trying to use as little power as possible, that extra complexity would hurt you in that department. The third reason is that a separate, always-on server enables automated workflows that orchestrate multiple devices. For example, let’s say you have automatic insulating blinds and a smart thermostat, and you want to raise and lower the blinds to maximize your energy efficiency. The dedicated server can check the temperature set point of your thermostat, the current weather, and sunrise/sunset times. If it’s sunny out and your set point is higher than the outdoor temperature, the server can raise the blinds to let warm sunlight in, and vice versa. If only your phone could control the devices, a workflow like this wouldn’t work while you were out of the house.
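As a toy illustration, the decision the hub makes in that blinds/thermostat example boils down to something like the sketch below. Every name and number here is made up; in a real setup an always-on hub (Home Assistant, openHAB, etc.) would pull the readings from the thermostat and a weather integration and then command the blinds:

```python
# Toy sketch of the blinds/thermostat rule described above. The readings are
# invented purely to show the logic an always-on hub would run; nothing here
# talks to real devices.

def blinds_should_open(setpoint_c: float, outdoor_c: float, sunny: bool) -> bool:
    """Open the insulating blinds only when free solar heating is actually useful."""
    return sunny and setpoint_c > outdoor_c

if __name__ == "__main__":
    # Cold, sunny winter day: set point is above the outdoor temp, so let sunlight in.
    print(blinds_should_open(setpoint_c=21.0, outdoor_c=5.0, sunny=True))   # True
    # Hot summer day: outdoor temp is above the set point, so keep the blinds closed.
    print(blinds_should_open(setpoint_c=21.0, outdoor_c=32.0, sunny=True))  # False
```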
I think this is the most important aspect of Linux accepting more Rust contributions. More and more existing maintainers are aging out, and people just don’t learn or want to build large applications in C anymore. From what I understand, companies doing proprietary kernel development have largely made the Rust transition for new code at this point, so fewer and fewer systems-level programmers will be used to C (and C++ over time) for these tasks. Existing maintainers’ pushback against Rust development could become a threat to the long-term viability of the kernel.