


    1. I don’t want to use the command window for everything, or really much of anything, at least at the start.

    With many of the modern distros, you can get a long way without much command line work, though some interaction is probably still unavoidable. Most distros also include either flatpak or snap, which lets you download, install and update software via the Graphical User Interface (GUI). So, there shouldn’t be too much command line work required.

    1. I currently use Proton VPN and I’d like to use it on this new laptop too.

    It looks like Proton officially supports Ubuntu. I would note that it expects the GNOME desktop, not KDE, so Kubuntu will likely run into issues (probably the same issues as Mint). That said, they also have a page on installing on Linux Mint, which seems to indicate skipping a single step. There are also guides out there for installing Proton VPN without using the terminal.
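
    For reference, the officially documented terminal route on Ubuntu is only a few commands. This is a sketch of the steps from Proton’s install guide; the exact repository package filename changes between releases, so grab the current one from Proton’s site and treat the names below as examples:
    # download Proton's repository setup package from their site first, then:
    sudo dpkg -i ./protonvpn-stable-release_*.deb
    sudo apt update
    sudo apt install proton-vpn-gnome-desktop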

    As an aside, unless you need a VPN to securely access a remote network, shift your apparent location, or download/share copyrighted works, consider saving the money and not paying for a VPN. They are mostly just a waste of money for the average user. Sorry, I’ll get off my soapbox now.

    So, does this mean I should use Ubuntu? And will Kubuntu work or would I have to use a different version of Ubuntu? And is there no way to get Proton without using the console?

    Just going with Ubuntu might be easier and it’s the officially supported distro. If you run into a problem, you may have trouble getting support on an unsupported distro. That said, it looks like getting it running on Mint/Kubuntu seems easy enough and works. I’m personally a fan of the KDE desktop (this is where the “K” in Kubuntu comes from) and think it makes the Windows->Linux transition somewhat better.

    if I’m able to change to a custom mouse pointer (I currently use a cute one that I’d like to also use on the new laptop)

    Yup, you can change the mouse pointer. Not sure if you can import your current one, but that’s going to depend on the format and where you got it.

    if keyboard shortcuts like alt-tabbing work or are easily configurable

    You’ll find many of the shortcuts work the same. Even the ones using the “Windows” key are mostly similar, though you’ll see it referred to as the “Meta” key. Alt-Tab as an example works exactly the same. And yes, they are configurable.

    I’m kind of confused about how updating things works on linux. Will I be able to easily update to a new version of whatever distro I’m using?

    So, edging back onto my soapbox for a sec (you can safely skip this whole paragraph, if you want), the software ecosystem in Linux is a mess at the moment. It’s very much the XKCD Standards situation. First, you will likely have the main OS way to update the OS and software. For Ubuntu, this will be via .deb packages. You’ll update these via a command like sudo apt update && sudo apt upgrade. Then you will have one or more other package managers for containerized packages. This will be flatpak or snap. Why do we have one (or both) of these? Well, like a lot of standards fuckery, it comes down to some very good technical reasons and nerds thinking that they are going to be the one to provide the “One True Solution”. And of course, that’s why we now have multiple competing standards. And then you get AppImage-based software for developers who don’t want to be bothered with package managers and who hate security.

    (non-soapbox answer) Yes, updating is usually pretty easy, but it may involve updating in more than one place. At minimum, you’re likely to need to do OS updates via something like the apt commands and also updating via flatpak.
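
    For example, on an Ubuntu-based system the routine update run usually looks something like this (assuming both the OS package manager and flatpak/snap are in play):
    sudo apt update && sudo apt upgrade   # OS and .deb packages
    flatpak update                        # flatpak apps, if installed
    sudo snap refresh                     # snap apps, on a stock Ubuntu setup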

    Will I be able to easily update to a new version of whatever distro I’m using? Do I even want to update to the newest version?

    Mostly yes and absolutely yes. For the distro upgrade, here’s an example (not my blog) for the latest Mint upgrade. Pretty simple stuff. As for “Do I even want to update to the newest version?”, tip number one for keeping your system secure is: install your updates. This is true regardless of what OS you’re on. Please, if you install it, keep it up to date. This is what happens when people neglect updates.
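
    As a rough sketch, the terminal side of a Mint major-version upgrade looks like the following (point releases can be done straight from the Update Manager GUI; mintupgrade is Mint’s own tool for the bigger jumps):
    sudo apt update && sudo apt upgrade   # get fully up to date first
    sudo apt install mintupgrade          # Mint's upgrade tool for major versions
    sudo mintupgrade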

    And is there a way to be notified and set auto-updates for some applications?

    Yes, and probably best to just turn on automatic updates and forget about it.
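
    On the OS side of an Ubuntu-family distro, unattended security updates are handled by the unattended-upgrades package; many installs ship with it already enabled, and this is a sketch of turning it on if not (snaps refresh themselves by default, and flatpak auto-updates are typically a toggle in the GUI software manager):
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades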

    I’ve seen quite a few threads and questions about having to manually update things, but if I get an application from the software manager then will it be as easy as clicking a button?

    Yes, if you install from the software manager (behind the fancy name, this will be either flatpak or snap in Mint or Kubuntu) updates will be a one-click affair. Or better yet, automagically handled, if you turn that on. Turn that on.

    I know I’ll have to adjust and just learn-by-doing some things no matter which distro I pick

    Unfortunately yes, there will be a learning curve. But, I promise it’s not so bad and it’s completely worth it. And there are lots of folks here who will be happy to help (and a few jerks who will scream “RTFM!”, sorry about those, they suck.). If things get too bad, you can always go back to Windows, you have a license and it’s pretty easy to reinstall these days.

    uhhh how easy is it to fuck up the process of trying and then installing a linux distro? Like completely-make-the-computer-unusable fuck up?

    It’s really, really, really hard to get the computer completely fucked up and unusable just by changing the OS. Seriously, the most likely way you would do this is by dumping your drink of choice in the keyboard because you got distracted. The great thing about software is that it is very rarely permanent. And nothing you’re doing here would be permanent. Go wild and try a new distro. If things don’t work out, going back to Windows isn’t hard at all.

    So based on all that, should I just go for Linux Mint like most new users? Or would you recommend a completely different distro?

    I’m gonna go out on a limb and say that Mint is a great choice and the one I’d recommend. While I don’t use it myself (I hate myself, so I use Arch), it’s got a solid reputation, is designed to make the transition from Windows easier and uses the Cinnamon desktop for the interface (don’t worry if that last bit doesn’t make sense, just roll with it). There is also a lot of support available here on Lemmy and across the web.

    Good luck.


  • A couple thoughts. Assuming your motherboard is capable of SATA hot-swap and has it enabled (look in your BIOS), you should be able to umount the game drive, and swap it without shutting down. Assuming the game drives are partitioned using GPT, you should be able to add individual entries in /etc/fstab using the partition UUIDs and control mounting and umounting to specific mount points for different drives. Personally, I would add the noauto option to those entries, so that mounting is done manually and can be controlled easily.
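
    As a sketch, an /etc/fstab entry along these lines would do it (the UUID and mount point are placeholders; get the real UUID from lsblk -f or blkid):
    # /etc/fstab -- one line per swappable game drive, mounted on demand
    UUID=0a1b2c3d-e4f5-6789-abcd-0123456789ab  /mnt/games  ext4  noauto,nofail,user  0  2
    # then mounting/unmounting is manual:
    mount /mnt/games
    umount /mnt/games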

    OS drive swapping may be simpler, depending on your BIOS. With the system powered off, swap the drives and, assuming the BIOS picks up the new boot partition cleanly, you’re off to the races. The only issue would be if the BIOS just doesn’t want to recognize one of the drives’ boot partitions. I had this issue with my Arch install and my MSI motherboard. The motherboard wouldn’t recognize the default install location and I had to move the boot files around to work in a fallback mode. Annoying, but solvable.

    Finally, as others have said, this could all be a matter of over-complicating things. Why not just stuff all the drives in the case and always have everything? You can configure the primary drive’s boot loader to let you pick which OS to boot. And you can have any and all data drives mounted at the same time. Unless you are struggling with physical space or power requirements, it saves having to muck about with swapping stuff.
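
    If the primary drive ends up running an Ubuntu-family distro, getting GRUB to offer the other OSes at boot is usually just a matter of letting os-prober find them. A sketch (on GRUB 2.06 and later, os-prober is disabled by default, hence the extra line):
    sudo apt install os-prober
    echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub
    sudo update-grub   # rescans drives and rebuilds the boot menu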


  • do any of you hate how self-hosting services like photo- or document-management systems, or even a simple rss tool, forces you to sort your stuff out, and put your decades old files in order?!

    What is this “sort” thing you speak of? I don’t sort anything; I have NextCloud syncing my entire photos, videos and documents folders and they are just as messy as ever. Granted, I do go through my photos and videos once a year and dump them in a folder named for the year they were taken. Occasionally, I’ll go hog wild and try to sort some of a year’s photos/videos into folders named after events. Though, that hasn’t happened in a number of years. I set up NextCloud so I could have everything synced to my own server and just forget about it, not have to deal with labeling my data.

    As for bookmarks: I already keep those in folders, but I don’t sync them. I use my desktop far more than I use my phone for web browsing. And the types of things I use my phone for (mostly recipes), I just keep bookmarked there.


  • It’s rather amazing that this one guy keeps churning out fixes for FromSoft’s complete inability to understand multiplayer.

    That said, I do plan to try the vanilla setup first (finishing up Shadow of the Erdtree before we change over). I just worry about my wife and me dropping into a session and getting some rando who either wants to faff about or brings the kind of toxic behavior which seems to inundate online games. We had pretty good luck with Vermintide 2, back in the day. But, with way too many years of playing WoW, we’ve also run into a lot of assholes. And we just don’t have the patience for that sort of thing anymore.



  • My personal preference is to use FOSS whenever it’s practical. For home use, I’ve switched to FOSS for the vast majority of my computing needs. I run Linux on both my server and desktop. Most of the software on my server is FOSS, with the one exception being a container using the Splunk free license. My desktop is running Linux, and I use LibreOffice for documents and the like. I do run Visual Studio Code, which is technically Open Source, though I would not put it past Microsoft to do a rug-pull on that eventually. And I have an extensive library of games with Steam, basically nothing of which is Open Source.

    I have reached a point, financially, where piracy is not morally defensible for me. And I’m not willing to get into the mire of whether, or where, such a line would be. I believe that creators should be rewarded for their work. Though, I also agree that the limits on copyright are way out of whack with the changes Disney has purchased through the years. So, piracy as a moral question is a murky subject, with no clear answers for me. But, the end result is that I buy games, movies or TV shows. For other software, I usually look to FOSS projects (e.g. Gimp vs Photoshop, FreeCAD/OpenSCAD vs Autodesk), free licenses (e.g. Splunk) or just do without. For TV shows/movies, if it’s not on one of the streaming services I subscribe to, I may buy it via a digital service; or, I do without.


  • No, if you open a terminal and run:
    sudo dmesg

    You should get a long output which is the kernel log. Assuming the crash happened recently, there may be something in the last few lines (bottom of the output) which could indicate why the process died (or was killed). Keep in mind that this is a running log; so, if it’s been a while since the crash, the entries for it may be higher up in the log. It’s often best (if you can) to trigger the problem and then immediately run the sudo dmesg command and look at the output. With luck, there will be useful logs. If not, you may need to look elsewhere.
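
    A couple of standard variations that make the output easier to dig through:
    sudo dmesg -T | tail -n 50   # human-readable timestamps, last 50 lines
    sudo journalctl -k -b        # kernel messages for the current boot, scrollable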



  • Docker is just going to be used to run the applications which host your website. What you need to decide first is what your website will be, and that will inform the decision on what technologies will be used to host it. For example, if you are thinking of something like a blog, you might choose WordPress as the main hosting platform. This will need some sort of database behind it, for which you might choose MySQL or Postgres. You would also need some sort of web server software, for which you might choose Nginx or Apache. At a basic level, you could now have the entire web stack defined: e.g. WordPress, MySQL, Nginx.

    Ok, so now you need to sort out where those technologies will run. The easy, older solution is to spin up a physical box and load all of the software packages on the native Operating System (OS) of that box. This works perfectly well, until it’s time to start patching and updating the OS and software. And you will want to do those updates. This will probably go well for the first few upgrades, but eventually something will go sideways. Often this will be that several of your software packages require different versions of the same underlying library. Or, something will just not install right and your website stops working. This is where Docker comes in.

    Docker lets you run each software package in its own contained environment. Each application runs in its own container, and the other containers are only reached via network calls. It’s like having a separate virtual machine for each service (this is how we used to actually run stuff like this), but without all the overhead of actually having multiple virtual machines. So, even if you upgrade package XYZ in the Nginx container to version 2.1, the MySQL container could have package XYZ still running at version 1.9. Neither container knows or cares about what is running in the other containers.
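
    As a rough sketch of that idea with plain docker commands (the names, passwords and port are placeholders; the environment variables are the documented ones for the official mysql and wordpress images, and the wordpress image bundles its own web server, so Nginx is left out here):
    docker network create webstack
    # database container, with its data on a named volume
    docker run -d --name db --network webstack \
      -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress \
      -e MYSQL_USER=wp -e MYSQL_PASSWORD=changeme \
      -v db-data:/var/lib/mysql mysql:8.0
    # wordpress container, reaching the database over the docker network
    docker run -d --name wordpress --network webstack \
      -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_USER=wp \
      -e WORDPRESS_DB_PASSWORD=changeme -e WORDPRESS_DB_NAME=wordpress \
      -v wp-data:/var/www/html -p 8080:80 wordpress:latest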

    The other advantage of containers is that the base OS and software in the container is usually well defined and doesn’t change much. The container will be able to reach permanent storage for any configuration and data files. But, if something goes wrong with the OS or software inside the container, then that container is destroyed and a new copy spun up and attached to the config/data storage. Software upgrades can also take advantage of this, as you can often stop the current container, start a container running the new version of the software, attach it to the config/data storage and maybe run some sort of “upgrade database” command. This makes for fewer mistakes and fewer chances for things to go wrong.
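
    Continuing the sketch above, replacing the wordpress container with a newer image while keeping its data volume looks roughly like this:
    docker pull wordpress:latest
    docker stop wordpress && docker rm wordpress
    # start a fresh container pointed at the same volume and network
    docker run -d --name wordpress --network webstack \
      -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_USER=wp \
      -e WORDPRESS_DB_PASSWORD=changeme -e WORDPRESS_DB_NAME=wordpress \
      -v wp-data:/var/www/html -p 8080:80 wordpress:latest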

    If your goal is to learn to self host, I would recommend putting those posts over in the [email protected] community. They are likely to get a better reception than in the programming and Linux communities you spammed with this post. Though, even there you may run into a bit of the RTFM! vibe you got here if you post questions without context which come across as low-effort “I want to do something but have made no attempt to learn anything on my own” requests. I’d recommend spending some time reading long-form blogs/guides on web hosting and watching YouTube videos. Again, long-form stuff. Skip the click-bait-y crap with titles like “get your website running in 5 minutes! <insert stupid emojis here>”. You’ll want to learn the basics of Docker and what is required to run and host a web site. Once you are able to get containers going, try setting up a web stack on your local system (don’t go paying for anything yet) and see if you can get it working and understand how it works. You’re almost certainly going to screw it up a few times in the process; that’s ok. That’s another great feature of containers: you can bork them up really, really bad and not have to care. You delete the container, maybe wipe the attached storage, and try again.

    Good luck.


  • Is it possible to move a windows install to a different drive and then install Linux on the main drive instead?

    It should be possible to clone the current drive to a different drive. First and foremost though, backup any data you care about to a safe place (e.g. an external drive). Data loss is a real possibility. I’ve been in a professional context explaining to a customer just exactly how fucked they were, because they screwed up in cloning a drive. That wasn’t fun for me and it was expensive for them. Don’t be that guy.

    If you have BitLocker enabled, I’d recommend disabling it. It shouldn’t cause problems; but, Microsoft software has a bad habit of giving you the middle finger when you least expect it.

    The last time I did something like this, I used Yumi to create a bootable USB drive and selected a CloneZilla ISO. Once booted, you will want to do a device-device operation (WARNING: be very, very certain about the direction you are copying. If you screw that up, you will lose data. You did make a backup, right?) and clone the whole disk, not just the partition. You can expand the partition with the actual OS, if you want, but leave any EFI or recovery partitions alone. There may also be a small amount of free space left on the drive (MS does this by default); leave that free.
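
    Before kicking off the clone, it’s worth double-checking which device is which. From a Linux live environment (CloneZilla includes a shell option), something like this lists sizes, models and serial numbers so you can match them to the physical drives:
    lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT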

    Once the clone is complete, try booting and using it before you overwrite the old drive.

    Second doubt is if I’ll have many issues daily driving Linux if I have an Nvidia card

    I’m running an RTX 3080 myself and it’s been nearly flawless. That said, my next card (probably years off) is likely to be AMD just to avoid possible NVidia driver issues.


  • When I was first switching to Linux, I installed Arch on a USB3 stick and ran from there for a month or two. It worked pretty well; however, I did seem to have issues with I/O contention. During some read and write operations and multi-tasking, the whole OS would just hang up until the operations were done. Since moving that installation to an SSD, that issue is gone. So, it does work, and it’s a pretty good way to “try before you buy”, but do keep in mind that performance will suffer.

    At the same time, I’d definitely recommend working through the pain of getting it set up right. When you have a problem (and they will crop up), it gives you a better understanding to work from for troubleshooting. You may also want to try out different distros. I used Arch, because I hate myself. But, that may not be the right choice for someone else. Something like PopOS could be a good choice if you want something aimed more at gaming that is supposed to “just work”. Ubuntu is a good choice for a more “mainstream” look and feel. There is no good reason to do things the hard way, unless you really, really want to. The goal is to have a functional system; don’t tie yourself in knots getting there.





  • It depends on the environment. I’ve been in a couple of places which use Linux for various professional purposes. At one site, all systems with a network connection were required to have A/V, on-access scanning and regular system scans. So, even the Linux systems had a full A/V agent and we were in the process of rolling out EDR to all Linux based hosts when I left. That was a site where security tended to be prioritized, though much of it was also “checkbox security”. At another site, A/V didn’t really exist on Linux systems and they were basically black boxes on the network, with zero security oversight. Last I heard, that was finally starting to change and Linux hosts were getting the full A/V and EDR treatment. Though, that’s always a long process. I also see a similar level of complacency in “the cloud”. Devs spin random shit up, give it a public IP, set the VPS to a default allow and act like it’s somehow secure because, “it’s in the cloud”. Some of that will be Linux based. And in six months to a year, it’s woefully out of date, probably running software with known vulnerabilities, fully exposed to the internet and the dev who spun it up may or may not be with the company anymore. Also, since they were “agile”, the documentation for the system is filed under “lol, wut?”

    Overall, I think Linux systems are a mixed bag. For a long time, they just weren’t targeted with normal malware. And this led to a lot of complacency. Most sites I have been at have had a few Linux systems kicking about; but, because they were “one off” systems, and because of a certain sense of invulnerability, they were poorly updated and often lacked a secure baseline configuration. The whole “Linux doesn’t get malware” mantra was used to avoid security scrutiny. At the same time, Linux systems do tend to default to a more secure configuration. You’re not going to get a BlueKeep type vulnerability from a default config. Still, it’s not hard for someone who doesn’t know any better to end up with a vulnerable system. And things like ransomware, password stealers, RATs or other basic attacks often run just fine in a user context. It’s only when the attacker needs to get root that things get harder.

    In a way, I’d actually appreciate a wide scale, well publicized ransomware attack on Linux systems. First off, it would show that Linux is finally big enough for attackers to care about. Second, it would provide concrete proof as to why Linux systems should be given as much attention as everything else and be centrally managed/secured in the Enterprise. I know everyone hates dealing with IT for provisioning systems, and the security software sucks balls; but, given the constant barrage of attacks, those sorts of things really are needed.


  • This is exactly the problem: they have no accountability for bad updates causing hardware to become unusable. So, QA just becomes a needless expense and untested firmware is dropped on users. Sure, you could try to sue, or more likely get fucked by a binding arbitration clause. But, the cost would be far beyond what the device costs. So, it never makes sense. There need to be fines when this shit happens, set at significant percentages of worldwide revenue, to scare companies into actually testing updates before they are released.

    In the end, all we can do is shake our heads and remind folks to never buy HP. They put out great products 30 years ago, but those days are long gone. Now, they just put out crap.