  • If the goal is stability, I would have likely started with an immutable OS. This creates certain assurances for the base OS to be in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that carries risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change lands in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually (see the compose sketch below). Finally, while I really like AppImages, updating them is 100% manual.
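    To illustrate the pinning side of that, a minimal docker-compose sketch; the service name, image and tag here are hypothetical, the point is just that a pinned tag only changes when you edit it:

    # docker-compose.yaml - hypothetical service pinned to a specific version
    services:
      someapp:
        image: someapp:1.2.3   # pinned; bump by hand instead of riding :latest
        ports:
          - "8080:8080"
        volumes:
          - ./data:/data       # keep state outside the container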

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff. You don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
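    For a sense of how simple, here’s a minimal sketch wrapping a single, hypothetical someapp binary that you’ve already downloaded into the build directory:

    # Dockerfile - minimal wrapper around one self-contained binary
    FROM debian:stable-slim
    COPY someapp /usr/local/bin/someapp
    RUN chmod +x /usr/local/bin/someapp
    EXPOSE 8080
    CMD ["/usr/local/bin/someapp"]

    Build it with docker build -t someapp . and run it with docker run; everything that package touches stays inside the container.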


  • Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
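    For the scheduled-scan side, a minimal sketch, assuming the clamav package and cron are available (the paths are just examples):

    # Refresh signatures, then recursively scan home directories,
    # printing only infected files
    sudo freshclam
    clamscan -r -i /home

    # Or automate it: crontab entry for a nightly 02:00 scan with logging
    0 2 * * * clamscan -r -i /home >> /var/log/clamscan.log 2>&1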


  • With intermittent errors like that, I’d take the following test plan:

    1. Check for disk errors - You already did this with the SMART tools.
    2. Check for memory errors - Boot a USB drive to memtest86 and test.
    3. Check for overheating issues - Thermal paste does wear out; check your logs for overheating warnings.
    4. Power issues - Is the system powered straight from the wall or through a surge protector? While it’s less of an issue these days, AC power coming from the wall should have a consistent sine wave. If that wave isn’t consistent, it can cause a voltage ripple on the DC side of the power supply. This can lead to all kinds of weird fuckery. A good surge protector (or UPS) will usually filter out most of the AC inconsistencies.
    5. Power Supply - Similar to above, if the power supply is having a marginal failure it can cause issues. If you have a spare one, try swapping it out and seeing if the errors continue.
    6. Processor failure - If you have a spare processor which will fit the motherboard, you could try swapping that and seeing if the errors continue.
    7. Motherboard failure - Same type of thing. If you have a spare, swap and look for errors.

    At this point, you’ll have tested basically everything and likely found the error. For most errors like this, I’ve rarely seen it go past the first two tests (drive/RAM failure), with the third (heat) picking up the majority of the rest. Power issues I’ve only ever seen in old buildings with electrical systems which probably wouldn’t pass an inspection. Though, bad power can cause other hardware failures. It’s one reason to have a surge protector in line at all times anyway.
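    As a quick aid for the heat check in step 3, assuming a systemd-based distro with the lm-sensors package installed:

    # Look for thermal/throttling events in the kernel log
    journalctl -k | grep -iE 'thermal|throttl|temperature'

    # Spot-check current temperatures
    sensors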


  • I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
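    For anyone curious, the template looks roughly like this. This is a sketch rather than my actual file: the AppId, ports and launcher script are placeholders you’d swap per game (896660 is Valheim’s dedicated server, if memory serves), and it assumes the steamcmd/steamcmd image from Docker Hub:

    # Dockerfile - sketch of a steamcmd-based dedicated server template
    FROM steamcmd/steamcmd:latest

    # Steam AppId of the dedicated server; placeholder, set per game
    ARG APPID=896660
    RUN steamcmd +force_install_dir /server +login anonymous +app_update ${APPID} validate +quit

    # Example ports and the save-data mount point; these vary per game
    EXPOSE 2456-2458/udp
    VOLUME /server/savedata

    WORKDIR /server
    # Placeholder entrypoint; each game ships its own launcher
    CMD ["./start_server.sh"]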


  • I know that, during my own move from Windows to Linux, I found that the USB drive tended to lag under heavy read/write operations. I did not experience that with Linux loaded directly on a SATA SSD. I also had some issues dealing with my storage drive (NVMe SSD) still using an NTFS file system. Once I went full Linux and ext4, it’s been nothing but smooth sailing.

    As @[email protected] pointed out, performance will depend heavily on the generation of USB device and port. I was using a USB 3.1 device in a USB 3.1 port (no idea on the generation). So, speeds were ok-ish. By comparison, even SATA 2 has a transfer rate of 3 Gb/s (roughly 300 MB/s). And while the SSD itself may not have saturated that bandwidth, it almost certainly blew the real-world throughput of my USB drive out of the water. When I later upgraded to an NVMe drive, things just got better.

    Overall, load times are the one place I wouldn’t trust testing Linux on USB. They are going to be slower and laggier than from an SSD. Read/write performance should be comparable to Windows. Though, taking the precaution of either dual booting or backing up your Windows install can certainly make sense to test things out.
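    If you want to put actual numbers on that before committing, hdparm gives a quick sequential-read comparison (assuming it’s installed; swap /dev/sdX for the device under test):

    # Time buffered sequential reads (run a few times and average)
    sudo hdparm -t /dev/sdX

    # Add -T to also measure cached reads for comparison
    sudo hdparm -Tt /dev/sdX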


  • force binary choices that don’t align with household rules or with children’s maturity levels.

    This has been my main experience with “parental controls”. As soon as they are turned on, I lose any ability to manage the experiences available to my children. So, in areas where I see them as mature enough to handle something, the only way I can allow them access to that experience is to completely bypass the controls. In many ecosystems, if I judge that one of my children could handle a game and the online risks associated with it, I can’t simply allow that game. Instead, I need to maintain a full adult account for them to use. You also run into a lot of situations where the reason a game is banned for children is unclear, or is an obvious “better safe than sorry” knee-jerk reaction.

    Ultimately, parental controls end up being far more frustrating than empowering. I’d rather have something that says, “this game/movie/etc. your kid is asking for is restricted based on reasons X, Y and Z. Do you want to allow it?” Log my response and go with it. Like damned near any choice in software settings, quit trying to out-think me on what I want; give me a choice and respect that choice.


  • It’s been a few years since I did my initial setup (8 apparently, just checked); so, my info is definitely out of date. Looking at the Ubuntu site, they still list Ubuntu 16.04, but I think the info on setting it up is still valid. Though, it looks like they only list setting up a mirror or a stripe set without parity. A mirror is fine, but you trade half your storage space for complete data redundancy. That can make sense, but usually not for a self-hosting situation. A stripe set without parity is only useful for losing data; never use this. The option you’ll want is raidz, which is a stripe set with parity. The command will look like:

    zpool create zpool raidz /dev/sdb /dev/sdc /dev/sdd
    

    This would create a zpool named “zpool” from the drives at /dev/sdb, /dev/sdc and /dev/sdd.

    I would suggest spending some time reading up on the setup. It was actually pretty simple to do, but it’s good to have a foundation to work with. I also have this link bookmarked, as it was really helpful for getting rolling snapshots set up (see the sketch below). Like the redundancy given by RAID, snapshots do not replace backups; but they can be used as part of a backup strategy. They also help when you make a mistake and delete/overwrite a file.
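    The heart of a rolling-snapshot setup is small enough to sketch here (pool name from the example above; the naming and retention are up to you, and the bookmarked link covers automating the rotation):

    # Take a dated, recursive snapshot (includes child datasets)
    zfs snapshot -r zpool@$(date +%Y-%m-%d)

    # List existing snapshots
    zfs list -t snapshot

    # Destroy a snapshot once it ages out
    zfs destroy -r zpool@2023-01-01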

    Finally, to answer your question about hardware, my recollection and experience have been that ZFS is not terribly demanding of CPU. I ran an Intel Core i3 for most of the server’s life and only upgraded when I realized that I wanted to run game servers on it. Memory is more of an issue. The minimum requirement most often cited is 8GB, but I also saw a rule of thumb that you want 1GB of memory for each TB of storage. In the end, I went with 8GB of RAM, as I only had 4TB of storage (3 2TB disks in a RAIDZ1). But, also think about what other workloads you have on the system. When built, I was only running NextCloud, NGinx, Splunk, PiHole and WordPress (all in docker containers). And the initial 8GB of RAM was doing just fine. When I started running game servers, I started to run into issues. I now have 16GB and am mostly fine. Some game servers can be a bit heavy (e.g. Minecraft, because fucking Java), but I don’t normally see problems. Also, since the link I provided mentioned it: skip ECC memory. It’s almost never worth the cost, and for home use that “almost never” gets much closer to “actually never”.

    When choosing disks, keep in mind that you will need a minimum of 2 disks, and you effectively lose the storage space of one disk in the pool to parity (assuming all disks are the same size). Also, it is best for all of the disks to be the same size. You can technically use different size disks in the same pool, but the larger disks get treated as the same size as the smaller disks. So long as the pool is healthy, read speeds are better than a single disk, as reads can be spread out among the pool. But, write speeds can be slower, as the parity needs to be calculated at write time. Otherwise, you’re pretty free to choose any disks which will be recognized by the OS. You mention that 1TB is filling up; so, you’ll want to pick something bigger. I mentioned using spinning disks, as they can provide a lot more space for the money. Something like a 14TB WD Red drive can be had for $280 ($20/TB). With three of those in a RAIDZ1 pool, you get ~28TB of storage and can tolerate one disk failure without losing data. With solid state disks, you can expect costs closer to $80/TB. Though, there is a tradeoff in speed. So, you need to consider what type of workloads you expect the storage pool to handle. Video editing on spinning rust is not going to be fun. Streaming video at 4K is probably OK, though 8K is going to struggle.

    A couple other things to think about are space in the chassis, drive connections and power. Chassis space is pretty obvious: you gotta put the disks in the box. Technically, you don’t have to mount the disks, they can just be sitting at the bottom of the case; but heat can shorten the lifespan of the drives, so it’s best to have them properly mounted with fans pushing air over them. Drive connections are one of those things where you either have the headers or you don’t. Make sure your motherboard can support 3 more drives with the chosen interface (SATA, NVMe, etc.) before you get the drives. Nothing sucks more than having a fancy new drive only to be unable to plug it into the motherboard. Lastly, drives (and especially spinning drives) can be power hungry. Make sure your power supply can support the extra power requirements.

    Good luck, whatever route you pick.


  • Probably the easiest solution would be to just chuck a larger disk in the system and retain the original drive for the operating system. If you do not need the high speed of an SSD, you may be able to get more storage space for the money by going with a spinning disk. 7200RPM drives are fast enough for most applications, though you may run into issues streaming 4K (or higher) resolution video.

    Another option would be to start building out a storage pool using some type of RAID technology. On my own server, I use ZFS for the data partition. It is basically a software RAID. I use a RAID-Z1 configuration, which stripes the data over multiple disks (three in my case) and uses a parity calculation to provide data redundancy. It also has the advantage that it can be expanded to new disks dynamically and does not require that all disks are the same size. Initial setup does require more work and you are now monitoring multiple physical disks, but having a unified storage pool and redundancy is a nice way to go.
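    On the monitoring side, ZFS keeps the day-to-day check simple (substitute your pool’s name):

    # Show pool health and per-disk status
    zpool status

    # Periodically scrub the pool to catch silent corruption early
    zpool scrub mypool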

    Any way you go, just make sure you have good backups. Drives fail, and sometimes even early in their life. Backblaze reports can be an interesting read when looking at drive options, as they really do put the drives through the wringer.


  • Yes, though depending on the media you are running the OS and game from, the performance could be worse than you would expect from an install on the main system media. For example, when I was testing moving over, I had Arch installed on a USB device and had some issues with I/O bandwidth. But, I also had a folder on my main storage drive to run Steam games from and this performed OK. It was formatted NTFS; so, there were some other oddities. But, it worked just fine and managed to convince me that I’d do OK under Linux. Took the plunge and I’ve been happy with the decision ever since.


  • I don’t want to use the command window for everything, or really much of anything, at least at the start.

    With many of the modern distros, you can get a long way without a lot of command line work. But, some interaction is likely unavoidable. However, most distros include either flatpak or snap, which let you download, install and update software via the Graphical User Interface (GUI). So, there shouldn’t be too much command line work required.

    I currently use Proton VPN and I’d like to use it on this new laptop too.

    It looks like Proton officially supports Ubuntu. And I would note that it expects the GNOME desktop, not KDE. So, Kubuntu will likely run into issues (probably the same issues as Mint). That said, they also have a page on installing on Linux Mint which seems to indicate skipping a single step. There are also guides out there for installing Proton VPN, without using the terminal.

    As an aside, unless you need a VPN to securely access a remote network, shift your apparent location or for downloading/sharing copyrighted works, consider saving the money and not paying for a VPN. They are mostly just a waste of money for the average user. Sorry, I’ll get off my soapbox now.

    So, does this mean I should use Ubuntu? And will Kubuntu work or would I have to use a different version of Ubuntu? And is there no way to get Proton without using the console?

    Just going with Ubuntu might be easier and it’s the officially supported distro. If you run into a problem, you may have trouble getting support on an unsupported distro. That said, it looks like getting it running on Mint/Kubuntu seems easy enough and works. I’m personally a fan of the KDE desktop (this is where the “K” in Kubuntu comes from) and think it makes the Windows->Linux transition somewhat better.

    if I’m able to change to a custom mouse pointer (I currently use a cute one that I’d like to also use on the new laptop)

    Yup, you can change the mouse pointer. Not sure if you can import your current one, but that’s going to depend on the format and where you got it.

    if keyboard shortcuts like alt-tabbing work or are easily configurable

    You’ll find many of the shortcuts work the same. Even the ones using the “Windows” key are mostly similar, though you’ll see it referred to as the “Meta” key. Alt-Tab as an example works exactly the same. And yes, they are configurable.

    I’m kind of confused about how updating things works on linux. Will I be able to easily update to a new version of whatever distro I’m using?

    So, edging back onto my soapbox for a sec (you can safely skip this whole paragraph, if you want), the software ecosystem in Linux is a mess at the moment. It’s very much the XKCD Standards situation. First, you will likely have the main OS way to update the OS and software. For Ubuntu, this will be via .deb packages. You’ll update these via a command like sudo apt update && sudo apt upgrade. Then you will have one or more other package managers for containerized packages. This will be flatpak or snap. Why do we have one (or both) of these? Well, like a lot of standards fuckery, it comes down to some very good technical reasons and nerds thinking that they are going to be the one to provide the “One True Solution”. And of course, that’s why we now have multiple competing standards. And then you get AppImage-based software, for developers who don’t want to be bothered with package managers and who hate security.

    (non-soapbox answer) Yes, updating is usually pretty easy, but it may involve updating in more than one place. At minimum, you’re likely to need to do OS updates via something like the apt commands and also update via flatpak.
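    To make that concrete, on an Ubuntu/Mint-style system with flatpak, the whole routine is usually just these two commands (substitute snap refresh if your distro uses snap instead):

    # Update the base OS and .deb packages
    sudo apt update && sudo apt upgrade

    # Update containerized (flatpak) applications
    flatpak update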

    Will I be able to easily update to a new version of whatever distro I’m using? Do I even want to update to the newest version?

    Mostly yes and absolutely yes. For the distro upgrade here’s an example (not my blog) for the latest Mint upgrade. Pretty simple stuff. As for “Do I even want to update to the newest version?”, tip number one for keeping your system secure is: install your updates. This is true regardless of what OS you’re on. Please, if you install it, keep it up to date. This is what happens when people neglect updates.

    And is there a way to be notified and set auto-updates for some applications?

    Yes, and probably best to just turn on automatic updates and forget about it.

    I’ve seen quite a few threads and questions about having to manually update things, but if I get an application from the software manager then will it be as easy as a clicking a button?

    Yes, if you install from the software manager (behind the fancy name, this will be either flatpak or snap in Mint or Kubuntu), updates will be a one-click affair. Or better yet, automagically handled, if you turn that on. Turn that on.

    I know I’ll have to adjust and just learn-by-doing some things no matter which distro I pick

    Unfortunately yes, there will be a learning curve. But, I promise it’s not so bad and it’s completely worth it. And there are lots of folks here who will be happy to help (and a few jerks who will scream “RTFM!”; sorry about those, they suck). If things get too bad, you can always go back to Windows; you have a license and it’s pretty easy to reinstall these days.

    uhhh how easy is it to fuck up the process of trying and then installing a linux distro? Like completely-make-the-computer-unusable fuck up?

    It’s really, really, really hard to get the computer completely fucked up and unusable just by changing the OS. Seriously, the most likely way you would do this is by dumping your drink of choice in the keyboard because you got distracted. The great thing about software is that it is very rarely permanent. And nothing you’re doing here would be permanent. Go wild and try a new distro. If things don’t work out, going back to Windows isn’t hard at all.

    So based on all that, should I just go for Linux Mint like most new users? Or would you recommend a completely different distro?

    I’m gonna go out on a limb and say that Mint is a great choice and the one I’d recommend. While I don’t use it myself (I hate myself, so I use Arch), it’s got a solid reputation, is designed to make the transition from Windows easier and uses the Cinnamon desktop for its interface (don’t worry if that last bit doesn’t make sense, just roll with it). There is also a lot of support available here on Lemmy and across the web.

    Good Luck


  • A couple thoughts. Assuming your motherboard is capable of SATA hot-swap and has it enabled (look in your BIOS), you should be able to unmount the game drive and swap it without shutting down. Assuming the game drives are partitioned using GPT, you should be able to add individual entries in /etc/fstab using the partition UUIDs and control mounting and unmounting to specific mount points for different drives. Personally, I would add the noauto option to those entries, so that mounting is done manually and can be controlled easily.
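    As a sketch of what those entries might look like (the UUIDs and mount points are placeholders; run blkid to list the real UUIDs):

    # /etc/fstab - one entry per game drive; noauto means mount-by-hand
    UUID=1111aaaa-22bb-33cc-44dd-555566667777  /mnt/games1  ext4  defaults,noauto  0  2
    UUID=8888eeee-99ff-00aa-11bb-222233334444  /mnt/games2  ext4  defaults,noauto  0  2

    With those in place, mount /mnt/games1 attaches whichever drive is present and umount /mnt/games1 releases it before a swap.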

    OS drive swapping may be simpler, depending on your BIOS. With the system powered off, swap the drives and, assuming the BIOS picks up the new boot partition cleanly, you’re off to the races. The only issue would be if the BIOS just doesn’t want to recognize one of the drives’ boot partitions. I had this issue with my Arch install and my MSI motherboard. The motherboard wouldn’t recognize the default install location and I had to move the boot files around to work in a fallback mode. Annoying, but solvable.

    Finally, as others have said, this could all be a matter of over-complicating things. Why not just stuff all the drives in the case and always have everything? You can configure the primary drive’s boot loader to let you pick between which OS to boot. And you can have any and all data drives mounted at the same time. Unless you are struggling with physical space or power requirements, it saves on having to muck about with swapping stuff.
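    If you do consolidate, getting the boot loader to offer both OSes is usually just a matter of enabling os-prober (a sketch, assuming a Debian/Ubuntu-style GRUB with the os-prober package installed):

    # /etc/default/grub - let GRUB scan for other installed operating systems
    GRUB_DISABLE_OS_PROBER=false

    # Then regenerate the boot menu
    sudo update-grub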


  • do any of you hate how self-hosting services like photo- or document-management systems, or even a simple rss tool, forces you to sort your stuff out, and put your decades old files in order?!

    What is this “sort” thing you speak of? I don’t sort anything. I have NextCloud syncing my entire photos, videos and documents folders and they are just as messy as ever. Granted, I do go through my photos and videos once a year and dump them in a folder named for the year they were taken. Occasionally, I’ll go hog wild and try to sort some of a year’s photos/videos into folders named after events. Though, that hasn’t happened in a number of years. I set up NextCloud so I could have everything synced to my own server and then just forget about it, not have to deal with labeling my data.

    As for bookmarks: I already keep those in folders, but I don’t sync them. I use my desktop far more than I use my phone for web browsing. And the types of things I use my phone for (mostly recipes), I just keep bookmarked there.


  • It’s rather amazing that this one guy keeps churning out fixes for FromSoft’s complete inability to understand multiplayer.

    That said, I do plan to try the vanilla setup first (finishing up Shadow of the Erdtree before we change over). I just worry about my wife and me dropping into a session and getting some rando who either wants to faff about or brings the kind of toxic behavior which seems to inundate online games. We had pretty good luck with Vermintide 2, back in the day. But, with way too many years of playing WoW, we’ve also run into a lot of assholes. And we just don’t have the patience for that sort of thing anymore.