• 0 Posts
  • 82 Comments
Joined 3 years ago
Cake day: June 7th, 2023

  • You could try using Autopsy to look for files on the drive. Autopsy is a forensic analysis toolkit, which is normally used to extract evidence from disk images or the like. But, you can add local drives as data sources and that should let you browse the slack space of the filesystem for lost files. This video (not mine, just a good enough reference) should help you get started. It’s certainly not as simple as the photorec method, but it tends to be more comprehensive.
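
    If you end up preferring a terminal, Autopsy is built on The Sleuth Kit, and the underlying tools can give a quick preview of what’s recoverable (a rough sketch; the partition path and inode number are placeholders):

    sudo fls -r -d -p /dev/sdb1              # list deleted file entries with full paths
    sudo icat /dev/sdb1 12345 > recovered    # dump the contents of a listed inode out to a file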


  • As @[email protected] pointed out, this seems to be a cover for c’t magazine. Specifically it seems to be for November 2004. heise.de used to have a site which let you browse those covers and you could pull any/all of them. But, that website seems to have died sometime in 2009. Thankfully, the internet remembers and you can find it all on archive.org right here. You may need to monkey about with capture dates to get any particular cover, but it looks like a lot of them are there.

    Also, as a bit of “teach a person to fish”, ImgOps is a great place to start a reverse image search. It can often get you from an image to useful information about that image (e.g. a source) pretty quickly. I usually use the TinEye reverse image search for questions like this.


  • I can think of a couple of reasons off the top of my head.

    You don’t say, but I assume you are working on-site with your work system. So, the first consideration would be a firewall at your work’s network perimeter. A common security practice is to block outbound connections on unusual ports. This usually means anything not 80/tcp or 443/tcp. Other ports will be allowed on an exception basis. For example, developers may be allowed to access 22/tcp outbound, though that may also be limited to only specific remote IP addresses.

    You may also have some sort of proxy and/or Cloud Access Security Broker (CASB) software running on your work system. This setup would be used to inspect the network connections your work system is making and allow/block based on various policy settings. For example, a CASB might be configured to look at a domain reputation service and block connections to any domain whose reputation is considered suspect or malicious. Domains may also be blocked based on things like age or category. For this type of block, the port used won’t matter. It will just be “domain something.tld looks sketchy, so block all the things”. With “sketchy” being defined by the company in its various access policies.

    A last reason could be application control. If the services you are trying to connect to rely on a local program running on your work system, it’s possible that the system is set to prevent unknown applications from running. This setup is less common, but it is growing in popularity (it just sucks big old donkey balls to get set up and maintain). The idea being that only known and trusted applications are allowed to run on the system, and everything else is blocked by default. This looks like an application just crashing to the end user (you), but it provides a pretty nice layer of protection for the network defenders.

    Messing with the local pc is of course forbidden.

    Ya, that’s pretty normal. If you have something you really need to use, talk with your network security team. Most of us network defenders are pretty reasonable people who just want to keep the network safe, without impacting the business. That said, I suspect you’re going to run into issues with what you are trying to run. Something like SyncThing or some cloud-based storage is really useful for businesses. But, businesses aren’t going to be so keen to have you backing their data up to your home server. Sure, that might not be your intention, but this is now another possible path for data to leave the network which they need to keep an eye on. All because you want to store your personal data on your work system. That’s not going to go over well. Even worse, you’re probably going to be somewhat resistant when they ask you to start feeding your server’s logs into the business’s log repository, since that is what they would need to prove that you aren’t sending business data to it. It’s just a bad idea all around.

    I’d suspect Paperless is going to run into similar issues. It’s a pretty obvious way for you to steal company data. Sure, this is probably not your intention, but the network defenders have to consider that possibility. Again, they are likely to outright deny it. Though if you and enough folks at your company want to use something like this, talk with your IT teams, it might be possible to get an instance hosted by the business for business use. There is no guarantee, but if it’s a useful productivity package, maybe you will have a really positive project under your belt to talk about.

    FreshRSS you might be able to get going. Instead of segregating services by port, stand up something like NGinx on port 443 and configure it as a reverse proxy. Use host headers to separate services such that you have sync.yourdomain.tld mapped to your SyncThing instance, office.yourdomain.tld mapped to your paperless instance and rss.yourdomain.tld mapped to FreshRSS. This gets you around issues with port blocking and makes managing TLS certificates easier. You can have a single cert sitting in front of all your services, rather than needing to configure TLS for each service individually.
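
    For reference, the NGinx side of that is just a few server blocks. A minimal sketch, assuming the services listen locally on ports like 8384, 8000 and 8080 (adjust to whatever your containers actually expose; the cert paths are placeholders too):

    server {
        listen 443 ssl;
        server_name sync.yourdomain.tld;
        ssl_certificate     /etc/nginx/certs/yourdomain.tld.crt;
        ssl_certificate_key /etc/nginx/certs/yourdomain.tld.key;
        location / {
            proxy_pass http://127.0.0.1:8384;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    # Repeat the block with server_name office.yourdomain.tld / rss.yourdomain.tld and the
    # matching proxy_pass port. A single wildcard or SAN cert can cover all of them.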




  • I run Pi-Hole in a docker container on my server. I never saw the point in having a dedicated bit of hardware for it.
    That said, I don’t understand how people use the internet without one. The times I have had to travel for work, trying to do anything on the internet reminded me of the bad old days of the '90s with pop-ups and flashing banners enticing me to punch the monkey. It’s just sad to see one of the greatest communications platforms we have ever created reduced to a fire-hose of ads.
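
    For anyone curious, running it in Docker is about as minimal as it gets. A rough sketch using the official pihole/pihole image (the env var names and paths are from memory and may differ by image version, so check the image docs):

    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
      -e TZ=Etc/UTC \
      -e WEBPASSWORD=changeme \
      -v "$(pwd)/etc-pihole:/etc/pihole" \
      --restart unless-stopped \
      pihole/pihole:latest

    Then point your router’s DHCP settings at the host’s IP for DNS and you’re off.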


  • Ya, AI as a tool has its place. I’m currently working on documentation to meet some security compliance frameworks (I work in cybersecurity). Said documentation is going to be made to look pretty and get a check in the box from the auditors. It will then be stored in a SharePoint library to be promptly lost and ignored until the next time we need to hand it over to the auditors. It’s paperwork for the sake of paperwork. And I’m going to have AI spit out most of it and just pepper in the important details and iron out the AI hallucinations. Even with the work of fixing the AI’s output, it will still take less time than making up all the bullshit on my own. This is what AI is good for. If I actually care about the results, and certainly if I care about accuracy, AI won’t be leaned on all that much.

    The technology is actually pretty amazing, when you stop and think about it. But, it is also often a solution in search of a problem.


  • What you are trying to do is called P2V, for Physical to Virtual. VMWare used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMWare Player (I’d test this before wiping the drive) and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive, nothing sucks like discovering you lost data because you just had to rush things.
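
    If the VMWare tools give you trouble, qemu-img is a common alternative for the conversion step (not a VMWare tool, just what I’d reach for; the file names below are placeholders):

    # VMDK -> VirtualBox VDI
    qemu-img convert -f vmdk -O vdi converted-disk.vmdk converted-disk.vdi

    # VMDK -> qcow2 for QEMU/KVM
    qemu-img convert -f vmdk -O qcow2 converted-disk.vmdk converted-disk.qcow2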


  • If the goal is stability, I would have likely started with an immutable OS. This creates certain assurances for the base OS to be in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to tag things to specific versions and update those manually (see the compose snippet below this list). Finally, while I really like AppImages, updating them is 100% manual.
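
    To make that concrete, a minimal docker-compose sketch with a pinned tag (the service and version number are just placeholders for whatever you actually run):

    services:
      freshrss:
        # Pinned to a specific version; bump it deliberately instead of riding :latest.
        image: freshrss/freshrss:1.24.1
        restart: unless-stopped
        ports:
          - "8080:80"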

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff. You don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
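
    As an example of how little a one-off container needs, a bare-bones sketch (the package name and command are placeholders for whatever you’re actually trying to run):

    FROM debian:bookworm-slim

    # Install just the one package this container exists for.
    RUN apt-get update && apt-get install -y --no-install-recommends some-package \
        && rm -rf /var/lib/apt/lists/*

    # Run it in the foreground so the container lives and dies with the process.
    CMD ["some-package", "--config", "/config/some-package.conf"]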


  • Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
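
    If you do go the ClamAV route, the basic setup is only a few commands. A rough sketch for Debian/Ubuntu-style systems (package and service names can differ by distro):

    sudo apt install clamav clamav-daemon        # scanner plus clamd
    sudo systemctl enable --now clamav-freshclam # keeps signatures updated
    sudo systemctl enable --now clamav-daemon    # needed for on-access scanning (which also needs the OnAccess* options in clamd.conf)

    # A manual recursive scan of your home directory, printing only infected files:
    clamscan -r --infected "$HOME"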


  • With intermittent errors like that, I’d work through the following test plan:

    1. Check for disk errors - You already did this with the SMART tools.
    2. Check for memory errors - Boot a USB drive to memtest86 and test.
    3. Check for overheating issues - Thermal paste does wear out; check your logs for overheating warnings (a couple of useful commands are sketched below this list).
    4. Power issues - Is the system powered straight from the wall or a surge protector? While it’s less of an issue these days, AC power coming from the wall should have a consistent sine wave. If that wave isn’t consistent, it can cause a voltage ripple on the DC side of the power supply. This can lead to all kinds of weird fuckery. A good surge protector (or UPS) will usually filter out most of the AC inconsistencies.
    5. Power Supply - Similar to above, if the power supply is having a marginal failure it can cause issues. If you have a spare one, try swapping it out and seeing if the errors continue.
    6. Processor failure - If you have a spare processor which will fit the motherboard, you could try swapping that and seeing if the errors continue.
    7. Motherboard failure - Same type of thing. If you have a spare, swap and look for errors.
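
    A few of those checks can be done from a terminal before you start swapping parts. A rough sketch, assuming smartmontools and lm-sensors are installed and /dev/sda is the disk in question:

    sudo smartctl -a /dev/sda                                        # full SMART report (likely what you already ran)
    sudo journalctl -k | grep -iE 'thermal|throttl|mce|temperature'  # kernel log entries for heat or machine-check errors
    sensors                                                          # current temperatures, after running sensors-detect once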

    At this point, you’ll have tested basically everything and likely found the error. For most errors like this, I’ve rarely seen it go past the first two tests (drive/RAM failure), with the third (heat) picking up the majority of the rest. Power issues I’ve only ever seen in old buildings with electrical systems which probably wouldn’t pass an inspection. Though, bad power can cause other hardware failures. It’s one reason to have a surge protector in line at all times anyway.


  • I started self hosting in the days well before containers (early 2000’s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That paired with a docker-compose.yaml (also built off a template) means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
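
    For the curious, the compose side of that template is short. A sketch using Valheim as the example (the image is whatever you build from your own Dockerfile; the save path inside the container is a placeholder, the ports are Valheim’s defaults):

    services:
      valheim:
        build: .                      # your Dockerfile template, tweaked for this game's AppId
        restart: unless-stopped
        ports:
          - "2456-2458:2456-2458/udp" # Valheim's default server ports
        volumes:
          - ./saves:/data/saves       # keep world/save data outside the container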




  • I know that, during my own move from Windows to Linux, I found that the USB drive tended to lag under heavy read/write operations. I did not experience that with Linux directly loaded on a SATA SSD. I also had some issues dealing with my storage drive (NVMe SSD) still using an NTFS file system. Once I went full Linux and ext4, it’s been nothing but smooth sailing.

    As @[email protected] pointed out, performance will depend heavily on the generation of USB device and port. I was using a USB 3.1 device and a USB 3.1 port (no idea on the generation). So, speeds were ok-ish. By comparison, SATA 2 tops out around 300 MB/s and SATA 3 around 600 MB/s. Even though USB 3.1 looks comparable on paper, a SATA SSD sustains those speeds far more consistently than a USB flash drive, so the SSD almost certainly blew the real-world transfer rate of my USB device out of the water. When I later upgraded to an NVMe drive, things just got better.

    Overall, load times from the USB drive are the one place I wouldn’t trust testing Linux on USB. It’s going to be slower and have lag compared to an SSD. Read/write performance should otherwise be comparable to Windows. Though, taking the precaution of either dual booting or backing up your Windows install can certainly make sense to test things out.
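
    If you want to quantify the difference before committing, a quick read-speed check is easy. A rough sketch (device names are placeholders, and hdparm needs root):

    sudo hdparm -t /dev/sda    # buffered sequential read test on the internal SSD
    sudo hdparm -t /dev/sdb    # same test against the USB drive, for comparison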


  • force binary choices that don’t align with household rules or with children’s maturity levels.

    This has been my main experience with “parental controls”. As soon as they are turned on, I lose any ability to manage the experiences available to my children. So, in areas where I see them as mature enough to handle something, the only way I can allow them access to that experience is to completely bypass the controls. In many ecosystems, if I judge that one of my children could handle a game and the online risks associated with it, I can’t simply allow that game. Instead, I need to maintain a full adult account for them to use. You also run into a lot of situations where the reason a game is blocked for children is unclear or an obvious “better safe than sorry” knee-jerk reaction. Ultimately, parental controls end up being far more frustrating than empowering. I’d rather have something that just says, “this game/movie/etc your kid is asking for is restricted based on reasons X, Y and Z. Do you want to allow it?” Log my response and go with it. Like damned near any choice in software settings, quit trying to out-think me on what I want, give me a choice and respect that choice.


  • It’s been a few years since I did my initial setup (8 apparently, just checked); so, my info is definitely out of date. Looking at the Ubuntu site they still list Ubuntu 16.04, but I think the info on setting it up is still valid. Though, it looks like they only list setting up a mirror or a stripe set without parity. A mirror is fine, but you trade half your storage space for complete data redundancy. That can make sense, but usually not for a self hosting situation. A stripe set without parity is only useful for losing data, never use this. The option you’ll want is a raidz, which is a stripe set with parity. The command will look like:

    zpool create zpool raidz /dev/sdb /dev/sdc /dev/sdd
    

    This would create a zpool named “zpool” from the drives at /dev/sdb, /dev/sdc and /dev/sdd.

    I would suggest spending some time reading up on the setup. It was actually pretty simple to do, but it’s good to have a foundation to work with. I also have this link bookmarked, as it was really helpful for getting rolling snapshots set up. As with the data redundancy given by RAID, snapshots do not replace backups; but, they can be used as part of a backup strategy. They also help when you make a mistake and delete/overwrite a file.
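
    The snapshot side of things is pleasantly simple once the pool exists. A quick sketch using the pool name from above (zpool/data is a hypothetical dataset, created with “zfs create zpool/data”):

    zfs snapshot zpool/data@2024-01-01   # point-in-time snapshot of a dataset
    zfs list -t snapshot                 # see which snapshots exist and how much space they hold
    zfs rollback zpool/data@2024-01-01   # roll the dataset back to that snapshot
    zpool status                         # general pool health, scrubs/resilvers, errors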

    Finally, to answer your question about hardware, my recollection and experience have been that ZFS is not terribly demanding of CPU. I ran an Intel Core i3 for most of the server’s life and only upgraded when I realized that I wanted to run game servers on it. Memory is more of an issue. The minimum requirement most often cited is 8GB, but I also saw a rule of thumb that you want 1GB of memory for each TB of storage. In the end, I went with 8GB of RAM, as I only had 4TB of storage (three 2TB disks in a RAIDZ1). But, also think about what other workloads you have on the system. When built, I was only running NextCloud, NGinx, Splunk, PiHole and WordPress (all in docker containers). And the initial 8GB of RAM was doing just fine. When I started running game servers, I started to run into issues. I now have 16GB and am mostly fine. Some game servers can be a bit heavy (e.g. Minecraft, because fucking Java), but I don’t normally see problems. Also, since the link I provided mentioned it, skip ECC memory. It’s almost never worth the cost, and for home use that “almost never” gets much closer to “actually never”.

    When choosing disks, keep in mind that you will need a minimum of 2 disks and you effectively lose the storage space of one of the disks in the pool to parity storage (assuming all disks are the same size). Also, it is best for all of the disks to be the same size. You can technically use different-sized disks in the same pool; but, the larger disks get treated as the same size as the smaller disks. So long as the pool is healthy, read speeds are better than a single disk as the read can be spread out among the pool. But, write speeds can be slower, as the parity needs to be calculated at write time. Otherwise, you’re pretty free to choose any disks which will be recognized by the OS. You mention that 1TB is filling up; so, you’ll want to pick something bigger. I mentioned using spinning disks, as they can provide a lot more space for the money. Something like a 14TB WD Red drive can be had for $280 ($20/TB). With three of those in a RAIDZ1 pool, you get ~28TB of storage and can tolerate one disk failure without losing data. With solid state disks, you can expect costs closer to $80/TB. Though, there is a tradeoff in speed. So, you need to consider what type of workloads you expect the storage pool to handle. Video editing on spinning rust is not going to be fun. Streaming video at 4k is probably OK, though 8k is going to struggle.

    A couple other things to think about are space in the chassis, drive connections and power. Chassis space is pretty obvious, you gotta put the disks in the box. Technically, you don’t have to mount the disks, they can just be sitting at the bottom of the case, but this can cause problems with heat shortening the lifespan of the drives. It’s best to have them properly mounted and fans pushing air over them. Drive connections are one of those things where you either have the headers or you don’t. Make sure your motherboard can support 3 more drives with the chosen interface (SATA, NVMe, etc.) before you get the drives. Nothing sucks more than having a fancy new drive only to be unable to plug it into the motherboard. Lastly, drives (and especially spinning drives) can be power hungry. Make sure your power supply can support the extra power requirements.

    Good luck whatever route you pick.


  • Probably the easiest solution would be to just chuck a larger disk in the system and retain the original drive for the operating system. If you do not need the high speed of an SSD, you may be able to get more storage space for the money by going with a spinning disk. 7200RPM drives are fast enough for most applications, though you may run into issues streaming 4K (or higher) resolution video.

    Another option would be to start building out a storage pool using some type of RAID technology. On my own server, I use ZFS for the data partition. It is basically a software RAID. I use a RAID-Z1 configuration, which stripes the data over multiple disks (three in my case) and uses a parity calculation to provide data redundancy. It also has the advantage that it can be expanded to new disks dynamically and does not require that all disks are the same size. Initial setup does require more work and you are now monitoring multiple physical disks, but having a unified storage pool and redundancy is a nice way to go.

    Any way you go, just make sure you have good backups. Drives fail, and sometimes even early in their life. Backblaze reports can be an interesting read when looking at drive options, as they really do put the drives through the wringer.


  • Yes, though depending on the media you are running the OS and game from, the performance could be worse than you would expect from an install on the main system media. For example, when I was testing moving over, I had Arch installed on a USB device and had some issues with I/O bandwidth. But, I also had a folder on my main storage drive to run Steam games from and this performed OK. It was formatted NTFS; so, there were some other oddities. But, it worked just fine and managed to convince me that I’d do OK under Linux. Took the plunge and I’ve been happy with the decision ever since.