• 0 Posts
  • 86 Comments
Joined 3 years ago
Cake day: June 7th, 2023




  • I mean, no shit? Part of the Snowden leaks was information that the NSA had intercepted Cisco routers and backdoored them before they were shipped on to international customers. So, even without willing actions by US vendors, there is that to worry about. And the idea that a private company would install a backdoor for US spy agencies in their infrastructure isn’t new. The fact that any Chinese company is using US hardware/software just seems incredibly stupid. And no one should be using Check Point.

    It’s the same reason Huawei was thrown out of US infrastructure. You cannot build trusted architecture with hardware/software from a nation which you know wants to hack you. I work for a US-based cybersecurity company, and we treat WeChat as Chinese state spyware, because it is. We wouldn’t consider a router or firewall from a China-based company, and we treat any software from China with outright suspicion. Sure, that all sucks, and we may be missing out on some great stuff which isn’t malicious. But, the risks far outweigh the benefits. I’d expect my Chinese counterparts to be making the exact same risk calculation for US-based tech.


  • You could try using Autopsy to look for files on the drive. Autopsy is a forensic analysis toolkit, normally used to extract evidence from disk images or the like. But, you can add local drives as data sources, and that should let you browse the slack space of the filesystem for lost files. This video (not mine, just a good enough reference) should help you get started. It’s certainly not as simple as the PhotoRec method, but it tends to be more comprehensive.


  • As @[email protected] pointed out, this seems to be a cover for c’t magazine. Specifically it seems to be for November 2004. heise.de used to have a site which let you browse those covers and you could pull any/all of them. But, that website seems to have died sometime in 2009. Thankfully, the internet remembers and you can find it all on archive.org right here. You may need to monkey about with capture dates to get any particular cover, but it looks like a lot of them are there.

    Also, as a bit of “teach a person to fish”, ImgOps is a great place to start a reverse image search. It can often get you from an image to useful information about that image (e.g. a source) pretty quickly. I usually use the TinEye reverse image search for questions like this.


  • I can think of a couple of reasons off the top of my head.

    You don’t say, but I assume you are working on-site with your work system. So, the first consideration would be a firewall at your work’s network perimeter. A common security practice is to block outbound connections on unusual ports. This usually means anything not 80/tcp or 443/tcp. Other ports will be allowed on an exception basis. For example, developers may be allowed to access 22/tcp outbound, though that may also be limited to only specific remote IP addresses.

    You may also have some sort of proxy and/or Cloud Access Security Broker (CASB) software running on your work system. This setup would be used to inspect the network connections your work system is making and allow/block them based on various policy settings. For example, a CASB might be configured to look at a domain reputation service and block connections to any domain whose reputation is considered suspect or malicious. Domains may also be blocked based on things like age or category. For this type of block, the port used won’t matter. It will just be “domain something.tld looks sketchy, so block all the things”, with “sketchy” being defined by the company in its various access policies.

    A last reason could be application control. If the services you are trying to connect to rely on a local program running on your work system, it’s possible that the system is set to prevent unknown applications from running. This setup is less common, but it is growing in popularity (it just sucks big old donkey balls to set up and maintain). The idea is that only known and trusted applications are allowed to run on the system, and everything else is blocked by default. This looks like an application just crashing to the end user (you), but it provides a pretty nice layer of protection for the network defenders.

    “Messing with the local pc is of course forbidden.”

    Ya, that’s pretty normal. If you have something you really need to use, talk with your network security team. Most of us network defenders are pretty reasonable people who just want to keep the network safe without impacting the business. That said, I suspect you’re going to run into issues with what you are trying to run. Something like SyncThing or some cloud-based storage is really useful for businesses. But, businesses aren’t going to be so keen to have you backing their data up to your home server. Sure, that might not be your intention, but this is now another possible path for data to leave the network which they need to keep an eye on. All because you want to store your personal data on your work system. That’s not going to go over well. Even worse, you’re probably going to be somewhat resistant when they ask you to start feeding your server’s logs into the business’s log repository, since that is what they would need to prove that you aren’t sending business data to it. It’s just a bad idea all around.

    I’d suspect Paperless is going to run into similar issues. It’s a pretty obvious way for you to steal company data. Sure, that is probably not your intention, but the network defenders have to consider the possibility. Again, they are likely to outright deny it. Though if you and enough folks at your company want to use something like this, talk with your IT team; it might be possible to get an instance hosted by the business for business use. There is no guarantee, but if it’s a useful productivity package, maybe you will have a really positive project under your belt to talk about.

    FreshRSS you might be able to get going. Instead of segregating services by port, stand up something like Nginx on port 443 and configure it as a reverse proxy. Use host headers to separate services such that you have sync.yourdomain.tld mapped to your SyncThing instance, office.yourdomain.tld mapped to your Paperless instance and rss.yourdomain.tld mapped to FreshRSS. This gets you around issues with port blocking and makes managing TLS certificates easier. You can have a single cert sitting in front of all your services, rather than needing to configure TLS for each service individually.
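
    For the sake of illustration, a rough sketch of what that reverse proxy config might look like. This assumes Nginx with a Let’s Encrypt-style cert already issued; the hostnames, backend ports and certificate paths below are placeholders:

    ```nginx
    # Sketch only: hostnames, backend ports and cert paths are placeholders.
    server {
        listen 443 ssl;
        server_name sync.yourdomain.tld;

        ssl_certificate     /etc/letsencrypt/live/yourdomain.tld/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.tld/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8384;   # SyncThing web GUI (default port)
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 443 ssl;
        server_name rss.yourdomain.tld;

        ssl_certificate     /etc/letsencrypt/live/yourdomain.tld/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.tld/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8080;   # FreshRSS container (mapped port is a placeholder)
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    The same pattern repeats for office.yourdomain.tld, and a single cert (or a wildcard) covers all of them since everything terminates TLS at the proxy.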




  • I run Pi-Hole in a docker container on my server. I never saw the point in having a dedicated bit of hardware for it.
    That said, I don’t understand how people use the internet without one. The times I have had to travel for work, trying to do anything on the internet reminded me of the bad old days of the '90s with pop-ups and flashing banners enticing me to punch the monkey. It’s just sad to see one of the greatest communications platforms we have ever created reduced to a fire-hose of ads.
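
    For anyone wondering what that looks like, here is a minimal sketch of the sort of docker-compose file I mean, assuming the official pihole/pihole image; the password, timezone and host paths are placeholders, and the admin-password environment variable name has changed between Pi-Hole releases, so check the image docs:

    ```yaml
    services:
      pihole:
        image: pihole/pihole:latest
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "8080:80/tcp"            # web admin UI
        environment:
          TZ: "America/New_York"     # placeholder
          WEBPASSWORD: "changeme"    # variable name differs in newer releases
        volumes:
          - ./etc-pihole:/etc/pihole
          - ./etc-dnsmasq.d:/etc/dnsmasq.d
        restart: unless-stopped
    ```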


  • Ya, AI as a tool has its place. I’m currently working on documentation to meet some security compliance frameworks (I work in cybersecurity). Said documentation is going to be made to look pretty and get a check in the box from the auditors. It will then be stored in a SharePoint library to be promptly lost and ignored until the next time we need to hand it over to the auditors. It’s paperwork for the sake of paperwork. And I’m going to have AI spit out most of it and just pepper in the important details and iron out the AI hallucinations. Even with the effort of fixing the AI’s output, it will still take less time than making up all the bullshit on my own. This is what AI is good for. If I actually care about the results, and certainly if I care about accuracy, AI won’t be leaned on all that much.

    The technology actually is pretty amazing, when you stop and think about it. But, it’s also often a solution in search of a problem.


  • What you are trying to do is called P2V, for Physical to Virtual. VMware used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMware Player (I’d test this before wiping the drive) and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.
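
    If the VMware tooling gives you trouble, another rough path is to image the drive and convert it yourself. A sketch assuming qemu-img is installed; the filenames are placeholders:

    ```sh
    # Convert a VMware disk (VMDK) to VirtualBox's VDI format.
    qemu-img convert -f vmdk -O vdi source-disk.vmdk converted-disk.vdi

    # Or convert a raw dd image of the physical drive into a VMDK.
    qemu-img convert -f raw -O vmdk /path/to/disk-image.raw converted-disk.vmdk
    ```

    Either way, boot the result in a VM and check your data before touching the original drive.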


  • If the goal is stability, I would have likely started with an immutable OS. That gives you some assurance that the base OS is in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as Flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to tag things to specific versions and update those manually (see the compose sketch just after this list). Finally, while I really like AppImages, updating them is 100% manual.
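
    As an illustration of that pinning habit, a tiny docker-compose sketch; the service, image name and version are made-up placeholders:

    ```yaml
    services:
      someapp:
        # Pinned to a specific version instead of ":latest", so the image only
        # changes when I bump this line on purpose.
        image: ghcr.io/example/someapp:2.4.1
        restart: unless-stopped
    ```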

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a Flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff; you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
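
    To give a feel for how small that can be, here is a sketch of a one-off container for a hypothetical tool shipped as a release tarball; the base image choice, tool name, version and URL are all made up for the example:

    ```dockerfile
    FROM debian:bookworm-slim

    # Only this container gets these dependencies; nothing touches the base OS.
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*

    # Drop the tool into its own directory, walled off from everything else.
    RUN mkdir -p /opt/sometool \
        && curl -fsSL https://example.com/sometool/v1.2.3/sometool-linux-amd64.tar.gz \
        | tar -xz -C /opt/sometool

    WORKDIR /opt/sometool
    ENTRYPOINT ["./sometool"]
    ```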


  • Ultimately, it’s going to come down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. LockBit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
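
    For reference, the basics on a Debian/Ubuntu style system look something like this; package names can vary a bit by distro:

    ```sh
    # Scanner, daemon and signature updater.
    sudo apt install clamav clamav-daemon

    # Signatures are normally refreshed by the freshclam service, but you can run it by hand.
    sudo freshclam

    # On-demand scan of your home directory, only listing infected files.
    clamscan -r --infected ~/

    # Real-time (on-access) scanning is handled by clamonacc talking to clamd,
    # and needs to be enabled in clamd.conf; check your distro's docs for the specifics.
    ```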


  • With intermittent errors like that, I’d work through the following test plan:

    1. Check for disk errors - You already did this with the SMART tools.
    2. Check for memory errors - Boot a MemTest86 (or memtest86+) USB drive and let it run a few passes.
    3. Check for overheating issues - Thermal paste does wear out; check your logs for overheating warnings.
    4. Power issues - Is the system powered straight from the wall or a surge protector? While it’s less of an issue these days, AC power coming from the wall should have a consistent sine wave. If that wave isn’t consistent, it can cause a voltage ripple on the DC side of the power supply. This can lead to all kinds of weird fuckery. A good surge protector (or UPS) will usually filter out most of the AC inconsistencies.
    5. Power Supply - Similar to above, if the power supply is having a marginal failure it can cause issues. If you have a spare one, try swapping it out and seeing if the errors continue.
    6. Processor failure - If you have a spare processor which will fit the motherboard, you could try swapping it in and seeing if the errors continue.
    7. Motherboard failure - Same type of thing. If you have a spare, swap and look for errors.

    At this point, you’ll have tested basically everything and likely found the error. For most errors like this, I’ve rarely seen it go past the first two tests (drive/RAM failure), with the third (heat) picking up the majority of the rest. Power issues I’ve only ever seen in old buildings with electrical systems which probably wouldn’t pass an inspection. Though, bad power can cause other hardware failures. It’s one reason to have a surge protector in line at all times anyway.
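
    For the first few steps, the commands I would typically reach for look something like this (device names are placeholders):

    ```sh
    # 1. Disk health: overall SMART assessment plus a long self-test.
    sudo smartctl -H /dev/sda
    sudo smartctl -t long /dev/sda

    # 3. Heat: look for thermal/throttling messages in the kernel log and check current temps.
    sudo journalctl -k | grep -iE 'thermal|throttl|mce'
    sensors    # from the lm-sensors package
    ```

    Step 2 (memtest86) has to be booted from its own USB stick, so there is nothing to run from the installed system for that one.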


  • I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare-metal installs that has a way of adding cruft to the server and slowly dragging the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids the problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppID, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better; I currently just rebuild the image, but it gets the job done.
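
    For a rough idea of the shape of that compose template, here is a sketch with example Valheim-ish values plugged in; the AppID, ports and save path are the bits that get swapped per game, and the specific numbers and paths shown are illustrative placeholders to verify, not copied from a working setup:

    ```yaml
    services:
      valheim:
        build:
          context: .
          args:
            APPID: "896660"             # Steam AppID of the dedicated server
        ports:
          - "2456-2457:2456-2457/udp"   # game + query ports
        volumes:
          - ./saves:/home/steam/.config/unity3d/IronGate/Valheim   # world/save data
        restart: unless-stopped
    ```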




  • I know that, during my own move from Windows to Linux, I found that the USB drive tended to lag under heavy read/write operations. I did not experience that with Linux loaded directly on a SATA SSD. I also had some issues dealing with my storage drive (NVMe SSD) still using an NTFS file system. Once I went full Linux and ext4, it’s been nothing but smooth sailing.

    As @[email protected] pointed out, performance will depend heavily on the generation of USB device and port. I was using a USB 3.1 device and a USB 3.1 port (no idea on the generation). So, speeds were ok-ish. By comparison, SATA 2 tops out at 3 Gb/s (roughly 300 MB/s in practice). And while a SATA SSD may not saturate even that bandwidth, it almost certainly blew the sustained transfer rate of my USB drive out of the water. When I later upgraded to an NVMe drive, things just got better.

    Overall, load times are the one place I wouldn’t trust testing Linux on a USB drive. It’s going to be slower and have lag compared to an SSD. Read/write performance should otherwise be comparable to Windows. Though, taking the precaution of either dual booting or backing up your Windows install can certainly make sense while you test things out.
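
    If you want to see the gap for yourself before committing, a quick buffered-read benchmark on both devices makes it obvious (device names are placeholders):

    ```sh
    # Rough sequential read speed of the USB stick vs. the internal drive.
    sudo hdparm -t /dev/sdb       # USB drive
    sudo hdparm -t /dev/sda       # SATA SSD (an NVMe drive shows up as /dev/nvme0n1)
    ```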


  • “force binary choices that don’t align with household rules or with children’s maturity levels.”

    This has been my main experience with “parental controls”. As soon as they are turned on, I lose any ability to manage the experiences available to my children. So, in areas where I see them as mature enough to handle something, the only way I can allow them access to that experience is to completely bypass the controls. In many ecosystems, if I judge that one of my children could handle a game and the online risks associated with it, I can’t simply allow that game. Instead, I need to maintain a full adult account for them to use. You also run into a lot of situations where the reason a game is off-limits to children is unclear, or is an obvious “better safe than sorry” knee-jerk reaction. Ultimately, parental controls end up being far more frustrating than empowering. I’d rather have something that just says, “this game/movie/etc. your kid is asking for is restricted based on reasons X, Y and Z. Do you want to allow it?” Log my response and go with it. Like damned near any other choice in software settings: quit trying to out-think me on what I want, give me a choice and respect that choice.