sylver_dragon
- 0 Posts
- 90 Comments
While I don’t know the specific post you are referring to, malware exists for Linux. Here’s a great overview from last year. If someone wants to argue, “oh, it’s from a security company trying to sell a product,” then let me point you at the Malware Bazaar and specifically the malware tagged elf. Those are real samples of real malware in the Linux-specific ELF executable binary format (warning: yes, it’s real malware, don’t run anything from this site). On the upside, most seem to be Linux variants of the Mirai botnet. Not something you want running, but not quite as bad as ransomware. But, dig a bit and there are other threats. Linux malware exists, it has for a long time, and it’s getting more prevalent as more stuff (especially servers) runs on Linux.
While Linux is far more secure than Windows by design, it’s not malware proof. It is harder for malware to move from user space into root (usually), but that’s often not needed for the activities malware gets up to today. Ransomware, crypto miners and info stealers will all happily execute in user-land. And for most people, this is where their important stuff lives. Linux’s days of living in “security through obscurity” are over. Attackers are looking at Linux now and starting to go after it.
All that said, is it worth having a bloated A/V engine doing full on-access scanning? That depends on how you view the risk. Many of the drive-by type attacks (e.g. ClickFix, fake tech-support scams) heavily target Windows and would fail on a Linux system. The malware and backdoors that come bundled with pirated software are likely to fail on a Linux system, though I’ll admit to not having tested that sort of thing with Wine/Proton installed. For those use cases, I’d suggest not downloading pirated software. Or, if you absolutely are going to, run those files through ClamAV at minimum.
Personally, I don’t feel the need to run anything as heavy as on-access file scanning or anything to keep trawling memory for signatures on my home systems. Keeping software up to date and limiting what I download, install and run is enough to manage my risk. I do have ClamAV installed to let me do a quick, manual scan of anything I do download. But, I wouldn’t go so far as to buy A/V product. Most of the engines out there for Linux are crap anyway.
Professionally, I am one of the voices who pushed for A/V (really EDR) on the Linux systems in my work environment. My organization has a notable Linux footprint and we’ve seen attackers move to Linux based systems specifically because they are less likely to be well monitored. In a work environment, we have less control over how the systems get (ab)used and have a higher need for telemetry and investigation.
sylver_dragon@lemmy.world to
Gaming@lemmy.zip • Fable’s evil landlords won’t grow devil horns, as reboot ditches classic character morphing due to a lack of belief in objective arseholery
171 · 1 month ago
I’d just be happy to see “evil” choices which weren’t cartoonishly silly. So many of these games end up offering you choices like:
- Kiss the baby, donate all your money to an orphanage.
- Kill the baby, cook and eat it in front of the mother.
There’s never anything like:
- Kiss the baby, take over the orphanage, run an outward front which looks like a fantastic charitable organization while training the orphans to commit crimes for you.
Really well-done “evil” should be loved by the people and seem outwardly good, using that as cover to do selfish things. But that is much harder than “Press X to murder an innocent for no reason”.
sylver_dragon@lemmy.world to
Technology@lemmy.ml • AI PCs aren’t selling, and Microsoft’s PC partners are scrambling
5 · 1 month ago
Rather than interact with a machine, you’ll just be walking around, sipping coffee, having thoughtful conversations with a bot laughing along with your jokes as it writes your letter and does your taxes.
So, basically the computers from Star Trek: TNG. I’d go for that, but unfortunately, what we’ll get instead is enshitified AI slop which exists to suck a subscription fee out of you every month while pushing ads.
sylver_dragon@lemmy.world to
Games@lemmy.world • Ubisoft initiates colossal restructure to become a more ‘gamer-centric’ company
8 · 1 month ago
They are chopping the development teams and titles up into convenient bite-sized chunks. Ubisoft will hang onto the large titles in the Vantage Studios vertical, and the rest will be spun off or sold off. Any spun-off studios will be saddled with crippling debt.
sylver_dragon@lemmy.world to
Gaming@lemmy.zip • The Elder Scrolls Online’s smaller expansions are “not in any way” a result of last year’s layoffs
3 · 1 month ago
So, the layoffs didn’t cause the change to smaller expansions. The change to smaller expansions made the layoffs easier.
sylver_dragon@lemmy.world to
Technology@lemmy.ml • Exclusive: Beijing tells Chinese firms to stop using US, Israeli cybersecurity software, sources say
15 · 1 month ago
I mean, no shit? Part of the Snowden leaks was information that the NSA had intercepted Cisco routers and backdoored them before they were shipped on to international customers. So, even without willing action by US vendors, there is that to worry about. And the idea that a private company would install a backdoor for US spy agencies in its infrastructure isn’t new. The fact that any Chinese company is using US hardware/software just seems incredibly stupid. And no one should be using CheckPoint.
It’s the same reason Huawei was thrown out of US infrastructure. You cannot build trusted architecture with hardware/software from a nation which you know wants to hack you. I work for a US-based company in cybersecurity; we treat WeChat as Chinese state spyware, because it is. We wouldn’t consider a router or firewall from a Chinese company and we treat any software from China with outright suspicion. Sure, that all sucks and we may be missing out on some great stuff which isn’t malicious. But, the risks far outweigh the benefits. I’d expect my Chinese counterparts to be making the exact same risk calculation for US-based tech.
sylver_dragon@lemmy.world to
Linux@lemmy.ml • Possible to recover a deleted .zst file?
4 · 2 months ago
You could try using Autopsy to look for files on the drive. Autopsy is a forensic analysis toolkit, which is normally used to extract evidence from disk images or the like. But, you can add local drives as data sources and that should let you browse the slack space of the filesystem for lost files. This video (not mine, just a good enough reference) should help you get started. It’s certainly not as simple as the photorec method, but it tends to be more comprehensive.
As @[email protected] pointed out, this seems to be a cover for c’t magazine. Specifically it seems to be for November 2004. heise.de used to have a site which let you browse those covers and you could pull any/all of them. But, that website seems to have died sometime in 2009. Thankfully, the internet remembers and you can find it all on archive.org right here. You may need to monkey about with capture dates to get any particular cover, but it looks like a lot of them are there.
Also, as a bit of “teach a person to fish”, ImgOps is a great place to start a reverse image search. It can often get you from an image to useful information about that image (e.g. a source) pretty quickly. I usually use the TinEye reverse image search for questions like this.
sylver_dragon@lemmy.world to
Selfhosted@lemmy.world • Question about accessing my services from corporate Network
8·2 months agoI can think of a couple of reasons off the top of my head.
You don’t say, but I assume you are working on-site with your work system. So, the first consideration would be a firewall at your work’s network perimeter. A common security practice is to block outbound connections on unusual ports. This usually means anything not 80/tcp or 443/tcp. Other ports will be allowed on an exception basis. For example, developers may be allowed to access 22/tcp outbound, though that may also be limited to only specific remote IP addresses.
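As a sketch, that sort of default-deny egress policy might look like the following nftables config. The specific ports and the allowed SSH destination range here are illustrative, not anyone’s real policy:

```nft
table inet egress {
  chain output {
    type filter hook output priority 0; policy drop;

    ct state established,related accept  # replies to existing connections
    oif lo accept                        # loopback traffic
    udp dport 53 accept                  # DNS
    tcp dport { 80, 443 } accept         # standard web ports
    # Exception: 22/tcp outbound, but only to an approved address range
    tcp dport 22 ip daddr 203.0.113.0/24 accept
  }
}
```

Anything you try to run on a non-standard port hits the `policy drop` and silently times out, which is exactly the symptom described in the question.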
You may also have some sort of proxy and/or Cloud Access Security Broker (CASB) software running on your work system. This setup would be used to inspect the network connections your work system is making and allow/block based on various policy settings. For example, a CASB might be configured to look at a domain reputation service and block connections to any domain whose reputation is considered suspect or malicious. Domains may also be blocked based on things like age or category. For this type of block, the port used won’t matter. It will just be “domain something.tld looks sketchy, so block all the things”. With “sketchy” being defined by the company in its various access policies.
A last reason could be application control. If the services you are trying to connect to rely on a local program running on your work system, it’s possible that the system is set to prevent unknown applications from running. This setup is less common, but it’s growing in popularity (it just sucks big old donkey balls to get set up and maintain). The idea being that only known and trusted applications are allowed to run on the system, and everything else is blocked by default. This looks like an application just crashing to the end user (you), but it provides a pretty nice layer of protection for the network defenders.
Messing with the local PC is, of course, forbidden.
Ya, that’s pretty normal. If you have something you really need to use, talk with your network security team. Most of us network defenders are pretty reasonable people who just want to keep the network safe without impacting the business. That said, I suspect you’re going to run into issues with what you are trying to run. Something like SyncThing or some cloud-based storage is really useful for businesses. But, businesses aren’t going to be so keen to have you backing their data up to your home server. Sure, that might not be your intention, but this is now another possible path for data to leave the network which they need to keep an eye on. All because you want to store your personal data on your work system. That’s not going to go over well. Even worse, you’re probably going to be somewhat resistant when they ask you to start feeding your server’s logs into the business’s log repository, since this is what they would need to prove that you aren’t sending business data to it. It’s just a bad idea all around.
I’d suspect Paperless is going to run into similar issues. It’s a pretty obvious way for you to steal company data. Sure, this is probably not your intention, but the network defenders have to consider that possibility. Again, they are likely to outright deny it. Though if you and enough folks at your company want to use something like this, talk with your IT teams; it might be possible to get an instance hosted by the business for business use. There is no guarantee, but if it’s a useful productivity package, maybe you will have a really positive project under your belt to talk about.
FreshRSS you might be able to get going. Instead of segregating services by port, stand up something like Nginx on port 443 and configure it as a reverse proxy. Use host headers to separate services such that you have sync.yourdomain.tld mapped to your SyncThing instance, office.yourdomain.tld mapped to your Paperless instance and rss.yourdomain.tld mapped to FreshRSS. This gets you around issues with port blocking and makes managing TLS certificates easier. You can have a single cert sitting in front of all your services, rather than needing to configure TLS for each service individually.
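A minimal sketch of that setup in Nginx. The certificate paths and the FreshRSS backend port are assumptions; 8384 is SyncThing’s usual web GUI port:

```nginx
# Route by host header; one wildcard cert covers every service.
server {
    listen 443 ssl;
    server_name sync.yourdomain.tld;
    ssl_certificate     /etc/ssl/yourdomain.tld/fullchain.pem;
    ssl_certificate_key /etc/ssl/yourdomain.tld/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8384;   # SyncThing web GUI
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name rss.yourdomain.tld;
    ssl_certificate     /etc/ssl/yourdomain.tld/fullchain.pem;
    ssl_certificate_key /etc/ssl/yourdomain.tld/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;   # FreshRSS container (placeholder port)
        proxy_set_header Host $host;
    }
}
```

To the corporate firewall, all of this is just ordinary HTTPS to one IP on 443; the routing happens on your server based on the `Host` header.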
sylver_dragon@lemmy.world to
Linux@lemmy.world • how to defend against embrace & extinguish?
4 · 2 months ago
Large companies are already heavily involved in Linux. Based on this data, some of the biggest contributors this year were Meta and Google. Both companies are at the forefront of enshitification of the internet, but they built their mountains of shit on a foundation of Linux.
Ya, I actually run both uBlock Origin and NoScript in my browser on my phone and personal machine (desktop). On my work laptop, those are a no-go. So, I get the full ads experience on my work machine when traveling.
I run Pi-Hole in a docker container on my server. I never saw the point in having a dedicated bit of hardware for it.
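For reference, a minimal docker-compose sketch of that setup. The timezone, volume path, and remapped UI port are examples, and you’d normally also set an admin password via the environment:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"     # DNS
      - "53:53/udp"
      - "8081:80/tcp"   # admin web UI, remapped off port 80
    environment:
      TZ: "America/New_York"
    volumes:
      - ./etc-pihole:/etc/pihole   # persist config across container rebuilds
    restart: unless-stopped
```

Point your router’s DHCP-advertised DNS at the host running this container and every device on the LAN gets the filtering for free.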
That said, I don’t understand how people use the internet without one. The times I have had to travel for work, trying to do anything on the internet reminded me of the bad old days of the '90s with pop-ups and flashing banners enticing me to punch the monkey. It’s just sad to see one of the greatest communications platforms we have ever created reduced to a fire-hose of ads.
sylver_dragon@lemmy.world to
Technology@lemmy.ml • Microsoft AI CEO Puzzled by People Being “Unimpressed” by AI
2 · 3 months ago
Ya, AI as a tool has its place. I’m currently working on documentation to meet some security compliance frameworks (I work in cybersecurity). Said documentation is going to be made to look pretty and get a check in the box from the auditors. It will then be stored in a SharePoint library to be promptly lost and ignored until the next time we need to hand it over to the auditors. It’s paperwork for the sake of paperwork. And I’m going to have AI spit out most of it and just pepper in the important details and iron out the AI hallucinations. Even with the work of fixing the AI’s output, it will still take less time than making up all the bullshit on my own. This is what AI is good for. If I actually care about the results, and certainly if I care about accuracy, AI won’t be leaned on all that much.
The technology actually is pretty amazing, when you stop and think about it. But, it’s also often a solution in search of a problem.
sylver_dragon@lemmy.world to
Linux@lemmy.world • Best way to image old drive and boot in vm?
4 · 4 months ago
What you are trying to do is called P2V, for Physical to Virtual. VMWare used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMWare Player (I’d test this before wiping the drive) and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.
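If the VMWare tools don’t pan out, the generic route is to image the drive with dd and convert it with qemu-img. A rough sketch; the device name is a placeholder, so triple-check it before running, since dd will happily overwrite the wrong disk:

```shell
# Image the physical disk (replace /dev/sdX with the actual source drive)
sudo dd if=/dev/sdX of=old-drive.img bs=4M status=progress conv=noerror,sync

# Convert the raw image into formats the common hypervisors understand
qemu-img convert -f raw -O qcow2 old-drive.img old-drive.qcow2   # QEMU/KVM
qemu-img convert -f raw -O vdi old-drive.img old-drive.vdi       # VirtualBox
```

The `conv=noerror,sync` keeps dd going past bad sectors on an aging drive, padding them with zeros rather than aborting mid-image.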
sylver_dragon@lemmy.world to
Linux@lemmy.ml • Different installation methods and system stability
21 · 4 months ago
If the goal is stability, I would have likely started with an immutable OS. This creates certain assurances that the base OS is in a known good state.
With that base, I’d tend towards:
Flatpak > Container > AppImage

My reasoning for this being:
- Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
- Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
- Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list; it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to tag things to specific versions and update those manually. Finally, while I really like AppImages, updating them is 100% manual.
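The version-pinning approach mentioned above looks like this in a docker-compose file (the image names and versions are just examples):

```yaml
services:
  app:
    image: ghcr.io/example/app:1.4.2   # hypothetical app, pinned to a known-good version
  db:
    image: postgres:16.3               # bump this tag deliberately, not via ":latest"
```

Updating then becomes an explicit edit-and-redeploy step, so a breaking change in one service never lands before you’ve checked its neighbors can handle it.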
This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff. You don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
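As an illustration of how small such a Dockerfile can be (“some-tool” is a stand-in for whatever package you’d otherwise install on the host):

```dockerfile
FROM debian:stable-slim
# Install only the one package; clean the apt cache to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends some-tool && \
    rm -rf /var/lib/apt/lists/*
USER nobody
ENTRYPOINT ["some-tool"]
```

Whatever dependency mess the package drags in stays inside the image, and deleting the container removes every trace of it from the system.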
Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.
The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, and watch for scam sites trying to convince you to paste random bash commands into the console (ClickFix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.
That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a ClickFix/fake A/V page or get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
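The scheduled-scan part can be as simple as a cron entry. A sketch (the clamscan flags are real; the schedule and paths are examples):

```cron
# Hypothetical /etc/cron.d/clamav-scan: nightly recursive scan of /home at 03:00
# -r recurse, -i report infected files only, --log append results to a log file
0 3 * * * root clamscan -r -i --log=/var/log/clamscan.log /home
```

Pair it with freshclam keeping signatures current and you get a reasonable baseline without the overhead of full on-access scanning.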
sylver_dragon@lemmy.world to
Linux@lemmy.world • Need assistance solving unexpected and random I/O errors in homelab
3 · 5 months ago
With intermittent errors like that, I’d use the following test plan:
- Check for disk errors - You already did this with the SMART tools.
- Check for memory errors - Boot a USB drive to memtest86 and test.
- Check for overheating issues - Thermal paste does wear out; check your logs for overheating warnings.
- Power issues - Is the system powered straight from the wall or a surge protector? While it’s less of an issue these days, AC power coming from the wall should have a consistent sine wave. If that wave isn’t consistent, it can cause a voltage ripple on the DC side of the power supply. This can lead to all kinds of weird fuckery. A good surge protector (or UPS) will usually filter out most of the AC inconsistencies.
- Power Supply - Similar to above, if the power supply is having a marginal failure it can cause issues. If you have a spare one, try swapping it out and seeing if the errors continue.
- Processor failure - If you have a spare processor which will fit the motherboard, you could try swapping that and seeing if the errors continue.
- Motherboard failure - Same type of thing. If you have a spare, swap and look for errors.
At this point, you’ll have tested basically everything and likely found the error. For most errors like this, I’ve rarely seen it go past the first two tests (drive/RAM failure), with the third (heat) picking up the majority of the rest. Power issues I’ve only ever seen in old buildings with electrical systems which probably wouldn’t pass an inspection. Though, bad power can cause other hardware failures. It’s one reason to have a surge protector in line at all times anyway.
sylver_dragon@lemmy.world to
Selfhosted@lemmy.world • Those who are hosting on bare metal: What is stopping you from using Containers or VM's? What are you self hosting?
9 · 5 months ago
I started self hosting in the days well before containers (early 2000’s). Having been through that hell, I’m very happy to have containers.
I like to tinker with new things, and with bare metal installs this has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids the problem completely. It also makes OS and hardware upgrades less likely to break stuff.

These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
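Something in the spirit of that steamcmd template might look like the sketch below. The base image, AppId, ports, and binary name here are for Valheim’s dedicated server, but treat all the details as assumptions to verify against the game you’re hosting:

```dockerfile
FROM steamcmd/steamcmd:latest
ARG APPID=896660   # Valheim dedicated server
# Download/update the server files at image build time
RUN steamcmd +force_install_dir /server +login anonymous \
    +app_update ${APPID} validate +quit
WORKDIR /server
EXPOSE 2456-2457/udp   # default game/query ports
ENTRYPOINT ["./valheim_server.x86_64"]
```

Swapping the AppId, exposed ports, and entrypoint is usually all it takes to adapt the template to a different game, with save data mounted in via the compose file.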

This one is a mixed bag. KYC regulations are very useful in detecting and prosecuting money laundering and crimes like human trafficking. But ya, if this data needs to be kept, the regulations around secure storage need to be just as tight. This sort of data should be required to be kept to cybersecurity standards like CMMC Level 3, audited by outside auditors, with violations treated as company and executive disqualifying events (you ran a company so poorly you failed to secure data, so you’re not allowed to run such a company for the next 10 years).
The sort of negligence of leaving a database exposed to the web should already result in business-crippling fines (think GDPR-style fines, listed in percentages of global annual revenue). A database which is exposed to the web and has default credentials, or no access control at all, should result in a C-level exec seeing the inside of a jail cell. There is zero excuse for that happening in a company tasked with protecting data. And I refuse to believe it’s the result of whatever scapegoat techs they try to pin this on. This sort of failure always comes from the top. It’s caused by executives who want everything done fast and cheap and don’t care about it being done right.