• 1 Post
  • 186 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • So bizarrely the best experience is to self host and pirate. That’s what you get when the entire entertainment industry is hostile to consumers.

    When Netflix first became big, it was popular because it was a one-stop shop for almost all your content. It was like a big library of content in one place, you pay a reasonable monthly fee and it’s all there. Piracy dipped as a result.

    Now all the content is fragmented into numerous walled gardens you have to pay separate fees to access. People can only consume the same amount but now they have to pay 4 or 5 fees as the content is spread out.

    Unsurprisingly piracy is booming again.



  • It sounds like your system clock may be the issue. You have a hardware clock inside your device. Linux usually uses the internet (NTP) to set the time, but it still refers to your system clock. If the internet-provided time is too far off from your system clock, it may ignore it and display your system time instead.

    KDE respects the NTP settings used by your Linux system, while ironically Gnome does not and does its own thing directly with the time/date control. This is probably why you're now noticing a problem.

    So either your system clock is supposed to be UTC but is actually set to local time, or your system clock is correct but your timezone in Linux is way off.

    If you run timedatectl status in a terminal, it'll show your current local time, UTC and RTC time, as well as your timezone and whether the RTC is set to your local timezone or UTC. The RTC is the hardware clock in your device.

    If "RTC in local TZ" says no, then the RTC and UTC values should be the same, as your hardware clock is set to UTC time. And if the UTC time is wrong, then your system is using your hardware clock to work out UTC incorrectly. UTC is the zero timezone worldwide and has an absolute value: it's the same for everyone and you can easily find it with a search engine. If the displayed UTC is wrong on your system, then you're out of sync with everyone.

    So how to fix it if it's wrong:

    One way would be to tell your system what the hardware clock should be and then set it correctly. Use timedatectl set-local-rtc 1 to set it to your local time zone, or timedatectl set-local-rtc 0 if you want it to be UTC. You can use either, but UTC is better.

    That should fix the issue as the network time will now come in correctly.

    But if you want, you can also manually set the local time and date with timedatectl set-time hh:mm:ss. Once that is set, your RTC should also be changed and back in sync, depending on whether you set it up to be local or UTC. When you set the local time it will work out the UTC value based on your timezone. Note that if the timezone is wrong, it'll still be wrong!

    If you can't set the time because NTP (network time) is running, you could leave it and the clock should now sort itself out. But if you want to force a manually set time, you can turn off NTP with timedatectl set-ntp false, then set the time by hand using timedatectl set-time hh:mm:ss.

    If you're still getting NTP error messages, you could also disable the NTP system job temporarily: systemctl disable --now chronyd. Turn it back on afterwards with systemctl enable --now chronyd.

    Finally, do make sure the timezone is correct. I know you say it is, but timedatectl shows what the system thinks it is, and if that's wrong then RTC/UTC will still be wrong, as the timezone is used to convert from local time to UTC. You can change the timezone with timedatectl set-timezone name.

    There are loads of valid timezones, but only official names will work. Find your local timezone's official name online or use timedatectl list-timezones to see all the options. You can filter the list using grep/egrep etc.

    Hopefully that'll fix the issue for you. You can also boot into your BIOS and manually set the hardware clock if needs be, but Linux still needs to know whether it's supposed to be UTC or local time.
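    As a quick cross-check alongside timedatectl (a minimal sketch; the format strings are just illustrative), you can print local time and UTC side by side with plain date:

    ```shell
    # Print local time and UTC side by side. If the UTC line here
    # doesn't match real-world UTC (easy to look up online), your
    # RTC or timezone setup is off even if local time "looks" right.
    date    "+local: %H:%M:%S %Z (offset %z)"
    date -u "+utc:   %H:%M:%S %Z"
    ```

    The offset shown on the local line should match your timezone's UTC offset; if it doesn't, the timezone is the thing to fix first.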


  • I’d recommend either OpenSuSE or Fedora, both with KDE. They’re big, well supported distros, which should install without issue and provide a slick modern experience. I use OpenSuSE, as I find the YaST system tools convenient and user friendly.

    I'd avoid Ubuntu; it has multiple issues. Mint is a good distro, but I think any big mainstream distro "just works" now, so I'd go for something with a slicker desktop. I prefer KDE, which is available on Mint but just isn't as tightly integrated as their own Cinnamon desktop.



  • I’ve tried Arch - it allows you to make a system that is exactly what you want. So no bloat installing stuff you never need or use. It also gives you absolute control.

    On other distros like Fedora, you get a pre configured system set up for a wide range of users. You can reduce down the packages somewhat but you will often have core stuff installed that is more than you’ll need as it caters to everyone.

    Arch allows you to build it yourself, installing only the things you actually want and configuring them exactly how you want.

    Also you learn an awful lot about Linux building your system in this way.

    I liked building an Arch system in a virtual machine, but I don't think I could commit to maintaining an Arch install on my host. I'm happy to trade bloat for a "standard" experience that means I can get generic support. The more unique your system, the more unique your problems can be, I think. But I can see the appeal of Arch - "I made this" is a powerful feeling.


  • I think the new device is good news. I can see what you’re saying - the benefit is if Steam Machines expand the PC games market with former console only players. But otherwise the threshold for PC development is already much lower than consoles; there are no dev kit fees, a wide choice of engines to target, relatively greater independence etc.

    The Steam Machine may help somewhat by providing a specific hardware profile to target, but the games are still on Steam's store, so they still have to run widely on Windows or Linux. That's always been the complexity of PC development, and the Steam Machine doesn't change it much. Admittedly the Steam Verified benchmarks are useful for helping users understand what their kit can actually run, which will benefit indie devs.



    • OS --> Linux: OpenSuSE with KDE

    • YouTube --> FreeTube - open-source, private YouTube client for Linux, macOS and Windows

    • Downloading music/videos --> yt-dlp

    • Downloading videos/images --> gallery-dl

    • Email --> Thunderbird (really moved forward in the last few years)

    • Notes --> Joplin

    Selfhosting (mine is on a Raspberry Pi):

    • Streaming library --> Jellyfin

    • Photo library --> Immich

    • Downloads --> qBittorrent, Prowlarr, Radarr, Sonarr and LazyLibrarian in a Docker stack with a VPN

    • Smart home --> Home Assistant

    • File sync --> Syncthing (I don't have problems with long file names - maybe a Windows issue or Linux FS? I use ext4 on all my devices and don't use Windows anymore)


  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Timeshift · 29 days ago

    Looking at your error, it's because rsync is erroring.

    I'd start by testing rsync with an individual text file, saving to /dev/dm-0, and see what error is returned.

    Timeshift is good, but it's basically just a tool that uses rsync to save a copy of your system folders (or other folders if you wish).

    Rsync needs to be able to read the source and write to the destination, so I’d start with testing that Rsync is able to do that.

    Given you're using an encrypted partition, it's possible you're trying to read/write to the wrong locations. You've provided device UUIDs, but you'd probably actually need to be backing up the mounted, decrypted locations. I.e. the root file system / will actually be a mounted location in your Linux setup, probably under /run, with symlinks pointing to it for all the different system folders. Similar for /home if you want to back up personal files.

    A device UUID would point to the filesystem containing the encrypted container (managed by LUKS), which will have very limited read/write permissions, rather than directly to the decrypted contents of / or /home as you'd expect in a normal system. In particular, if /dev/dm-0 (looks to be an NVMe drive) is an encrypted destination, then you really want to be pointing directly to its decrypted, mounted location to write your files into, not the whole device.

    Edit: think of it like this - you don't want to back up the encrypted container with Timeshift, you want to back up the decrypted contents (your filesystem) into another location in your filesystem (encrypted or decrypted). If the destination is also an encrypted location, you need to back up into its filesystem, not the device where the encrypted container sits. So use specific filesystem paths, not UUIDs. That would be something like /mnt/folder or /run/folder, not /dev/anything, as that's a hardware location and not directly mounted in an encrypted filesystem, unlike in a non-encrypted system.
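    To make the smoke test concrete, here's a minimal sketch. The paths are hypothetical stand-ins (mktemp is used so it's safe to run anywhere); substitute the real mounted, decrypted destination, e.g. something under /mnt or /run, never /dev/dm-0:

    ```shell
    # Stand-ins for the real source file and the mounted, decrypted
    # destination (replace DEST with your actual mount point, e.g. /mnt/backup).
    SRC="$(mktemp)"
    DEST="$(mktemp -d)"
    echo "timeshift smoke test" > "$SRC"

    # Try the same kind of copy Timeshift would do and look at any error.
    # Falls back to plain cp if rsync isn't installed.
    if command -v rsync >/dev/null 2>&1; then
        rsync -a "$SRC" "$DEST"/ && echo "write OK"
    else
        cp "$SRC" "$DEST"/ && echo "write OK"
    fi
    ls -l "$DEST"
    ```

    If this fails against your real destination, the error message it prints is usually more informative than what Timeshift surfaces.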


  • Any point-and-click adventure game; there are loads, including old classics and good modern games.

    The Monkey Island remasters are fun and can be played with the mouse. The Broken Sword games are also good.

    The Rusty Lake games are great if you prefer puzzle games over narrative ones. They still have a great, somewhat surreal plot, just not delivered like a point-and-click narrative game.

    Also, if you haven't played Dwarf Fortress, now is the time to learn - the siege update came out this week. Mouse or keyboard, or both, but it definitely can be done one-handed.

    Vampire Survivors, which others have suggested, is a good shout; one hand on the keyboard is enough and it's very addictive.


  • 100% CPU use doesn't make sense. RAM would be the main constraint, not the CPU. Worth looking into - maybe a bug or a broken piece of software.

    Also, the DE may be more the issue than the distro itself. You could install an even more lightweight desktop environment like Openbox. It's also worth checking whether you're using X11 or Wayland. It's easy to imagine Wayland has not been optimised or extensively tested on something like your device, and it could easily be a random bug if the DE is pushing your CPU to 100%.
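    Checking X11 vs Wayland is one line. XDG_SESSION_TYPE is set by the session manager, so it may be empty outside a graphical session (hence the fallback):

    ```shell
    # Prints "x11", "wayland", or "unknown" if the variable isn't set
    # (e.g. when run from a bare TTY or over SSH).
    echo "session type: ${XDG_SESSION_TYPE:-unknown}"
    ```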

    There are also super lightweight distros like Puppy Linux.


  • It had to happen eventually, and this seems a reasonable time to make the move. It'll be beneficial for all Linux users, and probably a huge relief for the Gnome devs to be able to focus purely on Wayland.

    It will just suck a bit for those on rolling release distros who still experience major issues with Wayland, particularly when it's not the Gnome or Wayland projects that need to make the fix - looking at you, Nvidia.

    I wouldn't be surprised if other big DEs, such as KDE, start making firmer plans for dropping X11. I'm one of the 30% of KDE users still on X11 - for me it was Nvidia issues, and I remain anxious about being reliant on drivers from a notoriously bad manufacturer. Having said that, the drivers have improved massively over the past 18-24 months, for me at least, and maybe everyone moving over to Wayland is what's needed to force Nvidia to act.


  • In terms of KDE dependencies, you're talking basically about Qt. The number of packages you download shouldn't be too large, and they're likely shared by other Qt programs, which are common.

    However, there is also GSConnect, which is a Gnome extension that uses the KDE Connect protocol.

    I would say that your concerns about the KDE Connect dependencies should be balanced against the good Android and iOS support, and the wide use of KDE Connect means it is well maintained, supported and responsive to security updates. These considerations may outweigh the installation of packages you otherwise won't be using. It may be better to go mainstream and accept the dependencies than to hunt down a lesser-supported alternative and deal with its shortcomings.


  • So in terms of hardware, I use a Raspberry Pi 5 to host my server stack, including Jellyfin with 4K content. I have an NVMe module with a 500 GB stick, and an external HDD with 4 TB of space via USB. The Pi 5 is headless and accessed directly via SSH or RDC.

    The Raspberry Pi 5 has H.265 hardware decoding, and if you're serving one video at a time to one client you shouldn't have any issues, up to and including 4K. It will of course use resources to transcode if the client can't play that content directly, but the experience should be smooth for one user.

    For more clients, it will depend on how much heavy lifting the clients do. In my case I have a mini PC plugged into my TV; I stream content from my Pi 5 to the mini PC, and the mini PC does the heavy lifting in terms of decoding. The Pi 5 just transfers the video and the client does the hard work. If all your clients are capable, then such a setup would work with the Pi 5.

    An issue would come if you wanted to stream your content to multiple devices at the same time and the clients don't directly support H.265 content. In that case, the Pi 5 would have to transcode the content to another format bit by bit as it streams it to the client. It'd cope with one user for sure, but I don't know how many simultaneous clients it could support at 1440p.

    The other consideration is what other tools are in use on the server at the same time. Again, I live alone, so I'm generally the only user of my Pi 5 server's services. Many services are low-powered, but I do find things like importing a stack of PDFs into Paperless-ngx surprisingly CPU-intensive, and in that case the device could struggle if it's also expected to transcode content.

    I think from what you describe the Pi 5 could work, but you may also want to look at a higher-powered mini PC, as your budget would allow that.

    For reference, I use DietPi as the distro on my server, with a mix of DietPi packages (which are very well made for easy install and configuration) and Docker. I am using quite a few Docker stacks now due to the convenience of deployment. DietPi is Debian-based with a focus on pre-configured packages to make setup easy, but it is still a full Debian system and anything can be deployed on it.

    The other consideration is that the Pi 5 is an ARM device, while a mini PC would be x86_64. But so far I've not found any tools or software I've wanted that aren't compiled and available for the Pi 5, either via DietPi or Docker; ARM devices are popular in this realm. I did come across a bug in Docker on ARM devices which broke my VPN setup - that was very frustrating, and I had to downgrade Docker a few months ago while awaiting the fix. That may be worth noting, given Docker is very important in this realm and most servers globally are still x86.

    If I were in your position with $200, I'd buy the maximum CPU and GPU capability I could in one device, so I'd actually lean towards a mini PC. If you want to save money then the Pi 5 is reasonable value, but you'd need to include a case and may want to consider an NVMe or SSD companion board. Those costs add up, and the value of a mini PC may compare better as an all-in-one device, particularly if you can get a good one second-hand. There are also other SBCs that may offer even better value or more power than a Pi 5.

    Also bear in mind that I have both a mini PC and a Pi 5; they do different things. The Pi 5 is the server, but the mini PC is a versatile device and I play games on it, for example. If you will only have one server device and pre-existing smart TVs etc., you'll be more reliant on the server's capabilities, so again you may want to opt for the most powerful device you can afford at your price point.


  • Having experienced instability, I'd say that is a pretty good reason. It's one of those things that doesn't matter until it happens to you, and I think everyone assumes it won't happen to them.

    Having said that it can be managed. It’s infuriating when your OS just stops working, but if you have good backups and can roll back the system quickly it’s fine.

    Rolling releases are great for having the latest versions of software, but it’s also like constantly being a beta tester. And the distros approach to rolling release makes a big difference.

    Manjaro does have a small development team compared to other big-name rolling releases, so it just isn't able to do the same level of testing and prep as a better-resourced distro like Fedora, for example. It does a reasonably good job with a small team, but that inevitably makes things more difficult.

    Manjaro is also Arch-based but it's not Arch, and one source of breakage can be using the AUR. I think people see Manjaro as just a more convenient version of Arch, but Manjaro is its own distro, and using packages from the AUR can break things. People seem to forget that Arch is bleeding edge while Manjaro holds packages back for testing, so the two distros are not in sync.

    If Manjaro is used as Manjaro and not treated as Arch-light, then it's a fine distro. But it's somewhat pushed as an easier-to-use version of Arch, so inexperienced users in particular can get into trouble trying to use things like the AUR. Manjaro itself is generally fine.

    I personally don't recommend Manjaro to people. That's because, for me, there are rolling release distros which are better resourced (such as OpenSuSE or Fedora), better options for stable systems, and if users want Arch then Arch itself is the way to go. Manjaro is absolutely fine, but I wouldn't say it's the best option in any category, including Arch-based distros.


  • Quite a bad compromise of Xubuntu's and Canonical's security, and embarrassing too.

    They're being a bit vague and dismissive about the hack at the moment. As far as I can see, there is now only the 24.04 version linked on the downloads page (I'm not even sure the download link works). The recent 25.10 release (released 10th Oct) is no longer visible, and the blog posts that are visible talk about testing for 21.04 (posts from 2021).

    So presumably they’ve reverted to an archived version of their site while they investigate?


  • I have played with Arch in a VM - I learnt a lot about how Linux works setting it up. But the tutorials and guides are good, and you end up with a lean system with just what you want in it, and pretty much all configured directly by you.

    I can see why Arch is a popular distro and a base for other distros (like Manjaro and the currently fast-growing CachyOS).

    But I'm not at the point where I'd want to main it. My issue is that because everything is set up by me, it's a much more unique system, so if something breaks it could be any of a myriad of my own choices that are the cause. I'm nervous about having to problem-solve when things break, with solutions not working because of how my particular system is configured. It's probably a bit irrational, but I do quite like being on a distro where lots of other people have the exact same configuration as me, so when things break there is lots of generic help out there.

    That said, I would consider Arch-based distros like Manjaro or CachyOS, as they are in that vein of mostly standardised distros.


  • BananaTrifleViolin@lemmy.world to Linux@programming.dev · Do I dare say it 🥺 · 2 months ago

    Zorin has laudable aims, but it's delivered in a flawed way. It's essentially Gnome with extensions to make it look and feel like other GUIs. The problem is, Gnome is not a good base for this type of approach - it is fundamentally inflexible and not designed for it. So Zorin is basically deliberately breaking Gnome to make it into something it's just not meant to be under the hood.

    Zorin looks very nice graphically and seems good at first, but then the niggles come along. Minor, but constantly present.

    I think it's probably OK for a Linux newbie but not ideal long term, and it doesn't have the user base to make getting support as easy as it is with Mint, for example.

    If you do want to mimic other GUIs, then really don't start with Gnome. You can achieve much better results using KDE on any distro; KDE is flexible by design, and it doesn't require breaking fundamental design decisions made for Gnome in order to mimic something else. The only downside of doing it yourself with KDE is that if you want to perfectly mimic another GUI, it's a manual process of finding themes and skins that match the aesthetic you want.

    That's because Linux is its own thing and not focused on trying to mimic other DEs (even if some GUIs have superficial similarities to Windows or macOS).

    I get what Zorin is trying to do, but I think using Gnome is a mistake, and for me the basic idea of "familiar to ease you in" doesn't really work either. It's better for people to learn how Linux is different - there is a choice of design philosophies, but all of them are shaped around what Linux is and how it works, rather than what Windows or macOS are.


  • I think it’s good they’re making a new desktop environment, but I personally wouldn’t want to be beta testing an environment on my new laptop.

    I personally don't get the hype around Cosmic - I'm not clear on what makes it so exciting for people. It seems to be a reaction to the restrictive design philosophy of Gnome, without moving too far from it at the moment. It'll be interesting to see how far it does move from Gnome, and whether being written in Rust is actually meaningful to the end user.

    I can see it's good for the Linux world that a new and modern DE is being developed. It gives users choice and may prompt innovation in the other DEs too. But maybe I'm beyond the age where new is exciting - I value a stable and familiar environment, KDE in my case.

    I'm not against Cosmic in any sense - I just don't quite get the level of hype surrounding it. Maybe it'd be more exciting if I were a Gnome user? Maybe it's solving problems I don't seem to have in KDE?