• 0 Posts
  • 65 Comments
Joined 3 years ago
Cake day: June 23rd, 2023


  • Oh, oh I know this one!

    If your keyboard shortcut contains modifier keys, the typed keypresses will be interpreted together with the modifiers you’re still holding for the shortcut: Alt+a, Super+b, etc.

    Some keyboard shortcuts trigger on press; they can also trigger on release. This is why you need the sleep statement: to give you time to release the keys before the typing starts. You want the shortcut to trigger after release.

    I can choose between press and release in my window manager, but I’m not sure about doing it in (GNOME?) Ubuntu. Even if you can set the shortcut to only run on release, you’d still need to let go of all the keys instantly, so chaining with sleep is probably the best approach.

    Chaining sleep and ydotool in bash works for me in my window manager. Consider using “&&” instead of “;” to run the ydotool type command. Whatever is written after the “&&” only executes if the previous command (sleep 2) succeeds, and the “;” might be interpreted by the keyboard shortcut system as the end of the statement:

    sleep 2 && ydotool type abcde12345

    Or perhaps the shortcut system is just executing the program directly, not through a shell. In that case you would need to explicitly run bash so that the “&&” (or “;”) chaining works. Wrapping the lot in a bash command might look like this:

    bash -c "sleep 2 && ydotool type abcde12345"

    If that doesn’t work, I see nothing wrong with running a script to do it. You just need to get past whatever in the shortcut system is cutting the command off after the sleep statement.
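
    A minimal version of such a script might look like this (the script name, the two-second delay and the example text are placeholders, not anything from your setup):

    #!/usr/bin/env bash
    # type-snippet.sh: give yourself time to release the shortcut keys,
    # then type the text. Requires ydotoold to be running for your user.
    sleep 2
    ydotool type 'abcde12345'

    Point the keyboard shortcut at the script and it sidesteps any quoting or chaining quirks in the shortcut system.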

    Running ydotoold at user level is the recommended approach. It keeps everything inside your user context, which is better for security.





  • So the package is a specific driver branch, which will keep you on the 580 driver version through updates. It provides the drivers themselves and requires the matching utils package.

    You would install this instead of the meta-package from the official repositories. As shown on the AUR page:

    Conflicts:	nvidia, NVIDIA-MODULE, nvidia-open-dkms
    Provides:	nvidia, NVIDIA-MODULE
    

    This is also a DKMS package, which lets it build against whatever kernel you’re running, so you can keep using the module through regular system and kernel upgrades.

    So the idea would be: remove the nvidia drivers you have, install this one, and it’ll be like the upgrade and support drop never happened. You won’t get driver upgrades, but you wouldn’t anyway. It’s the mostly-safe way to version-pin the driver without actually pinning the package in pacman, which would count as a partial upgrade and is unsupported.
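
    Roughly, the swap might look like this (the AUR package names are my assumption for the 580 branch, and I’m using an AUR helper for brevity, so check them against the actual AUR page):

    # remove whichever driver package you currently have installed
    sudo pacman -Rns nvidia-open-dkms nvidia-utils
    # build and install the version-pinned 580 branch from the AUR
    paru -S nvidia-580xx-dkms nvidia-580xx-utils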



  • I was trying to finalize a backup device to gift to my dad over Christmas. We’re planning to use each other for offsite backup, and save on the cloud costs, while providing a bridge to each other’s networks to get access to services we don’t want to advertise publicly.

    It’s a Beelink ME Mini running Arch: btrfs on LUKS for the OS on the eMMC storage, with the fTPM handling the decryption automatically.

    I have built a few similar boxes since and migrated the build over to ansible, but this one was the proving ground and template for them. It was missing some of the improvements I had since built into the deployed boxes, notably:

    • zfs on luks on the NVMe drives
    • the linux-lts kernel (zfs compatibility)
    • UKI for the secureboot setup

    I don’t know what possessed me, but I decided that the question marks and open tasks in my original build documentation should be investigated as I went. I was hoping to export some more specific configuration to ansible for the other boxes once done, and I was going to migrate manually to learn some lessons.

    I wasn’t sure about bothering with UKI. I wanted zfs running, and that meant moving to the linux-lts kernel package for arch.

    Given systemd-boot’s superior (at the time of writing) support for owner keys, boot-time unlocking and direct EFI boot, I’ve been using that. However, it works differently with plain kernels than with UKIs: plain kernels use a loader entry file to point at the correct locations for the kernel and initramfs, which is what existed on this box.
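
    For reference, a plain-kernel loader entry is just a small config file like this (a sketch using the stock Arch paths, not my actual entry):

    # /boot/loader/entries/arch-lts.conf
    title   Arch Linux (LTS)
    linux   /vmlinuz-linux-lts
    initrd  /initramfs-linux-lts.img
    options root=UUID=<root-uuid> rw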

    I installed the linux-lts package, all good. I removed the linux kernel package, and something in the pacman hooks failed. The autosigning process for the secure-boot setup couldn’t find the old kernel files when it regenerated my initramfs, but happily signed the new lts ones. Cool, I thought, I’ll remove the old ones from the database and re-enroll my OS drive with systemd-cryptenroll after booting on the new kernel (the PCRs I’m using would be different on a new kernel, so auto-decrypt wouldn’t work anyway).
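
    The re-enroll itself is just wiping and re-adding the TPM2 slot, something like this (the device path and PCR selection here are assumptions, not what I actually used):

    # drop the TPM2 binding tied to the old kernel's PCR values
    sudo systemd-cryptenroll --wipe-slot=tpm2 /dev/mmcblk0p2
    # re-enroll against the PCRs as measured under the new kernel
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 /dev/mmcblk0p2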

    So, just to be sure, I regenerated my initramfs with mkinitcpio -p linux-lts, everything seemed fine, and I rebooted. I was greeted with:

    Reboot to firmware settings
    

    as my only boot option. Sigh.

    Still, I was determined to learn something from this. After a good long while of reading the Arch wiki and mucking about with bootctl (a PITA in a live-CD-booted system), I thought about checking my other machines. I was hoping to find a loader entry matching the lts kernel on one of them, and copy it to this machine to at least prove to myself that I had sussed the problem.

    After checking, I realised none of the other, newer machines had a loader configuration actually specifying where the kernel and initramfs were. I was so lost. How the fuck is any of this working?

    Well, it turns out that if you have UKI set up, it bundles all the major bits (the kernel, microcode, initramfs and boot config options) into one directly EFI-bootable file, which systemd-boot detects automatically when it’s installed in the right place. All my other machines had UKI set up and I’d forgotten; that was how it was working. Unfortunately, I had used archinstall to set up UKI, and I had no idea how it was doing it. There was a line in my docs literally telling me to go check this out before it bit me in the ass…

    • [x] figure out what makes uki from archinstall work ✅ 2025-09-19
    • It was systemd-ukify

    So, after that sidetrack, I did actually prove that the kernel could be described in a loader entry, then figured out how I’d done the UKI piece on the other machines, applied it to this one so it matched, and updated my docs…

    • IT WASN’T ukify

    The UKI options are already in mkinitcpio’s default preset files, but they need changing to actually produce a UKI:

    vim /etc/mkinitcpio.d/linux-lts.preset 
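
    The relevant change is roughly this, starting from the stock preset (a sketch; the exact UKI output path depends on where your ESP is mounted):

    # /etc/mkinitcpio.d/linux-lts.preset (relevant lines only)
    ALL_kver="/boot/vmlinuz-linux-lts"
    PRESETS=('default' 'fallback')
    # comment out the plain initramfs image...
    #default_image="/boot/initramfs-linux-lts.img"
    # ...and uncomment the UKI path so mkinitcpio builds a UKI instead
    default_uki="/efi/EFI/Linux/arch-linux-lts.efi"

    After that, regenerating with mkinitcpio drops the UKI onto the ESP, where systemd-boot picks it up automatically.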
    

    Turns out my Christmas wish came true: I learned I need to keep better notes.





  • med@sh.itjust.works to Linux@lemmy.ml · Is my apt bugged? · 2 months ago

    Looks like that might have changed: libc-gconv-modules-extra has an i386 package for 2.42-5, added at around midnight UTC+1. Given the sources only update every 6 hours, maybe you hit an unlucky window between updates?
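
    A quick way to check whether your mirror has picked it up yet (just the standard apt query):

    sudo apt update
    apt policy libc-gconv-modules-extra:i386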

    I struggled to find a timestamp for the release, but the changelog has one; I’m not sure how closely it matches the time the package actually became available:

    glibc (2.42-5) unstable; urgency=medium
    
      [ Martin Bagge ]
      * Update Swedish debconf translation.  Closes: #1121991.
    
      [ Aurelien Jarno ]
      * debian/control.in/main: change libc-gconv-modules-extra to Multi-Arch:
        same as it contains libraries.
      * debian/libc6.symbols.i386, debian/libc6-i386.symbols.{amd64,x32}: force
        the minimum libc6 version to >= 2.42, to ensure GLIBC_ABI_GNU_TLS is
        available, given symbols in .gnu.version_r section are currently not
        handled by dpkg-shlibdeps.
    
     -- Aurelien Jarno <aurel32@debian.org>  Sat, 06 Dec 2025 23:02:46 +0100
    
    glibc (2.42-4) unstable; urgency=medium
    
      * Upload to unstable.
    
     -- Aurelien Jarno <aurel32@debian.org>  Wed, 03 Dec 2025 23:03:48 +0100
    

  • I thought about this for a long while and realised I wasn’t sure why, just that most of my work has gravitated towards Arch for some time now.

    Eventually, I decided the move comes down to three specific issues that are really all the same problem: I don’t want to learn the Nix config language to do the things I want to do right now.

    I’ve read lots of material on flakes, and even modified and then wrote a flake to get not-yet-packaged nvidia 5080 modules installed (for a corporate local LLM POC-turned-prod; I was very glad I could use nix for it!). I still don’t really get how all the pieces hang together intuitively, and my barrier is interest and time.

    Lanzaboote for secure boot. I’m going to encrypt disks, and I’m going to use the TPM for unlocking after a measured UKI boot, despite the concerns about cold-boot attacks, because those aren’t a problem in my threat model. Like the nvidia flake, I don’t really get how it hangs together intuitively.

    Home management and home-manager. Nix config language is something I really want to get and understand, but I’ve been maintaining my home directory since before 2010, and I have tools and methods for dealing with lots of things already. The conversion would take more time than I’m prepared to devote.

    Most of the benefits of nix are things I already have in some format, like configuration management and package tracking with git/stow, ansible for deployment, btrfs for snapshots, rollback and versioning. It’s not all integrated in one system, but it is all known to me, and that makes me resistant to change.

    I know that if I had a week of personal time to dig in and learn, to shake off all the old fleas and crutch methods learned for admin on systems that aren’t declarative, I’d probably come away with a whole new appreciation for what my systems actually look like, and have them all reproducible from a readable config sheet. I’m just not able to make that time investment, especially for something that doesn’t solve more problems than I’ve already solved.


  • You are right to be afraid. I had a similar story, and I’m still recovering and sorting out what data is salvageable. I nearly lost the media from ages 0.5 to 1.5 of my daughter’s life this way.

    As others have said, don’t replicate your existing backup; do two separate backups, preferably on different media (e.g. spinning disk and SSD).

    If you just replicate and one copy gets corrupted or something nasty is introduced, you will lose both. This is one of the times it is appropriate to do the work twice.

    I’ve built two backup mini PCs, and I back up to them pretty continuously. Otherwise, look at something like BorgBase or its alternatives.

    Remember, 3-2-1 and restore testing. It’s not a backup unless you can restore it.


  • This is the most important thing. Over time, you develop opinions about software and methods of solving problems. I have strong opinions on how I want to manage a system, but almost no opinions on which flags I want to switch when I compile software. This is why I’m on Arch, not Gentoo. I’m sure I’ll make the leap eventually…

    Before I switched back to Arch for my daily driver, I’d frankensteined my Fedora install on my laptop to replace power management, all the GUI bits, most of the networking stack and a fair chunk of the package system. Fedora, and GNOME in that case, is opinionated software. That’s a good thing as far as I’m concerned: having a unified vision helps give the system direction and a unique feel. These days, I have my own opinions that differ in some ways from the available distros.

    I wanted certain bits to work a certain way, and I kept having to replace other parts to match the bits I was changing. When you ask “can I swap daemon X out for Y?”, the answer on Fedora was: sure, but you’ll have to replace a, b and c too, and figure out the rest for yourself. Good luck when updates come along.

    The answer on Arch is: yeah, sure, you can do that, and here’s a high-level wiki page naming some gotchas you’ll want to watch out for.

    I’ve also reached a stage in my computer usage where I don’t want things to happen automatically unless I’ve agreed to them or designed them. For example, my machines don’t auto-mount USB drives, even in GUI user sessions, or auto-connect to DHCP. I understand what needs to be done, and I do it the way I want to, because I have opinions on networking and USB mounting.

    My work laptop is a living build that I just keep adding to and changing every day. Btrfs snapshots are available for rollback…

    I’ve got two backup machines: Beelink ME Minis running reproducible builds created with archinstall. The OS runs on the internal eMMC, and they each have a 6-disk ZFS raidz2 on internal NVMe drives, all locked behind LUKS encryption with the keys in the fTPM module, without the damn Microsoft key shim. One is off-site. Trying to get secure boot working on Debian was an exercise in frustration.

    I’ve modified a version of that same build for my main docker host on another mini PC.

    My desktop runs NixOS, but will be moved over to Arch at the next rebuild.

    I’ve got a Steam Deck, which runs an Arch-based distro.

    I used to run Raspberry Pis on Arch because the image to flash to the SD cards used to be way smaller than what the default Pi OS offered.

    That’s all Arch. It’s flexible, has the toolsets I need, and almost never tells me ‘No, you can’t do that’.


  • med@sh.itjust.works to Linux@lemmy.ml · GPG Key Managing · edited · 3 months ago

    I’d agree that a hardware solution would be best: something designed specifically for the job. I’ve been eyeing up the biometric YubiKey for a while.

    I do this for SSH keys, VPN certs and PGP keys. My solution is pretty budget: I generate the keys on a LUKS-encrypted USB stick and run a script that loads them into agents and flushes them on sleep. The script unlocks and mounts the LUKS partition, adds the keys to the agents, then unmounts and locks the USB (roughly as sketched below the list). The passwords for the unlock and load I just remember, but they’re ripe for stuffing into KeePassXC; I need to look at the Secret Service API and incorporate it into the script to fetch the unlock passwords directly from KeePass.

    I have symlinks in the default user directories to the USB’s mount points, like ~/.ssh/id_ed25519 -> /run/media/<user>/<mount>/id_ed25519. By default, ssh-add with no arguments picks up keys from the default locations.

    The way it works for me is:

    • plug the USB into the laptop after a restart or wake-up
    • run the script
    • enter the passwords for the LUKS key, ssh-agent, gpg-agent etc.
    • unplug the USB
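
    A rough sketch of what the script can look like (the device label, mount point and file names are made up for illustration; the flush-on-sleep part is handled separately):

    #!/usr/bin/env bash
    # load-keys.sh: unlock the key USB, load keys into the agents, lock it again
    set -euo pipefail

    DEV=/dev/disk/by-label/KEYSTICK   # hypothetical LUKS partition label
    MAPPER=keystick
    MNT=/mnt/keystick

    sudo cryptsetup open "$DEV" "$MAPPER"   # prompts for the LUKS passphrase
    sudo mkdir -p "$MNT"
    sudo mount "/dev/mapper/$MAPPER" "$MNT"

    ssh-add "$MNT/id_ed25519"               # prompts for the SSH key passphrase
    # (the PGP key and VPN cert handling is similar in spirit, omitted here)

    sudo umount "$MNT"
    sudo cryptsetup close "$MAPPER"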

    I keep break-glass spares in a locked cabinet at my house and at my office, each with a different recovery key.

    I do this because it’s my historical solution, and I haven’t evaluated the hardware options seriously yet.


  • I have never understood this fork argument. All it takes to make it work is a clear division for the project.

    If you want to make something and it requires modifying the source of a GPL project you want to include, why not contribute that back upstream? Then keep anything that isn’t a modification of that piece separate in your own project, and license it appropriately. It’s practically as simple as maintaining a submodule.
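
    In git terms it really is just something like this (the project name and URL are made up for illustration):

    # vendor the GPL project as a submodule, pinned to a known commit
    git submodule add https://example.com/upstream-gpl-project.git vendor/upstream
    git commit -m "Track upstream GPL project as a submodule"
    # your own, separately licensed code lives outside vendor/ in the same repo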

    I’d like to believe this is purely a communication issue, but I suspect it’s more likely conflated with being a USP and argued as a potential liability.

    These wasteful practices of ‘re-writing and not-cloning’ are facilitated by a total lack of accountability for security on closed-source, commercialised projects. I know I wouldn’t be maintaining an analogue of a project if security updates were available from upstream.



  • They’ve snapified coreutils too, and rewritten them in Rust (uutils). It’s proving to be a challenging transition…

    Edit: While the article mentions Rust’s vaunted memory safety as a driver, I can’t help but notice that uutils is MIT-licensed, as opposed to GNU coreutils, which is GPLv3.

    While snapd is licensed GPLv3, it’s important to note that despite the ‘d’ suffix it’s barely a daemon. It’s mostly a client for the Snap Store backend, which is proprietary and only hosted by Canonical. The snapd client could be replaced at any time.