I followed the wiki tutorials for that: make sure IOMMU is working, blacklist the drivers on the host, etc.
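For anyone curious, the host-side prep usually boils down to something like the following on an Intel + Debian/Proxmox box -- a sketch only; AMD boards use amd_iommu=on instead, and the PCI IDs shown are placeholders for your own GPU's IDs from lspci:

```bash
# 1. Enable IOMMU on the kernel command line, e.g. in /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
sudo update-grub

# 2. After a reboot, confirm IOMMU is active:
sudo dmesg | grep -e DMAR -e IOMMU

# 3. Find the GPU's vendor:device IDs:
lspci -nn | grep -i vga

# 4. Bind the GPU (and its audio function) to vfio-pci and blacklist the host driver.
#    The IDs below are placeholders -- use the ones from lspci.
echo "options vfio-pci ids=10de:2484,10de:228b" | sudo tee /etc/modprobe.d/vfio.conf
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-gpu.conf
sudo update-initramfs -u
```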
It’s on my list of projects. I built a Proxmox+Ceph cluster and have GPU passthrough working for LLM inference. I was planning to get headless Steam running in Docker and try streaming via Steam In-Home Streaming as a first attempt, then pivot to a full VM with Sunshine as a last resort.
Honestly, I would back up all of your downloads, documents, pictures, videos, browser history/passwords/bookmarks, and anything else you want to save to an external drive or to The Cloud (or multiple places; most browsers have a sync function, and there’s OneDrive/Google Drive/Dropbox, etc.), and then download and test-drive multiple distros until you find one that you like and that has good community support. Nearly all distros today will let you try them out from a live session without installing (a kind of try before you buy; see the snippet below). Once you find one, install it, wiping the Windows install in the process, then load your graphics drivers and Steam. Steam will handle the rest as far as running your games goes (some caveats apply, e.g., some multiplayer games will not work because the developers are assholes).
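On the try-before-you-buy point, a minimal sketch of writing a downloaded ISO to a USB stick so you can boot the live session first -- the ISO path and /dev/sdX are placeholders; double-check the device with lsblk, since dd will happily overwrite the wrong disk:

```bash
# Identify the USB stick first -- dd overwrites whatever device you point it at.
lsblk

# Write the ISO to the stick (placeholder path and device).
sudo dd if=~/Downloads/some-distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```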
I always called it a soft brick when the device could still power on but couldn’t load the main operating system, yet could still receive a reinstall of the system. A hard brick was the one where it was permanently disabled, either because it couldn’t power on at all or because it was incapable of reloading the main OS without essentially performing “brain surgery” on the device. I guess “brick” could extend to components that are broken until a reboot (such as a broken WiFi driver). What type of “brick” would that be? How about a glitch brick?
Isn’t the context of that quote the kernel and kernel space vs. user space? I don’t see how that thought extends to distros that simply ship the kernel as one of their packages.
They should have just called it appleOS 26 since they are bumping all of it to 26 and unifying the look and feel between all of their OSs.
How difficult is it for an adversary to get in the middle of the TPM releasing the keys to LUKS? That’s why I would want attestation of some sort, but that makes it more complicated and thinking about how that would work in practice makes my head spin…
Is clevis using an attestation server or is it all on a single machine? I’m interested in getting this set up but the noted lack of batteries included for this in the common distros makes it a somewhat tall order.
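As I understand it, the plain tpm2 pin in Clevis is entirely local to the machine: it seals the LUKS passphrase against the local TPM (optionally bound to PCR values), with no external attestation server involved. The network-bound piece is the separate tang pin, which talks to a Tang server, and that is key escrow rather than attestation. A minimal sketch of the local setup, assuming /dev/nvme0n1p3 is the LUKS partition (placeholder):

```bash
# Bind an existing LUKS volume to the local TPM2, sealed against PCRs 0 and 7
# (core firmware + Secure Boot state). The device path is a placeholder.
sudo clevis luks bind -d /dev/nvme0n1p3 tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}'

# Verify the binding.
sudo clevis luks list -d /dev/nvme0n1p3
```

You still need the initramfs integration for unlocking at boot (clevis-initramfs on Debian/Ubuntu, the clevis dracut module on Fedora/RHEL), which is the “batteries not included” part on most distros.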
Please share the script?
I’m really not sure. I’ve heard of people using Ceph across datacenters. Presumably that’s with a fast-ish connection, and it’s more like joining separate clusters, so you’d likely need a local Ceph cluster at each site and then replicate between datacenters. Probably not what you’re looking for.
I’ve heard good things about Garage (an S3-compatible object store) and that it’s usable across the internet on slow-ish connections. Combining it with JuiceFS is what I was looking at before I landed on Ceph.
I know Ceph would work for this use case, but it’s not a choice to make lightly; it’s kind of an investment with a steep learning curve (at least it was, and still is, for me).
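For reference, the Garage + JuiceFS combo looks roughly like this in practice -- a sketch only, assuming Garage is already serving an S3 bucket and there’s a Redis instance handy for JuiceFS metadata (endpoint, bucket, keys, and the Redis URL are all placeholders):

```bash
# Create a JuiceFS filesystem backed by the Garage S3 endpoint (placeholders throughout).
juicefs format \
  --storage s3 \
  --bucket https://garage.example.com/myfiles \
  --access-key GKxxxxxxxx \
  --secret-key xxxxxxxxxxxx \
  redis://127.0.0.1:6379/1 \
  myjfs

# Mount it as a regular POSIX filesystem in the background.
juicefs mount -d redis://127.0.0.1:6379/1 /mnt/myjfs
```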
I went and edited my hosts file and added all of my devices, but I only have a handful. Tailscale on macOS has a lot of bugs, this being one of many.
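For anyone wondering what that workaround looks like, it’s just static entries mapping short names to the devices’ Tailscale IPs -- the addresses and hostnames below are made up; `tailscale status` lists your real ones:

```bash
# Append static entries for the tailnet devices to /etc/hosts
# (placeholder addresses and hostnames).
cat <<'EOF' | sudo tee -a /etc/hosts
100.101.102.103  nas
100.101.102.104  proxmox1
100.101.102.105  macbook
EOF
```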
Even just by providing specifications and some documentation for the devices, someone might write a new driver. Reverse engineering is hard; having something to go off of means they could probably extend support from an existing driver fairly easily.
Maybe they will band together to support a common base system that is more open? Wishful thinking I know…
I’ve used this in some bash scripts, very useful!
It depends on the container, I suppose. Some are very difficult to rebuild depending on what’s in them and what they do. Some very complex software can be run in containers.
I’ve been wanting to tinker with NixOS. I’m stuck in the stone age, automating VM deployments on my Proxmox cluster with Ansible. One line and about 30 minutes (the CUDA install is a beast) to build a reproducible VM running llama.cpp with llama-swap.
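The “one line” is just an ansible-playbook invocation along these lines -- the playbook, inventory, and variable names are hypothetical stand-ins for my setup:

```bash
# Kick off the VM build; playbook, inventory, and extra-vars are placeholders.
ansible-playbook -i inventory/proxmox.ini deploy-llamacpp-vm.yml \
  -e vm_name=llm01 -e gpu_passthrough=true
```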
Typically, the container image maintainer will provide environment variables that can override the database connection. This isn’t always the case, but usually it’s as simple as updating those and ensuring network access between your containers.
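Something along these lines, for example -- the image and variable names (DB_HOST, DB_PASSWORD, etc.) are only illustrative; the real ones come from the image’s documentation:

```bash
# Point the app container at an external Postgres instead of a bundled one.
# Image name and variable names are illustrative -- check the image's README for the real ones.
docker run -d --name myapp --network mynet \
  -e DB_HOST=postgres.internal \
  -e DB_PORT=5432 \
  -e DB_USER=myapp \
  -e DB_PASSWORD=changeme \
  myorg/myapp:latest
```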
A lot of the time it’s necessary to build the container yourself, e.g., to fix a bug, satisfy a security requirement, or because the container as built just isn’t compatible with the environment. So in that case, would you contract an expert to rebuild it, host it on a VM, look for a different solution, or something else?
This is pretty rad! Thanks for sharing. I went down the same road learning k3s on about 7 Raspberry Pis and pivoted over to Proxmox/Ceph on a few old gaming PCs / Ethereum miners. Now I’m trying to optimize the space and looking at how to rack-mount my ATX machines with GPUs lol… I was able to get an RTX 3070 to fit in a 2U rack-mount enclosure but I’m having some heat issues… going to look at 4U cases with better airflow for the RTX 3090 and various RX 480s.
I am planning to set up Talos VMs (one per Proxmox host) and bootstrap k8s with Traefik and others. If you’re learning, you might want to start with a batteries-included k8s distro like k3s.
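If you do go the k3s route, a single node is about as batteries-included as it gets -- this is the standard install script from the k3s docs, and Traefik ships as the default ingress:

```bash
# Install k3s on the first node (Traefik and a local-path storage class come bundled).
curl -sfL https://get.k3s.io | sh -

# Confirm the node is up.
sudo k3s kubectl get nodes

# Join another node using the server's token
# (found at /var/lib/rancher/k3s/server/node-token on the server).
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```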