I once had a directory in /tmp called etc, which contained subdirectories for something I was migrating. I thought that I was in /tmp when I ran rm -rf etc … I was actually in /
Look up the GPU on these charts to find out what codecs it will support: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
NVENC support will tell you what codecs your GPU can generate for client devices, and NVDEC support determines the codecs your GPU can read.
Then compare that with the list of codecs your Intel chip can handle natively.
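If you want a quick sanity check from the software side, something like this works (it only shows what your ffmpeg build was compiled with, not the full hardware matrix on the page above):

```sh
# List the hardware encoders/decoders this ffmpeg build exposes
ffmpeg -hide_banner -encoders | grep -E 'nvenc|qsv'
ffmpeg -hide_banner -decoders | grep -E 'cuvid|qsv'
```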
If you want to move your containers to a different location, look into configuring docker's data-root: https://stackoverflow.com/questions/24309526/how-to-change-the-docker-image-installation-directory
You copy /var/lib/docker to a new location and update /etc/docker/daemon.json.
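A minimal sketch of what that looks like, assuming the new location is /mnt/bigdisk/docker (a made-up path) and that daemon.json doesn't already contain other settings you'd need to merge in:

```sh
# Stop docker, copy the old data-root, then point docker at the new path
sudo systemctl stop docker
sudo rsync -a /var/lib/docker/ /mnt/bigdisk/docker/
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/mnt/bigdisk/docker"
}
EOF
sudo systemctl start docker
```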
I will say: Moving data-root to an NFS mount isn’t going to work well. I’ve tried it, and docker’s overlay storage relies on filesystem features that NFS doesn’t provide, so your services end up duplicating the container’s entire filesystem instead. This will tank your performance and is basically unusable for anything but trivial examples. Docker’s data-root basically needs to be a “physical” disk.
I’ve had no issues using NFS shares mounted as docker volumes. It’s just the data-root where it’ll fail.
If you’re doing it from scratch, I’d recommend starting with a filesystem that has parity checks and filesystem scrubs built in: eg BTRFS or ZFS.
The benefit of something like BTRFS is that you can always add disks down the line and turn it into a RAID array with a couple of commands.
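As a sketch of what I mean (device and mountpoint are placeholders, and the balance rewrites every block, so it can run for hours):

```sh
# Add a second disk to an existing single-disk btrfs filesystem, then convert to RAID1
sudo btrfs device add /dev/sdb /mnt/pool
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
sudo btrfs filesystem usage /mnt/pool   # verify data and metadata now show RAID1
```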
Yep, the problem was that docker started before the NFS mount. Adding the dependency to my systemd docker unit did the trick!
isn’t it an annoyance having to connect to your home network all the time?
It’s less annoying than the gnawing fear that my network might be an easy target for attackers.
So many bad-faith arguments being made about this.
Independent of any arguments about who asked for this to happen and why: A free software project always has the right to choose which contributors it trusts and which it doesn’t. I’ve seen no evidence that these people are banned from submitting patches due to their nationality. They’ve been removed from a particular role in the project for political reasons. An organization is an inherently political entity.
Remember when codes of conduct destroyed all of free software and nothing ever got built again? Me neither. It’s the same thing.
The ProLiant Gen9 is an EoL server that hasn’t been sold since 2018. Meanwhile, Debian Bookworm was released last year. I’d be surprised if the problem were that your installer gave you a kernel that’s too old.
What is the output of ip addr show?
It might also be worth ruling out low-level issues:
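(A sketch of the sort of checks I mean; eno1 is a placeholder interface name, substitute whatever ip addr show lists.)

```sh
# Check link state, negotiated speed, and any driver complaints
ip -br link
sudo ethtool eno1 | grep -E 'Speed|Link detected'
sudo dmesg | grep -iE 'eno1|link (up|down)'
```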
Surely this could be good, right?
If celebrities need to be accessible to their biggest fans, maybe it would induce them to leave the birdsite? And if this is as big a migration as the article suggests, it has the potential to snowball in network effects, giving other influential users one less reason to feel chained to a dumpster fire.
Last suggestion: This document suggests that there may be an rclone volume plugin for docker, which could run the mount only when your specific container starts up: https://rclone.org/docker/
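Roughly what that looks like, though check the linked page for the exact plugin tag and mount options; the remote name mycloud: and the volume name are placeholders:

```sh
# Install rclone's docker volume plugin, then create a volume backed by an rclone remote
docker plugin install rclone/docker-volume-rclone:amd64 --alias rclone --grant-all-permissions
docker volume create media -d rclone -o remote=mycloud:media
docker run --rm -v media:/data alpine ls /data   # the remote is only mounted while a container uses it
```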
And is docker running via a systemd service also?
In that case, you can add an After= line to the docker unit file, telling it to wait until after your mount service is running:
https://stackoverflow.com/questions/21830670/start-systemd-service-after-specific-service
You can use systemctl edit docker to create an override file with this property: https://askubuntu.com/questions/659267/how-do-i-override-or-configure-systemd-services#659268
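A sketch of what the drop-in might look like, assuming your NFS mount unit is called mnt-media.mount (find the real name with systemctl list-units -t mount):

```sh
# Create a drop-in so docker.service waits for the NFS mount before starting
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/wait-for-mount.conf <<'EOF'
[Unit]
After=mnt-media.mount
Requires=mnt-media.mount
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

RequiresMountsFor=/mnt/media in the same [Unit] section does both lines in one, if you prefer.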
How are you mounting the network drive? On my docker machine, network drive mounts are in /etc/fstab. I’ve not had an issue where docker starts before everything is mounted.
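For reference, the kind of setup I mean (hostname, export path, and mountpoint are made-up examples):

```sh
# Append an NFS mount to /etc/fstab; _netdev tells systemd the mount needs the network up first
echo 'nas.local:/export/media  /mnt/media  nfs  defaults,_netdev  0  0' | sudo tee -a /etc/fstab
sudo mkdir -p /mnt/media
sudo mount /mnt/media
```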
Yes, OP, I highly recommend a GL.iNet device. It’s pocket-sized and always does the job.
It’s also great for shitty wifi that tries to limit how many devices you can connect. The router will appear as one MAC and then all your other devices can route traffic through it.
As someone who has owned enterprise servers for self-hosting, I agree with the previous comment that you should avoid owning one if you can. They might be cheap, but your long-term ownership costs are going to be higher, because as the server breaks down you’ll be competing with other people for a dwindling supply of compatible parts. Unlike consumer PCs, server hardware is incredibly vendor-locked. Hell, my last ProLiant would keep the fans ramped at 100% because I installed an HDD that the BIOS didn’t like. That was after I spent weeks tracking down a disk that would at least be recognized, and the only drives I could find were already heavily used.
My latest server is built with consumer parts fit into a 2U rack case, and I sleep so much easier knowing I can replace any of the parts myself with brand new alternatives.
Plus, as others have said, a 1U can be really loud. I don’t care about the sound of my gaming computer, but that PowerEdge was so obnoxious that, despite being in the basement, I had to smother it with blankets just so the fans didn’t annoy me when I was watching TV upstairs. I still have a 1U Dell PowerEdge, but I specifically sought out the generation that still lets you hack the fan speeds over IPMI. From all my research, no such hack exists for the ProLiant line.
The problem with Chromebooks is that the base specs are pretty shit. A lot of them have 4 GiB of RAM and maybe 16 GiB of disk if you’re lucky.
They were designed to be thin clients to connect students to the internet, and little else. Maybe they could be hacked into something useful, but I don’t think they’ll ever make good PCs. They were always destined for the landfill.
Meanwhile, the best thinkpads were quality machines back when they came out. IMO, that’s why they’re still so versatile today. Free software can’t fix bad fundamentals.
Not sure what motherboard you have: Most consumer boards only support “FakeRAID”, which requires a kernel driver to actually function. Good luck finding a vendor who wrote a driver for Linux.
I’d definitely recommend software RAID instead, as you’ll have better support. I like btrfs, so I’d recommend you set up your new drives to use a btrfs RAID configuration. mdadm is another option, if you really like ext4.
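A sketch of the btrfs route, assuming two fresh drives at /dev/sdb and /dev/sdc (placeholders, and this wipes them):

```sh
# Create a two-disk btrfs RAID1 and mount it
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
sudo mkdir -p /mnt/pool
sudo mount /dev/sdb /mnt/pool
sudo btrfs filesystem usage /mnt/pool   # data and metadata should both show RAID1
```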
For me, at least, all the “Morrowind-like” comparisons set me up for disappointment. I’d say that it has visual aspects inspired by Morrowind.
Maybe I didn’t play it enough. I got 2h in and then hit a bug where I kept falling through the environment, and then my save got corrupted. It didn’t feel Morrowindy enough for me to want to start over.
On Linux, I run fwupdmgr to periodically check for firmware updates. Not every manufacturer supports it yet, but I’ve had good results with a few laptops. Not sure if it supports BIOS.
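The usual loop looks like this (fwupdmgr ships with the fwupd package on most distros, and only devices the vendor publishes to LVFS will show up):

```sh
fwupdmgr refresh        # fetch the latest firmware metadata from LVFS
fwupdmgr get-updates    # list devices with available updates
fwupdmgr update         # apply them; some updates finish on the next reboot
```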
Also though, I generally try to leave my BIOS alone if everything is working fine. Unless I hear of a reason to update, I’d rather stay on a stable version.
Programmable condoms which make the user look like a bad dragon dildo.