I know for many of us every day is selfhosting day, but I liked the alliteration. Or do you have fixed dates for maintenance and tinkering?
Let us know what you set up lately, what kinds of problems you're currently thinking about or running into, what new device you added to your homelab, or what interesting service or article you found.
This post is proudly sent from my very own Lemmy instance, which has been running on my home server for about ten days now. So far, it's been a very nice endeavor.
Finally upgrading my Plex server from Ubuntu 22.04 to 24.04! I've been putting it off out of habit, as I always wait for the *.1 releases, but I've done several of these upgrades for clients and every single one went flawlessly. I still waited it out, though.
Also thinking about switching my ext4 mirrored softRAID to ZFS, since Ubuntu has the only acceptable ZFS implementation outside of UNIX proper (Ubuntu's is in-kernel; everyone else uses kernel modules, which I hate). But that's going to be extra work I may not be in the mood for. Damn, would compression and deduplication be nice, though! So, still a maybe.
Wait, you mean you host Plex servers for clients? Or that you work with Ubuntu in general? As for the ZFS thing, it doesn't really matter whether it's in-kernel or a module; at the end of the day they all work the same. I'm using ZFS on my Arch machine, for example (via DKMS), and everything works just fine. ZFS is super easy in general, you should definitely try it.
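If it helps, here's a rough sketch of what the switch could look like, assuming two spare disks (device paths, pool and dataset names are made up):

```bash
# A sketch only: adjust devices and names to your setup.
# Ubuntu ships ZFS in-kernel: sudo apt install zfsutils-linux
# On Arch, the module comes from the zfs-dkms package instead.

# Mirrored pool, roughly equivalent to the old softRAID mirror
sudo zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Compression is close to free with zstd and applies to new writes
sudo zfs set compression=zstd tank

# Dedup works but keeps its tables in RAM, so enable it per-dataset with care
sudo zfs create -o dedup=on tank/backups
```

Compression is the easy win; dedup is the one to benchmark first, since it's memory-hungry.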
That is one thing I still need to do: upgrade my Ubuntu server from 22.04 to 24.04. Last time I tried this, I noticed many Python packages were missing or failing, and reverted to the backup. Maybe now is the time to do the switch and iron out whatever kinks are left afterwards.
Yesterday I managed to safely host a simple HTML page (it's more of a network test).
The path is nginx -> OpenWrt -> router -> internet. Now I only need to:
- backup
- set up a domain (managed via Cloudflare)
- set up certificates (see the sketch below)
- properly document the setup + some guides on stuff that I will repeat
and then I can throw everything I want on it :D
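For the certificates step: since Cloudflare already manages your DNS, a DNS-01 challenge avoids exposing port 80 through OpenWrt at all. A hedged sketch using certbot's Cloudflare plugin (package names vary by distro; the domain and token are placeholders):

```bash
sudo apt install certbot python3-certbot-dns-cloudflare

# API token with DNS edit rights for the zone, kept out of world-readable dirs
mkdir -p ~/.secrets
cat > ~/.secrets/cloudflare.ini <<'EOF'
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
EOF
chmod 600 ~/.secrets/cloudflare.ini

# DNS-01 challenge: certbot proves ownership via a TXT record, not port 80
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d example.com -d '*.example.com'
```

The DNS route also gets you wildcard certs, which plain HTTP validation can't do.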
I also finally set up Lemmy on my home lab, and moved Authelia from Docker to bare metal.
Other than that, I’ve been struggling to find any other self-hosted apps that would actually be useful to me.
Finally set up Synology Surveillance Station and got my local cameras all hooked in with motion events. Very swish.
Attempted and failed to set up some sort of fail2ban between my cloudflared container and the website I host at home.
I finally got IPv6 working in Docker Swarm…by moving from Docker Swarm to regular Docker.
Traefik now properly gets IPv6 addresses and forwards them to the backend.
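For anyone trying the same on plain Docker, the gist is a daemon-level switch plus an IPv6-enabled network (the ULA subnets below are made up; merge the JSON into your existing daemon.json rather than overwriting it):

```bash
# "ip6tables": true needs a reasonably recent Docker to handle
# IPv6 forwarding/NAT for you.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:cafe::/64",
  "ip6tables": true
}
EOF
sudo systemctl restart docker

# Attach Traefik and its backends to an IPv6-enabled network
docker network create --ipv6 --subnet fd00:beef::/64 proxynet
```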
What's the big benefit of moving to IPv6 for a LAN? Just wondering if there are any benefits beyond the extra addresses. My UniFi kit can convert us to IPv6, but I'm hesitant without knowing what devices it will break.
Copying from an older comment of mine:
IPv6 is pretty much identical to IPv4 in terms of functionality.
The biggest difference is that there is no more need for NAT with IPv6 because of the sheer number of IPv6 addresses available. Every device in an IPv6 network gets its own public IP.
For example: I get 1 public IPv4 address from my ISP but 4,722,366,482,869,645,213,696 IPv6 addresses. That’s a number I can’t even pronounce and it’s just for me.
There are a few advantages that this brings:
- Any client in the network can get a fresh IP every day to reduce tracking
- It is pretty much impossible to run a full network scan on this amount of IP addresses
- Every device can expose its own service on its own IP. For example, you can run multiple web servers on the same port without a reverse proxy, or multiple people can host their own game server on the same port (quick illustration below)
There are some more small changes that improve performance compared to IPv4, but the difference is minimal.
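To make that last point concrete, a quick illustration (the addresses are from the documentation prefix; substitute two addresses out of your own /64):

```bash
# Two web servers, same port, different IPv6 addresses, no reverse proxy
python3 -m http.server 8080 --bind 2001:db8::10 &
python3 -m http.server 8080 --bind 2001:db8::20 &

curl 'http://[2001:db8::10]:8080/'
curl 'http://[2001:db8::20]:8080/'
```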
> My unifi kit can convert us to IPv6 but I'm hesitant without knowing what devices it will break.
You don't usually "convert" to IPv6; you run dual stack, with both IPv4 and IPv6 working simultaneously. Make sure your ISP supports IPv6 first; there is little use in running IPv6 only internally.
For the first time, I configured SSH with pubkey auth.
Auth between Windows (agent) and Alpine (host), to use as a helper/backup proxy in Veeam (the helper is used to mount the file-level restore assistant).
Took me 3 hours to find out that:
- Windows didn't know the private key
- pubkey auth wasn't active
- I'd fucked up the pubkey auth config
- Alpine isn't supported by Veeam, so it didn't work anyway
- I needed to install a small Debian VM :|
At least I did my first pubkey auth setup. It gets better.
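For anyone attempting the same, the basic flow looks roughly like this (user and host names are placeholders; on Windows without ssh-copy-id you append the .pub to authorized_keys by hand):

```bash
# On the client (Windows' built-in OpenSSH works too):
ssh-keygen -t ed25519
ssh-copy-id user@debian-helper   # copies the public key into
                                 # ~/.ssh/authorized_keys on the host

# On the host, confirm sshd allows it in /etc/ssh/sshd_config:
#   PubkeyAuthentication yes
sudo systemctl restart sshd

# Should now log in without a password prompt
ssh user@debian-helper
```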
I use Mend Renovate to keep up with the latest and greatest container images in my private repo.
I just spent a good few hours optimizing my LLM rig: disabling the graphical interface to squeeze 150 MB of VRAM back from Xorg, setting my programs' CPU niceness to the highest priority, tweaking settings to find memory limits.
I was able to increase the token speed by half a second while doubling the context size. I don't have the budget for any big VRAM upgrade, so I'm trying to make the most of what I've got.
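In case it saves someone the hours, this is roughly what those two tweaks look like on a systemd distro (the process name is just an example):

```bash
sudo systemctl set-default multi-user.target   # boot without Xorg next time
sudo systemctl isolate multi-user.target       # ...or drop the GUI right now

# Give the inference process top CPU priority (lowest niceness)
sudo renice -n -20 -p "$(pgrep -f llama-server)"

nvidia-smi   # confirm the freed VRAM actually showed up
```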
I have two desktop computers. One has better RAM + CPU + overclocking but a worse GPU. The other has a better GPU but worse RAM and CPU, and no overclocking. I'm contemplating whether it's worth swapping the GPUs to really make the most of the available hardware. It's been years since I took apart a PC and I'm scared of doing something wrong and damaging everything. I dunno if it's worth the time, effort, and risk for the squeeze.
Otherwise I'm loving my self-hosted LLM hobby. I've been very into learning computers and ML for the past year. Crazy advancements, exciting stuff.
I recently set up Music Assistant and have been trying to make it work across my VLANs with my ESP32 devices. It has been slow going. Nothing has the level of logging required to easily debug the issues I've encountered, but I'm slowly working through it all.
What should I do next?
- Set up PeerTube in a Proxmox VM. Difficulty: my hosting provider doesn't allow 443 or 80. I have Cloudflare working for other things, but I think this would violate their TOS.
- Set up Immich in a Proxmox VM. Difficulty: I need regular off-site backups and it's going to be pretty large; my wife is a professional photographer.
- Set up my Coral TPU with Frigate, replacing my aging Win10 Blue Iris.
I am also struggling with off-site backups, mainly because I don't have a cheap and regular way of doing them.
You could have a friend do them for you, and vice versa.
That would be the idea, but then my friend would need to have a server running at his place. And there is still the problem of how to transfer the data securely over the network to my friend without poking (too many) holes in the firewall.
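One common pattern that keeps the firewall mostly closed: a WireGuard tunnel to the friend's box (one UDP port, or none at all with something like Tailscale) plus restic, which encrypts client-side so the friend can't read your data anyway. A hedged sketch (the peer address, user, and repo path are placeholders):

```bash
# 10.8.0.2 is a made-up WireGuard peer address
restic -r sftp:backup@10.8.0.2:/srv/backups/me init
restic -r sftp:backup@10.8.0.2:/srv/backups/me backup /home /etc

# Verify from time to time that the remote repo is actually restorable
restic -r sftp:backup@10.8.0.2:/srv/backups/me check
```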
I'm patiently (read: impatiently) awaiting the arrival of an Aoostar WTR Pro and components to build my first NAS and full Arr stack for Linux ISOs.
I completed a proof of concept (and a lot of learning) a month ago on a Pi 5, and I can't wait to get my hands dirty with something more real!
I’ll take any advice anyone throws my way :D and thanks to this community for the learning and inspiration since I joined Lemmy!
I migrated my whole native service infrastructure to Docker services this weekend. I prepared for it over the previous weeks, basically looking up information about details I wasn't sure about. The services were mail, file cloud, and Traccar, running on Modoboa and ownCloud respectively. I moved to mailcow and Nextcloud, and replaced my Feedly account with Nextcloud News as a bonus. So far I'm pretty happy with it; I had a couple of setbacks but also learned a lot in the process. This was the first time I did something productive with Docker.
I spent two hours last night beating myself over the head with RAM sticks. Got an e-wasted server that had the alarm misconfigured; figured I'd upgrade it and put in a valid configuration, since it was just off from what I needed. Slapped in some matching-size sticks and it wouldn't boot. It took me embarrassingly long to realize that the speeds weren't the same, and that the server cared more about the speeds matching than about the sizes matching, incidentally.
I work in IT; that should have been the first fuckin' thing I checked, smh.
I remember when I worked in a data center and there was a custom server order that needed something like 64 sticks per server, and procurement didn't bother to make sure we had sets with the same speed, timings, or brand. Thankfully I caught it before we wasted a ton of time troubleshooting.
I'm building services out for my family as things enshittify. Moved the family over to an Immich instance; run a family blog on WordPress (working on rolling my own, since it's overcomplicated, and with all the WordPress shenanigans…); Plex (lifetime account, works for now). I have a number of self-built projects as well: a "momboard"-like system integrated with my WordPress blog for access and control, a Pi-based backup server that lives at my friend's house and keeps a VPN connection to my router, and I'm playing with Meshtastic as an offline communication system for my kids' scout troop when we're camping without cell signal. Lots of home automation with Home Assistant as well.
I host it all on Debian servers, Raspberry Pis, and ESP32 devices (Meshtastic and home automation). I used to run kubernoodles, but it was more complicated than needed for my use case; Docker, Ansible, and bash scripts manage it all just fine.
How's your experience with Meshtastic been? I've just started experimenting with it. There are very few nodes in my area, so my potential use cases seem limited.
Very limited so far. I don't have much near me, but there has been enough sporadic connectivity that I pick up the occasional chatter on the default channel, and my node is aware of about 145 others.
Mostly been my son and I playing around. He wants to get his neighborhood friends involved :).
what’s maintenance? is that when an auto-update breaks everything and you spend an entire weeknight looking up tutorials because you forgot what you did to get this mess working in the first place?
I do love how little maintenance is needed until you have to re-learn everything you forgot
I know you're half joking, but nevertheless, I'm not missing this opportunity to share a little selfhosting wisdom.
- Never use auto-update; always schedule updates and do them manually.
- Virtualize as many services as possible and take a snapshot or backup before updating (example below).
- And last: documentation, documentation, documentation!
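The snapshot-first habit in practice, e.g. on Proxmox (VM ID and snapshot name are just examples):

```bash
qm snapshot 101 pre-update     # take the snapshot first
# ...then run the update inside the VM:
#   sudo apt update && sudo apt full-upgrade
qm rollback 101 pre-update     # only if it breaks
```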
Happy selfhosting Sunday.
I think auto-update is perfectly fine; just check what kind of versioning the devs are using and pin the part of the version that will introduce breaking changes (see below).
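The idea in compose terms (image and tags are just examples):

```bash
grep 'image:' docker-compose.yml
#   image: postgres:16        # pinned major: patches flow in, breaking bumps don't
#   image: postgres:latest    # unpinned: a major jump can land unannounced
```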
I just like it when things break during scheduled maintenance and I have time to fix them, or the option to roll back with minimal data loss, instead of an auto-update forcing me to spend a weeknight fixing it or running a broken system till I have the time.
You can have the best of both worlds: scheduled auto-updates at a time that usually works for you.
With growing complexity there are so many components to update that it's too easy to miss some, in my experience. I don't have everything automated yet (in fact, most updates aren't), but I definitely strive towards it.
In my experience, the more complex a system is, the more auto updates can mess things up and make troubleshooting a nightmare. I’m not saying auto updates can’t be a good solution in some cases, but in general I think it’s a liability. Maybe I’m just at the point where I want my setup to work without the risk of it breaking unexpectedly and having to tinker with it when I’m not in the mood. :)
There's a fine line between "auto-updates are bad" and "welp, the horribly outdated, security-hole-riddled CI tool or CMS is how they got in". I tend to lean toward using something like Renovate to queue up the updates and then approve them all at once. I've been seriously considering building out staging and prod environments for my homelab; I'm just not sure how to test stuff in staging to the point that I'd feel comfortable auto-promoting to prod.
Yes
I've had this happen twice in the two weeks since installing Watchtower, and have since scheduled it to only run on Friday evenings…
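If you're doing the same, Watchtower takes a 6-field cron expression (seconds first); this one runs checks on Fridays at 18:00 (the container name is just an example):

```bash
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_SCHEDULE="0 0 18 * * 5" \
  containrrr/watchtower
```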
Nothing greater than crashing your weekend evening just trying to watch a movie on a broken Jellyfin server :'D
No, you just keep updating until it's fixed again.