I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from people who self-host a few things to people who self-host many.

Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

  • henfredemars@infosec.pub
    ↑76 · 7 months ago (edited)

    Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.

    • Scrubbles@poptalk.scrubbles.tech
      ↑21 ↓1 · 7 months ago

      Highly recommend doing infrastructure-as-code. It makes it really easy to git-commit a previously working state, so you can backtrack when something goes wrong.
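      A minimal self-contained sketch of that workflow (directory names, images and commit messages are all made up; the point is just that every change to a compose file becomes a commit you can roll back to):

```shell
# sketch: keep compose files in git so every working state is a commit.
# runs in a throwaway directory; names and image tags are illustrative only.
set -e
cd "$(mktemp -d)"
git init -q homelab && cd homelab

# commit a known-good state
echo "image: nginx:1.25" > docker-compose.yml
git add docker-compose.yml
git -c user.email=me@example.com -c user.name=me commit -qm "nginx: known good"

# an upgrade that turns out to be broken
echo "image: nginx:1.27" > docker-compose.yml
git add docker-compose.yml
git -c user.email=me@example.com -c user.name=me commit -qm "nginx: bump"

# backtrack: restore the file from the previous commit, then redeploy
git checkout HEAD~1 -- docker-compose.yml
cat docker-compose.yml    # back to the known-good image
```

      In a real setup you would follow the `git checkout` with `docker compose up -d` to redeploy the restored state.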

      • Kaldo@kbin.social
        ↑6 · 7 months ago

        Got any decent guides on how to do it? I guess a docker compose file can do most of the work there; I’m not sure about volume backups and other dependencies in the OS.

          • Kaldo@kbin.social
            ↑3 · 7 months ago

            Oh, I think I tried at one point, and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd

            • kernelle@lemmy.world
              ↑3 · 7 months ago (edited)

              I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would take if you were configuring everything manually.

              I’ll be honest, it’s daunting when you first set it up, but that’s the thing! You only need to do it once; then you can deploy and redeploy anything you have in minutes.

              Edit: found this useful resource
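              To make those two words concrete, a toy example (the hostnames and the task are invented): the inventory is the list of machines, and the playbook is the steps to apply to them.

```
# inventory.ini -- the "inventory": just a list of the machines you manage
[homelab]
nas.lan
pi.lan

# playbook.yml -- a "playbook": the steps you'd otherwise do by hand
- hosts: homelab
  become: true
  tasks:
    - name: Install docker
      ansible.builtin.apt:
        name: docker.io
        state: present
```

              Running `ansible-playbook -i inventory.ini playbook.yml` then applies those steps to every host in the list, whether that’s one machine or fifty.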

        • webhead@lemmy.world
          ↑2 · 7 months ago

          I opted for weekly so I could store longer time periods. If I want to go a month back, I just need 4 backups instead of 30. At least that was the main idea. I’ve definitely realized I fucked something up weeks ago without noticing before lol.

          • I’ve got PBS set up to keep 7 daily backups and 4 weekly backups. I used to have it retaining multiple monthly backups, but I realized I never need those, and since I sync my backups volume to B2 it was costing me $$.

            What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I could have more granular control over what’s synced, instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2 pricing and reliability.
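            The retention arithmetic behind that choice is easy to sketch. This is a toy model of a keep-daily/keep-weekly policy, not PBS’s actual implementation, and it assumes one snapshot per day:

```python
from datetime import date, timedelta

def retained(snapshots, keep_daily=7, keep_weekly=4):
    """Toy keep-daily/keep-weekly pruning: keep the newest snapshot for
    each of the last `keep_daily` days and `keep_weekly` ISO weeks."""
    days, weeks, keep = [], [], set()
    for snap in sorted(snapshots, reverse=True):  # newest first
        if snap not in days and len(days) < keep_daily:
            days.append(snap)
            keep.add(snap)
        week = tuple(snap.isocalendar())[:2]  # (year, ISO week number)
        if week not in weeks and len(weeks) < keep_weekly:
            weeks.append(week)
            keep.add(snap)
    return sorted(keep, reverse=True)

# 30 consecutive daily snapshots collapse to 10 retained ones:
snaps = [date(2024, 6, 30) - timedelta(days=i) for i in range(30)]
kept = retained(snaps)
print(len(kept))  # 10: days Jun 24-30 plus weeklies Jun 23, 16 and 9
```

            Going a month back needs only a handful of weeklies instead of thirty dailies, which is exactly the storage saving described above.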

  • Max-P@lemmy.max-p.me
    ↑37 · 7 months ago (edited)

    Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

    A couple of websites, Lemmy, Matrix, a whole email stack, DNS, an IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server, and I believe that’s about it?

    I’m a DevOps engineer at work, managing 2k+ VMs, and I can more than keep up with them. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible, Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers: it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.

    • Footnote2669@lemmy.zip
      ↑4 · 7 months ago

      +1 for Docker and minimal maintenance. Only updates or new containers might break stuff; if you don’t touch it, it will be fine. Of course there might be some container-specific problems, depending on what you want to run. And I’m not a DevOps engineer like Max 😅

    • MBV ⚜️@lemm.ee
      ↑1 · 7 months ago

      Same here - just one update a week on Friday, between two yawns, for the 4 VMs and 10-15 services I have, plus a quarterly backup. It doesn’t involve much, beyond the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the Docker network after the VM restarts/resets.
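      One way to avoid that re-linking (a sketch with made-up names): put the proxy and its upstreams on a user-defined network and reference containers by service name, which Docker’s embedded DNS resolves even after IPs change on a restart.

```yaml
# sketch: proxy and app share a user-defined network; the image name
# and service names are placeholders
services:
  proxy:
    image: nginx:alpine
    networks: [web]
  app:
    image: myregistry.example/app:latest
    networks: [web]
networks:
  web: {}
```

      The proxy config can then use something like `proxy_pass http://app:8080;` instead of a hard-coded container IP.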

  • 0110010001100010@lemmy.world
    ↑12 · 7 months ago

    Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I’m getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.

    Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.

  • CarbonatedPastaSauce@lemmy.world
    ↑11 · 7 months ago

    It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set it and forget it for the most part. The only servers I pay any significant attention to in terms of frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it would be a very minimal danger to the rest of my network. Everything either has automated updates, or for servers I want more control over I manually update them when the mood strikes me or a big vulnerability that affects my software hits the news.

    TL;DR: if you averaged it over a year, I maybe spend 30-60 minutes a week on self-hosting maintenance tasks, for 4 physical servers and about 20 VMs.

  • dlundh@lemmy.world
    ↑8 · 7 months ago

    A lot less since I started using Docker instead of running separate VMs for everything. Fewer systems to update is bliss.

  • mikyopii@programming.dev
    ↑7 · 7 months ago

    For some reason my DNS tends to break the most. I have to reinstall my Pi-hole semi-regularly.

    NixOS plus Docker is my preferred setup for hosting applications. Sometimes it’s a pain to get something running, but once it does, it tends to keep running. If a container doesn’t work, restart it. If the OS doesn’t work, roll it back.

  • Opisek@lemmy.world
    ↑7 · 7 months ago

    As others said, the initial setup may consume some time, but once it’s running, it just works. I dockerize almost everything and have automatic backups set up.

  • matcha_addict@lemy.lol
    ↑6 · 7 months ago

    It’s as much or as little as you want it to be. If you don’t want to change anything, you can use something like Debian and only do maintenance once every 5 years (and you could even skip that).

    I personally spend a little more, by choice, because I use gentoo. But if I’m busy, I can avoid maintenance by only running routine updates every couple of weeks or so.

  • smileyhead@discuss.tchncs.de
    ↑5 · 7 months ago

    I spend a huge amount of time configuring and setting up stuff, as it’s my biggest hobby. But I’ve gotten good enough that when I set something up, it can stay for months without any maintenance. Most of what I do to keep things running is adding more storage if something turns out to be used more than planned.

  • N-E-N@lemmy.ca
    ↑5 · 7 months ago

    As a complete noob trying to make a TrueNAS server: none, and then suddenly lots when I don’t know how to fix something that broke.

  • CronyAkatsuki@lemmy.cronyakatsuki.xyz
    ↑5 · 7 months ago (edited)

    Minimal. I have to force myself to check the servers for updates at least once a week.

    The main thing for me is that I automated podman and Docker updates with their respective auto-update mechanisms, and use ntfy for push notifications, so if a service stops working and it had a recent update, I know it’s an update issue.

    I also have an uptime monitor with Uptime Kuma, to catch my services not working before I do, again with ntfy push notifications.

    I also have Grafana + Prometheus set up on my biggest server for monitoring and alerting, with Alertmanager + mail to get notified about even more errors.

    So in general I only have to worry about the occasional error every few months, and updates of the host system (Debian).
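    The “recent update means it’s probably an update issue” heuristic, sketched in code (the names are hypothetical; the `notify` callback stands in for an ntfy push, which is just an HTTP POST to a topic URL):

```python
from datetime import datetime, timedelta

RECENT = timedelta(hours=24)  # assumed window after an auto-update

def diagnose(service, is_up, last_update, now, notify):
    """If a service is down, send a push note; flag it as a likely
    update issue when it was auto-updated within the last 24 hours."""
    if is_up:
        return None
    msg = f"{service} is down"
    if now - last_update < RECENT:
        msg += f" -- auto-updated at {last_update:%H:%M}, likely an update issue"
    notify(msg)  # e.g. requests.post("https://ntfy.sh/<your-topic>", data=msg)
    return msg

sent = []
diagnose("nextcloud", False, datetime(2024, 6, 1, 3, 0),
         datetime(2024, 6, 1, 9, 0), sent.append)
print(sent[0])
```

    Injecting the notifier keeps the logic testable; in production the callback would do the actual HTTP POST to the ntfy topic.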

  • DeltaTangoLima@reddrefuge.com
    ↑5 · 7 months ago (edited)

    Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

    • Between 2am-4am, Watchtower on all my Docker hosts pulls updated images for my containers and notifies me via Slack. Then, over coffee when I get up:
      • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
      • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
    • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
    • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those

    What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (~40ish Debian/docker LXCs). I fear I have some tech debt growing on those hosts, but I’ve fallen into the convenient trap of knowing my internet-facing gear is always up to date, and being lazy about the rest.
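    That playbook can be quite small. A hedged sketch (the group name is invented; the dropped-in file is the stock Debian periodic-upgrades config):

```yaml
- hosts: docker_lxcs
  become: true
  tasks:
    - name: Install unattended-upgrades
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present
        update_cache: true

    - name: Enable periodic update checks and unattended upgrades
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";
```

    Run once against the whole group and the tech debt stops growing, at least on the patching front.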