Yup, this - batteries are consumables. They have a service life of ~2-5 years depending on load. If the manual doesn’t tell you how to replace them then it’s basically ewaste already
Depends on what you need:
Good thing there haven’t been any remotely exploitable security bugs in any of the mail system components in the 6 years since Debian 7 went EoL
Looks like it’s an x86_64 kernel though? So this is a VM - it’s not running as a paravirtualised system, it’s having to emulate everything from the CPU up?
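If you want to confirm what it’s actually doing, a couple of quick checks from inside the guest should tell you (output labels are whatever systemd detects, so it varies by hypervisor):

```bash
# Check the kernel architecture and how the guest is being virtualised
uname -m                 # kernel architecture, e.g. x86_64
systemd-detect-virt      # e.g. "kvm", "qemu", "vmware", or "none" on bare metal
# "qemu" without KVM generally means full CPU emulation rather than
# hardware-assisted virtualisation
```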
If a project is hosted on SourceForge then it’s a pretty good sign that the developer hasn’t progressed their craft since about 2005, which is a pretty big red flag for anything
Keycloak to provide OIDC, although in hindsight I should have gone with Authelia or Authentik
At the rates I’m paying for 4G data, there are very few places in the world where it wouldn’t be cheaper for me to get on a plane and sneakernet that much data
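Back-of-envelope version, with completely made-up numbers for the data rate and airfare - swap in your own:

```bash
# Illustrative only: assume ~$10/GB over 4G and a ~$1500 return flight
data_gb=2000          # 2 TB of data to move
rate_per_gb=10        # $/GB over 4G (assumption)
airfare=1500          # $ return flight (assumption)
echo "4G cost: \$$((data_gb * rate_per_gb)) vs flight: \$${airfare}"
# 4G cost: $20000 vs flight: $1500
```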
There are very few things more obnoxious than an asshole with unsolicited parenting advice
https://www.servethehome.com/everything-homelab-node-goes-1u-rackmount-qotom-intel-review/ would probably be a better bet for a router
I moved just about everything to Route53 for registration - I run my own DNS so I don’t need to pay for that, and it’s ~40% cheaper than Gandi for better service.
Now I just need to move my .nz domain (R53 supports .{co,net,org}.nz, but not .nz itself?) and the 2 .xyz domains that are “premium” for some reason, so R53 won’t touch them
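If anyone else wants to check what R53 will and won’t take before transferring, the registrar API only answers out of us-east-1 - something like this (the domain and TLD here are just examples):

```bash
# Availability check also surfaces unsupported TLDs for Route53 registration
aws route53domains check-domain-availability \
  --domain-name example.nz --region us-east-1

# Registration/transfer/renewal pricing for a given TLD
aws route53domains list-prices --tld xyz --region us-east-1
```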
Don’t disagree with you, but yeah - good luck with that
As long as someone is willing and able to maintain it.
It’s open source. All the work is either done by volunteers or by corporate sponsors. If it’s worth it for you to keep a GPU from the 90s running on modern kernels and you can submit patches to keep up with API changes, then no reason to remove it. The problem isn’t that the hardware is old, it’s that people don’t have the time to do the maintenance
For anything that is related to my backup scheme, it’s printed out as hard copy and put in an envelope in a fire safe in my house. I can tell you from experience there is nothing more stressful than “oh fuck I need my backups but the key to unlock the backups is in the backups fuck fuck fuck”.
And for future reference, anyone thinking about breaking into my house to get access to my backups just DM me, I’m sure we can come to an arrangement that’s less hassle for both of us
I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.
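To give you an idea of how low the barrier is, this is the standard single-node k3s install straight from their docs - it bundles kubectl and drops a kubeconfig at /etc/rancher/k3s/k3s.yaml:

```bash
# Install k3s as a single-node cluster (server + agent on one box)
curl -sfL https://get.k3s.io | sh -

# Should show one Ready node after a minute or so
sudo k3s kubectl get nodes
```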
They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically”
I suspect that the biggest difference would be running the Postgres DB on an SSD where the fast random access is going to make queries significantly faster (unless you have enough ram that Postgres can keep the entire DB in memory where it makes less of a difference).
Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch it’ll probably be fast enough.
These are all Reckons without data to back them up, so maybe do some testing
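If you do want to test it, a random-read run with fio at Postgres-ish block sizes on each disk would give you actual numbers - the path here is a placeholder for wherever the HDD (or SSD) is mounted:

```bash
# Random-read benchmark with 8k blocks, bypassing the page cache;
# run once against the HDD mount and once against the SSD mount
fio --name=randread --filename=/mnt/hdd/fio.test --size=1G \
    --rw=randread --bs=8k --direct=1 --ioengine=libaio \
    --runtime=30 --time_based
```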
Debian. When I have time to mess about with server stuff, I want to be doing the thing I want to do rather than fixing whatever broke in the most recent set of updates
Pretty much - I try and time it so the dumps happen ~an hour before restic runs, but it’s not super critical
`pg_dumpall` on a schedule, then restic to back up the dumps. I’m running Zalando Postgres in kubernetes so scheduled tasks and intercontainer networking are a bit simpler, but you should be able to run a sidecar container in your compose file
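Roughly what that sidecar would run, as a sketch - the hostname, paths, and repo location are placeholders for your setup, with credentials coming from the container environment:

```bash
#!/bin/sh
# Dump everything, then back the dumps up with restic.
# Assumes PGPASSWORD and RESTIC_PASSWORD are set in the environment;
# run from cron (or a sleep loop) shortly before the main restic schedule.
set -eu
pg_dumpall -h postgres -U postgres > /dumps/all-databases.sql
restic -r /backups/restic backup /dumps
```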
If you figure it out, I know several companies that would be more than willing to drop 7 figures a year to license the tech from you
`smartctl -t long` - if it doesn’t pass, then the drive is trash. If it does, then it might limp along a bit longer before catastrophically failing
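For reference, the full check flow (swap /dev/sdX for the suspect drive; the self-test runs in the background and can take hours on a big disk):

```bash
smartctl -t long /dev/sdX        # start the extended self-test; prints an estimated completion time
smartctl -l selftest /dev/sdX    # self-test log: look for "Completed without error"
smartctl -H /dev/sdX             # overall SMART health assessment
```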