I’m not great with Docker or networking, so when I picked up an N100 mini PC for self-hosting I installed Ubuntu and Tipi to get started.
I used Tipi to install Immich and forwarded my ports, then set up a Cloudflare Tunnel to expose it to the internet. Currently I’m migrating from Google Photos.
But since I’m new to this I’m worried about exposing Immich to the internet without really knowing what I’m doing. Any suggestions on ways to monitor my setup to make sure nothing goes wrong or gets hacked? Ideally any application suggestions would come from the Tipi app store but I’m willing to learn if there’s no other option. Thanks!
Have a look at Tailscale for your devices, this will prevent you from having to expose anything to the Internet, but rather having it behind your own VPN solution. Tailscale is the kinda service that is stupid easy to get going with too. HIGHLY recommend it!
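If you want to try it, the Linux quick start is basically two commands (from memory of their docs, so double-check on tailscale.com before piping anything into a shell):

```
# install the client
curl -fsSL https://tailscale.com/install.sh | sh

# join your tailnet (prints a login URL the first time)
sudo tailscale up
```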
It’s not a simple task, so I won’t try to be exhaustive; here are a few specifics, then some general principles.
First, the specifics:
- disable remote root login via ssh.
- disable password login, and only permit ssh keys.
- run fail2ban to lock people out automatically.
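A minimal sketch of those three on an Ubuntu box (option names are the stock OpenSSH and fail2ban ones; adapt the paths to your distro):

```
# /etc/ssh/sshd_config (or a drop-in under /etc/ssh/sshd_config.d/)
PermitRootLogin no            # no remote root login
PasswordAuthentication no     # SSH keys only
PubkeyAuthentication yes

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h

# apply the changes
sudo systemctl restart ssh
sudo systemctl restart fail2ban
```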
Generally:
- only expose things you must expose. It’s better to do things right and secure than easy. Exposing a webservice requires you to expose port 443 (https). Basically everything else is optional.
- enable every security system that you don’t have a reason to disable. SELinux giving you problems? Don’t turn it off; learn how to write rules to let your application do the specific things it needs. Only make firewall exceptions where needed, rather than disabling the firewall.
- give system users the minimum access they require to function.
- set folder permissions as restrictively as possible. File ACLs (setfacl) will help, because they let you be much more nuanced.
- automatic updates. If you have to remember to do it, it won’t happen. Failure to automate updates means your software is out of date.
- consider setting up a dedicated authentication setup like Authelia or Keycloak. Applications tend to, frankly, suck at security. It’s not what they’re making, so it’s not as good as a dedicated security service. There are other follow-on benefits too.
- if it supports two factor, enable it.
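To make the “only expose 443” and “firewall exceptions only where needed” points concrete, a minimal ufw sketch on Ubuntu (the 192.168.1.0/24 LAN subnet is an assumption, substitute your own):

```
# default stance: drop everything inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# the one thing the outside world needs: HTTPS
sudo ufw allow 443/tcp

# SSH only from the local subnet, never from the internet
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp

sudo ufw enable
```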
You mentioned using Cloudflare, which is good. You might also consider configuring your firewall to disallow outbound connections from the server to the rest of your local network. That way, if your server gets owned, the attacker can’t poke at other things on your network.
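Continuing the ufw sketch above, that outbound-to-LAN lockdown can look roughly like this (subnets are assumptions again; ufw evaluates rules in the order they were added, so the DNS exception has to come before the deny):

```
# let the server still reach the router for DNS (assuming 192.168.1.1 is your resolver)
sudo ufw allow out to 192.168.1.1 port 53

# block the server from initiating connections to anything else on the LAN
sudo ufw deny out to 192.168.1.0/24
```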
> only expose things you must expose. It’s better to do things right and secure than easy. Exposing a webservice requires you to expose port 443 (https). Basically everything else is optional.
Not sure if it’s always possible, but I set up an auth portal on port 443 where I’m using Authelia and fail2ban, and Traefik routes authenticated users to the other services from there. So for example Plex’s port 32400 is not exposed, only 443. But you get there via 443 and authentication.
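If anyone wants to copy the pattern, here’s a rough docker-compose sketch of Traefik v2’s forwardAuth middleware pointing at Authelia. The hostnames, the `app` service, and its port are all placeholders, and Authelia’s verify endpoint has moved between versions, so check their current docs rather than trusting this verbatim:

```
services:
  app:                              # placeholder for whatever you're protecting
    image: nginx:alpine             # stand-in image for the example
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.routers.app.tls=true"
      # every request goes through Authelia first
      - "traefik.http.routers.app.middlewares=authelia@docker"
      - "traefik.http.services.app.loadbalancer.server.port=80"

  authelia:
    image: authelia/authelia
    labels:
      # forwardAuth: Traefik asks Authelia before letting a request through
      - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com/"
      - "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
```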
Yup, that’s a really good pattern to follow. Not only does it minimize your exposure behind a secured entry, it also makes sure that all of your access is uniformly authenticated.
You have to do some shenanigans to do something similar with other, non-http based services, but it’s possible with most of them.
First, I would caution against exposing services to the internet. It would be far better to leave everything behind a VPN that only you or trusted peers can access.
Past that you can use tools like OSSEC, Snort, and fail2ban.
Thank you. Is leaving everything behind a vpn what Tailscale does?
Tailscale is a mesh network. It’s all encrypted, like a VPN, but not exactly the same thing.
It’s kind of like each member of the network having a VPN connection to every other member of the network.
Tailscale has a neat feature called Funnel, which funnels specified inbound traffic from the internet to a specific resource/service/device.
That traffic is encrypted too, starting from the entry point (which is hosted by Tailscale).
This can be useful for example, for something like Nextcloud, so clients don’t have to run the Tailscale app to get access.
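Roughly what that looks like on the machine running the service; the exact CLI syntax has changed between Tailscale versions and port 3000 is just an assumed local port, so treat this as a sketch and check `tailscale funnel --help`:

```
# Funnel must be allowed for this node in your tailnet policy / admin console first

# publish the service listening on local port 3000 to the internet
sudo tailscale funnel --bg 3000

# see what's currently being served/funneled
sudo tailscale funnel status
```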
Yes
Ayy, nice work getting started down the selfhosting route! Start by remembering that security is a maturity process. To find out if you’re doing the right things at the right time, ask yourself:
- Do I know it needs to be done?
- Have I done enough (this day, week, etc)?
- Do I have it to give?
If you’re just one person and it’s a self-hosted home setup, remember you can’t patch all the things all at once. Regularly asking yourself whether you’re maturing your environment over time is essential. Do a little work each week and you’ll make good progress.
When I think of security, I think of a few things:
Authentication & Access - each system should have just enough accounts with just enough permissions to get work done. Change default passwords. Make them long and unique. Use MFA whenever possible (often impractical for self-hosted; cut yourself slack when this is the case!).

A note on logging - if you can, while you’re doing this homework, check how long each system keeps its logs. Shoot for keeping logs longer if possible; I like 30 days, but you might want more. Also make sure you have a time server, or at least that you’re getting accurate timestamps. If something weird happens and you’re investigating, having timestamps on logs that line up and make sense helps you recreate what happened, so you can decide if you need to wipe something and reload it.
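On a stock Ubuntu/systemd box, the log-retention and time-sync homework can look roughly like this (30 days / 1G are just the numbers I mentioned, tune to taste):

```
# /etc/systemd/journald.conf -- keep the journal around longer
[Journal]
MaxRetentionSec=30day
SystemMaxUse=1G

# apply it
sudo systemctl restart systemd-journald

# make sure the clock is NTP-synced so timestamps line up across machines
sudo timedatectl set-ntp true
timedatectl status
```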
Patching - automated scanning of your stuff for vulns would be fantastic if you’re interested in going that route, but a Saturday morning checklist to run updates on everything works too.
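On an Ubuntu host the checklist version is basically:

```
sudo apt update && sudo apt upgrade
sudo apt autoremove

# plus updating containers/apps through Tipi's UI, or for plain compose stacks:
docker compose pull && docker compose up -d
```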
Attack Surface Management - if you’re not sure you’re exposed, scanning externally can be a big help. I have a Racknerd server ($40/yr, it’s amazing) in San Diego and I periodically run scans of my home network to see what’s forwarded. This is using nmap, although I could also use a free version of Nessus Essentials on there. This gives me an idea of what I look like from outside my network.
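The scan itself is just nmap run from the outside box; the target below is a placeholder for your public IP or DDNS hostname:

```
# full TCP port sweep of your home connection
nmap -Pn -p- -T4 your.public.ip.or.hostname

# quicker sanity check of the most common ports
nmap -Pn --top-ports 1000 your.public.ip.or.hostname
```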
Inventory - do you know what you have and what it’s doing? Even a pencil drawing of your network, IP addresses, and the services they run can come in super handy. While big orgs keep an index of critical data and where it’s stored, just knowing what containers are running on which VM or physical box can help if stuff goes sideways. I redraw mine periodically - yes, it’s hand drawn, because it’s fast and does the job lol. Do what works for you, though, to keep an inventory of your stuff. You need to know what you have, what it does, and where it’s supposed to be going.
You just don’t and pray for the best /j
- create empty debit account
- place credentials to account in server’s home directory
- if you get a call from your new account’s bank, they’ve got your server
This is honeypot security and is a best practice
/s
Thanks to everyone who took the time to answer. How do I check if my server has been accessed?
Through SSH, when you connect to your machine, run:
lastb -10
This will show you the last 10 failed login attempts (you can change 10 to 20 or whatever).
You can also run last -10 to see the last successful logins.
Use:
history
to see the commands that have been typed in that shell (or page through ~/.bash_history).
The directory /var/log has a lot of other logs too.
For a more paranoid level, use:
netstat -a
This will show you all listening sockets and current connections.
And like the others said, consider using a firewall and fail2ban.
Note: don’t rely on firewalls alone, since they can be bypassed.
Keep all software updated.
Read frequently about new vulnerabilities; if one affects your software, turn that service off until it gets patched.
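To dig into those /var/log files for SSH specifically, something like this works on Ubuntu:

```
# failed SSH logins in the auth log
sudo grep "Failed password" /var/log/auth.log

# or the same thing from the journal
sudo journalctl -u ssh --since "1 week ago" | grep -i failed
```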
Set up a weekly or at least monthly reminder to check for updates. That’s the most important thing to do. Outdated packages may have known security vulnerabilities.
Better yet, set up automatic upgrades. The occasional breakage is more than worth it.
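On Ubuntu the usual way to automate OS package updates is unattended-upgrades (containers/Tipi apps still need their own update path):

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# settings live in /etc/apt/apt.conf.d/50unattended-upgrades
```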