I wonder if there is a noticeable difference between an HDD and an SSD. Has anyone already tested it? I run it off a good SSD but wonder if an HDD would be enough.
I run mine with the actual photos on HDD but the database on SSD. So far everything has been near instantaneous for loading, downloading, uploading, you name it.
Same.
Noticeable difference loading the page? Loading photos? Uploading photos?
Photo files are relatively small, so an HDD is absolutely fine.
relatively small
Until you dump 2000 RAW photos on there
Meh, even then. If they’re 60MB each that’s only 120GB.
We don’t need to know your pron prefs, pal!
Usually* there is a database for the file metadata that will benefit from the faster access times of an SSD; the files themselves can be on an HDD.
*not sure how Immich specifically does it.
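If you want to see why that split makes sense, here's a rough Python sketch of the two access patterns. Everything in it is made up for illustration (the path, the file size, the read counts), and on Linux you'd want to drop the page cache first (`echo 3 > /proc/sys/vm/drop_caches` as root) or caching will hide the disk entirely. A metadata DB mostly does small reads at random offsets; serving a photo is mostly one big sequential read.

```python
# Rough illustration: metadata-style access (many small random reads) vs
# photo-style access (one big sequential read) on a given disk.
# SCRATCH, FILE_SIZE and RANDOM_READS are placeholders; point SCRATCH at the
# drive you actually want to test.
import os
import random
import time

SCRATCH = "/mnt/photos/iotest.bin"   # hypothetical file on the disk under test
FILE_SIZE = 1 * 1024**3              # 1 GiB scratch file
BLOCK = 8192                         # 8 KiB, the default Postgres page size
RANDOM_READS = 2000

# Create the scratch file once, in 64 MiB chunks so we never hold 1 GiB in memory.
if not os.path.exists(SCRATCH):
    with open(SCRATCH, "wb") as f:
        for _ in range(FILE_SIZE // (64 * 1024**2)):
            f.write(os.urandom(64 * 1024**2))

# Metadata-style access: many small reads at random offsets.
with open(SCRATCH, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(RANDOM_READS):
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
    random_s = time.perf_counter() - start

# Photo-style access: one big sequential read.
with open(SCRATCH, "rb", buffering=0) as f:
    start = time.perf_counter()
    while f.read(1024 * 1024):
        pass
    seq_s = time.perf_counter() - start

print(f"{RANDOM_READS} random {BLOCK // 1024} KiB reads: {random_s:.2f}s "
      f"({random_s / RANDOM_READS * 1000:.2f} ms each)")
print(f"sequential read of {FILE_SIZE // 1024**2} MiB: {seq_s:.2f}s")
```

On an HDD the random reads pay a seek penalty every time, which is exactly the pattern a database produces; the sequential read is the part an HDD handles fine.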
Unless something changed in the last few years, SSDs are much, much faster.
They are, but I think the question was more "does the increased speed of an SSD make a practical difference in the user experience for Immich specifically?"
I suspect the biggest difference would be running the Postgres DB on an SSD, where the fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, in which case it makes less of a difference).
Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection, so unless you've got lots of concurrent users or other things hitting the hard drive a bunch, it'll probably be fast enough.
These are all reckons without data to back them up, so maybe do some testing.
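One rough way to turn the reckons into numbers, assuming you can reach the Immich Postgres instance from wherever you run this (every connection detail below is a placeholder): time a scan of the biggest table twice. The second run is mostly served from Postgres/OS cache, so the gap between the two runs is roughly what your disk is costing you.

```python
# Quick-and-dirty probe of how much the DB disk matters. Connection details
# are placeholders for however your Immich database is exposed (it normally
# lives inside a Docker container, so you may need to publish the port or run
# this from inside that network).
import time
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="immich", user="postgres", password="postgres",
)
cur = conn.cursor()

# Pick the largest user table dynamically so we don't have to guess the schema.
cur.execute(
    "SELECT relname FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 1"
)
table = cur.fetchone()[0]

for run in (1, 2):
    start = time.perf_counter()
    cur.execute(sql.SQL("SELECT count(*) FROM {}").format(sql.Identifier(table)))
    rows = cur.fetchone()[0]
    print(f"run {run}: count(*) on {table} ({rows} rows) took "
          f"{time.perf_counter() - start:.3f}s")

cur.close()
conn.close()
```

It's only a rough signal (a count(*) is a sequential scan, not the random access real queries do, and the first run may already be warm), but if run 1 and run 2 are basically identical on your HDD, the disk probably isn't your bottleneck.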
I use an HDD for my Immich instance. I have a feeling an SSD might have made the initial import quicker (complete Google Photos dump), but in general usage I have found zero bottlenecks.
It depends on the load on the disk. My main Docker host pretty much has to be on the SSD to keep it from complaining about access times, but there are a dozen other services on the same VM. There's some advisory out there that things with constant I/O should avoid SSDs so they don't burn through the write endurance too fast, but I haven't seen anything specific on just how much is too much.
Personally I split the difference and run the system on SSD and host the bulk data on a separate NAS with a pile of spinning disks.
Considering the database itself is relatively small, PostgreSQL could end up largely caching it in memory, so even hosting the DB on an HDD might not feel much slower.
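If you want to check that on your own instance, a small sketch along these lines (connection details are placeholders again, and the RAM check is Linux-only) compares the database size against shared_buffers and the machine's memory:

```python
# Sanity check on the "Postgres can just cache the whole thing" idea: compare
# the Immich database size against shared_buffers and total RAM.
# Connection details are placeholders; adjust to your setup.
import os
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="immich", user="postgres", password="postgres",
)
cur = conn.cursor()

cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
db_size = cur.fetchone()[0]

cur.execute("SHOW shared_buffers")
shared_buffers = cur.fetchone()[0]

# Total physical RAM (Linux); the OS page cache helps on top of shared_buffers.
total_ram_gib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

print(f"database size:  {db_size}")
print(f"shared_buffers: {shared_buffers}")
print(f"total RAM:      {total_ram_gib:.1f} GiB")

cur.close()
conn.close()
```

If the database comes out well under shared_buffers, or at least comfortably under free RAM, the HDD mostly only matters for cold starts.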