

I don’t think you need to involve Linux at all if you boot the official windows installer. I would just install the SSD as the only drive internally and install to it, then put it back in its enclosure.


It looks like it’s about helping to auto-deploy docker-compose.yml updates: you can just push an updated docker-compose.yml to a repo and have all your machines update, instead of needing to go into each machine or set up something custom to do the same thing.
I already have container updates handled, but something like this would be great so that the single source of truth for my docker-compose.yml can be in a single repo.
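For anyone who wants the DIY version without another tool, here’s a rough sketch of the same loop. The paths and branch name are placeholders, and a real setup would want locking and error handling:

```shell
#!/bin/sh
# Minimal sketch: poll the git repo that holds docker-compose.yml
# and redeploy when it changes. Run it from cron every few minutes.

# Succeeds (exit 0) when local and remote revisions differ.
needs_update() {
  [ "$1" != "$2" ]
}

# Pull the stack's repo and redeploy if it changed.
sync_stack() {
  cd "$1" || return 1
  git fetch --quiet origin
  if needs_update "$(git rev-parse HEAD)" "$(git rev-parse origin/main)"; then
    git pull --ff-only --quiet origin main
    docker compose up -d --remove-orphans
  fi
}

# e.g. sync_stack /opt/stacks/media
```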
I use gluetun to connect specific docker containers to a VPN without interfering with other networking, since it’s all self-contained. It also has lots of providers built in, which is convenient: you can just set the provider, your credentials, and your preferred region instead of needing to manually enter connection details or manage lists of servers (it automatically updates its own cached server list from your provider, through the VPN connection itself).
Another nice feature is that it supports scripts for port forwarding, which works out of the box for some providers. So it can automatically get the forwarded port and then execute a custom script to set that port in your torrent client, soulseek, or whatever.
I could just use a plain wireguard or openvpn container, but this also makes it easy to hop between VPN providers just by swapping the connection details, regardless of whether a given provider only supports wg or openvpn. Just makes it a little more universal.
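For anyone curious, a rough compose sketch of that setup. The provider, credentials, and script path are placeholders, and the port-forwarding env vars are from gluetun’s docs and may differ between versions:

```yaml
# Sketch: qBittorrent routed through gluetun, with a hook that fires
# when the provider assigns a forwarded port.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn       # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=your_key_here  # placeholder credential
      - SERVER_COUNTRIES=Netherlands
      - VPN_PORT_FORWARDING=on
      # Runs when gluetun obtains a forwarded port; {{PORTS}} expands to it.
      - VPN_PORT_FORWARDING_UP_COMMAND=/scripts/set-port.sh {{PORTS}}
    volumes:
      - ./scripts:/scripts
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic goes through gluetun
```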


Sounds like a job for a pair of second hand nanobeams or something similar.
I second the other commenter who suggested using WISP gear. If you have clear fresnel zones it should work a treat.


I second this. Gluetun makes it so easy, working with docker’s internal networking is such a pain.


FYI, the codename for the Xiaomi Redmi Note 13 5G is “gold”. You’ll usually see stuff for your phone labeled with that codename, since it’s much shorter and easier to check than the full name, where you have to verify pro vs non-pro, 5G vs 4G, etc., as other variants have completely different codenames.
If your phone doesn’t have official ROM support, then basically your other main option is to look for unofficial builds made by random people on XDA. I’ve used unofficial builds for many years in the past and they’re generally fine, but it’s up to you.
I don’t see any ROM threads in the XDA forum for gold, so unfortunately I can’t really help any more. Good luck!
(Skimming around the XDA threads, it appears that the lack of ROMs is due to MediaTek not releasing the necessary source code, so if you want custom ROMs, it’ll be a lot easier to find them for a different phone)


Luckily they’re on 2.0.1 now, so there have been two stable versions by now.


Is the external libraries feature maybe what you’re looking for?
There’s already an issue open for it: https://github.com/immich-app/immich/issues/1713
Be sure to give it a thumbs up!


Or alternatively:
- Does this mean unverified sideloading is going away on Android?
Yes


If you search for “pfsense alias script”, you’ll find some examples of updating aliases from a script, so you’ll only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that might be hard: the only way to get them on the fly is to listen for what hostnames are being resolved by clients on the LAN, probably by hooking into unbound or whatever. If you can share what the service is, it would be easier to tell whether there’s a shortcut, like if all the subdomains land in the same CIDR and one of the hostnames is predictable, or if the subdomains are always in the same CIDR as the main domain (then the script can just look up the main domain’s CIDR). Another possibly easier alternative would be to find an API that lets you search the certificate transparency logs for the main domain, which would reveal all subdomains that have SSL certificates. You could then just load all those subdomains into the alias and let pfsense look up the IPs.
I would investigate whether the IPs of each subdomain follow a pattern of a particular CIDR or unique ASN, because reacting to DNS lookups in realtime will probably mean some lag between the first request and the routing being updated, compared to a solution that can proactively route all relevant CIDRs or all CIDRs assigned to an ASN.
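As a sketch of the certificate transparency idea: crt.sh exposes a JSON output mode that’s commonly used for this, though it’s rate-limited and not an official API, so treat this as a starting point:

```python
import json
import urllib.parse
import urllib.request

def parse_ct_names(entries, domain):
    """Extract unique hostnames for `domain` from crt.sh JSON entries.
    Each entry's name_value may hold several newline-separated names."""
    names = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()  # drop wildcard prefixes
            if name == domain or name.endswith("." + domain):
                names.add(name)
    return sorted(names)

def fetch_subdomains(domain):
    # %.domain matches all subdomains in crt.sh's search syntax.
    url = "https://crt.sh/?q=" + urllib.parse.quote("%." + domain) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return parse_ct_names(json.load(resp), domain)

# Example (hits the network):
#   print("\n".join(fetch_subdomains("example.com")))
```

The resulting hostname list is what you’d feed into the alias update script; pfsense then handles resolving them to IPs on its own schedule.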


I think the way people do it is by making a script that gets the hostnames and updates the alias, then just scheduling it in pfsense. I’ve also seen ASN-based routing using a script, but that’ll only work on large services that run their own AS. If the service is large enough, they might predictably use IPs from the same CIDR, so if you spend some time collecting the relevant IPs, you might find that even when the hostnames are new and random, they always go to the same pool of IPs. That’s the lazy way I did selective routing to GitHub, since it was always the same subnet.
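To check whether collected IPs actually cluster like that, a quick sketch with the stdlib works (the addresses below are from the documentation range, not a real service):

```python
import ipaddress

def covering_network(ips):
    """Return the smallest single network containing every observed IP,
    a quick check of whether a service's addresses cluster in one CIDR.
    Assumes all addresses are the same family (e.g. all IPv4)."""
    addrs = [ipaddress.ip_address(ip) for ip in ips]
    merged = ipaddress.ip_network(ips[0])  # start from a /32
    while not all(a in merged for a in addrs):
        merged = merged.supernet()  # widen by one bit until all fit
    return merged

# covering_network(["203.0.113.7", "203.0.113.200"]) -> 203.0.113.0/24
```

If the result stays a tight prefix as you add observations over days, routing that CIDR proactively is probably good enough; if it balloons toward /8, the service is too spread out for this trick.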


That’s what I do. 1.6TB currently on rsync.net, only my personal artifacts, excluding all media that can be reacquired, and it’s a reasonable $10/mo. Synced daily at 4am.
If I wanted my backups to include my media collection or anything exceeding several TB, I would build a second NAS and drop it at my parents’.


My homelab has been mostly on autopilot for a while. Synology 6 bay running most lighter weight docker stuff (arrstack, immich, etc) and an Intel nuc running heavy stuff (quicksync transcodes for Plex+jf, ollama). Both connected to digitalocean via WG for reverse proxy due to CGNAT.
My router’s SSD either died or got corrupted this past week; I haven’t looked much at the old SSD besides trying to extract the config off it. I ended up just fresh installing opnsense because I didn’t have any recent backups (my Synology and nuc back up to rsync.net, but I haven’t gotten around to automated backups for my router, since it’s basically a plain config, or for my cloud reverse proxy, which is just a basic docker compose + small haproxy config). Luckily, since my homelab reaches out to the cloud reverse proxy, there’s basically no important config on my router anymore; it just needs DHCP and a connection.
Besides that the arrstack just chugs along on its own.
I recently figured out I can load jellyfin playback URLs into vrchat video players, either as a direct stream or through the transcoding pipeline as an m3u8 that live-transcodes based on the url parameters you set. This is great because the way watch parties work in VRChat is that everyone in an instance loads the same URL pasted into a media player and syncs playback. That means you need a publicly accessible url (preferably with a token of some sort) that can be loaded by an arbitrary number of unique IP addresses simultaneously, which I don’t think is doable with Plex.
I’m now working on a little web app to let me log into Jellyfin, search/browse media, and generate the links with arbitrary or pre-set transcode settings for easy copy/pasting into VRChat. It’s needed because Jellyfin only provides the original file without transcoding when you use the “copy stream” option, so I believe the only way to get a transcoded stream url currently is to set the web interface to specific settings and grab the URL from the network tab. But that doesn’t let you set arbitrary stuff like codecs, subtitle burn-in, or overriding what it thinks you support. So a simple app to construct the URL will make VRChat watch parties a lot easier.
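The URL construction itself is mostly query-string assembly. The endpoint path and parameter names below (master.m3u8, VideoCodec, MaxStreamingBitrate, etc.) are what the Jellyfin web client appears to send and may differ between server versions, so treat them as placeholders rather than a stable API:

```python
from urllib.parse import urlencode

def jellyfin_hls_url(base, item_id, api_key, **transcode):
    """Build a live-transcoding HLS URL for a Jellyfin item.
    Any extra keyword args become transcode parameters on the query string."""
    params = {"api_key": api_key, "MediaSourceId": item_id}
    params.update(transcode)
    return f"{base}/Videos/{item_id}/master.m3u8?{urlencode(params)}"

# Hypothetical server, item id, and token:
url = jellyfin_hls_url(
    "https://jf.example.com", "abc123", "token",
    VideoCodec="h264", AudioCodec="aac", MaxStreamingBitrate=8_000_000,
)
```

The point of the app is just doing this assembly with a nice picker UI instead of hand-editing query strings.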


The new beta timeline is sooo smooth! I finally don’t hate scrolling back to find a specific old photo. The scrolling performance feels completely native to me now.


I think what you want is an EDID emulator with passthrough, or whatever it’s called. EDID is how a monitor tells a device what resolution to send, among other info. Some cheap HDMI splitters, adapters, audio extractors, etc. will let you emulate a specific EDID. One of my audio extractors lets you fake stereo vs surround support to trick the source into sending surround (I think that’s also done through EDID), since if you’re trying to extract surround, it’s probably because your real TV’s EDID only advertises stereo. So you probably want something like that in front of the switch, so the laptop always thinks something is plugged in. Your switch seems to be too smart: it passes through the real monitor’s EDID, so the laptop can see when it switches.


Open WebUI connected to ollama can do this. In Open WebUI, if you edit any one of your messages, it forks the conversation. You can flip between branches using the arrows below any of your messages. If you click the 3-dot menu and choose “overview”, it opens a graph view that shows the branches of the conversation visually.



As a point of reference, I have a 5070 Ti OC (300W TDP, suggested PSU 700W according to TechPowerUp) with a Ryzen 7 7700 (65W TDP), running on a SilverStone SFX 700W 80+ Platinum, and it works great. I’ve monitored the GPU wattage and it generally doesn’t go above ~200W in practical usage.


FWIW, Anubis is adding a no-JS meta-refresh challenge which, if it doesn’t have issues, will soon be the new default challenge.


If it still boots from the internal disk then you may just need to set the boot priority to prefer your external drive. That’ll be mobo specific unfortunately so I can’t give any tips. I’ve had systems set up to boot from external media when plugged in so it should work.
Back in the day there was also an issue with running full windows installs from USB drives where you needed to prevent it from reinitializing USB devices during bootup since that would interfere with itself, but I’m not seeing anything recent about that so hopefully that’s not an issue anymore.