

I’ve been using it on my Amazon Fire TV Stick, and I’m loving it. I first tried it because I can’t use ReVanced on the Fire Stick, but it’s actually pretty good.
Also, you can link it to the official YouTube app and control it from your phone.


Mainly two reasons: one about architecture, and one about vendors.
In the PC world, the entire ecosystem is designed to be modular, and people expect to be able to put Windows/Linux on any PC and have it work regardless of the manufacturer. The kernel just wakes up on one of the cores, figures out the CPU, wakes the rest of the cores, and from there it figures out the rest of the computer. By contrast, ARM systems are tightly integrated: each SoC is unique and there’s no general way to discover the rest of the system, so the kernel wakes up on one of the cores, reads out which SoC this is, and mostly has to already know the chip and any additional hardware connected to it.
But, sure, there are only so many SoCs (kinda), and displays, cameras, and touchscreens are mostly similar, so you’re bound to find a way to tell the kernel what hardware it’s running on and have it work, right? Except a lot of phone hardware is proprietary (duh) and requires bespoke proprietary drivers. Google pretends to encourage vendors to submit their drivers upstream, but this doesn’t really happen. Now, if you are familiar with running external drivers on Linux, you probably know how picky the kernel is about what it loads, but Android’s kernel is specifically modified to be less picky, to give vendors more slack. Mind you, the API is not more stable; the kernel is just less picky. (There’s a rough sketch of the discovery difference below.)
Bonus: running Linux on ARM laptops is indeed proving to be a bit of a challenge (nothing impossible, but resources are limited); that’s because they are built like a mobile phone.
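To make the contrast concrete, here’s a rough sketch (just an illustration, not anything authoritative; the paths are the standard Linux sysfs/procfs locations, and which of them exist depends on your machine). On a PC, devices sit on enumerable buses like PCI and announce vendor/device IDs the kernel can match drivers against at runtime; on a device-tree ARM board, the kernel is handed a static list of “compatible” strings and still needs drivers that already know every one of those names.

```python
from pathlib import Path

# PC-style discovery: PCI devices advertise vendor/device IDs,
# so the kernel can probe the bus and match drivers at runtime.
pci = Path("/sys/bus/pci/devices")
if pci.is_dir():
    for dev in sorted(pci.iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"PCI {dev.name}: vendor={vendor} device={device}")

# ARM/device-tree style: the board is described by static "compatible"
# strings; nothing is probed, the kernel must already know these names.
dt = Path("/proc/device-tree/compatible")
if dt.is_file():
    names = dt.read_bytes().decode(errors="replace").split("\x00")
    print("Device tree compatible:", [n for n in names if n])
```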


First of all, they were developed around the same time; second, no one said a protocol should remain unchanged for 35 years. And lastly, the “what’s wrong with these people” applies to the people pretending Gopher is any good today and a reasonable alternative to the web, which factually isn’t the case, since apparently it did remain unchanged for 35 years. And if it didn’t remain unchanged but still never added certificates, that would make things look even worse.


Wait, Gopher didn’t use certificates? What’s wrong with these people? And of course these are going to be just GPG certificates, not authoritative ones, I imagine, or it would defeat the entire decentralised thing.
I really don’t get this stuff. If you want pure-text websites, just make them: you’re allowed to use plain HTML, and you don’t have to use JavaScript if you don’t want to. You can get real certificates for free from Let’s Encrypt, and you can use any free DNS service you want.
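If it helps, serving a plain-HTML site over HTTPS really is this small. A minimal sketch using only Python’s standard library; the domain and cert paths are placeholders, and it assumes you’ve already obtained a Let’s Encrypt certificate for your domain:

```python
import http.server
import ssl

# Serves the files in the current directory over HTTPS on port 443 (needs root).
httpd = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)

# Placeholder paths: Let's Encrypt puts certs under /etc/letsencrypt/live/<your-domain>/
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(
    certfile="/etc/letsencrypt/live/example.com/fullchain.pem",
    keyfile="/etc/letsencrypt/live/example.com/privkey.pem",
)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```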


Is this just yet another Gopher-style protocol? Or does it come with anything interesting?


Fucking finally.
I hope everyone else follows soon. If you like it when you’re trying to open a link in a new tab and your system randomly decides to spew a selection from another app into a random text box, you’re free to configure that yourself. Remember to configure a cilice wrapping your thigh while you’re at it; it’s Unix-compliant and has been around for centuries.


It will hurt less being disabled


You can look for a second-hand office PC with a newer socket, so you can comfortably upgrade the CPU without having to buy a new motherboard and RAM, but that would still leave you with an old GPU.
Maybe look for a second-hand server/workstation on eBay instead. The CPU might not be the best for gaming, but you could upgrade that later, and you would get a better GPU. Or you could just delay the GPU upgrade.


Wow, so unexpected. Who could have seen this coming? 🙄
At least Google had the decency to write “sponsored” on the sponsored results, but with this it’s not even an option.


I’m in the same predicament as you. I think the long-term plan is that we’re fucked until an oracle comes to Faith Ekstrand (or another maintainer) in a dream and tells her how to make Pascal work properly in Nouveau. Or until the spirit of Christmas Past visits Jensen Huang.
Is that the issue your project is solving?
That’s exactly it, and also the fact that git doesn’t follow symlinks. Just a word of warning: if you’re still inexperienced, I suggest you run my tool manually instead of automating it with git hooks, as automating it is inherently less secure. In the post I linked in the description you can see some of the precautions I took to make it more secure. Still, running it manually is fine.
Feel free to give some feedback if you start using the tool 🙂
Yes, that was one of the tools I considered before making this. I don’t remember the precise details, but much like how GNU Stow is only good for versioning user dotfiles and not system config, etckeeper is good for storing either your system config files or your user dotfiles, but not both at the same time. copicat doesn’t care what you use it for, because you explicitly tell it all the locations and permissions that you want.
Yeah, it’s cool; people are mostly looking for something like your use case. I got Stow or Stow-like tools suggested a lot when exploring this. And when people understood what I wanted, they just suggested Ansible… which would work when starting from scratch, but wasn’t right for me. I made copicat mostly because I’m actually using it, and then decided to make it public because I really didn’t find anything like it.
Say you want to store /etc/ufw/sysctl.conf, which is owned by root:adm and has permissions 644, in your repo, but also /etc/ntfy/server.yml, which is owned by ntfy:ntfy with permissions 664. How do you keep track of that with GNU Stow?
That is a good question. I did consider using GNU Stow before building this, but there are a couple of problems with that.
Git doesn’t follow symlinks; it stores them as links in the repo, so your only option is to keep the files in the repo and symlink from the config file location to the repo. This is fine for user config files (like those from your .config folder), but if you want to keep system config files (like those from /etc), then the git process needs to run as root to modify those files, because symlinked files share permissions and ownership. And even then, git will always create everything as root, because it only tracks permission bits, not ownership, so you’ll need to constantly fix up the ownership of your files.
With this tool, instead, you explicitly tell it the ownership and permissions of the files, and it takes care of that for you (it still needs root permissions, of course).
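Just to illustrate the idea (a hypothetical sketch of the general approach, not copicat’s actual code or config format), deploying from a manifest that records owner/group/mode explicitly looks roughly like this:

```python
import grp
import os
import pwd
import shutil

# Hypothetical manifest: repo-relative path -> (destination, owner, group, mode).
MANIFEST = {
    "etc/ufw/sysctl.conf": ("/etc/ufw/sysctl.conf", "root", "adm", 0o644),
    "etc/ntfy/server.yml": ("/etc/ntfy/server.yml", "ntfy", "ntfy", 0o664),
}

def deploy(repo_root: str) -> None:
    """Copy tracked files out of the repo and restore ownership/permissions."""
    for rel, (dest, owner, group, mode) in MANIFEST.items():
        shutil.copyfile(os.path.join(repo_root, rel), dest)  # contents come from the repo
        uid = pwd.getpwnam(owner).pw_uid   # numeric ids looked up by name
        gid = grp.getgrnam(group).gr_gid
        os.chown(dest, uid, gid)           # git never records ownership
        os.chmod(dest, mode)               # nor the full mode bits

if __name__ == "__main__":
    deploy("/path/to/your/config-repo")    # needs root for the /etc targets
```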
Then the year of the FreeBSD desktop came many years ago, when Apple released macOS.


What disconnecting problem?


My system should be fully updated; I will try a wired Xbox 360 controller.


That’s a flattering thought, but I think that kind of improvement is a pipe dream.
The OS shenanigans might be the reason, though.
👀👀 uh? Do tell me more!