

With AppArmor, you can enable and disable profiles that restrict access to files and paths by name.
For network traffic, it’s possible to use dnsmasq to blacklist or whitelist domains.
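Roughly what that looks like (the profile name and domain here are just placeholders):

```sh
# AppArmor (tools from apparmor-utils)
sudo aa-status                                 # list loaded profiles and their modes
sudo aa-enforce /etc/apparmor.d/usr.bin.foo    # put a profile in enforce mode
sudo aa-disable /etc/apparmor.d/usr.bin.foo    # unload/disable it

# dnsmasq (in /etc/dnsmasq.conf): blacklist a domain and its subdomains
# by resolving them to an unroutable address
address=/ads.example.com/0.0.0.0
```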
Just a regular Joe.
Fake it 'til you make it… or not, whatever.
Heh. Tax returns and music should have been the giveaways, although I know someone who takes great satisfaction in taking every tax deduction they legally can, down to the last cent. :-P
TV and games sure, but embrace music - (try to) learn to play an instrument, and you will appreciate listening so much more!
I use labwc … it’s basically OpenBox as a Wayland compositor. Some things/programs work better than on Hyprland, others worse. No animations, just get-out-of-your-way functionality.
I found a patch that allows manual tiling and focus (eg. alt-tabbing just for windows in the left half of the screen), which is cool.
Scriptability isn’t there, but the code looks pretty clean.
The config file is similar to OpenBox’s, though I miss multi-layer keybindings.
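For a taste of the format, a keybind fragment from ~/.config/labwc/rc.xml (the key and command are just examples):

```xml
<!-- fragment of ~/.config/labwc/rc.xml: Super+Return launches a terminal -->
<keybind key="W-Return">
  <action name="Execute">
    <command>foot</command>
  </action>
</keybind>
```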
Another technique that helps is to limit the information shared with clients to need-to-know info. This can be computationally intensive server-side and hard to get right … but it can help in many cases. There are evolving techniques to do this.
In FPS games, there can also be streaming input validation: eg. accurate fire requires the right sequence of input events, and deviations from that can be used for cheat detection. At the point where cheats have to emulate human behaviour, with human-like reaction times, the value of cheating drops.
That’s the advanced stuff. Many games don’t even check whether people are running around out of bounds, flying through the air etc. Known bugs and map exploits don’t get fixed for years.
ALSA is the lowest level: it’s the kernel interface to the audio hardware. PipeWire provides a userspace service on top that shares that limited hardware between applications.
Try setting `export PIPEWIRE_LATENCY=2048/48000` before running an audio-producing application (from the same shell).
Distortion can sometimes be related to the audio buffers not getting filled in time, so increasing the buffering as above gives it more time to even out. You can try 1024 instead of 2048 too.
There is no doubt a way to set it globally, if it helps.
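Something like this should cover both cases (the global drop-in is from my reading of the PipeWire docs, untested here):

```sh
# per application, from the same shell:
export PIPEWIRE_LATENCY=2048/48000
mpv some-album.flac    # or whatever app is distorting

# globally, via a drop-in like
# ~/.config/pipewire/pipewire.conf.d/10-latency.conf:
#   context.properties = {
#       default.clock.quantum     = 1024
#       default.clock.max-quantum = 2048
#   }
```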
Good luck!
But not Fire tablets (kids profile) or Samsung TV or many others that Plex currently supports.
The Jellyfin Android phone app’s UI is a little weird at times, but it does work pretty well for me.
…
What I would adore from any app is an easy way to upload specific content and metadata via SFTP or to blob storage, accessible with auth (basic, token, or cloud), so I can more easily share it with friends/family/myself without having to host the whole damn library on the Internet or share my home Internet at inconvenient times.
Client-side encryption would be a great addition to that (eg. a required password that adds a key to the keyring). And of course native support in the Jellyfin/other apps for this. It could even be made to work with a JS & WASM player.
And contributions to codebases that have grown to meet the team’s own needs, maintained by people who similarly don’t have the time or space to refactor or review enough to enable effective contributions.
Enabling Innersource will be a priority for management for only two weeks, anyway, before they focus on something else. And if it even makes it into measurable goals, it will probably be gamed so it doesn’t ruin bonuses.
Do you also work for $GenericMultinationalCompany, perchance? Do you also know $BillFromFinance?
Yeah, at that point I wouldn’t worry. If someone has docker access on the server, it’s pretty much game over.
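To illustrate why, the classic demonstration (assuming the user can talk to the Docker daemon):

```sh
# with access to the docker socket, mounting the host's root filesystem
# and chrooting into it gives a root shell on the host:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```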
Encryption will typically be CPU-bound, while many servers will be I/O-bound (eg. file hosting, rather than computing stuff). So it will probably be fine.
Encryption can help with the case that someone gets physical access to the machine or hard disk. If they can login to the running system (or dump RAM, which is possible with VMs & containers), it won’t bring much value.
You will of course need to login and mount the encrypted volume after a restart.
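eg. with LUKS (the device and mount point are placeholders):

```sh
# after a reboot: unlock the LUKS volume (prompts for the passphrase) and mount it
sudo cryptsetup open /dev/sdb1 data
sudo mount /dev/mapper/data /srv/data
```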
At my work, we want to make sure that secrets are adequately protected at rest, and we follow good hygiene practices like regularly rotating credentials, time-limited certificates, etc. We tend to trust AWS KMS to encrypt our data, except for a few special use cases.
Do you have a particular risk that you are worried about?
Normally you wouldn’t need a secrets store on the same server that needs the secrets, since they often end up stored unencrypted by the service/app that uses them anyway. An encrypted disk might be better in that case.
That said, Vault has some useful features like issuing temporary credentials (eg. for access to AWS, DBs, servers) or certificate management. If you have these use-cases, it could be useful, even on the same server.
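A sketch with Vault’s AWS secrets engine (the mount path and role name here are placeholders):

```sh
# assuming the AWS secrets engine is mounted at aws/ with a role "my-role";
# Vault hands back short-lived AWS credentials that expire on their own
vault read aws/creds/my-role
```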
At my work, we tend to store deployment-time secrets either in protected GitLab variables or in Vault. Sometimes we use AWS KMS to encrypt values in config files, which we check in to git repositories.
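That flow looks something like this (the key alias and filename are hypothetical):

```sh
# encrypt a value with a KMS key; the resulting ciphertext is safe to check in
aws kms encrypt \
  --key-id alias/app-config \
  --plaintext fileb://db_password.txt \
  --query CiphertextBlob --output text > db_password.enc
```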
It typically takes a small core team to build the framework/architecture that enables many others to contribute meaningfully.
Most OSS projects get bugger-all contributions from outside the initial core team, having limited ability to onboard people. The biggest and most active (out of necessity or by design) have a contribution-friendly software architecture and process, and often deliberately organized communities (eg. K8S & CNCF) or major corporate sponsors filling that role.
Free Software and the resulting ecosystems seem to have a better chance of contributing to the common good over the long term. This is simply because most companies are beholden to their shareholders; at some point the urge to squeeze every last cent out of an opportunity comes to the forefront, and many initially well-intentioned efforts get poisoned.
Free Software licenses like the GPL help to protect our freedom and to set open standards, and are essential for the core technology stack.
When someone can get annoyed with some shitty software or its license terms and reimplement the core functionality in a few days/weeks/months … eventually someone will, and they’ll create decent free software that kills off the shitty alternatives, or even just a better commercial alternative. This only works because of open platforms & protocols.
One of the major challenges for consumers is finding good software today in the grey goo of projects and appstores. This harks back to OP’s point about curated collections of software. It’s also where the various foundations add value (CNCF, Linux Foundation, Apache) … along with “awesome X” gitlab repos, which are far better than random youtube videos or ad-riddled blogs or magazine articles.
The true strength is in the open interfaces and common protocols that enable competition and choice, followed by the free-to-use libraries that establish a foundation upon which we can build and iterate. This helps us to stay in control of our hardware, our data, and our destiny.
Practically speaking, there is often more value in releasing something as free software than in commercialising it or otherwise tightly controlling the source code … and that’s especially the case for smaller tools and libraries.
Many bigger projects (eg. linux kernel, firefox, kubernetes, apache*) help set the direction of entire industries, building new opportunities as they go, thanks to the standardization that comes from their popularity.
It’s also a reason why many companies release software as open source, especially in the early days, establishing themselves as THE leader…for a while at least (eg. Docker Inc, Hashicorp).
The Rancher or Kubernetes Slack servers might be the best place to target your questions. They’re more interactive, which would probably be more effective than posting Qs all over the Internet.
wg-quick takes a different approach, using an ip rule to send all traffic (except its own) to a different routing table containing only the WireGuard interface. I topped it up with iptables rules to block everything except DNS and the WireGuard UDP port on the main interface. I also disabled IPv6 on the main interface, to avoid any non-RFC1918 addresses appearing in the (in my case) container at all.
edit: you can also do ip rule matching based on uid, such that you could force all non-root users to use your custom route table.
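Roughly what that looks like (interface, port, uid range, and table number are placeholders for my setup):

```sh
# allow only DNS and the WireGuard UDP port out of the main interface
iptables -A OUTPUT -o eth0 -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o eth0 -p udp --dport 51820 -j ACCEPT
iptables -A OUTPUT -o eth0 -j REJECT

# disable IPv6 on the main interface
sysctl -w net.ipv6.conf.eth0.disable_ipv6=1

# edit: force regular (non-root) uids onto a custom route table
ip rule add uidrange 1000-65535 lookup 100
```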
It might be a simple issue like IP forwarding not being enabled, or host-level iptables configuration, or perhaps weird and wonderful routing (eg. WireGuard or other VPNs).
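The usual quick checks, for example:

```sh
sysctl net.ipv4.ip_forward               # 1 = forwarding enabled
sudo sysctl -w net.ipv4.ip_forward=1     # enable it for this boot
sudo iptables -L FORWARD -v -n           # anything dropping forwarded packets?
```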
Your k3s/calico networking is likely screwed. Try creating a new cluster with flannel instead.
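Flannel is the k3s default, so a plain install with no custom CNI flags should get you there (the standard command from the k3s docs):

```sh
# fresh k3s cluster with the default flannel CNI
curl -sfL https://get.k3s.io | sh -
```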
What do you have against the project and the people behind it? It sounds personal.
There are plenty of non-commercial Linux distributions. Some managed better than others. Some generic, some with niches. OpenWRT is a favourite of mine.