

Does anyone out there still use a 32-Bit Computer as their daily driver? The most recent 32-Bit hardware I’ve used as a Desktop was an RPi3 and running a modern web browser on that thing would almost cook the chip.


You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy the real hardware at pretty absurd prices. I checked eBay, and it was $2000 for a machine with 8 GB of RAM…
You’re acting as if setting up a ppc64 VM requires insane amounts of effort, when in reality it’s really trivial. It took me like a weekend to figure out how to set up a PowerPC QEMU VM and install FreeBSD in it, and I’m not at all an expert when it comes to VMs or QEMU or PowerPC. I still use it to test software for big endian machines:
#!/usr/bin/env sh
if [ "$(id -u)" -ne 0 ]; then
    printf "Must be run as root.\n"
    exit 1
fi
# Note: The "-netdev" parameter forwards the guest's port 22 to port 10022 on the host.
# This allows you to access the VM by SSHing the host on port 10022.
# For the initial install, uncomment the trailing "-cdrom ... -boot d" line to boot the ISO.
qemu-system-ppc64 \
    -cpu power9 \
    -smp 8 \
    -m 3G \
    -device e1000,netdev=net0 \
    -netdev user,id=net0,hostfwd=tcp::10022-:22 \
    -nographic \
    -hda /path/to/disk_image.img \
    # -cdrom /path/to/installation_image.iso -boot d
Also, you don’t usually compile stuff inside VMs (unless there is no other way). You use cross-compilation toolchains, which are just as fast as native toolchains except they emit machine code for the architecture you’re targeting. Testing on real hardware is only really necessary if you’re developing something like a device driver, or the hardware has quirks that just aren’t there in VMs.
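For example, a minimal cross-compile sketch (hello.c is a stand-in for whatever you’re building; on Debian the toolchain comes from the gcc-powerpc64-linux-gnu package, other distros ship equivalents):
# Build a 64-bit big-endian PowerPC binary on an x86_64 host.
powerpc64-linux-gnu-gcc -O2 -o hello.ppc64 hello.c
# Sanity check: should report something like "ELF 64-bit MSB executable, 64-bit PowerPC".
file hello.ppc64
# Copy it into the VM through the forwarded SSH port from the script above:
scp -P 10022 hello.ppc64 user@localhost: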
I feel like the people who don’t look at PKGBUILDs and install hooks and just hit Y on everything are the same people who spam “Next” and “Accept” on Windows Installers from random websites.


I don’t use Gentoo but I still frequent the Gentoo Wiki and pick apart packages because it’s such a great resource for OpenRC.


But most importantly, it won’t work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
A lot of people don’t seem to be able to comprehend this. Even the most basic Server Hardware that these companies have access to is many times more powerful than the best Gaming PC you can get right now. And if things get too slow they can always just spin up more nodes, which is trivial to them. If anything, they could use this as an excuse to justify higher production costs, which would make resulting datasets and models more valuable.
If this PoW crap becomes widespread it will only make the Internet more shitty and less usable for the average person in the long term. I despise the idea of running completely arbitrary computations just so some Web Admin somewhere can be relieved to know that the CPU spikes they see coming from their shitty NodeJS/Python Framework that generates all the HTML+CSS on-the-fly, does a couple of roundtrips and adds tens of lines of log on every single request, are maybe, hopefully caused by a real human and not a sophisticated web crawler.
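For reference, the busywork these challenges boil down to is more or less a hash preimage search like this (all parameters made up):
# Toy proof-of-work: find a nonce such that sha256(challenge + nonce)
# starts with $difficulty zero hex digits.
challenge="example-token"
difficulty=4
nonce=0
until printf '%s%d' "$challenge" "$nonce" | sha256sum | grep -q "^0\{$difficulty\}"; do
    nonce=$((nonce + 1))
done
echo "solved: nonce=$nonce"
A scraper farm can grind through this natively (or on GPUs) orders of magnitude faster than a phone’s browser doing it in JavaScript, which is exactly the asymmetry described above.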
My theory is people like to glaze Anubis because it’s linked to the general “Anti-AI” sentiment now (thanks to tech journalism), and also probably because its mascot character is an anime girl and the Developer/CEO of Techaro is a streamer/vtuber.


NVK doesn’t support older cards though, last time I checked. Pretty funny how I ended up with a stack of paperweights because NVIDIA dropped support and Nouveau/NVK can’t get their shit together; instead of focusing on existing hardware, they’d rather keep chasing the “latest and greatest”.


AI? Look, I helped a friend fix a new install. It wasn’t Linux’s fault; it was a setting in the BIOS that needed to be changed. But the AI had them trying all sorts of unrelated things and was never going to help. Use it with a grain of salt.
I’ve had the same experience, but sometimes it was even worse: sometimes the AI would confidently recommend doing things that might lead to breakage. Personally, I recommend against using AI to learn Linux. It’s just not worth it and will only give new users a false impression of how things work on Linux. People are much better off reading documentation (actual documentation, not SEO slop on random websites) or asking for help in forums.
arch-meson is a small wrapper script for meson:
$ cat /usr/bin/arch-meson
#!/bin/bash -ex
# Highly opinionated wrapper for Arch Linux packaging
exec meson setup \
    --prefix /usr \
    --libexecdir lib \
    --sbindir bin \
    --buildtype plain \
    --auto-features enabled \
    --wrap-mode nodownload \
    -D b_pie=true \
    -D python.bytecompile=1 \
    "$@"


This reads like it was written by some LLM.
Enable journaling only if needed:
tune2fs -O has_journal /dev/sdX
Don’t ever disable journaling if you value your data. (Ironically, the command they show enables the journal; disabling it would be tune2fs -O ^has_journal.)
Disk Scheduler Optimization
Change the I/O scheduler for SSDs:
echo noop > /sys/block/sda/queue/scheduler
For HDDs:
echo cfq > /sys/block/sda/queue/scheduler
Neither of these schedulers exists anymore unless you’re running a really ancient kernel. The “modern” equivalents are none and bfq. And this doesn’t even touch on the many tunables that bfq brings.
Also, changing them the way they suggest isn’t permanent. You’re supposed to set them via udev rules or an init script, e.g. something like the rule below.
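A sketch of such a rule (the file name is arbitrary; matching on the rotational flag is one common way to tell SSDs and HDDs apart):
# /etc/udev/rules.d/60-ioschedulers.rules
# NVMe and SATA SSDs get "none", rotational disks get "bfq".
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"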
SSD Optimization
Enable TRIM:
fstrim -v /
Optimize mount settings:
mount -o discard,defaults /dev/sdX /mnt
None of this changes any settings like they imply; fstrim trims once and exits, and a mount run by hand only lasts until the next reboot.
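For comparison, persistent TRIM is usually set up like this (assuming a systemd distro; the fstab line is only an illustration):
# Periodic TRIM via the timer shipped with util-linux:
systemctl enable --now fstrim.timer
# Or continuous TRIM by adding "discard" to the filesystem's /etc/fstab entry:
# UUID=xxxx-xxxx  /  ext4  defaults,discard  0 1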
Optimized PostgreSQL shared_buffers and work_mem.
Switched to SSDs, improving query times by 60%.
No shit. Who would’ve thought that throwing more/better hardware at a problem would make things faster.
EDIT: More bullshit that I noticed:
Use ulimit to prevent resource exhaustion:
ulimit -n 100000
Again, this doesn’t permanently change the maximum number of open files; it only raises the limit for the current shell session. What you’re actually supposed to do is edit /etc/security/limits.conf and then relog the affected user(s) (or reboot) to apply the new limits.
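A sketch of what that looks like (user name and values made up; this relies on pam_limits, which most distros enable by default):
# /etc/security/limits.conf
#<domain>  <type>  <item>   <value>
alice      soft    nofile   100000
alice      hard    nofile   100000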
Use compressed swap with zswap or zram:
modprobe zram echo 1 > /sys/block/zram0/reset
This doesn’t even make any sense.
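For reference, a working zram swap setup looks more like this (device size and compression algorithm are just examples):
# Create a 4G zstd-compressed zram device, format it as swap and enable it.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0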
Imagine defending this guy. I will never understand people who like influencers.


That’s literally what I’m saying: it’s fine as long as there isn’t any unwritten data in the cache when the machine crashes or suddenly loses power. RAID controllers have a battery-backed write cache for this reason, because traditional RAID5/6 has the same issue.
How’s the performance compared to other filesystems? In the last benchmark I saw, it performed pretty poorly compared to btrfs.


I had a drive where data would get silently corrupted after some time, no matter what filesystem was on it. The machine’s RAM tested fine. It turned out the write cache on the drive was bad! I was able to “fix” it by disabling the cache via hdparm until I could replace that drive.
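In case anyone needs it, disabling a drive’s write cache is a one-liner (note it doesn’t persist across reboots):
# -W 0 turns the drive's write cache off, -W 1 turns it back on.
hdparm -W 0 /dev/sdX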


BTRFS RAID5/6 is fine as long as you don’t run into a scenario where your machine crashes while there is still unwritten data in the cache. Also, write performance sucks and scrubbing takes an eternity.
I agree. There is literally 0 reason to buy anything from Apple when there are much better and much cheaper options that are already well supported by GNU/Linux. I will never understand people who will go out of their way to waste money on the next big thing from Apple only to get Linux on it.


On distros w/o systemd there is always syslog-ng. s6 also has its own logging system.
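For instance, s6’s logger is just a program you pipe a service into (the daemon name and paths here are made up):
# Keep up to 10 rotated log files of ~1 MB each in the given directory.
some-daemon 2>&1 | s6-log n10 s1000000 /var/log/some-daemon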


It’s not necessary, but a good thing to have if something goes wrong and you want to debug/monitor something. It’s really up to you and your needs.
It’s crazy how many people are just OK with running completely proprietary code that monitors everything that happens on the machine and phones home all the time, all with the promise to “catch cheaters”.
Fortunately every game I’ve seen so far with such malware is just a generic competitive multiplayer dopamine farm that targets the Streamer crowd.
“But all my friends are playing it!” - Is it really worth it to run omnipresent malware on your machine just to play the currently trending game for a few weeks until you move on to the next?