
  • 0 Posts
  • 21 Comments
Joined 3 years ago
Cake day: April 11th, 2022


  • You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy it at pretty absurd prices — I checked eBay, and it was $2000 for 8 GB of RAM…

    You’re acting as if setting up a ppc64 VM requires insane amounts of effort, when in reality it’s really trivial. It took me like a weekend to figure out how to set up a PowerPC QEMU VM and install FreeBSD in it, and I’m not at all an expert when it comes to VMs, QEMU, or PowerPC. I still use it to test software for big-endian machines:

    start.sh
    #!/usr/bin/env sh
    
    if [ "$(id -u)" -ne 0 ]; then
        printf "Must be run as root.\n"
        exit 1
    fi
    
    # Note: The "-netdev" parameter forwards the guest's port 22 to port 10022
    # on the host. This allows you to access the VM by SSHing to the host on
    # port 10022.
    #
    # For the initial install, append the following options to the command:
    #     -cdrom /path/to/installation_image.iso -boot d
    qemu-system-ppc64 \
        -cpu power9 \
        -smp 8 \
        -m 3G \
        -device e1000,netdev=net0 \
        -netdev user,id=net0,hostfwd=tcp::10022-:22 \
        -nographic \
        -hda /path/to/disk_image.img
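    Once the VM is up, you can connect to it from the host with something like this (the user name is a placeholder):
    
    ssh -p 10022 root@localhost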
    

    Also, you don’t usually compile stuff inside VMs (unless there is no other way). You use cross-compilation toolchains, which are just as fast as native toolchains except they spit out machine code for the architecture that you’re compiling for. Testing on real hardware is only really necessary if you’re, say, developing a device driver, or the hardware has certain quirks to it that are just not there in VMs.
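    For illustration, a minimal cross-compilation sketch, assuming an x86_64 Linux host with a powerpc64-linux-gnu cross toolchain installed (e.g. Debian’s gcc-powerpc64-linux-gnu package; hello.c is a placeholder source file):
    
    # Build a 64-bit big-endian PowerPC binary on the host, at native speed:
    powerpc64-linux-gnu-gcc -O2 -o hello-ppc64 hello.c
    # Sanity check: "file" should report an ELF 64-bit MSB (big-endian) PowerPC executable.
    file hello-ppc64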




  • But most importantly, it won’t work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.

    A lot of people don’t seem to be able to comprehend this. Even the most basic server hardware these companies have access to is many times more powerful than the best gaming PC you can get right now. And if things get too slow, they can always just spin up more nodes, which is trivial for them. If anything, they could use this as an excuse to justify higher production costs, which would make the resulting datasets and models more valuable.

    If this PoW crap becomes widespread, it will only make the Internet shittier and less usable for the average person in the long term. I despise the idea of running completely arbitrary computations just so some web admin somewhere can be relieved to know that the CPU spikes they see (coming from their shitty Node.js/Python framework that generates all the HTML+CSS on the fly, does a couple of round trips, and adds tens of lines of logs on every single request) are maybe, hopefully, caused by a real human and not a sophisticated web crawler.

    My theory is that people like to glaze Anubis because it’s linked to the general “anti-AI” sentiment now (thanks to tech journalism), and probably also because its mascot character is an anime girl and the developer/CEO of Techaro is a streamer/VTuber.



  • AI? Look, I helped a friend fix a new install. It wasn’t Linux’s fault; it was a setting in the BIOS that needed to be changed. But the AI had them trying all sorts of unrelated things and was never going to help. Use with a grain of salt.

    I’ve had the same experience, but sometimes it was even worse: sometimes the AI would confidently recommend things that might lead to breakage. Personally, I recommend against using AI to learn Linux. It’s just not worth it and will only give new users a false impression of how things work on Linux. People are much better off reading documentation (actual documentation, not SEO slop on random websites) or asking for help in forums.




  • This reads like it was written by some LLM.

    Enable journaling only if needed:
    tune2fs -O has_journal /dev/sdX

    Don’t ever disable journaling if you value your data.
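    For what it’s worth, you can check whether an ext filesystem currently has a journal with a quick read-only query (/dev/sdX is a placeholder, as in the quote):
    
    tune2fs -l /dev/sdX | grep has_journal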

    Disk Scheduler Optimization
    Change the I/O scheduler for SSDs:
    echo noop > /sys/block/sda/queue/scheduler
    For HDDs:
    echo cfq > /sys/block/sda/queue/scheduler

    Neither of these schedulers exists anymore unless you’re running a really ancient kernel. The “modern” equivalents are none and bfq. Also, this doesn’t even touch on the many tunables that bfq brings.

    Also, changing them like they suggest isn’t permanent; it only lasts until reboot. You’re supposed to set them via udev rules or some init script, as sketched below.
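    A sketch of the udev approach, as a hypothetical /etc/udev/rules.d/60-ioscheduler.rules (the file name is arbitrary); it picks bfq for rotational disks and none for non-rotational ones:
    
    # "bfq" for HDDs, "none" for SSDs/NVMe, keyed off the rotational flag.
    ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
    ACTION=="add|change", KERNEL=="sd[a-z]*|nvme[0-9]n[0-9]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"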

    SSD Optimization Enable TRIM:
    fstrim -v /
    Optimize mount settings:
    mount -o discard,defaults /dev/sdX /mnt

    None of this changes any settings like they imply. fstrim does a one-off TRIM of the given filesystem, and a mount option passed on the command line doesn’t survive a reboot. The persistent equivalents are below.
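    A sketch of how you’d actually make this persistent (fstrim.timer ships with util-linux on most distros; the fstab line reuses their /dev/sdX and /mnt placeholders):
    
    # Periodic TRIM (typically weekly) via the systemd timer:
    systemctl enable --now fstrim.timer
    # Or continuous TRIM via the discard option in /etc/fstab:
    # /dev/sdX  /mnt  ext4  defaults,discard  0  2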

    Optimized PostgreSQL shared_buffers and work_mem.
    Switched to SSDs, improving query times by 60%.

    No shit. Who would’ve thought that throwing more/better hardware at a problem makes things faster.

    EDIT: More bullshit that I noticed:

    Use ulimit to prevent resource exhaustion:
    ulimit -n 100000

    Again, this doesn’t permanently change the maximum number of open files. It only raises the limit for the current shell session and the processes started from it. What you’re actually supposed to do is edit /etc/security/limits.conf and then re-log the affected user(s) (or reboot) to apply the new limits, e.g.:
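    Hypothetical /etc/security/limits.conf entries (the user name appuser is made up):
    
    # <domain>  <type>  <item>   <value>
    appuser     soft    nofile   100000
    appuser     hard    nofile   100000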

    Use compressed swap with zswap or zram:
    modprobe zram echo 1 > /sys/block/zram0/reset

    This doesn’t even make any sense. Two separate commands got mashed into one line, and writing 1 to the reset node just tears the device back down; it doesn’t set up any swap. A working setup looks more like the sketch below.
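    A minimal zram swap sketch (the zstd algorithm and the 4G size are arbitrary example choices; run as root):
    
    modprobe zram
    # The compression algorithm has to be picked before the size is set:
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon --priority 100 /dev/zram0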