Ruaidhrigh

"I AM THE LAW."

Ruaidhrigh featherstonehaugh
  • 3 Posts
  • 148 Comments
Joined 2 years ago
Cake day: August 26th, 2022





  • As Linux is a multi-user system, software you install can run either as a system process or as a user process. Most other comments are assuming you installed something that’s running as a user process. On Arch, this could either be an autostart process (which is desktop agnostic) or something attached to Gnome or KDE’s startup.

    On Arch, systemd controls system services. There are two key CLI commands for working with systemd (there are also some GUIs, but you’ll have to find those yourself): systemctl, which controls services, and journalctl, which gets you logs.

    systemctl status will give you an overview of all the services on your system.

    sudo systemctl stop <service name> will temporarily stop a service; ... start ... starts it again. ... disable ... will stop it from starting when you reboot – this does not stop the running service, it only prevents it from being started again on reboot. As you’ve guessed, ... enable ... re-enables the service. ... status ... gives you a status for the process, and the last few lines of its log.

    systemd services can also be run at the user level; the commands are all the same, but you add --user every time to control the user services.
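    Putting those together – foo.service here is just a placeholder name, substitute the real service:

    ```shell
    # Stop a system service now and keep it from starting on the next boot
    sudo systemctl stop foo.service
    sudo systemctl disable foo.service
    # Or do both in one step
    sudo systemctl disable --now foo.service

    # The same operations on a user-level service (no sudo needed)
    systemctl --user disable --now foo.service
    systemctl --user status foo.service
    ```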

    journalctl gives you the system log; journalctl -xe jumps to the end of it, with extra explanatory text. You can also look at logs for previous boots, look at logs for a single service only (-u <servicename>), look at user processes (the same --user argument), follow a log to watch new messages roll in (-f) and a bunch of other stuff.
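    A few journalctl invocations worth keeping handy (foo.service is a placeholder name):

    ```shell
    journalctl -xe                    # end of the log, with extra explanatory text
    journalctl -b -1                  # log from the previous boot
    journalctl -u foo.service         # messages from one system service only
    journalctl --user -u foo.service  # same, for a user service
    journalctl -f                     # follow the log as new messages arrive
    ```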

    Systemd also controls scheduled jobs (that used to be handled by cron) with timers. Really, most Linux distros these days should be known as systemd/Linux.
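    As a rough sketch of what replaces a cron entry: a service/timer pair of user units (the unit names and script path here are made up):

    ```ini
    # ~/.config/systemd/user/backup.service
    [Service]
    ExecStart=%h/bin/backup.sh

    # ~/.config/systemd/user/backup.timer
    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target
    ```

    Enable it with systemctl --user enable --now backup.timer, and check it with systemctl --user list-timers.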

    I suspect what you’re looking for is sudo systemctl disable <service>, but if it’s a user process, check ~/.config/autostart and the auto-start section of your desktop’s settings tool.

    It will help if you can say which desktop you’re using (Gnome? KDE? LXDE? Or just a window manager?) and what the package is. If you give the package name, we can explain exactly how to disable it. Otherwise, you have the hodge-podge of answers below.




  • I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives at Docker Hub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the podman equivalent is. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
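    For example, where Docker lets you type a bare image name, podman wants the registry spelled out (the container name and port mapping here are just for illustration):

    ```shell
    # Docker: `docker run -d nginx` silently assumes docker.io
    # Podman: give the full registry path; it runs rootless, with no daemon
    podman run -d --name web -p 8080:80 docker.io/library/nginx:latest
    podman ps
    ```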

    Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
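    “Wiring things up by hand” mostly means putting containers into a shared pod instead of writing a compose file; a sketch, with all names and images as placeholders:

    ```shell
    # One pod gives the containers a shared network namespace,
    # much like a compose project does
    podman pod create --name myapp -p 8080:80
    podman run -d --pod myapp --name db docker.io/library/postgres:16
    podman run -d --pod myapp --name web docker.io/library/nginx:latest
    ```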

    I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.



  • They can’t, tho. There are two reasons for this.

    Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, as it’s very expensive. Cell towers can’t do triangulation by themselves, as it requires even more expensive hardware to measure angles; and trilateration doesn’t work without special equipment because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
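    To see the scale of the problem: radio propagates at the speed of light, roughly 300 meters per microsecond, so even a one-microsecond timing slop between the antenna and the recording computer turns into hundreds of meters of range error:

    ```shell
    # Range error in meters caused by a 1-microsecond timing error
    awk 'BEGIN { printf "%.0f\n", 299792458 * 1e-6 }'   # prints 300
    ```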

    An additional factor making trilateration harder (or even triangulation, in the rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is handling traffic for the phone, and for triangulation you need three. In addition to saving battery power, this saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.

    The reason phones can use cellular signal to improve accuracy is that each phone can do its own triangulation, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower – or maybe two – at a time). This is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and from the phone – it isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.

    TL;DR: Cell carriers usually can’t locate you with any real accuracy without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the hardware necessary to get accuracy better than hundreds of meters; they are loath to spend that money, and the legislation requiring them to do so no longer exists, or is no longer enforced.

    Source: me. I worked for several years in a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully prosecuted the only one or two competitors in the market, and yet we still were going out of business at the end as, one by one, cell companies found ways to argue out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.



  • You want Upspin. I want Upspin. But Upspin never went anywhere (it’s at least 7 years old… ever heard of it?), and I personally believe that it was because it’s a royal PITA to set up, and because the tutorial had instructions that expected you to be using GCS. If you wanted to do everything on your LAN, it was even harder.

    It’s got all of the features you mention, and it’s really the only system that does what it does; I really did try in the early days to get it running, and failed. It still carries the caveat:

    Upspin has rough edges, and is not yet suitable for non-technical users.

    and, at 7 years old, if it hasn’t gotten anywhere yet, I think it never will. Commits trickle in, but there’s really no significant progress in usability.

    Read the mission statement. It’s glorious. And then wallow in despair that nothing else does this, and it’s a zombie project.



  • I second Mint.

    Linux is a kernel; a distribution is a kernel plus user space tools. Most distributions are mainly configurations tuned for specific use cases - work, gaming, servers, etc. For example, the GUI part of any base OS constitutes over half of the disk space and memory use; if you’re running a server to serve web pages, you don’t need all that crap.

    Unlike Windows or OSX, there are literally dozens of GUIs you can choose from, and most distros focus on setting up one really well as the default.

    Note that you can add and, for the most part, remove any of this software on any distro, so you could start with a server distro or a gaming distro and, by adding or removing packages, end up with essentially the same system.

    The most significant difference between most distros is the package manager, the thing you use to install software and manage dependencies. Honestly, that’s not important at this point, but it will be the biggest distinction after you’ve been using Linux for a while.

    So: Mint. It’s a desktop/laptop distro, it’s designed to be easy to install and use, and you can mostly use it without ever having to drop to the command line. When my dad, who’s approaching 80, bought a laptop last year and didn’t want to register with Microsoft or give them his credit card, I walked him through over the phone downloading Mint, burning it to a USB stick, and installing it. Most of his questions were things like finding an image burner, which keyboard/layout to choose (during install), and which type of install to choose (HD partitioning); nothing he couldn’t have figured out by making guesses and mostly choosing the defaults. Since then, I’ve received one call about setting up the printer, which turned out to be a printer issue because his son-in-law had changed the WiFi password and not updated the printer (he obviously doesn’t use his printer much).

    Mint is an excellent first distro. It may not be your last distro, but it’s an easy conversion option. You don’t have to update the software on it often, it’s easy to use - familiar, for Windows folks - and really just an all-around great first choice.

    Three things I do recommend:

    1. Do not yield to the temptation to dual-boot. This is the single biggest source of problems, mainly b/c Windows likes to dick around with the boot partition and screw up Linux. If you can, just dedicate the machine to Linux.
    2. Do not use vfat or NTFS, thinking you can maximize Windows compatibility. You can use them on USB sticks, but just don’t put them on any of your HD partitions.
    3. Do not use the default partitioning, which puts (almost) everything in one big partition. Instead, make separate “root” and “home” partitions. You may need to find a tutorial – it isn’t hard, but I can’t explain it all here. You’ll want to leave 500GB for root, if you have it, and everything else for home. Root can be smaller, but no less than 100GB is my recommendation. Choose btrfs as the filesystem for both.
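    A rough sketch of that layout with parted, assuming a UEFI machine and an empty disk at /dev/sda (the device name is an assumption – double-check it, because this erases the disk). The Mint installer’s manual partitioning screen can do the same thing graphically:

    ```shell
    sudo parted /dev/sda -- mklabel gpt
    sudo parted /dev/sda -- mkpart ESP fat32 1MiB 513MiB   # EFI boot partition
    sudo parted /dev/sda -- set 1 esp on
    sudo parted /dev/sda -- mkpart root btrfs 513MiB 100GiB
    sudo parted /dev/sda -- mkpart home btrfs 100GiB 100%
    sudo mkfs.fat -F32 /dev/sda1
    sudo mkfs.btrfs /dev/sda2
    sudo mkfs.btrfs /dev/sda3
    ```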

    Suggestion 3 gets you two things: first, it makes changing distributions in the future far easier; all you’ll do is replace root and you’ll keep your home partition - all your personal, user files: music, docs, pictures, etc. Second, btrfs will let you use snapper, which is a tool that takes snapshots of your filesystem. Snapper is similar to Time Machine on OSX; there’s even a Time Machine-like GUI tool for browsing and accessing snapshots.

    Start with Mint. You can always change later, and if you partition your drive like I suggest, it is pretty easy to switch.




  • This.

    I use single partitions, because now that everything is SSD, failures confined to one part of the disk are almost nonexistent. I don’t fully understand the mechanics of why spinning disks were more prone to failures that could be isolated to a single partition, but when SSDs start to fail, it doesn’t seem to be anything that can be isolated by position.

    But I do isolate by subvolume, and for the reason you give: snapshots. I snapshot root only when something changes, but do hourly snapshots of home. It keeps data use more manageable. Nightly backups, and I never have more than 24 home snapshots at a time.
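    That policy maps onto a couple of lines in snapper’s per-config file (sh-style KEY="value" settings; the config name home is an assumption):

    ```shell
    # /etc/snapper/configs/home (excerpt)
    TIMELINE_CREATE="yes"        # take automatic timeline snapshots
    TIMELINE_LIMIT_HOURLY="24"   # keep at most 24 hourly snapshots
    TIMELINE_LIMIT_DAILY="0"     # let nightly backups cover anything older
    ```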



  • I’m 100% with you. I want a Light Phone with a changeable battery and the ability to run 4 non-standard phone apps that I need to have mobile: OSMAnd, Home Assistant, Gadget Bridge, and Jami. Assuming it has a phone, calculator, calendar, notes, and address book - the bare-bones phone functions - everything else I use on my phone is literally something I can do probably more easily on my laptop, and is nothing I need to be able to do while out and about. If it did that, I would probably never upgrade; my upgrade cycle is on the order of every 4 years or so as is, but if you took off all of the other crap, I’d use my phone less and upgrade less often.

    The main issue with phones like the Light Phone is that there are those apps that need to be mobile, and they often aren’t available there.


  • You point out that the comparison is unequal, but do you realize how unequal? Stockholm is the 102nd most expensive city in the world; San Francisco is the 13th. If you’re going to compare salaries, at least pick a city closer to Stockholm, like Cleveland, OH (still more expensive at rank 84). 2.4M people live in Stockholm’s greater metro area; San Fran is close to double that size at 4.6M. Cleveland has 3.6M in the greater metro area. Larger populations mean statistically larger employee pools, although economic focus plays a large part.

    But regardless, my point was that $1M doesn’t go very far, no matter where you hire your devs, if you at all care about quality. Even your $137k Swedish devs only get the Foundation one more developer, plus pocket change.

    And Sweden loses if we’re playing the “cheapest devs” game. If you’re hiring and you want to get the most resources for the least money, you’re going to look in Mexico or South America and get the advantage of more time zone overlap with the rest of your organization (if you’re in the US); or you’re going to look in one of the less-well-off EU countries, or even Africa, if you’re in the EU. Ukraine was a fantastic place to get great developers at good prices, although they’re unfortunately being fucked over by Russia at the moment. Heck, if your leadership is in the EU, SE Asia doesn’t look so bad time-shift wise, and India has a ton of tech hubs, still relatively cheap labor, and shitty labor laws. China has a large pool of highly skilled developers at relatively inexpensive prices.

    But we don’t want to play the cost game, right? It isn’t about minimizing salaries – although that’s certainly a consideration, it shouldn’t be the main decision factor. The size of the pool of quality developers is near the top, but vying with it (IMHO) is time zone overlap: maximizing, within reason, the number of hours your team shares, so that nobody has to work outside their normal hours to attend meetings, is critical. Language skill overlap is up there, too. Cost is no higher than fourth, and there might be other things that weigh more – such as whether you already have a presence in that country. Adding another Ukrainian developer to the couple of guys you already have there might make more sense than hiring someone isolated in Portugal, even if they’re cheaper.

    I keep straying off topic, though. Again: the fact is $1MM doesn’t buy you a lot of time. Six US devs for a year, maybe. Hiring in Sweden might buy an extra two months of time, which you might very well lose because those people are geographically detached, and now you have to contend with cross-national tax and labor laws, which makes your payroll and HR more expensive.