𞋴𝛂𝛋𝛆

  • 17 Posts
  • 150 Comments
Joined 3 years ago
Cake day: June 9th, 2023


  • Just be aware that W11 is secure boot only.

    There is a lot of ambiguous nonsense about this subject from people that lack a fundamental understanding of Secure Boot. Secure Boot is not supported by the Linux kernel at all. It is handled by systems the distros build outside of the kernel, and these differ between distros. Fedora does it best IMO, but Ubuntu has an advanced system too. Gentoo has tutorial information about how to set up the system properly yourself.

    The US government also has a handy PDF about setting up Secure Boot properly. The subject is somewhat complicated by the fact that the UEFI bootloader graphical interface standard is only a reference implementation, with no guarantee that it is fully implemented (especially in consumer grade hardware). Last I checked, Gentoo has the only tutorial guide on using an application called KeyTool to boot directly into the UEFI system, bypassing the GUI implemented on your hardware, so that you can set your own keys manually.

    If you choose to try this, some guides will suggest using a stronger encryption key than the default. The worst that can happen is that the new keys get rejected and a default is restored, which can make it seem like your system does not support custom keys. Be sure to try again with the UEFI default in your bootloader GUI implementation. If it still does not work, you must use KeyTool.

    The TPM is a small physical hardware chip. Inside is a register with a secret hardware encryption key hard coded into it. This secret key is never accessible in software. Instead, it is used to encrypt new keys, to hash against those keys to verify that a given software package is untampered with, and to decrypt information outside of the rest of the system using Direct Memory Access (DMA) into DRAM/system memory. This effectively means a piece of software is able to create secure connections to the outside world using encrypted communications that cannot be read by anything else running on your system.
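    The measurement half of this can be shown with a toy sketch (my own illustration in Python, not a real TPM API): a Platform Configuration Register can never be written directly, only extended, so tampering anywhere in the measured chain changes the final value.

```python
import hashlib

# Toy model of TPM "measured boot". A PCR starts zeroed and can only be
# extended: new = SHA256(old || SHA256(measurement)). The component list
# here is invented for illustration.

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """One PCR extend step."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_chain(components):
    pcr = bytes(32)  # PCRs reset to all zeros
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measure_chain([b"firmware", b"bootloader", b"kernel"])
evil = measure_chain([b"firmware", b"bootloader", b"kernel-tampered"])

assert good != evil                                             # tampering anywhere is detectable
assert good == measure_chain([b"firmware", b"bootloader", b"kernel"])  # and reproducible
```

    The point is that the secret key never leaves the chip; only the running totals of hashes do, which is what makes remote verification of "untampered" possible.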

    As a more tangible example, Google Pixel phones are the only ones Graphene OS supports, largely because of their dedicated hardware security chip (Titan M), which fills this role. Graphene OS leverages that chip for verified boot of the encrypted device operating system, and to create the secure encrypted communication path that manages Over The Air software updates automatically.

    There are multiple keys in your UEFI bootloader on your computer. The main key, the Platform Key, belongs to the hardware manufacturer. Anyone with this key is able to change all software from UEFI down in your device. These keys occasionally get leaked or compromised too, and often the issue is never resolved. It is up to you to monitor and update… - as insane as that sounds.

    The next level of keys below is the package key for an operating system. It cannot alter UEFI software, but does control everything that boots after it. This is typically where the Microsoft key is the default, meaning Microsoft effectively controls what operating system boots. Microsoft has signed what is called a shim for Ubuntu and Fedora. Last I heard, the relevant certificates expired in October 2025 and had to be refreshed, or may not have been reissued by M$. The shim is like a pass for these two distros to work under the M$ key. In other words, vanilla Ubuntu and Fedora Workstation can just work with Secure Boot enabled.
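    You can check which mode your machine is actually in from Linux by reading the SecureBoot EFI variable, the same data `mokutil --sb-state` reports. A minimal sketch (the GUID is the standard EFI_GLOBAL_VARIABLE one; the parsing is split out so it can run anywhere):

```python
from pathlib import Path
from typing import Optional

# Sketch: read the Secure Boot state straight from efivarfs on Linux.
# In efivarfs the first 4 bytes of a variable are attribute flags; the
# SecureBoot payload is a single 0/1 byte after that.

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"  # EFI_GLOBAL_VARIABLE GUID
)

def parse_efivar_flag(raw: bytes) -> bool:
    """Decode an efivarfs blob: skip 4 attribute bytes, read the flag byte."""
    if len(raw) < 5:
        raise ValueError("efivar blob too short")
    return raw[4] == 1

def secure_boot_enabled() -> Optional[bool]:
    """True/False on UEFI systems, None when the variable does not exist."""
    if not SECUREBOOT_VAR.exists():
        return None  # legacy BIOS boot, or efivarfs not mounted
    return parse_efivar_flag(SECUREBOOT_VAR.read_bytes())

# The parser itself can be exercised without real firmware:
assert parse_efivar_flag(b"\x06\x00\x00\x00\x01") is True
assert parse_efivar_flag(b"\x06\x00\x00\x00\x00") is False
```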

    All issues in this space have nothing to do with where you put the operating systems on your drives. Stating nonsense about dual booting a partition is the kind of ambiguous misinformation that causes all of the problems. It is irrelevant where the operating systems are placed. Your specific bootloader implementation may be optimized to boot faster by jumping into the first entry it finds. That is not the correct way for Secure Boot to work. It is supposed to check all bootable code and delete anything without a signed encryption key. People that do not understand this system are playing a game of Russian Roulette. Their one drive may get registered first in UEFI 99% of the time due to physical hardware PCB design and layout. The one time some random power quality issue shows up, due to a power transient or whatnot, suddenly their OS boot entry is deleted.

    The main key and the package keys are the encryption key owners of your hardware. People can literally use these keys to log into your machine if they have access to them. They can install or remove software from this interface. You have the right to take ownership of your machine by setting these yourself. You can set the main key, then use the Microsoft system online to get a new package key to run W10 with Secure Boot, or W11. You can sign any distro or other bootable code with your main key. Other than the issue of one of the default keys from the manufacturer or Microsoft getting compromised, I think the only vulnerabilities Secure Boot protects against, in terms of third party issues, are physical access based attacks. The system places a lot of trust in the manufacturer and Microsoft, and they are the owners of the hardware, able to lock you out of it, surveil you, or theoretically exploit you with stalkerware. In practice, these connections still use DNS on your network. If you have not disabled or blocked ECH, like cloudflare-ech.com, I believe it is possible for a server to make an ECH connection and then create a side channel connection that would not show up on your network at all. Theoretically, I believe Microsoft could use their key on your hardware to connect to your machine through ECH after it connects to any of their infrastructure.

    Then the TPM chip becomes insidious and has the potential to create a surveillance state, as it can be used to further encrypt communications. The underlying hardware in all modern computers has another secret operating system too, so this traffic does not even need to cross your machine. For Intel, this system is called the Management Engine. In AMD it is the Platform Security Processor. In ARM it is called TrustZone.

    Anyway, all of that is why the Linux kernel does not directly support Secure Boot, what the broader machinery is, and the abstracted broader implications of why it matters.

    I have had a dual boot W11 partition on the same drive with Secure Boot for the last 2 years without ever having an issue. It is practically required if you want to run CUDA stuff. I recommend owning your own hardware whenever possible.




  • Supply chain is important for broad scope adoption, but it is an unsolvable problem.

    I was the buyer for a chain of bike shops. Unfortunately, distribution is the market bottleneck that is nearly impossible to break through.

    So, at scale, no one is capable of predicting global demand accurately for any type of retail. Almost all products that are sold by small retailers are made and sold by the real manufacturer to distributors for 30-35% of MSRP. These distributors then wholesale the inventory to retailers with a 15-20% markup. This is absolutely necessary because it distributes the burden of inventory commitment to a hierarchy where local conditions are accounted for. The distributor is actually buying the inventory and taking on the risk of overburden that does not sell.

    Likewise with retail. The standard markup is called keystone, which means a 50% margin: the retailer pays half of the selling price. Most retailers will barely break even if the whole store averages 40% margins. Retail property and labor are extremely expensive and hard. In almost all small businesses, overburden is what kills them eventually. Overburden is inventory that does not sell and becomes unmarketable over time.
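    To make the margin arithmetic concrete (the dollar figures are hypothetical, the percentages are the ones above):

```python
def margin(price: float, cost: float) -> float:
    """Gross margin: share of the selling price that is not cost."""
    return (price - cost) / price

def markup(price: float, cost: float) -> float:
    """Markup: profit expressed as a fraction of cost."""
    return (price - cost) / cost

# Keystone pricing: sell at double the wholesale cost.
wholesale = 42.0                # hypothetical wholesale cost to the retailer
keystone_price = 2 * wholesale
assert margin(keystone_price, wholesale) == 0.50   # keystone = 50% margin
assert markup(keystone_price, wholesale) == 1.00   # ...which is a 100% markup

# The distribution leg from above: manufacturer sells at ~35% of a $100 MSRP,
# the distributor wholesales it onward with a ~20% markup.
msrp = 100.0
dist_cost = 35.0                # $35 to the distributor
dist_price = dist_cost * 1.20   # $42 to the retailer
assert round(margin(dist_price, dist_cost), 3) == 0.167

# A store averaging 40% margin is selling, on average, below keystone:
assert 0.40 < margin(keystone_price, wholesale)
```

    The distinction between margin (on price) and markup (on cost) is the usual point of confusion; keystone is 100% markup but only 50% margin.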

    Another aspect that is not intuitive here is that no matter how you select inventory, you will never sell that entire selection on a single platform. If you are not actively attempting to recuperate cash flow from overburden, the business will slowly drown. Sales in retail are not about overburden at all. Statistically, getting new people in the front door is the only metric that matters. Loss leaders and sales are about traffic not overburden. A good buyer plans and negotiates their loss leaders for sales within their preseason ordering.

    Over the last couple of decades, more and more products have been created that bypass the big distributors. Mostly this is because the product is just not worth the markup required for scaled independent distribution and middleperson margins. However, now there is an issue of global demand where the manufacturer has the impossible task of financing scale and the inherent risk. If the product is not made at very large scale, it is uncompetitive to manufacture. You need someone willing to take that risk. As a person that has made these types of decisions at smaller scales of a few million dollars: go bet all that money on a hand of single deck blackjack, because those 47-48% winning odds are outstanding by comparison.

    Retailers place preseason order commitments to get slightly better margins, but primarily because the distributors are more like banks in retail. They offer credit and repayment options that mean the retailer is not required to pay up front in cash. With bicycle stuff, I placed all of my preseason orders between September and October for the following year. Stuff started arriving between December and January. I then had a first payment due in April, but I had to pay it back by the end of July. So I had to predict the summer market a year in advance and have all of my plan detailed by autumn.

    This is how mom and pop independent retail actually works. It was not competitive with big box retail because those are not actually retailers. Those are rogue distributors selling directly to the public. The actual products are still the same 30-35% of MSRP.

    The worst product trends in retail have been the tendency for companies to market themselves as exceptions. I despised GoPro in my stores. The margin on the cameras was 20% and each one cost a fortune. They constantly tried to deprecate models too. They pitched that all the accessories were keystone and made up for the terrible return on investment. In reality, that inventory of accessories was overburden suicide: niche garbage for special use cases.

    All electronic devices people want have fallen into this trap of low margins that are impossible for sustainable retail. When you see factory direct stores, that means the product has no margin for scale distribution. It is a neo feudalistic, brute force approach where someone is dumb enough to believe they will be able to predict global demand indefinitely without making any major errors. The public is dumb enough to follow along. Few realize the enormous power that is consolidated from cutting out the democracy of distributors and retailers. This consolidated monolith will eventually enslave everyone when they must overcome the inevitable mistakes they make. They will not just eat the loss or go out of business because they own your right to choose in a market without competition. It is surrendering choice to the dictator that makes their own demand by force.

    Yeah, so, we don’t want that. - said no one. What you want is irrelevant. The lowest common denominator dictates the market. Democracy requires a well informed and skeptical citizenry. We live in an era with the smallest information bottleneck in several centuries. Search results are not deterministic, and there are only two relevant web crawlers that all providers query. Two people searching on separate devices with identical queries will get different results. All major media is owned by less than a dozen people. You have absolutely no chance of informing the citizenry to make better decisions that may cost a good bit more money. People cringe if you tell them they are slaves, but do nothing if the word citizen is redefined as functionally equivalent.

    The only way you will ever see such a product sold in any traditional independent retail scenario is if some exceptionally altruistic billionaire were to choose to fund the thing with no concern over the loss. The only way to be competitive in price is to build at competitive manufacturing scale. If someone else is doing this and using factory direct retail to stay in business on just a 30% gross margin in total, you will never find the necessary slice for regional distribution and retail. Your device will be $1000 at MSRP to their $600 equivalent. There is no solution to this issue. It is raw capitalism where the biggest fish makes the rules. The only counterbalance in the system is an informed citizenry. This is why information and education are all that really matter. If the average person is too stupid for independent thought, it is the ultimate pwn, as citizen comes to mean slave, and the peasantry are too stupid to recognize the situation where they own nothing and have no outlet to tell anyone or hear the plight of all the others.



    The article… Locking the bootloader is ceasing to sell a product that can be owned. It is a rental controlled by someone else actively, not just passively through a proprietary orphan kernel. It is action taken to filter, manipulate, and exploit. That is actually a soft coup against democracy itself, if you grasp the role of informed autonomous citizens and the reason ignorance is never an excuse in a democracy. The mechanism of trust is a fascist tool that is diametrically opposed to a liberal democracy. Trust in the chain of information flowing to a citizen is to subjugate and steal the right of citizenship. This is fundamentally simple with enormous implications. The naive stupidity of people blind to this fundamental issue is truly sad. To purchase one of these devices, once the transition to actively exploiting them is made, is to surrender democracy to a traitorous pirate. It is no small thing to shrug off. This is a pivotal moment and issue that will create an unimaginably dark dystopia. The only variable with coups is the speed at which they are executed. This is a foundational cornerstone taken slowly, where people are far too stupid to see it and resist in time to make a difference.






    The UEFI boot system is tricky and you need to get along with Secure Boot to do this. Secure Boot is handled outside of the Linux kernel. Both Fedora and Ubuntu have systems for this. Fedora uses the Anaconda system, and I believe they do it best. I have had a W11 partition for 2 years and barely ever use it. It can’t even get on the internet with my firewall setup, but it is there, and I never had any issues the 3 times I logged into it.

    I think all of the Fedora systems support the shim key and Secure Boot, but I know Workstation does. For Ubuntu, I think it is just the regular vanilla Ubuntu desktop that the shim supports. This may be somewhat sketchy with Nvidia, or maybe not. Nvidia “open sourced” their kernel code, but the actual nvcc compiler required to build the binaries is still proprietary crap.

    I have a 3080Ti gaming laptop. It isn’t half bad, with 16 GB of video RAM from all the way back in 2021. Nvidia is artificially holding back the VRAM because of monopoly nonsense. The new stuff has very little real consumer value as a result, at least for the AI stuff I run. The hardware is a little faster, but more VRAM is absolutely critical, and new stuff that is the same or worse than what I have from 3 generations and nearly 5 years ago is ridiculous.

    The battery life blows and the GPU likely won’t even work on battery. It will get donkey balls hot with AI workloads, especially any kind of image gen. This results in lots of thermal throttling. All AI packages run as servers on your network. If you are thinking along these lines of running your own models, get a tower and run the thing remotely.

    I manage, and need the ergonomics for physical disability reasons, but I still would prefer to have a separate tower to run models from.

    Anyways, you can sign your own UEFI keys to use any distro, but this can be daunting for some people. The US defense department has a good PDF guide on setting your own keys. The UEFI bootloader for the machine may not have all key signing features implemented. There is a way to boot into UEFI directly and set the keys manually, but it is not easy to find great step by step guides on how to do it. Gentoo has a tutorial on this, but it assumes a high level of competency.

    Other than signing your own keys, the shim mentioned is a special bootloader signed by Microsoft for the principal maintainer of the distro. It slides under the Microsoft key to keep Secure Boot enabled.

    If you boot any secure boot enabled OS, the bootloader is required to delete any bootable unsigned code it finds. It does not matter if it is a shimmed Fedora or W11. If you have any other OS present in the boot list, it should be deleted. W11 is SB only, and this is where the real issues arise.


  • Are you insane? Debian is a base distro like any other and runs more hardware than any other. It has all of the bootstrapping tools to get hardware working.

    Canonical is a server company and Ubuntu server is literally the product.

    Arch is absolute garbage for most users unless you have a CS degree or you have entirely too much time on your hands and don’t mind an OS as your life project. Arch abhors tutorial content in all documentation and therefore dumps users into a rabbit hole regularly. Pacman is the worst package manager as it will actively break a system and present the user with the dumbest of choices at random because the maintainers are ultimately sadistic and lackadaisical. Arch is nearly identical to Gentoo with Arch binaries often based on Gentoo builds, yet Gentoo provides relevant instruction and documentation with any changes that require user intervention and does so at a responsible and ethical level that shows kindness, respect, and consideration completely absent from Arch. Arch is a troll by trolls for trolls. I’m more than capable of running it now, but I would never bother with such inconsiderate behavior.





  • Oh wow, so we are in kinda similar places but from vastly different paths and capabilities. Back before I was disabled I was a rather extreme outlier of a car enthusiast, like I painted (owned) ported and machined professionally. I was really good with carburetors, but had a chance to get some specially made direct injection race heads with mechanical injector ports in the combustion chamber… I knew some of the Hilborn guys… real edgy race stuff. I was looking at building a supercharged motor with a mini blower and a very custom open source Megasquirt fuel injection setup using a bunch of hacked parts from some junkyard Mercedes direct injection Bosch diesel cars. I had no idea how complex computing and microcontrollers are, but I figured it couldn’t be much worse than how I had figured out all automotive systems and mechanics. After I was disabled 11 years ago riding a bicycle to work while the heads were off of my Camaro, I got into Arduino and just trying to figure out how to build sensors and gauges. I never fully recovered from the broken neck and back, but am still chipping away at compute. Naturally, I started with a mix of digital functionality and interfacing with analog.

    From this perspective, I don’t really like API-like interfaces. I often have trouble wrapping my head around them. I want to know what is actually happening under the hood. I have a ton of discrete logic for breadboards and have built stuff like Ben Eater’s breadboard computer. At one point I played with CPLDs in Quartus. I have an ICE40 around but had only barely gotten the open source toolchain running before losing interest and moving on to other stuff. I prefer something like FlashForth or MicroPython running on a microcontroller so that I am independent of some proprietary IDE nonsense. But I am primarily a Maker and prefer fabrication or CAD over programming. I struggle to manage complexity, and I lack the advanced algorithms I would know if I had a formal CS background.

    So from that perspective, what I find baffling about RISC under CISC is specifically the timing involved. Your API mindset is likely handwaving this as a black box, but I am in this box. Like, I understand how there should be a pipeline of steps involved for the complex instruction to happen. What I do not understand is the reason for, or the mechanisms that separate, CISC from RISC in this pipeline. If my goal is to do A…E, and A-B and C-D are RISC instructions, I have a ton of questions. Like why is there still any divide at all for x86 if direct emulation is a translation and subdivision of two instructions? Or how is the timing of this RISC decomposition as efficient as if the logic were built as an integrated monolith? How could it ever be more efficient? Is this incompetent cost cutting, a backwards compatibility constraint, or some fundamental issue with the topology, like RLC issues with the required real estate on the die?
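    For what it’s worth, the split being asked about can be sketched as a toy front-end decoder (the mnemonics and micro-op names are invented for illustration): one memory-destination CISC instruction cracks into a load, an ALU op, and a store, while a register-register op is already a single micro-op.

```python
# Toy sketch of how a CISC-style instruction can be cracked into RISC-like
# micro-ops, roughly the way modern x86 front ends do. Everything here
# (mnemonics, the t0 temp register, micro-op names) is invented.

def decode(instr: str) -> list[str]:
    """Crack one CISC-style instruction into a list of micro-ops."""
    op, *args = instr.replace(",", "").split()
    if op == "add" and args[0].startswith("["):
        # add [mem], reg  ->  load / add / store: three simple micro-ops
        mem = args[0].strip("[]")
        return [f"load t0, {mem}",
                f"add t0, t0, {args[1]}",
                f"store {mem}, t0"]
    # a register-register add is already RISC-like: one micro-op
    return [instr]

assert decode("add [rbx], rax") == ["load t0, rbx",
                                    "add t0, t0, rax",
                                    "store rbx, t0"]
assert decode("add rax, rcx") == ["add rax, rcx"]
```

    The divide persists because the decoder, not the programmer, pays the translation cost, and the simple micro-ops behind it are what the schedulers and execution units are actually built around.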

    As far as the Chips and Cheese article, if I recall correctly, that was saved once upon a time in Infinity on my last phone, but Infinity got locked by the dev. The reddit post link would have been a month or two before June of 2023, but your search is as good as mine. I’m pretty good at reading and remembering the abstract bits of info I found useful, but I’m not great about saving citations, so take it as water cooler hearsay if you like. It was said in good faith with no attempt to intentionally mislead.


  • You caught me. I meant this, but was thinking backwards from the bottom up. Like building the logic and registers required to satisfy the CISC instruction.

    This mental space is my thar be dragons and wizards space on the edge of my comprehension and curiosity. The pipelines involved to execute a complex instruction like AVX loading a 512 bit word, while two logical cores are multi threading with cache prediction, along with the DRAM bus width limitations, to run tensor maths – are baffling to me.

    I barely understood the Chips and Cheese article explaining how the primary bottleneck for running LLMs on a CPU is the L2 to L1 cache bus throughput. Conceptually that makes sense, but thinking in terms of the actual hardware, I can’t answer, “why aren’t AI models packaged and processed in blocks specifically sized for this cache bus limitation?” If my cache bus is the limiting factor, dual threading on logical cores seems like asinine stupidity that poisons the cache. Or why an OS CPU scheduler is not equipped to automatically detect or flag tensor math and isolate threads from kernel interrupts is beyond me.
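    The “blocks sized for the cache” idea is essentially loop tiling. A minimal pure-Python sketch (the sizes and block factor here are arbitrary): the tiled version computes exactly the same product, but touches the operands one small tile at a time, so a tile can stay resident in a fast cache level instead of streaming the whole matrix through it.

```python
# Naive vs cache-tiled matrix multiply, purely to illustrate loop tiling.

def matmul_naive(a, b, n):
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

def matmul_tiled(a, b, n, block=4):
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # work only on a block x block tile at a time
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        s = c[i][j]
                        for k in range(k0, min(k0 + block, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] = s
    return c

n = 8
a = [[float(i * n + j) for j in range(n)] for i in range(n)]
b = [[float((i + j) % 7) for j in range(n)] for i in range(n)]
assert matmul_tiled(a, b, n) == matmul_naive(a, b, n)  # same result, different access pattern
```

    Real BLAS libraries do exactly this with tile sizes tuned per cache level, which is part of the answer to the packaging question: the blocking happens in the kernel library, not in the model file format.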

    Adding a layer to that and saying all of this is RISC cosplaying as CISC is my mental party clown cum serial killer… “but… but… it is 1 instruction…”


    ARM is an older reduced instruction set computing design too, from the same lineage as the Berkeley RISC work. There are not a lot of differences here. x86 could even be better. American companies are mostly run by incompetent misers that extract value through exploitation instead of innovation on the edge and the future. Intel has crashed and burned because it failed to keep pace with competition. Much of the newer x86 stuff is RISC-like wrappers on CISC instructions under the hood, to loosely quote others at places like Linux Plumbers conference talks.

    ARM costs a fortune in royalties. RISC-V removes those royalties and creates an entire ecosystem for companies to independently sell their own IP blocks instead of places like Intel using this space for manipulative exploitation through vendor lock in. If China invests in RISC-V, it will antiquate the entire West within 5-10 years time, similar to what they did with electric vehicles and western privateer pirate capitalist incompetence.


  • 𞋴𝛂𝛋𝛆@lemmy.world to Android@lemmy.world · *Permanently Deleted* · 7 months ago

    I think the Chinese will do it with RISC-V, or Europe will demand it independently.

    We’re on the last nodes for fabs. The era of exponential growth is over. It is inevitable that a major shift in hardware longevity and serviceability will happen now. Stuff will also get much more expensive, because the sales volume that pays back the node investments is no longer needed or possible in the cycle.