

dichotomous thinking is the pretentiously fun term
The UEFI boot system is tricky, and you need to get along with Secure Boot to do this. Secure Boot is handled outside of the Linux kernel. Both Fedora and Ubuntu have systems for this; Fedora uses the Anaconda installer, and I believe they do it best. I have had a W11 partition for 2 years and barely used it. It can’t even get on the internet with my firewall setup, but it is there, and I never had any issues the 3 times I logged into it.
I think all of the Fedora spins support the shim key and Secure Boot, but I know Workstation does. For Ubuntu, I think it is just the regular vanilla Ubuntu desktop that the shim supports. This may be somewhat sketchy with Nvidia, or maybe not. Nvidia “open sourced” their kernel modules, but the actual nvcc compiler required to build the binaries is still proprietary crap.
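As an aside, you can check what state Secure Boot is actually in from a running Linux system by reading the SecureBoot EFI variable; `mokutil --sb-state` reports the same thing. A minimal pure-Python sketch (the path and byte layout follow the standard efivarfs convention; it returns None on a non-UEFI boot):

```python
import glob

def secure_boot_enabled():
    """Return True/False from the SecureBoot EFI variable, or None when
    the system did not boot via UEFI (no efivars exposed) or the
    variable cannot be read."""
    # efivarfs files are 4 bytes of attribute flags followed by the data
    matches = glob.glob("/sys/firmware/efi/efivars/SecureBoot-*")
    if not matches:
        return None
    try:
        with open(matches[0], "rb") as f:
            data = f.read()
    except OSError:
        return None
    return bool(data[4]) if len(data) >= 5 else None

print(secure_boot_enabled())
```

Handy for scripting a sanity check before touching anything boot related.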
I have a 3080 Ti gaming laptop. It isn’t half bad, with 16 GB of video RAM all the way back from 2021. Nvidia is artificially holding back the VRAM because of monopoly nonsense. The new stuff has very little real consumer value as a result, at least for the AI stuff I run. The hardware is a little faster, but more VRAM is absolutely critical, and new stuff that is the same or worse than what I have from 3 generations and nearly 5 years ago is ridiculous.
The battery life blows, and the GPU likely won’t even work on battery. It will get donkey balls hot with AI workloads, especially any kind of image gen, which results in lots of thermal throttling. All the AI packages run as servers on your network anyway, so if you are thinking along these lines of running your own models, get a tower and run the thing remotely.
I manage, and need the ergonomics for physical disability reasons, but I still would prefer to have a separate tower to run models from.
Anyways, you can sign your own UEFI keys to use any distro, but this can be daunting for some people. The US Department of Defense has a good PDF guide on setting your own keys. The UEFI firmware on the machine may not have all of the key signing features implemented. There is a way to boot into UEFI directly and set the keys manually, but good step-by-step guides for this are hard to find. Gentoo has a tutorial on this, but it assumes a high level of competency.
Other than signing your own keys, the shim keys mentioned are special keys signed by Microsoft for the principal maintainer of the distro. These slide under the Microsoft key to keep secure boot enabled.
If you boot any Secure Boot enabled OS, the bootloader is required to delete any bootable unsigned code it finds. It does not matter if it is a shimmed Fedora or W11; if you have any other OS present in the boot list, it should get deleted. W11 is Secure Boot only, and this is where the real issues arise.
Are you insane? Debian is a base distro like any other and runs more hardware than any other. It has all of the bootstrapping tools to get hardware working.
Canonical is a server company and Ubuntu server is literally the product.
Arch is absolute garbage for most users unless you have a CS degree or you have entirely too much time on your hands and don’t mind an OS as your life project. Arch abhors tutorial content in all documentation and therefore dumps users into a rabbit hole regularly. Pacman is the worst package manager as it will actively break a system and present the user with the dumbest of choices at random because the maintainers are ultimately sadistic and lackadaisical. Arch is nearly identical to Gentoo with Arch binaries often based on Gentoo builds, yet Gentoo provides relevant instruction and documentation with any changes that require user intervention and does so at a responsible and ethical level that shows kindness, respect, and consideration completely absent from Arch. Arch is a troll by trolls for trolls. I’m more than capable of running it now, but I would never bother with such inconsiderate behavior.
Fedora’s Anaconda installer makes UEFI Secure Boot easy, and Fedora ships with SELinux integrated and set to enforcing by default. The built-in network filtering tools are pretty easy, but I still just use OpenWRT on a separate device. Silverblue was nice for a few years, but I switched to Workstation for a machine with Nvidia hardware.
Lol, call it 300mm and never mention how TI is antiquated trailing edge junk and analog. Like all the miserly fools, they don’t lead in anything but history from a bygone era. The acquisition was an old Micron fab, so too junk for them to use. Bottom feeder TI comes in to spin a win. At least most of their stuff is documented, unlike the Qualcomm and Broadcom junk.
You lose the I/O, and the power efficiency is no comparison. You can get better power efficiency and sometimes some I/O with an old router and OpenWRT, but you’ll be in the class of a BeagleBone with a much harder learning curve. I’ve never managed to get a sensor or peripheral working on some old laptop’s SPI or I2C buses as easily as on a Raspberry Pi.
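For comparison, this is roughly all it takes to poke an I2C device on a Pi from userspace, using the kernel’s /dev/i2c-N interface directly; a hedged sketch, where the address (0x48, a TMP102-style temperature sensor) and register are just illustrative assumptions:

```python
import os, fcntl

I2C_SLAVE = 0x0703  # ioctl request from linux/i2c-dev.h

def read_i2c_byte(bus=1, addr=0x48, reg=0x00):
    """Read one register byte from an I2C device via /dev/i2c-N.
    Returns None if the bus is absent or the transaction fails.
    addr 0x48 is just an example (TMP102-style temp sensor)."""
    path = f"/dev/i2c-{bus}"
    if not os.path.exists(path):
        return None
    try:
        fd = os.open(path, os.O_RDWR)
        try:
            fcntl.ioctl(fd, I2C_SLAVE, addr)  # bind this fd to the slave address
            os.write(fd, bytes([reg]))        # set the register pointer
            return os.read(fd, 1)[0]          # read one byte back
        finally:
            os.close(fd)
    except OSError:
        return None
```

Try doing that against whatever bus a random old laptop exposes and you’ll see the difference immediately.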
I think you’re right in some cases, but also somewhat attributing malice to stupidity. There are primitive people that are far too scared to risk abandoning their mutually exclusive social support network. They exhibit angst at the unknown and unfamiliar and sway in the direction of fight from their fight or flight mechanism. None of this behavior is within the scope of their self awareness. They exist in a fixated cult like state of tribal ignorance and stupidity, and are wholly incapable of curiosity and learning from sources outside the scope of their tribal isolation.
I was this way before my self awareness grew past the point of reflection. My entire family is like this as are my former and abandoned social support network I am now ostracized from as a result.
This is the actual barrier in place that enables cult like isolation and fixation. Meanwhile, these systems are wholly built upon outsourcing ethics to an organization that only wields shame to keep members in line. Shame can never motivate positive action. Shame can only negatively curb behaviors. Without positive feedback, these systems can only produce depression and negative austere conservative people able to cope with the lack of endorphins. It is truly sadistic in nature. Those that are still out of balance are considered undesirable when their cognitive dissonance pushes back in actions the person may not even understand or register.
Religion is largely a cognitive dissonance factory because of these factors. This does not excuse actions that harm others. But it is this antiquated system of subtle harm in the religious tribal structure and its cult like exclusivity of social network isolation that create people with no independent ethics, unable to learn and reason well, and scared of everything outside of their tiny bubble of a life.
Oh wow, so we are in kinda similar places but from vastly different paths and capabilities. Back before I was disabled, I was a rather extreme outlier of a car enthusiast, like I painted (owned), ported, and machined professionally. I was really good with carburetors, but had a chance to get some specially made direct injection race heads with mechanical injector ports in the combustion chamber… I knew some of the Hilborn guys… real edgy race stuff. I was looking at building a supercharged motor with a mini blower and a very custom open source Megasquirt fuel injection setup using a bunch of hacked parts from some junkyard Mercedes direct injection Bosch diesel cars. I had no idea how complex computing and microcontrollers are, but I figured it couldn’t be much worse than how I had figured out all automotive systems and mechanics. After I was disabled 11 years ago, riding a bicycle to work while the heads were off of my Camaro, I got into Arduino and just trying to figure out how to build sensors and gauges. I never fully recovered from the broken neck and back, but am still chipping away at compute. Naturally, I started with a mix of digital functionality and interfacing with analog.
From this perspective, I don’t really like API like interfaces. I often have trouble wrapping my head around them. I want to know what is actually happening under the hood. I have a ton of discrete logic for breadboards and have built stuff like Ben Eater’s breadboard computer. At one point I played with CPLDs in Quartus. I have an ICE40 around but have only barely gotten the open source toolchain running before losing interest and moving on to other stuff. I prefer something like Flash Forth or Micropython running on a microcontroller so that I am independent of some proprietary IDE nonsense. But I am primarily a Maker and prefer fabrication or CAD over programming. I struggle to manage complexity and the advanced algorithms I would know if I had a formal CS background.
So from that perspective, what I find baffling about RISC under CISC is specifically the timing involved. Your API mindset is likely handwaving this as black box, but I am in this box. Like, I understand how there should be a pipeline of steps involved for the complex instruction to happen. What I do not understand is the reason or mechanisms that separate CISC from RISC in this pipeline. If my goal is to do A…E, and A-B and C-D are RISC instructions, I have a ton of questions. Like why is there still any divide at all for x86 if direct emulation is a translation and subdivision of two instructions? Or how is the timing of this RISC compilation as efficient as if the logic is built as an integrated monolith? How could that ever be more efficient? Is this incompetent cost cutting, backwards compatibility constrained, or some fundamental issue with the topology like RLC issues with the required real estate on the die?
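The usual textbook answer to the A…E question is that the front-end decoder “cracks” a memory-operand CISC instruction into RISC-like micro-ops that flow down the same pipeline as everything else. A toy sketch of that idea, purely illustrative (nothing here is real x86 encoding; the names are invented):

```python
# Toy illustration of decode-stage "cracking": one CISC-style
# read-modify-write instruction becomes three RISC-like micro-ops.
# Real decoders do this in hardware, often in a single cycle.
def crack(instr):
    op, dst, src = instr
    if op == "ADD_MEM":  # ADD [dst_addr], src_reg
        return [
            ("LOAD",  "tmp", dst),   # µop 1: load the memory operand
            ("ADD",   "tmp", src),   # µop 2: plain ALU add
            ("STORE", dst,   "tmp"), # µop 3: write the result back
        ]
    return [instr]  # simple register ops pass through 1:1

uops = crack(("ADD_MEM", 0x1000, "rax"))
```

The win is that the back-end only ever schedules simple uniform µops, so out-of-order machinery stays tractable; the cost of the CISC frontage is paid once, in the decoder.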
As far as the Chips and Cheese article, if I recall correctly, that was saved once upon a time in Infinity on my last phone, but Infinity got locked by the dev. The reddit post link would have been a month or two before June of 2023, but your search is as good as mine. I’m pretty good at reading and remembering the abstract bits of info I found useful, but I’m not great about saving citations, so take it as water cooler hearsay if you like. It was said in good faith with no attempt to intentionally mislead.
You caught me. I meant this, but was thinking backwards from the bottom up. Like building the logic and registers required to satisfy the CISC instruction.
This mental space is my thar be dragons and wizards space on the edge of my comprehension and curiosity. The pipelines involved to execute a complex instruction like AVX loading a 512 bit word, while two logical cores are multi threading with cache prediction, along with the DRAM bus width limitations, to run tensor maths – are baffling to me.
I barely understood the Chips and Cheese article explaining how the primary bottleneck for running LLMs on a CPU is the L2 to L1 cache bus throughput. Conceptually that makes sense, but thinking in terms of the actual hardware, I can’t answer, “why aren’t AI models packaged and processed in blocks specifically sized for this cache bus limitation?” If my cache bus is the limiting factor, dual threading on logical cores seems like asinine stupidity that poisons the cache. And why an OS CPU scheduler is not equipped to automatically detect or flag tensor math and isolate those threads from kernel interrupts is beyond me.
Adding a layer to that and saying all of this is RISC cosplaying as CISC is my mental party clown cum serial killer… “but… but… it is 1 instruction…”
ARM is an older reduced instruction set computing design too (ARM came out of Acorn in the UK; RISC-V is the Berkeley lineage). There are not a lot of differences here; x86 could even be better. American companies are mostly run by incompetent misers that extract value through exploitation instead of innovation on the edge and future. Intel has crashed and burned because it failed to keep pace with competition. Much of the newer x86 stuff is RISC-like cores wrapping CISC instructions under the hood, to loosely quote others at places like Linux Plumbers Conference talks.
ARM costs a fortune in royalties. RISC-V removes those royalties and creates an entire ecosystem for companies to independently sell their own IP blocks instead of places like Intel using this space for manipulative exploitation through vendor lock in. If China invests in RISC-V, it will antiquate the entire West within 5-10 years time, similar to what they did with electric vehicles and western privateer pirate capitalist incompetence.
I think the Chinese will do it with RISC-V, or Europe will demand it independently.
We’re on the last nodes for fabs. The era of exponential growth is over. It is inevitable that a major shift in hardware longevity and serviceability will happen now. Stuff will also get much more expensive because volume is not needed or possible in the cycle to pay back the node investments.
The harder they push, the more it incentivises someone else to sell actual open source hardware for profit.
I don’t know for sure, but I watched an interview that Fraser Cain posted with Lucas Norder from “Breakthrough Starshot” (the private mission to send a probe to the nearest star, Alpha Centauri, using lasers and solar sails). In that interview they were talking about a similar technology where the material is ablated to filter the laser frequency of light. I think this is basically the same/similar thing. I’m sure there are some tradeoffs. Anton talks about it not being very clear or useful in visual focus, IIRC. I picture it evolving into something like the FPS video game hack where players of something like Counter-Strike can see the outline of other players through walls and such, but obviously to a much lesser extent and not through most obstructions. For instance, it would only take a small percentage of extra visual IR hinting to be super useful in a forest or jungle like environment against an enemy. Seeing the person glow would not be the point. Just a small amount of extra contrast would be a major advantage that works with existing instincts and intuition.
I’m really surprised he doesn’t mention the obvious military application. Like sure, a light source is required to illuminate the whole surrounding space like we see from typical night vision, but… mammals glow in infrared… like we’re emitting that heat-light. Seeing any definitive sign of such a heat signature is a massive advantage in some situations. Goggles are a massive encumbrance disadvantage. A contact lens would gain a lot of situational awareness and mobility.
There is certainly validity in the concept that no known instance of exploitation exists. However, that is only anecdotal. The potential exists. Naïve trust in others has a terrible track record on these scales of ethics. Every instruction and register should be fully documented for every product sold.
An adequate webp image is only a few tens of kilobytes. Most people now have a bridged connection between their home network and cellular, unless they go out of their way to block it. Periodic screenshots are rather crazy. It would be much easier to target specific keywords and patterns.
No hardware documentation whatsoever. We don’t know what registers and instructions exist at the lowest levels.
As far as I am aware, there is no way to totally shut off and verify all cellular connections made, like to pass all traffic through a logged filter.
All mobile manufacturers could be doing this too. All of the SoCs are proprietary black boxes as are the modems.
This is common knowledge available anywhere.
https://en.wikipedia.org/wiki/Taiwan
https://en.wikipedia.org/wiki/History_of_Taiwan_(1945–present)
https://en.wikipedia.org/wiki/Chinese_Civil_War
Those all have sources. There are also lots of reputable YouTubers with relevant academic credentials that have covered this; Asianometry and William C Fox are two that I recall covering the subject. I think Caspian Report did as well at one point in the last few years. This is ultra basic surface level stuff everyone should know, or they should severely question their sources and echo chambers if they were never made aware of this fundamental information.
fork mommy