

Please explain what you mean by “rotate”. Is the thermostat physically turning in place, like the face of a wall clock?


One way to make this more Pythonic – and less C or POSIX-oriented – is to use the pathlib module for all filesystem operations. For example, while you could open the file in a context manager, pathlib makes it really easy to read a file:
from pathlib import Path
...
config = Path("/some/file/here.conf").read_text()
This automatically opens the file (which checks for existence), reads out the entire file as a string (rather than bytes, though there’s a method for that too), and then closes the file. If any of those steps goes awry, you get a Python exception and a traceback explaining exactly what happened.
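If it helps, here’s a slightly fuller sketch of the same idea (the path below is just a placeholder), showing the bytes variant and what catching a failure looks like:
from pathlib import Path

config_path = Path("/some/file/here.conf")  # placeholder path, adjust to taste
try:
    config_text = config_path.read_text()   # whole file as str, decoded with the default encoding
    config_raw = config_path.read_bytes()   # same contents as raw bytes, if that's what you need
except FileNotFoundError:
    print(f"{config_path} does not exist")
except OSError as exc:
    print(f"could not read {config_path}: {exc}")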


To many of life’s either-or questions, we often struggle when the answer is: yes. That is to say, two things can hold true at the same time: 1) LLMs can result in job redundancies, and 2) LLMs hallucinate results.
But if we just stopped the analysis there, we wouldn’t have learned anything. To use this reality to terminate any additional critical thinking is, IMO, wholly inappropriate for solving modern challenges, and so we must look into the exact contours of how true these statements are.
To wit, LLM-induced job redundancies could come from skills which have been displaced by the things LLMs can do well. For example, typists lost their jobs when businesspeople were expected to operate a typewriter on their own. And when word processing software came into existence for the personal computer, a lot of typewriter companies folded or were consolidated. In the case of LLMs, consider that people do use them to proofread letters for spelling and grammar.
Technologically, we’ve had spell-check software for a while, but grammar was harder. In turn, an industry appeared somewhere in the late 2000s or early 2010s to develop grammar software. Imagine how the software devs at these companies (eg Grammarly) might be in a precarious situation, if an LLM can do the same work. At least with grammar checking, even the best grammar software still struggles with some of the more esoteric English sentence constructions, so if an LLM isn’t 100% perfect, that’s still acceptable. I can absolutely see the fortunes of grammar software companies suffering due to LLMs, and that means those software devs are indeed threatened by what LLMs can do.
For the second statement, it is trivial to find examples of LLMs hallucinating, sometimes spectacularly or in seemingly ironic ways (although an LLM would be hard-pressed to simulate intentional irony, I would think). In some fields, such hallucinations are career-limiting moves for the user, such as if an LLM were used to advise on pharmaceutical dosage, or used to draft a bogus legal appeal that leaves the judge unamused. This is very much a FAFO situation, where somehow the AI/LLM companies shoulder none of the risk yet keep all of the upside. It’s like how autonomous-driving automotive companies are somehow allowed to run public road tests of their beta-quality designs, but the liability for crashes still befalls the poor sod seated behind the wheel. Those companies just keep yapping about how those crashes are all “human error” and how “an autonomous car is still safer”.
But I digress.
My point is that LLMs have quite a lot of capabilities, and people make a serious mistake when they assume that competence (or incompetence) in one capacity predicts competence in another. This is not unlike how humans assess other humans, such as assuming a record-setting F1 driver would probably be a very good chauffeur for a limousine company. But whereas humans have patterns that suggest they might be good (or bad) at something, LLMs are a creature unlike anything else.
I personally am not bullish on further LLM improvements, and think the next big push will require additional academic research that is nowhere near commercialization. But even I have to recognize that some very specific tasks are handled decently by today’s available LLMs. I just don’t think that’s good enough for me to consider using them, given their subscription costs, the risk of becoming dependent on them, and how niche those tasks are.


Ah, I see. That works too, but I usually find it easier to set the subnet mask for the first interface, so that there’s no hard-coding of a route to every intended destination, even if it’s just one.
With ifconfig, that might look like:
ifconfig tun11 172.17.3.11 netmask 255.255.255.0
With the modern “ip” command, it’s even less typing:
ip addr add 172.17.3.11/24 dev tun11


So I gave tun11 an IP again with ifconfig, then added a host route to the first machine
Out of curiosity, what were these commands? I’m a bit confused because I figured that just adding the IP+mask would be sufficient, without having to explicitly add a host route.


BTW, from your username, are you familiar with !Dullsters@dullsters.net ?


Let me make sure I understand the background info. Before things crashed, you had two machines that shared a two-way laser serial link, and so your testing involved sending from one machine to the other, as a way to exercise the TUN driver. Now that the second machine is dead, you wish to light up a spare two-way laser serial link. But rather than connecting to the second (dead) machine or some third machine, this spare link is functionally a “loop back” to the existing machine, the one that’s still alive. And you wish to continue your testing with this revised setup, to save yourself from having to commute to the office just to reboot the second machine.
Do I have that right? If so, firstly, it’s a Saturday in all parts of the world lol. But provided that you’re getting sufficient rest from work, I will continue.
As it stands, you are correct that the Linux machine will prefer to pass traffic internally, when it sees that the destination is local. We can try to defeat this, but it’s very much like cutting against the grain. This involves removing the kernel stack’s tendency to route packets locally, but only for the traffic going to/from the TUN interfaces. But if you get this wrong, you might lose access to the machine, and now you have 0/2 working machines…
IMO, a better solution would be to move at least one of the TUN interfaces into its own “network namespace”. This is the Linux kernel’s idea of separate network stacks, and is one of the constituent technologies used to enable containers (which are like VMs but more lightweight). Since you only require the traffic to exit on one TUN netif and come back in on the other TUN netif, this could work.
First, you create a new namespace (I’ll call it bobby), then you move tun11 into the bobby ns, and then you run all your commands in a shell spawned within the bobby ns. The last part means you still have access to all your files and the filesystem, but because you’re in a separate network namespace, you will not see the same netifs that would show up in the “default” namespace.
Here are the commands, but you can check this against this reference too:
ip netns add bobby               # create the new namespace
ip link set tun11 netns bobby    # move tun11 into it
ip netns exec bobby /bin/bash    # spawn a shell inside the bobby namespace
That last shell is where you can access tun11 (and only tun11). You’ll want to open a second SSH connection to your remote machine, which will naturally land in the “default” namespace and will give you access to the tun10 netif (but not tun11).
Good luck!


Using an MSP430 microcontroller, I once wrote an assembly routine that (ab)used its SPI peripheral in order to stream a bit pattern from memory out to a GPIO pin, at full CPU clock rate, which would light up a “pixel” – or blacken it – in an analog video signal. This was for a project that superimposed an OSD onto the video feed of a dashcam, so that pertinent vehicle data would be indelibly recorded along with the video. It was for one heck of a university project car.
To do this, I had to study the MSP430 instruction timings, which revealed that a byte could be loaded from SRAM into the SPI output register, a counter incremented, and a comparison made against a limit value in a tight loop, all within exactly 8 CPU cycles. The SPI completes an 8-bit transfer every 8 SPI clock cycles, and the CPU and SPI blocks can use the same clock source. In this way, I could prepare a “frame buffer” of bits to write to the screen – there’s plenty of time during the vertical blanking interval of analog video – and then blast it atop the video signal.
I think I ended up running it at 8 MHz, which gave me sufficient pixel resolution on a 480i analog video signal. Also related was the task of creating a set of typefaces which would be legible on-screen but also be efficient to store in the MSP430’s limited SRAM and EEPROM memories. My job was basically done when someone else was able to use printf() and it actually displayed text over the video.
This MSP430 did not have a DMA engine, and even if it did, few engines permit an N-to-1 transaction that writes directly to the SPI output register. Toggling the GPIO register directly was out of the question, since it takes multiple clock cycles to toggle a single bit and load the next value, whereas my solution sustained 1 bit per clock cycle at 8 MHz. All interrupts were disabled too, except for the vertical and horizontal blanking intervals, which basically dictated the “thinking time” available for the CPU.


For a single password, it is indeed illogical to distribute it to others as a way to prevent it from being stolen and misused.
That said, the concept of distributing authority amongst others is quite sound. Instead of each owner having the whole secret, they only have a portion of it, and a majority of owners need to agree in order to combine their parts and use the secret. Rather than passwords, it’s typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:
Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).
This signature is valid for a single public key.
If fewer than t participants are dishonest, the entire protocol is secure.
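To make the t-of-n idea concrete, here’s a toy Python sketch of Shamir’s secret sharing, which covers only the splitting-and-recombining half of the concept; FROST itself is a full threshold-signature protocol and is far more involved, so treat this purely as an illustration of “any 4 of 7 shares reconstruct the secret”:
import random  # toy randomness; a real implementation would use a CSPRNG

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split_secret(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]  # n shares of the form (x, f(x))

def recover_secret(shares):
    # Lagrange interpolation at x=0 recovers the constant term, i.e. the secret.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, t=4, n=7)
assert recover_secret(shares[:4]) == 123456789  # any 4 of the 7 shares suffice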


Weather station, terrestrial/satellite TV DVR (TVHeadend), Git repository (Forgejo for a nice web UI, cgit for a classic UI), DNS resolver.


I’ve seen the suggestion of buying a GUA subnet, purely to use as a routable-but-unique prefix that will never collide, and will always win over ULA or Legacy IP routes. When I last checked, it was something like €1 for a /48 off of someone’s /32 prefix, complete with a letter of authorization and reverse IP delegation. So it could be routable, if one so chooses.


https://ipv6now.com.au/primers/IPv6Reasons.php
Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That has caused huge problems that persist to this day, like mismanagement of firewalls and the need to do port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video-conferencing software.
And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).
The want for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, to bolster their domestic technological familiarity and to escape the broad problems with Business As Usual.
Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks for countries that never had many v4 addresses. And it’s an interop requirement for all Apple apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.
In due course, the unstoppable march of time will leave v4-only users in the past.


You might also try asking on !ipv6@lemmy.world .
Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching but it’s wholly inadequate if you wanted the VPN to also give addresses to any VMs, or if you want each outbound connection to use a unique IP. And that’s a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can’t.
Some VPNs will offer you a /64 subnet, but their software might not check if your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy-extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
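For what it’s worth, on Linux you can check the privacy-extensions setting yourself by reading the use_tempaddr sysctl for the interface. A minimal Python sketch (the interface name "wg0" is just an example):
from pathlib import Path

def privacy_extensions_enabled(iface: str) -> bool:
    # 0 = disabled, 1 = temporary addresses enabled, 2 = enabled and preferred for outbound
    sysctl = Path(f"/proc/sys/net/ipv6/conf/{iface}/use_tempaddr")
    return sysctl.read_text().strip() in {"1", "2"}

print(privacy_extensions_enabled("wg0"))  # example interface name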


I don’t have any networking certifications. And IMO, I’m not entirely enthused about them, but I recognize they’re a required checkbox for getting one’s foot in the door, kinda like having a college degree, esp for certain government employers. But I digress.
My networking training was on-the-job, where my mentor basically gave me a hard-copy version of this book: The All-New Switch Book, 2nd Edition, by Seifert and Edwards. In this case, “all-new” refers to 2008. But that’s alright because the fundamentals of modern computer networks have not substantially changed, even as we push beyond 400 Gbps and use MPLS to forward Metro Ethernet, or whatever.
In the end, a fundamental understanding involves switching and routing, the whole OSI layer model and practical realizations of it, Ethernet in detail, IP (Legacy + v6) in detail, and best practices for network design. What a CCNA certificate might specifically cover is the Cisco-specific CLI syntax for setting up and maintaining a network, but knowing the fundamentals means it’s easy to manage any vendor’s equipment, or even virtual networks for VMs or hyperscaler cloud environments.


Connection tracking might not be totally necessary for a reverse proxy, but it’s worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S), which has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection, the effort for connection tracking is minimal.
For a poorly-behaved protocol like FTP – which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets – the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.
But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in would be if conntrack looks up the connection details from its table and allows the ICMP packet in, as “related” traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it’s an entirely different protocol.
And then there’s UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, so for any inbound packet we receive on port XYZ, conntrack will permit an outbound reply from port XYZ. But that’s redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit the inbound reply from port ABC. If you are doing lots of DNS lookups (typically over UDP), then that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183
It may behoove you to first look at what’s filling conntrack’s table before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps the logs should not do synchronous DNS lookups, or you can also skip connection tracking for DNS.
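As a rough starting point for that first step, here’s a small Python sketch that tallies what’s in the conntrack table by protocol and destination port. It reads /proc/net/nf_conntrack (usually requires root, and the exact field layout can vary by kernel), so adjust to taste:
from collections import Counter

counts = Counter()
with open("/proc/net/nf_conntrack") as f:   # usually requires root
    for line in f:
        fields = line.split()
        proto = fields[2]                   # eg "tcp", "udp", "icmp"
        # the first dport= token is the original direction's destination port
        dport = next((x.split("=", 1)[1] for x in fields if x.startswith("dport=")), "-")
        counts[(proto, dport)] += 1

for (proto, dport), n in counts.most_common(10):
    print(f"{proto:5} dport={dport:6} {n}")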


I’m kinda surprised that your ISP was able to sell you a 1 Gbps service but didn’t bother to check if the line equipment was capable of that speed. Here in California, the ONT is considered the “demarcation point”, which is where the ISP’s responsibility ends and where the customer’s responsibility begins. So the ONT is owned and maintained by the ISP, although it often does require AC power from the customer’s home.
Just prior to when I upgraded from 100 Mbps to 1 Gbps, my ISP was already undertaking a network upgrade and that meant they were proactively upgrading customers to newer ONTs that would enable faster service. My understanding is that they had a newer fibre switch on their end, and upgraded customers would need the physical fibre moved from the old switch to the new switch. So to shrink the time where they are forced to operate two separate switches, they reached out to all the customers to replace their ONTs at once. I’m aware that some PON networks can run upgraded services simultaneously on the same fibre, but apparently my ISP doesn’t do that.
As a result, their equipment was already in place when I decided to jump to 1 Gbps. Rather embarrassingly, it was only then that I found that my home’s original CAT5 (not CAT5e) wiring had two pairs taken for use with a former alarm system. And since 1 Gbps requires all four pairs, the ISP technician could show 1 Gbps at the demarc but not through my home wiring. On my own time, I reunited the missing two pairs and now have a 1 Gbps link to the ONT.
In future, I plan to re-run that 30 meter link with CAT6, since my own testing indicated that the existing wiring is too marginal for 10 Gbps, or even the 802.3bz intermediate speeds of 2.5 Gbps or 5 Gbps. And I really do want to upgrade to 2 Gbps service, mostly to say that I have it…


https://github.com/Overv/vramfs
Oh, it’s a userspace (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will: 1) always bounce requests back up to userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it’s leaving quite a lot on the table, IMO.
If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it’s just all mapped into PCIe). And if it basically offered VRAM as PCIe memory, then the VRAM could potentially be used for certain niche RAM cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won’t be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages backed by VRAM, then those applications could benefit.
And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it’s entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.
Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I’ve always liked the thought of DDR on PCIe cards. I think it was someone from Level1Techs who said that all technologies are doomed to reinvent PCIe.


Ok, I have to know: how is this done, and what do people use it for?


!ipv6@lemmy.world would be interested, I would think
If I understand the Encryption Markdown page, it appears the public/private keys are primarily there to protect the data at rest? But then both keys are stored on the server, although protected by the passphrase for the keys.
So if the protection boils down to the passphrase, what is the point of having the user upload their own keypair? Are the notes ever exported from the instance while still being encrypted by the user’s keypair?
Also, why PGP? PGP may be readily available, but it’s definitely not an example of user-friendliness, as exemplified by its lack of broad acceptance by non-tech users or non-government users.
And then, why RSA? Or are other key algorithms supported as well, like ed25519?