  • Looking at the diagram, I don’t see any issue with the network topology. And the power arrangement also shouldn’t be a problem, unless you require the camera/NVR setup to persist during a power cut.

    In that scenario, you would have to provide UPS power to all of: the PoE switch, the L3 switch, and the NVR. But if you don’t have such a requirement, then I don’t see a problem here.

    Also, I hope you’re doing well now.



  • This is an interesting application of so-called AI, where the result is actually desirable and isn’t some sort of frivolity or grift. The memory-safety guarantees offered by native Rust code would be a very welcome improvement over C code that guarantees very little. So a translation of legacy code into Rust would either attain memory safety, or wouldn’t compile. If AI somehow (very unlikely) manages to produce valid Rust that ends up being memory-unsafe, then it’s still an advancement as the compiler folks would have a new scenario to solve for.

    Lots of current uses of AI focus on what the output could enable, but here it’s worth appreciating that we don’t need the AI to complete every translation. After all, some C code will be so hardware-specific that it becomes unwieldy to rewrite in Rust without also doing a larger refactor. DARPA readily admits that their goal is simply to improve translation accuracy, rather than achieve perfection. Ideally, this means the result of their research is an AI which knows its own limits and simply declines to proceed.

    Assuming that the resulting Rust is: 1) native code, and 2) idiomatic, so humans can still understand and maintain it, this is a project worth pursuing. Meanwhile, I have no doubt grifters will also try to hitch their trailer to DARPA’s wagon, with insane suggestions that proprietary AI can somehow replace whole teams of Rust engineers, or some such nonsense.

    Edit: is my disdain for current commercial applications of AI too obvious? Is my desire for less commercialization and more research-based LLM development too subtle? :)


  • A commenter already provided a fairly comprehensive description of low-level computer security positions. But I want to note that a firm foundation in low-level implementations is also useful for designing embedded software and firmware.

    As in, writing or deploying against custom BIOS/UEFI images, or for real-time devices where timing is of the essence. Most anyone dealing with an RTOS, kernel drivers, or protocol buses will necessarily require an understanding of both the hardware architecture and the programming language available to them. And if that appeals to you, you might consider looking into embedded software development.

    The field spans anything from writing the control loop for washing machines, to managing data exchange between multiple video co-processors onboard a flying drone to identify and avoid collisions, to negotiating the protocol to set up a 400 Gbps optical transceiver to shoot a laser down 40 km of fibre.

    If something “thinks” but doesn’t have a monitor and keyboard, it’s likely to have one or more processors running embedded software. Look around the room you’re in and see what this field has enabled.




    1. The return value of time.time() is actually a floating-point number … It’s also not guaranteed to be monotonically increasing, which is a whole other thing that can trip people up, but that will have to be a separate blog post.

    Oh god, I didn’t realize that about Python and the POSIX spec. Cautiously, I’m going to guess that GPS seconds are one of the few reliable ways to uniformly convey a monotonically-increasing time reference.
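
    Within a single process, at least, Python does ship a clock with that guarantee. A minimal sketch of the difference, using only the standard library:

    ```python
    import time

    # time.time() follows the wall clock, which NTP or an admin can step
    # backwards at any moment, so deltas computed from it may come out negative.
    # time.monotonic() never goes backwards within a process, making it the
    # right reference for elapsed-time measurements.
    start = time.monotonic()
    time.sleep(0.1)
    elapsed = time.monotonic() - start
    assert elapsed >= 0  # guaranteed for monotonic(), not for time()
    print(f"slept for ~{elapsed:.3f} s")
    ```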

    Python has long since deprecated the datetime.datetime.utcnow() function, because it produces a naive object that is ostensibly in UTC.

    Ok, this was just a plainly bad decision by the datetime library people, then and now. What possible reason could there have been to produce a TZ-naive object from a library call that only ever returns times in UTC?
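
    For reference, the aware replacement is a one-line change. A minimal sketch of the difference:

    ```python
    from datetime import datetime, timezone

    # Deprecated: returns a naive object, even though the value is in UTC.
    naive = datetime.utcnow()
    print(naive.tzinfo)  # None -- nothing marks this object as UTC

    # Preferred: returns an aware object that actually carries its zone.
    aware = datetime.now(timezone.utc)
    print(aware.tzinfo)  # UTC
    ```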


  • If not code or documentation contributions, then well-written bug reports. Seriously, the quality of bug reports sometimes leaves a lot to be desired. And I don’t necessarily mean a full back-trace attached – and please, if you ever send a back-trace, copy-and-paste the text, never a screenshot – but just details like: system specs, OS and version, step-by-step instructions to reproduce that a non-coder could also understand, plus what you expected to happen versus what actually happened.

    This stuff (usually) comes naturally to programmers and engineers, but users don’t necessarily see things this way. I sometimes think bug reports need to adopt a “so tell me what happened?” approach, where reporters are encouraged to describe, free-form, what happened with the software, before moving on to the specific details that developers need. That would at least collect all the relevant details, plus extra context that no developer thought to ask for.

    Even just having folks who help gather and distill details from user reports on a forum takes a burden off of developers, and that effort should be welcomed by any competently-organized project. Many projects already have a template for reports, although it often gets mistaken for boilerplate. Helping reporters recognize that they need to fill in all the details is a useful activity that isn’t code or docs.



  • I’m not any type of lawyer, especially not a copyright lawyer, though I’ve been informed that the point of having the copyright date is to mark when the work (book, website, photo, etc.) was produced and when it was last edited. Both aspects are important, since the original date is when the copyright clock starts counting, and having it further in the past is useful to prove infringement that occurs later.

    Likewise, each update to the work imbues a new copyright on just the updated parts, which starts its own clock, and is again useful to prosecute infringement.

    As a result, updating the copyright date is not an exercise in writing today’s year. Rather, it’s adding years to a list, compressing as needed, but never removing any years. For example, if a work was created in 2012 and updated in 2013, 2015, 2016, 2017, and 2022, the copyright date could look like:

    © 2012, 2013, 2015-2017, 2022
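
    That compression is mechanical enough to automate. A quick sketch (the function is mine, not from any standard tooling):

    ```python
    def compress_years(years):
        """Collapse years into a conventional copyright-notice list,
        turning runs of three or more consecutive years into ranges."""
        runs = []
        for y in sorted(set(years)):
            if runs and y == runs[-1][1] + 1:
                runs[-1][1] = y          # extend the current run
            else:
                runs.append([y, y])      # start a new run
        parts = []
        for lo, hi in runs:
            if hi - lo >= 2:
                parts.append(f"{lo}-{hi}")
            else:
                parts.extend(str(y) for y in range(lo, hi + 1))
        return ", ".join(parts)

    print("©", compress_years([2012, 2013, 2015, 2016, 2017, 2022]))
    # © 2012, 2013, 2015-2017, 2022
    ```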

    To be clear, I’m not terribly concerned with whether large, institutional copyright holders are able to effectively litigate their IP holdings. Rather, this is advice for small producers of works, like freelancers or folks hosting their own blog. In the age of AI, copyright abuse against small players is now rampant, and a copyright date that is always the current year is ammunition for an AI company’s lawyer to argue that they didn’t plagiarize your work, because your work bears a date later than when they trained their models.

    Not that the copyright date is wholly dispositive, but it makes clear from the get-go when a work came under copyright protection.



  • The original reporting by 404media is excellent in that it covers the background context, links to the actual PDF of the lawsuit, and reaches out to an outside expert to verify information presented in the lawsuit and learned from their research. It’s a worthwhile read, although it’s behind a paywall; archive.ph may be effective though.

    For folks that just want to see the lawsuit and its probably-dodgy claims, the most recent First Amended Complaint is available through RECAP here, along with most of the other legal documents in the case. As for how RECAP can store copies of these documents, see this FAQ and consider donating to their cause.

    Basically, AXS complains about nine things, generally around: copyright infringement, DMCA violations (i.e. hacking/reverse engineering), trademark counterfeiting and infringement, various unfair competition statutes, civil conspiracy, and breach of contract (re: terms of service).

    I find the civil conspiracy claim to be a bit weird, since it would require proof that the various other ticket websites actually made contact with each other and agreed to do the other eight things that AXS is complaining about. Why would those other websites – who are mutual competitors – do that? Of course, this is just the complaint, so it’s whatever AXS wants to claim under “information and belief”, aka it’s what they think happened, not necessarily with proof yet.


  • Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device which had both a PowerPC processor and an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. The PPC processor ran big-endian and the ARM ran little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other’s endianness, but this was deemed to be even more confusing haha
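
    For anyone who hasn’t had the pleasure, the kind of conversion in question, sketched here in Python rather than the original firmware’s C:

    ```python
    import struct

    value = 0xDEADBEEF

    # Pack the same 32-bit value under both byte orders.
    big = struct.pack(">I", value)     # b'\xde\xad\xbe\xef' (big-endian PPC)
    little = struct.pack("<I", value)  # b'\xef\xbe\xad\xde' (little-endian ARM)

    # Converting between the two is just a byte reversal.
    assert big[::-1] == little
    print(big.hex(), little.hex())  # deadbeef efbeadde
    ```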



    There was a ton of hairbrained [sic] theories floating around, but nobody had any definitive explanation.

    Well I was new to the company and fresh out of college, so I was tasked with figuring this one out.

    This checks out lol

    Knowing very little about USB audio processing, but having cut my teeth in college on 8-bit 8051 processors, I knew what kind of functions tended to be slow.

    I often wonder if this deep level of understanding of embedded software/firmware design is still the norm in university instruction. My suspicion is that the focus has moved to exploiting ever-increasing SoC performance and capabilities, in the pursuit of making things Just Work™, while also proving Wirth’s Law in the process via badly optimized code.

    This was an excellent read, btw.



  • Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

    But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn’t have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.




  • 1 - I get that light is flashed in binary to code chips but how does it actually fookin work ? What is the machine emmiting [sic] this light made up of ?

    This video by Branch Education (on YouTube or Nebula) is a high-level explanation of every step in a semiconductor fab. It doesn’t go over the details of how semiconductor junctions work, though. That sort of device physics is discussed in this YouTube video by Ben Eater, “how semiconductors work”.

    2 - How was program’s, OSs, Kernal [sic] etc loaded on CPU in early days when there were no additional computers to feed it those like today ?

    When the CPU powers up, typically the very first thing it executes is the bootloader. Bootloaders vary depending on the system, and today’s modern Intel or AMD desktop machines boot very differently to their 1980s predecessors. However, since the IBM PC laid the foundation for how most computers booted up for nearly four decades, it may be instructive to see how it worked in the 80s. This WikiBook on x86 bootloading should be valid for all 32-bit x86 targets, from the original 8086 to the i686. It may even be valid beyond that, but then UEFI started to take off, which changed everything into a more modern form.

    But even before the 80s, computers could have a program/kernel/whatever loaded using magnetic tape, punch cards, or even by hand with physical switches, each representing one bit.

    But how does the computer decode this binary “machine code” into instructions to perform? See this video by Ben Eater, explaining machine instructions for the MOS 6502 CPU (circa 1975). The age of the CPU is not important; rather, by the 70s the basics of CPU operation had already been laid down, and that CPU is easy to explain yet non-trivial.
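
    To make “decode” concrete, here’s a deliberately tiny machine sketched in Python. It is not a real 6502; the opcodes are invented for illustration:

    ```python
    # A toy fetch-decode-execute loop for a made-up four-instruction machine.
    LOAD, ADD, PRINT, HALT = 0x01, 0x02, 0x03, 0xFF

    program = bytes([LOAD, 5, ADD, 7, PRINT, HALT])

    pc, acc = 0, 0                        # program counter, accumulator
    while True:
        opcode = program[pc]              # fetch
        if opcode == LOAD:                # decode, then execute
            acc = program[pc + 1]; pc += 2
        elif opcode == ADD:
            acc += program[pc + 1]; pc += 2
        elif opcode == PRINT:
            print(acc); pc += 1           # prints 12
        elif opcode == HALT:
            break
    ```

    A real CPU runs the same loop in hardware, just with far more instructions and the decode step wired into silicon.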

    3 - I get internet is light storing information but how ? Fookin HOW ?

    The mechanics of light bouncing inside a fibre optic cable are well explained in this YouTube video by engineerguy. But an explanation of how ones-and-zeros get converted into light to be transmitted is a bit more involved. I might just point you to the Wikipedia page for fibre optic communications.

    How the data is encoded is important, as this has a significant impact on bandwidth and data integrity, not just for light but also for wireless RF and wireline transmission. For wireless, this Branch Education video on Starlink (YouTube or Nebula) is instructive. And for wired, this Computerphile YouTube video on ADSL covers the challenges faced.

    Quite frankly, I might just recommend the entirety of the Computerphile channel, particularly their back catalogue when they laid down computer fundamentals.
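
    As a toy illustration of how bits become a physical signal, here’s Manchester encoding sketched in Python, using the IEEE 802.3 convention that classic 10 Mbps Ethernet used. Every bit cell contains a transition, which is what lets the receiver recover the sender’s clock:

    ```python
    def manchester_encode(bits):
        """Encode bits as half-bit signal levels (IEEE 802.3 convention:
        0 -> high-then-low, 1 -> low-then-high)."""
        signal = []
        for bit in bits:
            signal += [0, 1] if bit else [1, 0]
        return signal

    print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]
    ```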

    4 - How did it all come to be like it is today and ist [sic] it possible for one human to even learn how it all works or are we just limited one or two things ? Like cab [sic] we only know how to program or how to make hardware but not both or all ?

    As of 2024, the field is enormous, to the point that a CompSci degree necessarily has to be focused on a specific concentration. But that doesn’t necessarily mean the hard stuff like device physics is off-limits, leaving just stuff like software and AI. Sam Zeloof has been making homemade microchips, devising his own semiconductor process and posting it on YouTube.

    Specifically to your question about either software or hardware, the specialty of embedded software engineering requires skills with low-level software or firmware, as well as dealing with substantial hardware-specific details. People that write drivers or libraries for new hardware require skills from both regimes, being the bridge between Electrical Engineers that design the hardware, and software developers that utilize the hardware.

    Likewise, developers for high-performance computers need to know the hardware inside-out, to have any chance of extracting every last bit (pun intended) of speed. However, these developers tend to rely upon documentation such as data sheets, rather than having to be keenly aware of how the hardware was manufactured. Some level of logical abstraction is necessary to tractably understand today’s large and complex systems.

    5 - Do we have to join Intel first or something to learn how most of the things work lol ?

    Nope! Often, you can look to existing references, such as the Linux source code, to provide a peek at what complexities exist in today’s machines. I say that, but the Linux kernel is truly a monster, not because it’s badly written, but because they willingly take code to support every single bleeding platform that people are willing to author code for. And that means lots and lots of edge cases; there’s no such thing as a “standard” computer. x86 might be the closest to a “standard”, but Intel has never quite been consistent across that architecture’s existence. And ARM and RISC-V are on the rise, in any case.

    Perhaps what’s most important is to develop strong foundations to build on. Have a cursory understanding of computing, networking, storage, wireless, software licenses, encryption, video encoding/decoding, UI/UX, graphics, services, containers, data and statistical analysis, and data exchange formats. But then pick one and focus on it, seeing how it interacts with other parts of the computing world.

    Growing up, I had an interest in IT and computer maintenance. Then it evolved into writing websites. Then into writing C++ software. Right before university, I started playing around with the Arduino’s ATmega328P microcontroller directly, and so I entered uni as a Computer Engineer, hoping to do both software and hardware.

    The space is huge, so start somewhere that interests you. From the examples above, I think online videos are a fantastic resource, but so are blog posts written by engineers at major companies, as are talks at conferences, as is sitting in on university courses.

    Good luck and good studies!