  • No, Rust is stricter, because you need to think a lot more about whether weird edge cases in your unsafe code can potentially cause UB. For example, if your data structure relies on the Ord trait (which gives you comparison operators and a total ordering) and someone implements Ord incorrectly, your code still isn’t allowed to cause UB. In C++ land I’d venture to guess most developers wouldn’t care - a bad comparator is a bug in your code, not in the data structure.

    It’s also stricter because Rust’s referencing rules are a lot harsher than C’s, since references are all effectively restrict by default, and just turning a pointer into a reference for a little bit to call a function means you now have to abide by those restrictions without the help of the compiler.
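
    A minimal sketch of that burden (SortedVec is a hypothetical type made up for illustration, not from any real crate):

    ```rust
    // A data structure whose *correctness* relies on T's Ord,
    // but whose *safety* is not allowed to.
    pub struct SortedVec<T: Ord> {
        items: Vec<T>,
    }

    impl<T: Ord> SortedVec<T> {
        pub fn new() -> Self {
            Self { items: Vec::new() }
        }

        pub fn insert(&mut self, value: T) {
            // With a broken Ord, binary_search may pick a "wrong" spot and
            // the vec can end up mis-sorted, but the returned index is
            // always in 0..=len, so this can never write out of bounds.
            let idx = self.items.binary_search(&value).unwrap_or_else(|i| i);
            self.items.insert(idx, value);
        }

        pub fn contains(&self, value: &T) -> bool {
            // A hostile Ord impl can make this return wrong answers - a
            // logic bug, which is allowed. What's not allowed is using the
            // "items is sorted" invariant to justify unchecked indexing in
            // an unsafe block, because that invariant is exactly what a
            // broken Ord silently destroys.
            self.items.binary_search(value).is_ok()
        }
    }
    ```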




  • The vulnerability has nothing to do with accidentally logging sensitive information. It’s about crafting a special payload which, when logged, gets glibc to write into memory it isn’t supposed to touch, because it didn’t size its allocation properly. glibc writes past the bounds of its allocation into other memory regions, which an attacker can carefully craft to look however they want.

    Other languages wouldn’t have this issue because

    1. they wouldn’t willy-nilly allocate a raw pointer directly like this, but would build a safer abstraction on top (like a C++ vector), and

    2. they’d have bounds checking wherever the compiler can’t prove an access stays inside valid memory (manually calling .at() in C++, or even better, using a language like Rust, which makes bounds checks the default and unchecked access opt-in via a special method - see the sketch below).
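
    A small sketch of what point 2 looks like in Rust:

    ```rust
    fn main() {
        let buf = vec![0u8; 16];
        let i = 32; // imagine this index came from attacker-controlled input

        // The safe, non-panicking accessor turns "out of bounds" into a
        // value you must handle, instead of a stray read or write:
        assert_eq!(buf.get(i), None);

        // Plain indexing is also bounds checked - `buf[i]` here would panic
        // (a clean crash), not corrupt memory.

        // Unchecked access exists, but it's opt-in and marked unsafe; this
        // line would be UB, and the syntax makes you say so out loud:
        // let _ub = unsafe { *buf.get_unchecked(i) };
    }
    ```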

    Edit: C’s poor security record is well known - it’s the primary motivator for introducing Rust into the kernel. Google and Microsoft both report that around 70% of their security vulnerabilities are memory-safety issues, the curl maintainer has written about how they use sanitizers and best practices and still run into the same issues, and even ubiquitous, security-critical tools like sudo and polkit suffer from them regularly.




  • As a third party, my understanding is that both the implementation and the protocol are really hard, if not next to impossible, to iterate on. Modern hardware doesn’t work the way it did when X was designed, and X assumes a lot of things that made sense in the 90s but don’t now. Despite that, we cram the square peg into the round hole and it mostly works - and as the peg becomes a worse fit, we just cram harder. At this point no one wants to keep working on X.

    And I know your point is that it works and we don’t need to change, but we do need to. New hardware still has to support X - the Asahi folks found bugs in the X implementation that only show up on their hardware, and no one willing to fix them. Wayland and X are vastly different because X doesn’t make sense in the modern day. That breaks things, and a lot of old assumptions no longer hold, which sucks - especially for app devs who relied on those assumptions. But keeping X around isn’t the solution; iterating on Wayland is. Adding protocols to different parts of the stack with proper permission models, moving different pieces of X into different parts of the stack, and so on is a viable long-term strategy, even if it’s painful.








  • Right, but squashed commits don’t scale to large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And when you do have a large PR, reviewing it commit by commit makes a lot of sense - which means keeping the branch history clean.

    Large features that are relatively isolated from the rest of the codebase make perfect sense to build on a separate branch before merging in - you don’t merge half-broken code. Squashing a large feature into one commit throws away any useful history that branch had.
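
    One way to get there, sketched with a made-up branch name:

    ```
    # on the feature branch: tidy messy WIP commits into logical,
    # reviewable ones before asking for review
    git rebase -i main
    git push --force-with-lease origin big-feature

    # on main, after review: merge with the history intact
    # instead of squashing it down to one commit
    git merge --no-ff big-feature
    ```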






  • When you make a project with git, what you’re doing is essentially creating a database that tracks the sequence of changes (the history) that builds up your codebase. You can send this database to someone else (in other words, they can clone it), and they can make their own changes on top. If they want to send changes back, they send you “patches” to apply to your own database (or rather, your own history).
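
    Concretely, that exchange can look like this (the URL and patch filename are made up for illustration):

    ```
    # they clone your database (the whole history)
    git clone https://example.com/you/project.git

    # they commit on top, then export their commits as patch files
    git format-patch origin/main

    # you apply the patches they sent you onto your own history
    git am 0001-their-change.patch
    ```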

    Note: everything here is decentralized. Everyone has the entire history, and everyone sends out the history they want others to have. With many developers involved, this gets to be a hassle - imagine sending patches to everyone, each of them applying your patches to their own tree, and vice versa. It’s a pain to coordinate.

    So in practice what ends up happening is we pick a few repos (often just one) to act as the source of truth. Everyone sends patches to that repo and pulls patches down from it. That’s where code forges like GitHub come in: their job is to host this source-of-truth repo and essentially coordinate which patches are “officially” in the code.
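
    In git terms, the source of truth is just another copy of the database that everyone agrees to sync with (names made up):

    ```
    # register the agreed-upon source of truth as a remote
    git remote add origin https://github.com/someorg/project.git

    # pull down the changes everyone has agreed on...
    git pull origin main

    # ...and send yours up (or open a PR for the forge to merge)
    git push origin my-branch
    ```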

    In practice, even things like the Linux kernel have sources of truth. Linus’s tree is the “true” Linux, each maintainer’s tree works as the source of truth for their own version of the kernel (from which they send changes back to Linus when ready), and so on. Your company might have its own repo for an internal version and send changes up to the maintainers from there as well.

    In practice that means everyone has a copy of the entire repo, but we designate one as the real one for the project at hand. This whole (somewhat convoluted) mess is just a way to decide “where do I get my changes from?”. Sending your changes to everyone doesn’t scale, so in practice we simply agree on whom everyone coordinates with.

    Git is completely decentralized (it’s just a database - and everyone has their own copy), but project development isn’t. Code forges like GitHub just represent that.