• 10 Posts
  • 65 Comments
Joined 2 years ago
Cake day: February 26th, 2024

  • I don’t understand why you’re getting downvoted. While I don’t share your conviction, I do admit it’s certainly a possibility.

    The advantage of doing things that way is that code becomes much more portable. We may finally reach the goal of “write once, run anywhere”, because the AI may write all the platform-specific code.

    It does rest on a big assumption, though: that the AI output is reliable enough. At times people will want to tweak the output, so how are they going to go about that? Maybe if the language is based on Markdown, you can inject snippets of code where necessary (see the sketch below). But if you have to do that too often, such a language will lose its appeal.

    There are a lot of unknowns, but I see why it’s a tempting idea.
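    Purely as a hypothetical sketch of that Markdown idea (no such language exists; the file name, structure and snippet below are all made up): most of the file is a natural-language spec for the AI, and a literal code snippet is injected only where the author needs exact control.

    ````markdown
    # profile-page.ai.md — hypothetical AI-first source file

    Show the signed-in user’s display name and avatar.
    Cache the avatar for one hour. Generate whatever platform-specific
    code is needed for web, iOS and Android.

    The cache key must be exactly this, so it is injected as a literal snippet:

    ```rust
    fn cache_key(user_id: u64) -> String {
        format!("avatar:v2:{user_id}")
    }
    ```
    ````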

  • I would argue that because C is so hard to program in, even the claim to machine efficiency is debatable. Yes, if you have infinite time for implementation, then C is among the most efficient, but then the same applies to C++, Rust and Zig too, because with infinite time any artificial hurdle can be cleared by the programmer.

    In practice, however, programmers have limited time. That means they need to use the tools of the language to save themselves time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t introduce too much overhead. C++, Rust and Zig all qualify here.

    An example is the situation where you need a hash map or B-tree map to implement efficient lookups. The languages with higher abstraction give you reusable, high-performance options out of the box. The C programmer will need to either roll their own, which may not be an option if time is limited, or settle for a lower-performance alternative.
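    As a minimal Rust sketch of what “reusable, high-performance options” means in practice (the data here is just placeholder values), both containers come ready to use in the standard library:

    ```rust
    use std::collections::{BTreeMap, HashMap};

    fn main() {
        // Hash map: O(1) average-case lookups, straight from the standard library.
        let mut word_counts: HashMap<&str, u32> = HashMap::new();
        for word in ["apple", "banana", "apple"] {
            *word_counts.entry(word).or_insert(0) += 1;
        }
        assert_eq!(word_counts.get("apple"), Some(&2));

        // B-tree map: ordered keys, O(log n) lookups, cheap range queries.
        let mut scores: BTreeMap<u32, &str> = BTreeMap::new();
        scores.insert(10, "low");
        scores.insert(90, "high");
        for (key, label) in scores.range(50..) {
            println!("{key}: {label}"); // prints "90: high"
        }
    }
    ```

    In C the equivalent is a third-party library, a hand-rolled implementation, or a linear scan over an array, which is exactly the time-versus-performance trade-off described above.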

  • I found the title of that section slightly triggering too, but the argument they lay out actually makes sense. Consistency helps you achieve correctness in large codebases, because it means you don’t have to reinvent what “correct” means over and over in separate pockets of the codebase. Such pockets also make incremental improvements to the codebase harder and harder, so they do come back to bite you.

    Your example of vendors doesn’t relate to that, because you don’t control your vendor’s code. But you do control your organisation’s.

  • You’re ignoring the fact that for many projects it does work.

    It only needs to be perfect if you want to run 100% of Node.js software unaltered. While that may be a lofty goal, it’s also an infeasible one.

    That doesn’t mean imperfect support is futile, though. By your logic, Bun has no right to exist because it only supports Node.js APIs, doesn’t have noteworthy APIs of its own, and its Node.js support isn’t perfect either. Yet it seems to be at least as successful as Deno is.

    Or, for an example in a different domain: your argument would imply that a project like WINE shouldn’t exist, because it doesn’t have perfect compatibility with Windows and it disincentivizes development of native Linux games. Yet it is largely thanks to WINE that Valve has been able to make the Steam Deck and that Linux gaming is finally taking off.

    I think what your argument fails to take into account is that you need a significant number of users to make any impact on the market. Many users have legacy requirements that they can’t throw out overnight, so you have to support those legacy environments. Even with imperfect legacy support you can serve those users, especially if they’re willing to make a few changes here and there. But with no legacy support at all, you get no users except those with niche greenfield requirements.

    “So instead of trying to replace NodeJS or offering an upgrade path for existing Node projects, incentivize formation of ecosystem around Deno”

    They are incentivizing their own ecosystem. That’s what jsr.io is all about. But the world isn’t black and white. They can do more than one thing.