Rust feels like entirely the wrong target for that sort of criticism, especially regarding “energy and resource intensity”. Rust is well-known to be comparable to C in its efficiency.
I haven’t told you to keep calm. I’m just confused about you repeating the same points, in the same words, over and over, even after being told that you don’t have your facts correct.
I’m not saying you can’t learn or talk about other languages; I’m confused by the mismatch between your posts criticizing people for promoting newer tech stacks and the ones where you seem to be promoting newer tech stacks yourself.
25 years of experience is certainly enough to have strong opinions, but until your last comment I had the impression that you had a year or less of experience in C, hence my question.
If you’re thinking of this post, it’s by the same author: https://snac.bsd.cafe/modev/p/1727478537.713206
That’s not a confession, it’s a condemnation. It’s not your fault that universities generally don’t teach this stuff. (I think I had one lab session wherein we used valgrind.)
Why don’t you answer any of my questions instead of telling me to join your club?
Hopefully you only chmod’d your own systems. Early in my career, I worked on a project wherein we gave a contracting company root access to a computer they could use to test the software they were writing for us. One morning, they sent us a message saying they couldn’t log in. We looked at the computer and discovered it wouldn’t boot. Turned out someone on the remote team had chmod 777’d the entire filesystem. Of course we locked down their access after that.
The education system (universities, colleges, courses) uses the “modern” development stack.
Hahahahahaha!
Only a very few colleges and courses specializing in a very narrow field, such as embedded devices, can teach you the C language.
snort BWAHAHAHAHA!
the “dying C”
[wheezing]
And by doing this they are trying to hide the C language.
[incredulous snort]
And the community is kind
[wistful sigh] I truly wonder what it would be like not to know anything about Linus Torvalds. I sometimes wish I didn’t know about Richard Stallman!
And that it is unlikely that C will be able to replace anything in the near future.
I’m sure you wrote this backwards.
Why do you keep posting this exact same rant? I see that some posts are in different Lemmy communities and you’ve posted it at least once on hacker news, but you also posted it to this same community already (https://snac.bsd.cafe/modev/p/1727338529.193499) and, although I can’t find it now, I remember you posting it months ago, too.
Several of your posts that aren’t about how C is being “suppressed” (which the responses to your post have repeatedly demonstrated isn’t true) are about how you, personally, are still learning C and want more resources to learn it. And now you’re also posting about Nelua and Nim. This is wild to me! Why do you have such strong opinions about a language that you’re still learning? If you’re that passionate about C and believe that people should use it instead of newer languages, why do you care about Nim or Nelua? If you’re just trolling, why do you engage relatively patiently in the comments? And whatever your goal is, why do you keep reposting the same rants, especially this one that’s now quite old?
On the one hand, you’re right, C is waaaay higher-level than many people realize, and the compiler and processor do wild things to make code go faster. On the other hand, the C abstract machine is close enough to how computers “really work” to give you a fairly useful mental model, in a way that no other mainstream high-level language can.
Even so, if you want to know how low-level code works, you should probably just learn one or more actual assembly languages and write a few small programs that way.
C has another advantage, though: firmware, OS kernels, and virtual machines (other than browser JS engines) are still almost entirely written in C. So while it doesn't accurately teach you how processors work, it is relevant if you want to understand the system software that mediates between the hardware and high-level software.
It’s probably not “provable” one way or the other, but I’d like to see more empirical studies in general within the software industry, and this seems like a fruitful subject for that.
Cool! Oracle, a company famous for making good-will decisions, and open to being “urged” into doing the right thing. 🙄
I suppose the open letter is a nice gesture, and I hope that the petition to cancel the trademark succeeds.
For what it’s worth, Ada and Spark are listed separately in the Wiki article on dependent typing. Again, though, I’m not a language expert.
Whatever you want to call them, my point is that most languages, including Rust, don’t have a way to define new integer types that are constrained by user-provided bounds.
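To illustrate the kind of thing I mean, here's a rough sketch of what you end up hand-rolling in Rust today. The Bounded type is hypothetical (not a real library), and the key limitation is that the bounds are only enforced at run time when you call new, not by the type system itself:

```rust
// Hypothetical sketch: approximating a user-bounded integer type in Rust.
// The compiler can't reason about the bounds; it's all runtime checks.
#[derive(Debug, Clone, Copy)]
struct Bounded<const MIN: i64, const MAX: i64>(i64);

impl<const MIN: i64, const MAX: i64> Bounded<MIN, MAX> {
    fn new(value: i64) -> Option<Self> {
        // The bounds check happens here, at run time.
        (MIN..=MAX).contains(&value).then_some(Self(value))
    }
}

fn main() {
    type Percent = Bounded<0, 100>;
    println!("{:?}", Percent::new(42));  // Some(Bounded(42))
    println!("{:?}", Percent::new(101)); // None
}
```

The point is that nothing in the language itself knows a Percent can only hold 0 through 100; every guarantee you get comes from the runtime check you wrote yourself.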
Dependent types, as far as I’m aware, aren’t defined in terms of “compile time” versus “run time”; they’re just types that depend on a value. It seems to me that constraining an integer type to a specific range of values is a clear example of that, but I’m not a type theory expert.
It sounds like you’re talking about dependent typing, then, at least for integers? That’s certainly a feature Rust lacks that seems like it would be nice, though I understand it’s quite complicated to implement and would probably make Rust compile times much slower.
For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check.
That's exactly what I described above. By default, trapping on overflow/underflow is enabled for debug builds and disabled for release builds. As I said, I think this is sensible behavior. But in addition to per-operation explicit handling, you can also turn global trapping on or off in your build profile.
It depends on what kind of errors you're talking about. Suppose you're implementing retries in a network protocol: errors occur pretty regularly, and handling them will take up a nontrivial fraction of your runtime.
By Ada getting it right, I assume you mean throwing an exception on any overflow? (Apparently this behavior was optional in older versions of GNAT.) Why is Ada’s preferable to Rust’s?
In Rust, integer overflow panics by default in debug builds but wraps silently in release builds. Optionally, though, you can specify wrapping, checked (returning an Option), or unchecked behavior for a specific operation, so that the optimization level doesn't affect the behavior. This makes sense to me: the unoptimized default is the same as Ada, the optimized default is not UB, and you can still control the behavior explicitly when necessary.
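To make that concrete, here's a minimal sketch using the standard-library per-operation methods (the build-profile switch mentioned above is Cargo's overflow-checks setting):

```rust
fn main() {
    let x: u8 = 250;

    // Default behavior: `x + 10` panics in a debug build and wraps to 4 in a
    // release build (unless you flip `overflow-checks` in your Cargo profile).

    // Per-operation behavior, independent of build settings:
    let wrapped = x.wrapping_add(10);                 // always wraps: 4
    let checked = x.checked_add(10);                  // Option: None on overflow
    let (value, overflowed) = x.overflowing_add(10);  // (4, true)
    let clamped = x.saturating_add(10);               // clamps to u8::MAX: 255

    println!("{wrapped} {checked:?} {value} {overflowed} {clamped}");
}
```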
There’s also a massive tradeoff for when the error condition actually occurs. If an exception does get thrown and caught, that is comparatively slowwww.
I also hope that some of the people reading this realize that OP is also the person posting all of the “stop trying to suppress C” posts.