

Oof, yeah, those count. The fact that CMake was best-in-class when I wrote C++ professionally was…awful.


I have managed to mostly avoid needing to code in either language, but my strong inclination is to agree that they are indeed hacks.


There’s a proliferation of dynamically and/or softly typed languages. There are very few, if any, truly untyped languages. (POSIX shells come close, though internally they have at least two types, strings and string-arrays, even if the array type isn’t directly usable without non-POSIX features.)
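To make the “dynamic is not untyped” distinction concrete, here’s a minimal sketch in Python (just a convenient archetypal dynamically typed language): values carry their types at runtime even though variables declare none.

```python
x = "1"
print(type(x))  # <class 'str'> -- the value knows its own type at runtime
x + 1           # TypeError: can only concatenate str (not "int") to str
```

A truly untyped language would have no basis for raising that error at all.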


Yes. Types are good. Numeric operations have specific hardware behavior that depends on whether you’re using floating-point or integer arithmetic. Having exclusively floating-point semantics is wildly wrong for a programming language.
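For instance, a quick Python sketch of the difference (Python’s ints are arbitrary-precision integers while its floats are IEEE-754 doubles, so the two semantics diverge visibly):

```python
big = 10**17

# Integer arithmetic is exact:
print(big + 1 - big)                   # 1

# The same computation in floating point silently loses the 1,
# because 1.0 is below the rounding granularity (ulp) at 1e17:
print(float(big) + 1.0 - float(big))   # 0.0

# And the classic decimal-rounding surprise:
print(0.1 + 0.2 == 0.3)                # False
```

A language whose only numeric type is a double (as JavaScript’s was before BigInt) gives you the second behavior for every computation.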
I think you’re misunderstanding that paragraph. It’s specifically explaining how LLMs are not like humans, and one way is that you can’t “nurture growth” in them the way you can for a human. That’s not analogous to refining your nvim config and habits.
Exactly: those are tight feedback loops. Agents are also capable of reading docs and source code prior to generating new function calls, so they benefit from both of the solutions that I said people benefit from.
As an even more obvious example: students who put wrong answers on tests are “hallucinating” by the definition we apply to LLMs.
making the same mistakes
This is key, and I feel like a lot of people arguing about “hallucinations” don’t recognize it. Human memory is extremely fallible; we “hallucinate” wrong information all the time. If you’ve ever forgotten the name of a method, or whether that method even exists in the API you’re using, and started typing it out to see if your autocompleter recognizes it, you’ve just “hallucinated” in the same way an LLM would. The solution isn’t to require programmers to have perfect memory, but to have easily-searchable reference information (e.g. the ability to actually read or search through a class’s method signatures) and tight feedback loops (e.g. the autocompleter and other LSP/IDE features).
This seems like it doesn’t really answer OP’s question, which is specifically about the practical uses or misuses of LLMs, not about whether the “I” in “AI” is really “intelligent” or not.


Fair, but it’s one that the typical tools for finding bugs (tests and static analysis) cannot actually help with.


For what it’s worth, I agree with you about branches, and there are various ongoing discussions about how to make working with branches more convenient. I use an experimental feature called “advance branches” that makes it mostly fit my workflows, and the other benefits of jj are sufficient that I haven’t switched back to git.
I create log files of runs, temporary helper scripts, build output, etc. in my working copy all the time.
The solution to this is to just have a more aggressive .gitignore. But also, note that the “working copy commit” isn’t generally something you want to push or keep; think of it more like a combination of the git staging index and an automatic stash.
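For example, a hypothetical sketch of what I mean by “more aggressive” (the actual patterns depend on your project’s artifacts):

```
# .gitignore -- keep scratch artifacts out of automatic snapshots
*.log
/build/
/scratch/
```

jj respects .gitignore, so the working-copy commit never snapshots the ignored files.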
Apparently the JS name was selected and announced in partnership with Sun from the very beginning, and Sun held the trademark on both Java and JavaScript up until the acquisition by Oracle. I had no idea, but that makes perfect sense.
Oracle? Oracle owns Java, not JavaScript.
Edit: mea culpa! Sun owned both!


Do you mean Dan Luu, or one of the studies reviewed in the post?


Yeah, I understand that Option and Maybe aren’t new, but they’ve only recently become popular. IIRC several of the studies use Java, which is certainly safer than C++ and is technically statically typed, but in my opinion doesn’t do much to help ensure correctness compared to Rust, Swift, Kotlin, etc.


I don’t know; I haven’t caught up on the research over the past decade. But it’s worth noting that this body of evidence is from before the surge in popularity of strongly typed languages such as Swift, Rust, and TypeScript. In particular, mainstream “statically typed” languages still had null values rather than Option or Maybe.
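To make the Option/Maybe point concrete, here’s a minimal sketch in Python (names invented for illustration), with Optional playing the Option role: the possible absence is part of the type, so a checker like mypy makes you handle it before use.

```python
from typing import Optional

USERS = {1: "alice"}

def find_user(user_id: int) -> Optional[str]:
    # Absence is declared in the signature, not hidden in an implicit null.
    return USERS.get(user_id)

name = find_user(42)
# name.upper()            # rejected by mypy: "name" may be None here
if name is not None:
    print(name.upper())   # fine: the None check narrows the type
```

With a pervasive implicit null, the commented-out line would typecheck and crash at runtime instead.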


Note that this post is from 2014.


Partly because it’s from 2014, so the modern static typing renaissance was barely starting (TypeScript was only two years old; Rust hadn’t hit 1.0; Swift was mere months old). And partly because true evidence-based software research is very difficult (how can you possibly measure the impact of a programming language on a large-scale project without having different teams write the same project in different languages?) and it’s rarely even attempted.


Notably, this article is from 2014.
I don’t necessarily love Python either, but it sounds like your perspective is a little limited.
You happen to have almost exclusively used languages that are syntactically descended from a common ancestor, Algol. If you had learned a LISP descendant, or another non-Algol language such as ML, Prolog, APL, or Haskell, you’d probably be less surprised by Python not following the Algol-ish syntax.
As another commenter mentioned, this is basically just a result of Python’s historical development. Explicit types are fully optional, and originally did not exist: the type-annotations idea wasn’t even proposed until 2014, over two decades (!!) after Python’s initial release, and even that was just the initial theoretical groundwork, not an implementation of anything.

To introduce explicit static typing into a language that is dynamically or implicitly typed, without breaking legacy code, requires gradual typing, an idea that is relatively recent in the history of programming languages, and there are different approaches to it. The TypeScript approach may seem like the obvious “right” way now that TypeScript has become so dominant in the JS ecosystem, but it was in no way obvious that TypeScript would be so successful back when it was introduced, which was right around when Python started developing its gradually-typed system.

So Python took a different approach: rather than designing a new language as a superset of the existing one, and writing a compiler to typecheck the new language and strip the type annotations, Python added a syntax for type annotations to the language itself, so that the code you write is still the code that actually gets interpreted, and left actually enforcing the types to separate tools rather than the Python interpreter.

Personally, with the benefit of hindsight, and as someone who has not used Python much and prefers Rust-style static typing, I think the TypeScript way is better. But I don’t think Python is likely to evolve in that direction.
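To illustrate the trade-off Python made, here’s a minimal sketch: the interpreter stores the annotations as plain data and never enforces them, so violations only surface when you run a separate checker such as mypy or pyright.

```python
def double(x: int) -> int:
    return x * 2

# CPython happily runs code that violates the annotations:
print(double("ab"))            # "abab" -- no runtime type error
# The annotations are just metadata attached to the function:
print(double.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}
```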