Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 4 Posts
  • 202 Comments
Joined 2 years ago
Cake day: June 23rd, 2023

  • A pet project… A web novel publishing platform. It’s very fancy: Uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it’s built on Vue 3 (with Vueuse and PrimeVue) and Python 3.13 on the backend using FastAPI.

    The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don’t know TipTap all that well, and I really wanted to see what AI code-assist tools are capable of.

    I’ve evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it’s such shit; don’t even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

    • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
    • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (vite + TypeScript + oh-god-what-else-now haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f’ing reason. They’ll randomly change your variables around too, so you really have to keep an eye on them. It’s hard to describe the types of abominations they’ll create if you let them, but here’s an example: in a bash script I had something like SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say, “You have to be careful,” I mean it!
    • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven’t found it to make random hallucinations/total bullshit changes the way gpt5 does.
    • gpt-oss:20b: Surprisingly good! Also, faster than you’d think it’d be, even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It’s like 90% of the way there. A few generations of PC hardware/GPUs and we won’t need the cloud anymore.
    • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b so why bother? They’re all about the same (for my use cases).
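    That $BASE_PATH regression is worth spelling out, because it’s the whole class of bug in miniature. A minimal Python sketch of the same failure mode (the paths and prefix are made up):

```python
import os

BASE_PATH = "/opt/myapp"  # hypothetical install prefix

# Original line: the config path is anchored under the install prefix.
somefile_ok = os.path.join(BASE_PATH, "etc/somepath/somefile")

# What the model "helpfully" rewrote it to: the prefix silently dropped,
# so the script now points at the system /etc instead of the app's copy.
somefile_bad = "/etc/somepath/somefile"

print(somefile_ok)   # /opt/myapp/etc/somepath/somefile
print(somefile_bad)  # /etc/somepath/somefile
```

    The edit looks plausible in isolation, which is exactly why it slips through review if you aren’t watching.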

    For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.



  • I’m having the opposite experience: It’s been super fun! It can be frustrating when the AI can’t figure things out, but overall I’ve found it quite pleasant when using Claude Code (and ollama gpt-oss:120b for when I run out of credits haha). The codex extension and the entire range of OpenAI gpt5 models don’t provide the same level of “wow, that just worked!” or “wow, this code is actually well-documented and readable.”

    Seriously: If you haven’t tried Claude Code (in VS Code via that extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.

    Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁





  • WTF? Have you ever been in a data center? They don’t release anything. They just… Sit. And blink lights while server fans blow and cooling systems whir, pumping water throughout.

    The cooling systems they use aren’t that different from any office building’s. They’re just bigger, beefier versions. They don’t use anything super special. The PFAS they’re talking about in this article are the same old shit that’s used in any industrial air conditioner.

    For the sake of argument, let’s assume that a data center uses 10 times more cooling than an equivalently sized office building. I don’t know about you, but everywhere I’ve seen data centers, there are loads and loads of office buildings nearby; far more than 10 for every data center.

    My point is this: If you’re going to bitch about PFAS and cooling systems, why focus on data centers (or AI specifically) when there are all these damned office buildings? Instead, why don’t we talk about work-from-home policies, which would be an actual way to reduce PFAS use.
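    The back-of-envelope math above can be made explicit. All numbers here are assumptions for illustration (the 10x multiplier is granted for the sake of argument; the 50-to-1 building ratio is a guess), not measurements:

```python
# Back-of-envelope comparison of total cooling demand.
DC_COOLING_MULTIPLIER = 10   # assume a data center needs 10x an office's cooling
OFFICES_PER_DATACENTER = 50  # assume 50 office buildings per data center

office_cooling_total = OFFICES_PER_DATACENTER * 1       # one office = 1 unit
datacenter_cooling_total = 1 * DC_COOLING_MULTIPLIER    # one data center

print(office_cooling_total)      # 50
print(datacenter_cooling_total)  # 10
# Even granting the generous 10x assumption, nearby offices account for
# 5x the cooling (and hence refrigerant inventory) of the data center.
```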

    This article… Ugh. It’s like bitching that electric car batteries can catch fire while pretending that regular cars don’t have a much, much higher likelihood of catching fire, and that there are several orders of magnitude more of them.

    Are PFAS a problem? Yes. Are data centers anywhere near the top 1,000 targets for non-trivially reducing their use? No.

    Aside: This is just like the articles bitching about data center water use… Data centers recycle their water! They take in a great big intake while they’re being built, but after that they only need trivial amounts of water.








  • As an information security professional and someone who works on tiny, embedded systems, knowing that a project is written in Rust is a huge enticement. I wish more Rust projects advertised this fact!

    Benefits of Rust projects—from my perspective:

    • You don’t have to worry about the biggest, most common security flaws. Rust projects can still have security flaws (anything can), but certain categories of flaws are much less likely.
    • Super easy to build stuff from scratch. Rust’s crates ecosystem is fantastic! Especially in the world of embedded where it’s a godsend compared to dealing with C/C++ libraries.
    • It’s probably super low-overhead and really fast (because Rust stuff just tends to be like that, due to the nature of the language and the special way the borrow checker bitches at you when you make poor programming choices haha).
    • It’s probably cross-platform or trivially made cross-platform.

  • Also, stuff that gets mislabeled as AI can be just as dangerous, especially when you consider that the AI detection might use such labels to train itself. So someone whose face is weirdly symmetrical might get marked as AI and then have a hard time applying for jobs, purchasing things, getting credit, etc.

    I want to know what counts as AI. Using AI to remove the background of an image, or just to remove someone standing in the background, is technically generative AI, but that’s something you can do in any photo editor anyway with a bit of work.


  • Meh. Nothing in this article is strong evidence of anything. They’re only looking at a tiny sample of data and wildly speculating about which entry-level jobs are being supplanted by AI.

    As a software engineer who uses AI, I fail to see how AI can replace any given entry-level software engineering position. There’s no way! Any company that does that is just asking for trouble.

    What’s more likely is that AI is making senior software engineers more productive, so they don’t need to hire more developers to assist them with trivial/time-consuming tasks.

    This is a very temporary thing, though. As anyone in software can tell you: Software only gets more complex over time. Eventually these companies will have to start hiring new people again. This process usually takes about six months to a year.

    If AI is causing a drop in entry-level hiring, my speculation (which isn’t as wild as the article’s, since I’m actually there on the ground using this stuff) is that it’s just a temporary blip while companies work out how to take advantage of the slightly-enhanced productivity.

    It’s inevitable: They’ll start new projects to build new stuff because now—suddenly—they have the budget. Then they’ll hire people to make up the difference.

    This is how companies have worked since the invention of bullshit jobs. The need for bullshit grows with productivity.