It’s a broad topic. Every time I see some new AI-coded project linked in the selfhosted community, it’s kinda shit… It has hallucinated installation instructions. Wildly exaggerated claims of what it’s supposed to do… Sometimes it looks okay, but some buttons don’t do anything, and when I look at the code, everything is more of a stub. Some projects have ridiculous security issues, like someone finding a master key buried in the code, and of course none of the “developers” ever noticed, because no one ever had a look at the code…
You’re somewhere in the same territory. Maybe you’re the one who gets it applied properly. But once I notice the tell-tale signs of vibe-coding, I start looking at a project with the prejudice shaped by my prior experience. And I tend to be right most of the time.
But with that said, I don’t think it’s healthy to have a war over it, ban people and yell at each other. The most I want is transparency. I think all software projects should just disclose if and how they use AI, and to what extent. Then the users can make up their own minds.
And with cryptography code… isn’t that a bit dangerous? From my own experience, AI models learn a lot from example code, the standard documentation of libraries, Wikipedia articles and such… and then generate responses closer to that than to genuinely new thought. But(!) all those examples, tutorials and boilerplate use a lot of shortcuts to keep the explanation simple. Shortcuts that weaken security. I wouldn’t be surprised if your AI reproduces exactly that, and casually skips the steps to prepare the numbers properly, or the necessary follow-up steps, if they never appeared in the Wikipedia example code. And I’ve seen a lot of wrong advice on StackOverflow and Reddit, so you’d better hope it didn’t internalize that either. There are some fairly common myths about security and cryptography details out there, and I never know whether your average Claude learned more from Reddit discussions or from computer science literature…

And you probably used Claude so you could skip reading the computer science books as well (and skip having a really close look at the code), or you would have just typed it out yourself. So I’d expect your software to be roughly as sound as newbie code, up to the average of the projects out there on GitHub that your AI learned from. Not any better than that.
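To make the “shortcuts” point concrete, here’s a made-up illustration (my own, not taken from any of the projects in question) of the kind of thing tutorial code does all the time: deriving an encryption key by hashing a password once, omitting the salt and the deliberately slow key-derivation function that real-world code needs. Both variants use only Python’s stdlib:

```python
import hashlib

# The "tutorial" shortcut: derive a key by hashing the password once.
# Fast to compute -- which is exactly the problem, because an attacker
# can brute-force guesses just as fast, and identical passwords always
# yield identical keys (no salt).
def key_tutorial(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

# What the shortcut omits: a per-user random salt and a deliberately
# expensive KDF (PBKDF2 here, straight from the stdlib).
def key_hardened(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

# Same password, very different attacker cost.
salt = b"\x00" * 16  # in real code: os.urandom(16), stored next to the hash
k1 = key_tutorial("hunter2")
k2 = key_hardened("hunter2", salt)
print(len(k1), len(k2))  # both are 32-byte keys; only one is expensive to crack
```

Both functions “work”, both produce a plausible-looking 32-byte key, and only one of them survives contact with an attacker. That’s precisely the kind of difference a model trained mostly on tutorials won’t flag.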

Did you do formal proofs or verification? I had a quick look at the repos and couldn’t find any.