

… don’t forget about the backups.
And if your major issue is putting things in wrong locations… Maybe learn about some abstraction layers, so next time you’re able to just move it, instead of tearing it down?
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.


Maybe don’t learn all of it at the same time, or you’re bound to get confused and mix up whether some concept was from Dart, Python or one of the several frameworks.


Just read the robots.txt and obey the rules. Also set your user agent string properly. We’ve had crawlers on the internet forever, and robots.txt is the long-accepted way for website owners to give or revoke consent. Either you match a Disallow directive and have to stop, or you’re completely fine to scrape.
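As a sketch of that rule (the bot name and paths here are invented for illustration), Python’s standard library can evaluate a robots.txt directly:

```python
# Hypothetical example: evaluate robots.txt rules before crawling.
# "ExampleBot" and the paths below are made up.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /private/",
    "User-agent: *",
    "Disallow:",
])

# A matching Disallow directive means stop; anything else is fair game.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))    # True
```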


Thanks for the link! But I’m afraid it doesn’t tell me much. a) FreeBSD isn’t even on the list, so I don’t have any numbers to compare it to. And b) there are things like survivorship bias; looking at raw counts like this is literally the textbook example of how to do statistics the wrong way. You have to approach it the proper way around. For all we know from those numbers, Linux could be the best battle-tested OS in the world. I mean, they fixed three times as many vulnerabilities as Microsoft did across all of their products?!


Sometimes I wish people would back up their factual claims with numbers and studies.
Also: FreeBSD phone, when??


Thanks. Sadly I can’t even get the latest version to work. It does find the other peer and loads the chat interface, but doesn’t open a data channel, so it says “not connected” and shows an error popup every time I try to send a message. And I’ve spent enough time debugging it for now.
Just some general words of wisdom: I think software projects are first and foremost about focus. I don’t really know what you’re trying to do here.
If it’s a cryptography library, then the focus is about right. You first need to lay down the design properly, and factor in advanced techniques like formal proofs from the start. After that you need to write the actual code, and also make sure it aligns with your testing. I mean, it’s fairly common to make mistakes while writing code, or to have bugs… And any of those can render the more formal methods useless. Like that one time the Debian OpenSSL package crippled its random number generator, so the keys it produced were predictable… The algorithms were 100% correct, they were just used in a wrong way, so most of the encryption was futile. Things like that require an equal amount of focus, if not more: since we already know how the Double Ratchet works, the important part is to implement it correctly and use it correctly. That deserves a massive amount of focus (and effort). It’s also the major part of a security audit of a software project as a whole.
We also have things like sidechannel-attacks, which aren’t covered. But I think that’s a minor thing with what we’re looking at.
And if you’re trying to develop a chat app, your focus probably needs to be on making it work first. Make it connect reliably and across a multitude of devices; cryptography is pretty much dispensable at that step. Then focus on the UX. And make sure the encryption can’t simply be bypassed, for example because you don’t use script nonces and everyone in the chat can inject JavaScript that sidesteps it entirely.
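To illustrate the script-nonce point (my own sketch, not code from the project): the server generates a fresh nonce per response, puts it into the Content-Security-Policy header, and only script tags carrying that exact nonce are allowed to run, so a script injected through a chat message gets blocked:

```python
# Hedged sketch of per-response CSP script nonces; init() is a placeholder.
import secrets

def render_page() -> tuple[str, str]:
    nonce = secrets.token_urlsafe(16)  # fresh, unguessable, per response
    # Only scripts carrying this exact nonce may execute; injected
    # <script> tags from chat messages won't have it and are blocked.
    csp = f"script-src 'nonce-{nonce}'; object-src 'none'; base-uri 'none'"
    html = f'<script nonce="{nonce}">init();</script>'
    return csp, html

csp_header, body = render_page()
print(csp_header.startswith("script-src 'nonce-"))  # True
```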
Think about metadata and if your software product wants to address that. You could be doing encrypted messages but all kinds of third parties know who is talking to whom… Make sure you do what your users expect!
And I think that’s also the reason for some of the downvotes here. You have a narrow focus on the formal proof of your encryption algorithm, while your audience probably expects a working chat app. For all they care, it could be entirely unencrypted in the alpha version, with encryption coming later. We as users need something that works in the first place. We want to know what happens to our metadata, and whether there are security vulnerabilities in the software. And once all of that is in place, then we start to worry about the specifics of the end-to-end encryption.
Probably also related to the AI-slop argument. I don’t really know what shaped your focus. But it must look to your audience like you’re deep in some singular rabbit hole, because you write about formal proofs a lot. But then there’s this huge disparity with what your audience assumes you’re doing, or what you have to show off. Just my opinion. But it’s kinda like that for me. You write about how great AI assisted coding is, and where it led you. But then I try to use your software. And it doesn’t even connect. And that really shapes my first impression of it all, in a very negative way. I mean… If we hadn’t talked, I would have just assumed your cryptography is on the same level as your code to do the peer connections. And that wasn’t a good first impression.
I think the added benefit of an OpenWRT router is that you get 3 more ports (for your TV, PlayStation and PC), plus a Wi-Fi network. And it’s really hard to break it. But a mini PC with OPNsense will of course be more powerful. And some more advanced things have been notoriously difficult to set up in OpenWRT; maybe OPNsense does those a bit better.


I dislike it. Usually I’d use packages from my Linux distribution, or package it myself and maybe upstream the effort if my distro has a user repository. This way, it comes down to everybody downloading random files from the internet and executing them, which is specifically what every Linux tutorial instructs you not to do. Plus there are no updates, no security, no version control or transparency. And it’s not licensed in any free way, so I can’t fix it or adapt it to my liking, and I can’t help you write better Python code…
But it’s your software project. You’re perfectly fine to do whatever you want with it. And it’s certainly commendable to write software, whether you do it for yourself, or put it out there in some way.


To give some perspective: BitTorrent was released in 2001. So in the 90s, you’d be looking at some precursor to that. And the first CD recorder to cost less than $1000 was sold in 1995. Before that, they’d cost something like a car.
We definitely shared and copied a lot of floppy disks back then. And music on tapes.


Shouldn’t the upgrade also update the bootloader’s default entry to the new kernel? The way I’ve been doing it is apt update && apt dist-upgrade. And then reboot once every 1 to 2 years if I feel like it, am bored, or there are all these news articles about a severe bug in the kernel.
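A quick way to check whether such a reboot is actually overdue is to compare the running kernel against the newest installed image. This is a hedged sketch with invented Debian-style file names:

```python
# Hypothetical helper: pick the newest /boot/vmlinuz-* image by version so it
# can be compared against `uname -r`. The file names below are made up.
import re

def newest_kernel(images: list[str]) -> str:
    def version_key(name: str) -> list[int]:
        # Turn "vmlinuz-6.1.0-21-amd64" into [6, 1, 0, 21, 64] for comparison.
        return [int(x) for x in re.findall(r"\d+", name)]
    return max(images, key=version_key)

installed = ["vmlinuz-6.1.0-18-amd64", "vmlinuz-6.1.0-21-amd64"]
print(newest_kernel(installed))  # vmlinuz-6.1.0-21-amd64
```

On a real system you’d glob `/boot/vmlinuz-*` and compare the result against `os.uname().release`.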
Syncthing or Nextcloud. There’s a bunch of Linux sync software: https://awesome-selfhosted.net/tags/file-transfer--synchronization.html
Traditionally, you’d just put it on a NFS volume and be done with it. Or make it a boring plain old independent laptop with nightly backups configured, if your users always work from the same machine and don’t like… switch to a different computer in the middle of a task.
I have a plain port forward and WireGuard, without any tunnel through third parties.


@xoron@programming.dev Does the currently deployed version on chat.positive-intentions.com work? I tried to connect, and tried some more, but somehow it never connects. I’m following the procedure in the YouTube video. It reloads something on the page intermittently but never connects to the other browser.
And already after opening the page, it says: “My peer ID is: xy”
But then immediately “peer disconnected” and “peer closed: undefined”. Even before I do anything. Is it supposed to say that?
I tried several combinations of Chromium 147 and LibreWolf 150. And whatever Vanadium is on my phone. I tried phone-computer and two different browsers on the same computer. Is that an issue? Other PeerJS applications work just fine.
And does the QR scanner work? It opens the camera and scans the QR code just fine, but then reloads and doesn’t put any ID into the field?! So I guess that’s broken and I need to copy-paste it?
Edit: Your file demo seems to work better. It at least gets to the point where it tries to open a connection. For some reason it also fails (ICE failed, your TURN server appears to be broken, see about:webrtc for more details). But at least that demo gets far enough to listen to connections and try to initialize them.


Uh, sorry, your code is a bit difficult to read. There seems to be one implementation in the ‘src’ directory, which is referenced in your ProVerif pi code. But then there’s another one(?) in the ‘signal-protocol-core’ directory, which seems to be the one that’s actually built?
And how did you arrive at those ProVerif files? Do they come from your Rust code? How? And how do you make sure they relate to your code? I mean, for all I know they could contain some correct design while your code does something else… I’m not really an expert at this, but they seem (to me) to just appear in some commit, and I don’t really get how they relate to the Rust code, or how they came to be.
And then it’s a bit difficult to tell for me whether your Chat uses the cryptography code from the ‘cryptography’ repository. Or the one from the ‘signal-protocol’ repository. It seems to load both?! But your own AI security audit flagged a lot of issues with your ‘cryptography’ repository. I can’t tell if that’s still up-to-date information but there was some report with mostly exclamation marks and red crosses in it. And a recommendation not to do it this way.
While at it, I had a look at the browser’s developer console, and you have a lot of JavaScript warnings and errors there, which I guess isn’t good?! Another sidenote: if I were developing a secure and private messenger, I’d skip all the requests to Google Fonts, AWS, jsDelivr, third-party JS CDNs, analytics… It directly connects to YouTube and another analytics service which gets broad permissions. The infrastructure isn’t entirely controlled by you either; for example, the signalling server is the default free one. All of that isn’t great for privacy. Plus your Content Security Policy has way too many asterisks in it, with external domains, and domains you control but with debugging stuff on them. And I don’t think you put any further restrictions on what JavaScript can be loaded or injected, other than the CSP?!
And hax just translates the code and is supposed to do a bit of type-checking, e.g. to check whether your code generates values of the correct length. It doesn’t currently do any theorems or verification regarding the cryptography, does it? I’m not sure where to look.
Sorry I’m not exactly a security researcher… Maybe my layman’s audit is shit… But I think there’s quite some stuff going on which pretty much renders any verification of a component irrelevant. I could be wrong though. But I’d still be interested to hear how the code relates to the ProVerif files, and what kind of assurance there is, they’re the same.


“diving into cryptography we have formals proofs and verification we can use”
Did you do formal proofs or verification? I had a quick look at the repos and I can’t find them.


It’s a broad topic. Every time I see some new AI-coded project linked in the selfhosted community, it’s kinda shit… I’ve had hallucinated installation instructions, very exaggerated claims about what it’s supposed to do… Sometimes it looks okay, but some buttons don’t do anything, and when I look at the code, everything is more of a stub. Some projects have ridiculous security issues, like someone finding a master key buried in the code, and of course none of the “developers” ever noticed, because no one ever had a look at the code…
You’re somewhere in the same territory. Maybe you’re the one who applies it properly. But once I notice the tell-tale signs of vibe-coding, I start looking at a project with the prejudice shaped by my prior experience. And I tend to be right most of the time.
But with that said, I don’t think it’s healthy to have a war over it, ban people and yell at each other. Most I want is transparency. I think all software projects should just disclose if and how they use AI, to what extent. And the users can make up their mind.
And with cryptography code… Isn’t that a bit dangerous? From my own experience, AI models tend to learn from a lot of example code, the standard documentation of libraries, Wikipedia articles and such, and then generate responses closer to that than to completely new thoughts. But(!) all these examples, tutorials and boilerplate snippets use a lot of shortcuts to explain things in simpler terms, shortcuts that weaken security. And I wouldn’t be surprised if your AI then goes ahead and reproduces them, casually forgetting the steps to prepare the numbers, or the follow-up steps, if those were never in the Wikipedia example code.
I’ve also seen a lot of wrong advice on StackOverflow and Reddit, so you’d better hope it didn’t internalize that either. There are some fairly common myths about security and cryptography details out there, and I never know whether your average Claude learned more from Reddit discussions or from computer science literature. And you probably used Claude to skip reading the computer science books as well (and to skip having a really close look at the code), or you would have just typed it down yourself. So I’d expect your software to be roughly as sound as newbie code, up to the average of the projects on GitHub your AI learned from. Not any better than that.


The entire page is an advertisement for an AI tool that helped uncover it. Guess that’s the demonstration on how it augments a report.


I think there’s pros and cons to everything. That way would have been less of a dickhead move towards the Forgejo developers. But a big letdown to admins as they don’t know what’s up with the software they’re running on their servers. The way the author chose gives some new intelligence to admins, and they can now act on it, since it’s public knowledge. But it’s annoying to the devs.
I guess I, as a Forgejo user, am kinda grateful they did it this way. Now I got to learn the story and can allocate 2h on the weekend to check whether my personal Forgejo container is isolated enough and whether the backups still work.
(But that’s just my opinion after reading one side of the story. Maybe there’s more to the story and they’re being a dick nonetheless…)
Edit: And regarding just dropping the security team an informal mail… I don’t know if that’s clever. You’d normally either follow the security policy or not engage at all. Sending them other kinds of mails which violate their policy (an internal carrot) might not be the best choice.


Thx very much. That’s valuable info. I edited my comment and crossed it off my list of software to evaluate for future projects. I already got the vibe-coding and a bit of sketchiness by scrolling through the latest commits and issue tracker.
I feel the majority of it isn’t really Android, but some Google Apps… Google Camera, Instagram, Find hub… RCS messages aren’t in AOSP either… The threat detection will be inside of the Google services… So I won’t get most of this on my phone. And I wonder if OS verification and hardware isolation are features to protect my security, or whether that’s to safeguard some DRM stuff and whatever some dubious app developers don’t want me to touch on my own phone.