I'd like to share my implementation of the Signal protocol that I use in my messaging app. It's written in Rust and compiles to WASM for browser-based usage.
https://github.com/positive-intentions/signal-protocol
It's far from finished and I wasn't sure when would be a good time to share it, but I think now is reasonable.
The aim is for it to align with the official implementation (https://github.com/signalapp/libsignal). I didn't use that version because my use case required client-side, browser-based functionality, and I struggled to achieve that with the official library, whose JavaScript bindings target Node.js.
There are other nuances to my approach, like using module federation, that also led me away from the official version.
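For context on what "Rust compiled to WASM for the browser" can look like in practice, here is a minimal sketch of exporting a Rust type to JavaScript with wasm-bindgen. The `Session` type and `encrypt` method are hypothetical illustrations, not the repo's actual API, and the body is a placeholder rather than real ratchet logic:

```rust
// Hypothetical sketch of a browser-facing entry point.
// On wasm32 builds, wasm-bindgen generates JS glue so this type
// is callable from the browser as `new Session()`; on native
// builds the cfg_attr gates compile it as plain Rust.
#[cfg(target_arch = "wasm32")]
use wasm_bindgen::prelude::*;

#[cfg_attr(target_arch = "wasm32", wasm_bindgen)]
pub struct Session {
    // Placeholder state; a real session would hold ratchet key material.
    messages_sent: u32,
}

#[cfg_attr(target_arch = "wasm32", wasm_bindgen)]
impl Session {
    #[cfg_attr(target_arch = "wasm32", wasm_bindgen(constructor))]
    pub fn new() -> Session {
        Session { messages_sent: 0 }
    }

    // Placeholder: a real implementation would run the Double Ratchet
    // here instead of echoing the plaintext back.
    pub fn encrypt(&mut self, plaintext: &[u8]) -> Vec<u8> {
        self.messages_sent += 1;
        plaintext.to_vec()
    }
}
```

From JavaScript, the wasm-pack-generated module would then expose `Session` directly, which is what makes a browser-only (no Node.js) integration straightforward.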
I am using AI to build this, so to be clear: I'm not asking you to audit or review my vibe-coded project. While I have made attempts at things like audits and formal verification, I know better than to trust AI when it tells me something is working correctly.
I'm sharing it now in case there is feedback about the implementation. Is there anything I may be overlooking? Feel free to reach out for clarity on any details.
See it in action in the work-in-progress demo: https://p2p.positive-intentions.com/iframe.html?globals=&id=demo-p2p-messaging--p-2-p-messaging&viewMode=story


My bad. I wrote the post, but I'm no Shakespeare. The confusing messaging was meant to convey that I'm aware people have better things to do than review my project; I'd still like to put it out there in case you're interested.
I'm mainly working on the messaging app linked above. Several secure messaging apps exist, and like anyone else working on one, I want mine to be secure. In the cybersecurity community there is an emphasis on open source, so the project is linked in the post to share it (because otherwise people aren't going to come across it).
I've done a good amount of testing and reviewing myself, but I'm sure I could spend more time on it. I try to make it clear in this post and in the README that it's still a work in progress.
Given that LLM outputs can include verbatim or near-verbatim excerpts of their training data, how do you justify placing an open-source license (or, more generally, any copyright claim of your own) on source code you've "authored" using one of these automatic plagiarism machines?
I'm not sure how easy it is to get LLMs to output near-verbatim training data; different models understandably produce different results. I don't know if you're asking me to justify using AI?
Considering how well documented and discussed the Signal protocol is, it's understandable that an LLM would have a decent grasp of the concept from the outset and might well produce near-verbatim results. I don't want "using AI" to be used to undermine my efforts. What you see is the result of a typical software development process in which I planned and iterated on improvements; I'm sure you can imagine how AI can help in that process.
I'm not an expert on licenses. I chose that license after a fairly brief consideration… you're the first to give any pushback on it. We can discuss further if you can share any insights on licenses. I created the project; it's not cloned and refactored from some other existing implementation. I can't be more transparent than having it on GitHub with a commit history.
We then start leaning towards the broader question: does anyone author any code they produce with AI?
The more relevant question is how easy it is to be confident that they aren't. (And the answer is: you can't.)
😭
Note that many free software projects have had to establish rules explicitly prohibiting LLM contributions, precisely to avoid the issue of plagiarism as well as the related issue of copyright infringement. (And of course, code quality is another reason to…)