For context, Haven is a fairly new open source (GPLv3) SSH client for Android.
When I first saw this I was eager to explore it: there isn’t much choice in terms of open source SSH clients for Android. Termius is proprietary, ConnectBot is unmaintained (though it has seen some new activity recently?), and JuiceSSH was never open source afaik. Currently I am using Termux + openssh, but that’s not great either (e.g. no FIDO ssh key support).
However upon further inspection I am a little suspicious that AI is used significantly for a few reasons:
- Claude has contributed a few commits (but not many)
- Some of the markdown files (like VISION.md) read like AI generated text
- The way the author replies to issues and PRs also reads like AI generated text, with unnecessarily heavy use of em dashes and bold text
- The rate of commits and new features seems rather high for a single person working alone
Are my suspicions founded? Even if the author uses AI to generate documentation and reply to issues, I’m not sure about the actual code itself. SSH access is quite a sensitive thing so I’d like to know whether the client I am using is built with AI or not. Would appreciate your thoughts.



i know a lot of software that’s built by hand where that applies, hell, bitwarden had security vulnerabilities very recently and it’s software that’s highly regarded as trustworthy.
i ask who cares because the question is not whether the software is good, bad, etc, it’s about something that’s irrelevant to the quality of it.
A big problem is that vibe-coded stuff tends to be much harder to maintain, as the ‘author’ doesn’t actually know how it works, and the code was not built for humans to understand. That’s not a problem unique to LLM generated code, but it is commonplace in code that is, so “was this generated by LLM/AI” becomes a useful proxy question for “is this codebase likely to be harder to maintain, and thus less likely to be maintained well?”
There’s also the societal-level cost of using these models to consider. At present, they use significant amounts of power and cooling, both of which lead to adverse environmental effects. It seems quite appropriate to ask if a project is using them, and to make the choice to avoid those that are, out of principle as well as technical concern.
You can audit the code but something like:
Is a huge problem in itself IMO. It implies there’s no real human oversight of the project.
and that’s a fair criticism, but it doesn’t have anything to do with ai or not, just with poor knowledge of engineering or coding. i have a family member who has been working as a developer for 20 years, he is now using ai to automate that process, he knows what to tell the machine, he oversees the project, etc, that’s the way it’s going to be from now on, and it’s something that happened to other industries years before. i’ve worked as an interpreter, translator and proofreader. do you think i do it all by hand? no one does that anymore, and i don’t see people complaining about translations being “machine written” or whatever. i am doing the work, i check for errors, i change words, etc, the only thing i don’t do is literally type out 500 times words i already know the translation of.