Idk, but I don’t see why commits of shit code from AI are any different from commits of shit code from fleshbags.
Shit code is shit code.
If the maintainers of the project have their review game on point then shit code will not be in the repo, if they don’t, then AI or not, shit code will be in the repo.
So, I see no reason to panic and raise alarm about AI commits.
If anything, hopefully some LLM assistance can take the weight off the absolute saints among us who are the unpaid maintainers of crucial FOSS repos, like for instance with the whole XZ situation.
Vibecoding or outsourcing your brain to proprietary tech is a choice like how using an assembly line plant to stab yourself in the balls is a choice. You can choose to use tools in non-idiotic ways as well.
I’d be far more concerned over stuff like Immich getting bought out by a company with all sorts of links to the shadiest blokes going amongst the ultra-rich.
The issue is that the barrier to entry for creating shit PRs has almost vanished, while the human effort of reviewing those PRs for quality hasn’t, so it pushes an undue burden onto the maintainers. See the blog posts by Daniel Stenberg (maintainer of curl), for example.
Far be it from me to argue with Stenberg, fair enough. I must be wrong.
I guess I just don’t see how there was ever a barrier in the first place. The number of juniors who couldn’t code their way out of fizzbuzz yet think they are geniuses has exploded in recent years, and I largely count myself among them. With job interviews being as competitive as they are, a big old green commit history being seen as a plus, and people buying stars and such, I just don’t see how this was anything but an eventuality, with or without AI, not unlike the endless barely valid CVE slop.
My theory is that it is psychologically much easier to publish something you put little effort into and that is mostly not your own work. Ego-related fears are a strong motivator. Or maybe it’s just a question of volume, idk.
Volume, and what I think of as uncanny valley code.
Where you, as a maintainer, previously had 10 PRs to get through, you now have 100+.

Where you, as a maintainer, previously had a range of quality in submissions, you now have a significant proportion of submissions that look reasonable but, on closer inspection, aren’t doing what is described.
Those two things are multiplicative and add an immense amount of effort on the maintainer’s side.
And just in case you are thinking “but automation”, it mostly doesn’t fix these issues to any appreciable degree.
That’s before you even get into what percentage of LLM PRs are actually useful (a different discussion).
When you put it in a circle like that it looks 100% more like a butthole.
At least buttholes are useful.
I do appreciate my mouth doesn’t have to do double duty like a sea cucumber.
Hank Green did a blind ranking of how much AI company logos look like buttholes. Most did, but this one definitely won.
https://velvetshark.com/ai-company-logos-that-look-like-buttholes nice commentary from April 10th last year
Software development is not my field so I’m not the most knowledgeable, but would you consider it possible that there will be a split in FOSS (and maybe even proprietary software) between non-AI seed lines of code and contaminated seed lines, considering all the possible backdoors and other vulnerabilities?
Why aren’t we yelling at the maintainers for accepting these PRs in the first place?
Right, yelling at the unpaid volunteers who maintain critical infrastructure we take for granted is definitely going to be productive.
I can bite them instead if you’d prefer.