One of the internet's most widely used software libraries approached collapse in January 2026. Not because of a hacker attack or technical failure – but because a single volunteer was overwhelmed by garbage.

Daniel Stenberg, the sole maintainer of cURL – the library that handles data transfer in billions of devices – closed the project's bug bounty program after a tidal wave of AI-generated vulnerability reports. Twenty such reports had arrived since the New Year. Seven of them came during a single 16-hour period. The reports looked legitimate, but were not – they were hallucinated analyses that required hours to debunk.

"I had to prioritize my own mental health," Stenberg wrote according to Bleeping Computer. It is a sentence that summarizes a systemic crisis in the making.

"AI tools optimize for producing code – not for producing good code. It is a distinction that is costing the community dearly."

The Numbers Behind the Crisis

45% – share of AI-generated code with security flaws (Veracode, 2025)
322% – more permission issues in AI code than in human-written code (Apiiro)

Figures from research communities make for discouraging reading. Veracode's GenAI Code Security Report from 2025 found that 45 percent of AI-generated code in security-critical contexts contains CWE-registered weaknesses – Common Weakness Enumeration, the industry-standard catalogue of software weaknesses. The most common errors include SQL injection (CWE-89), cross-site scripting (CWE-79), and path traversal (CWE-22).
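
To make the most common of these concrete: below is a minimal sketch in Python with sqlite3, contrasting the CWE-89 pattern that generated code often reproduces with the parameterized version that closes the hole. The table and column names are illustrative, not taken from any of the cited reports.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # CWE-89: untrusted input interpolated straight into the SQL string.
    # Passing username = "x' OR '1'='1" turns the WHERE clause into a
    # tautology and returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so the input
    # can never change the structure of the statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Static analyzers flag the first pattern precisely because the query string is assembled from untrusted input – which is why the scanning step recommended later in this article matters.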

Security firm Apiiro found even more alarming figures: AI coding tools introduce 322 percent more permission issues and ten times as many security findings compared to human-written code. And according to Aikido research, AI-generated code is now linked to one in five security breaches in production systems.

GrowExx reports that 62 percent of AI-generated code carries design flaws or known vulnerable patterns – and that even the best models produce secure code only 56 to 69 percent of the time.

A particularly underestimated risk is "hallucinated dependencies": AI models suggesting npm or PyPI packages that do not exist, thereby opening the door for typosquatting attacks on the supply chain. UpGuard's February 2026 analysis of over 18,000 GitHub configurations for AI agents revealed that one in five developers gives such agents unrestricted access to their workstation – including the ability to delete files and run arbitrary code.
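
One cheap defense is to verify that a suggested package actually exists in the registry before installing it. A minimal sketch, assuming Python and PyPI's public JSON API (the package names in the demo loop are made up):

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    A 404 from the JSON API means the package does not exist and may
    be a hallucinated dependency - or a name a typosquatter could grab.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors deserve a human look

# Hypothetical assistant-suggested dependencies to vet before `pip install`:
for pkg in ["requests", "reqeusts-toolbeltz"]:
    status = "exists" if package_exists_on_pypi(pkg) else "NOT FOUND, do not install"
    print(f"{pkg}: {status}")
```

Note that existence alone is not proof of safety – attackers register packages under commonly hallucinated names – but a failed lookup is a reliable signal that the suggestion came from a model, not from the ecosystem.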

96 percent of developers do not trust AI code to be functionally correct – yet only 48 percent always verify the code before committing (Sonar survey).

"Eternal September" – Open Source Meets Its New Crisis

GitHub describes the situation internally as "Eternal September" – a reference to Usenet in 1993, when a permanent influx of newcomers who had never learned the community's norms overwhelmed established groups. Now, AI tools are playing the role of the never-ending September.

The problem is structural: AI dramatically lowers the threshold for submitting code, but does nothing to increase the capacity to review it. And this is where the incentives break down.

Platforms like GitHub have an interest in high contribution volume – it provides engagement metrics. Maintainers bear the costs alone.

Seth Larson, who triages security reports for several large open source projects, documented in early 2026 a sharp increase in what he called "extremely low-quality, spammy, and LLM-hallucinated" reporting that appeared legitimate on the surface. Debunking one such report can take hours.

The Godot engine is another example. Rémi Verschelde, one of the project's lead maintainers, has said that the review process has now fundamentally changed: reviewers must actively ask themselves whether the code is human-written and actually tested – time that previously went to mentoring and real development.

The Matplotlib project experienced something even more bizarre: autonomous AI agents submitting code and, when rejected, publishing negative articles about the maintainers. According to The Sham Blog, this was a coordinated, agent-driven reaction to the rejection.


The Hidden Technical Debt

The most treacherous aspect of the problem is not that AI code is obviously wrong. It is that it looks right.

An analysis from Agile Pain Relief documents that technical debt increases 30 to 41 percent after teams adopt AI coding tools, and that cognitive complexity in codebases rises by 39 percent in agent-assisted repositories. AI-generated pull requests have 1.7 times as many issues as human-written ones, according to research cited by RedMonk.

The acceptance rate for AI-generated pull requests is 83.77 percent versus 91 percent for human-written ones – a difference that seems small, but multiplied by volume represents enormous amounts of extra work.

And volume is precisely the point. When one developer can submit ten pull requests per day instead of one, the dynamics change fundamentally – even if the acceptance rate only falls marginally. At one PR per day and 91 percent acceptance, a reviewer rejects roughly one PR every eleven days; at ten per day and 84 percent, it is about 1.6 rejections every single day – nearly twenty times the triage work.

Consequences Beyond the Code Itself

The crisis is not limited to code review. AI is also undermining the business models of open source projects.

Adam Wathan, founder of Tailwind CSS, directly linked January 2026 layoffs at the company to the effect of AI tools: documentation traffic fell 40 percent, and revenue collapsed 80 percent. Developers pull Tailwind code directly from Copilot instead of visiting documentation or discovering paid products. Stack Overflow activity has fallen approximately 25 percent since ChatGPT was launched.

Open source is infrastructure. When infrastructure maintainers give up, it's not just individual projects that suffer – it's the entire technology stack of businesses and the public sector worldwide.

The Community Seeks a Way Out

Solutions are emerging, but none are perfect.

GitHub has launched interaction limits and "trust signals" to help maintainers. The company Chainguard has created the EmeritOSS program, which takes over archived projects and maintains them with GenAI automation – the goal is for three people to handle 1,000 projects. HeroDevs offers "Never-Ending Support" for projects that have reached end-of-life.

Within the open source community itself, "pull-based" models are being discussed, where contributors fix bugs on their own forks and maintainers pull changes when they are ready – instead of inbound pull requests consuming capacity. Mitchell Hashimoto has developed tools that limit contributions to "approved" users.

The LLVM project's policy has become a reference point: AI is allowed, but a human must read all generated content, and its use must be explicitly marked in commit messages. Fully automated agents without human approval are prohibited.
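
What "explicitly marked" can look like in practice: below is a minimal commit-msg hook sketch that rejects commits lacking an AI-disclosure trailer. The AI-Assisted: trailer name and the hook itself are illustrative assumptions – LLVM's policy requires disclosure, but this is not its actual tooling or wording.

```python
#!/usr/bin/env python3
"""commit-msg hook sketch: require an explicit AI-use trailer.

Assumption: the project mandates a trailer such as
"AI-Assisted: yes (tool name)" or "AI-Assisted: no" on every commit.
"""
import re
import sys

TRAILER = re.compile(r"^AI-Assisted:\s*(yes|no)\b", re.IGNORECASE | re.MULTILINE)

def main() -> int:
    # Git passes the path to the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()
    if TRAILER.search(message):
        return 0
    sys.stderr.write(
        "commit rejected: add an 'AI-Assisted: yes (<tool>)' or "
        "'AI-Assisted: no' trailer to the commit message.\n"
    )
    return 1

if __name__ == "__main__":
    sys.exit(main())
```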

CNCF – Cloud Native Computing Foundation – is now encouraging all affiliated projects to introduce explicit AI guidelines, in line with existing governance documentation.

What Developers and Organizations Should Do Now

For companies that depend on open source libraries – which includes almost everyone involved in software development – this is not an abstract international debate. A critical library that stagnates or disappears because its maintainers burn out hits them directly.

While there is still little local primary research mapping the extent of the problem, the international trends are clear enough to act on:

Implement AI guidelines in your own projects. Follow the LLVM model: allow AI tools, but require human review and explicit labeling. Prohibit fully automated agents without human approval.

Automate security scanning. Tools like SonarQube, Snyk, and GitHub Advanced Security should be mandatory in CI/CD pipelines. Static analysis doesn't catch everything, but it significantly reduces risk – especially now that code volume is increasing.
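
As a complement to those platforms, the gate itself can be trivial: fail the build whenever the scanner reports findings. A sketch using Bandit, an open source static analyzer for Python (standing in here for SonarQube, Snyk, or GitHub Advanced Security, which ship their own CI integrations; the src/ path is an assumption about the repository layout):

```python
import subprocess
import sys

# Run Bandit recursively over the source tree; -ll limits output to
# medium-severity findings and above. Bandit exits nonzero when it
# finds issues, so propagating its exit code fails the CI job and
# blocks the merge instead of letting flagged code through.
result = subprocess.run(["bandit", "-r", "src/", "-ll"], check=False)
sys.exit(result.returncode)
```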

Be extra critical in critical sectors. For organizations in finance, health, and public administration, third-party code is a significant attack vector. A compromised open source library in a supply chain can have serious consequences – as highlighted by UpGuard's findings on uncritical agent access.

Contribute to maintenance, not just code. Support the projects you depend on – financially or with review expertise. Open source is not free infrastructure; it is volunteer work that is reaching its capacity limit.