AI coding assistants have collapsed the cost of generating a pull request to near-zero. The cost of reviewing that pull request has not changed at all. That asymmetry — cheap generation, expensive review — is the root cause of a structural shift in open-source supply chain risk that your existing dependency management process was not built to handle.
The underlying problem is economic. The projects your software depends on are receiving more submissions than their maintainers can review, and a maintainer who burns out does not hand the project to a successor. They quietly disengage, and what was a healthy dependency becomes a zombie component sitting in your stack.
This guide maps six dimensions of that risk and directs you to the deep-dive article for each.
Cluster overview
| Risk dimension | Article |
|---|---|
| The economic mechanism | Why AI Pull Requests Cost More Than They Contribute to Open-Source Projects |
| The documented incident record | Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real |
| Governance responses | Three Open-Source Governance Orientations for Managing AI-Generated Contribution Volume |
| Platform and ecosystem tooling | What GitHub and the OSS Ecosystem Are Building to Protect Maintainers from AI Slop |
| Dependency risk management | Adding Open-Source Maintainer Health to Your Software Supply Chain Risk Process |
| Contributing back as risk mitigation | The Business Case for Contributing Back to the Open-Source Projects You Depend On |
What is the AI-generated open source supply chain risk problem?
Open-source supply chain risk has always existed — your software depends on upstream libraries you do not control, and vulnerabilities, licence conflicts, or maintainer abandonment propagate into your product. What is new is that AI coding assistants have made it trivially cheap to generate plausible-looking pull requests, bug reports, and issue comments without a corresponding increase in maintainer capacity to review them. The Black Duck 2026 OSSRA report found open-source vulnerabilities more than doubled year-over-year, with 93% of audited codebases containing zombie components. This affects you regardless of whether your team generates any AI contributions.
For the economic mechanism, see Why AI Pull Requests Cost More Than They Contribute to Open-Source Projects.
Why does volume alone create a security problem, not just a quality annoyance?
The risk is not that AI-generated pull requests are bad on average. It is that the maintainers who would catch the bad ones are finite and increasingly overwhelmed. AI assistance adds submissions without adding reviewers, so each one is a net drain on maintainer time even when it is technically valid. When review capacity is saturated, real vulnerabilities and legitimate contributions get triaged out alongside the slop.
The practitioner term for the worst of these is “AI slop” — superficially plausible but incorrect, hallucinated, or misattributed code. It looks genuine from a distance, so it demands full review effort before its quality can be assessed.
A CodeRabbit study found AI-generated PRs contained 1.7 times more issues than human-written ones, with security vulnerabilities 1.5 to 2 times more frequent. Google’s 2025 DORA Report found that a 90% increase in AI adoption correlated with 91% longer code review times. A maintainer who disengages produces a zombie component: one your SCA tooling will flag eventually, but which was an active project twelve months earlier.
For the incidents that made this visible, see Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real.
What has actually happened? The incident record so far
The evidence is documented, named, and dateable. In January 2026, curl’s maintainer Daniel Stenberg shut down the project’s bug bounty programme because AI-generated reports had reduced the genuine vulnerability rate from above 15% to below 5%. tldraw’s Steve Ruiz began automatically closing external pull requests after AI-driven PR volume more than doubled in a single quarter. Ghostty pivoted to an invite-only contribution model, the Node.js TSC held a formal governance discussion after a 19,000-line PR generated by Claude Code triggered a petition from over 80 developers, and Django formalised a written AI contribution disclosure policy.
These projects span different scales and ecosystems — curl is widely used global infrastructure, tldraw is a focused developer tool, Node.js is a platform runtime. The spread confirms the problem is systemic. For contrast, OpenSSL's AISLE programme used AI-assisted expert analysis to find genuine zero-days — the productive use case is expert-led review augmented by AI, not AI-generated volume submission.
For full incident analysis, see Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real.
How are open source projects responding to AI contribution volume?
Academic research published in March 2026 (arXiv 2603.26487) analysed 67 projects and identified three governance orientations: Prohibitionist (AI contributions present structural, non-absorbable risk — tldraw, QEMU); Boundary-and-accountability (AI inputs may enter the workflow under explicit conditions of disclosure and verification — Ghostty’s Vouch system, Django’s disclosure requirements); and Quality-first (contributions are judged by quality standards regardless of how they were produced). Each orientation involves real tradeoffs around contributor pool size, maintainer load, and community culture.
None of these orientations is universally right. But the existence of a stated governance orientation is itself a signal — a project that has thought about AI contributions is better positioned to maintain review quality than one that has not. AGENTS.md files, the emerging convention for communicating AI contribution policy directly to coding agents, are part of this landscape.
For the governance framework in depth, see Three Open-Source Governance Orientations for Managing AI-Generated Contribution Volume.
What are GitHub and the broader ecosystem building to address this?
GitHub is developing platform-level controls: configurable PR permissions, a “disable pull requests” switch added in February 2026, improved AI attribution visibility, and more granular controls for who can create and review PRs. Mitchell Hashimoto’s Vouch provides forge-agnostic trust-gating, and the Open Source Pledge is formalising a mechanism for companies to make sustainability commitments. Dependabot and Renovate already established the precedent for platform-level bot management — the governance infrastructure for automated contributions exists and is being extended to cover AI-generated submissions.
None of these controls are fully deployed. The ecosystem response is lagging the problem by 12 to 18 months, which means supply chain risk remains elevated while platform controls catch up.
For platform responses in detail, see What GitHub and the OSS Ecosystem Are Building to Protect Maintainers from AI Slop.
How does this change what you should be checking in your dependency tree?
The bus factor — CHAOSS's formal metric is the Contributor Absence Factor — is the minimum number of contributors whose departure would jeopardise a project. AI contributions that add code volume without adding capable maintainers do not improve that number; they may mask it, because a project can appear “active” based on PR volume while the actual maintenance rests with one overwhelmed person.
Add CHAOSS viability metrics — commit frequency, bus factor, issue response time, release cadence — to your dependency audit. SCA tooling catches known vulnerabilities but does not catch maintainer health decline. When a dependency deteriorates, your options are the Fork / Fund / Migrate framework: fork and self-maintain (average cost: $258,000 per release cycle), fund the upstream maintainer, or migrate to an alternative.
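The quarterly audit described above can be sketched as a simple scoring pass. A minimal Python sketch, assuming the metric values have already been gathered from your SCA tooling or the project's repository; the field names mirror the CHAOSS-style metrics discussed in the text, but the thresholds are illustrative assumptions, not values defined by CHAOSS:

```python
from dataclasses import dataclass

@dataclass
class DependencyHealth:
    """Viability signals for one dependency, gathered from its repo."""
    commits_last_90d: int
    bus_factor: int                      # CHAOSS Contributor Absence Factor
    median_issue_response_days: float
    days_since_last_release: int

def viability_flags(dep: DependencyHealth) -> list[str]:
    """Return human-readable warnings for a quarterly dependency audit.

    Thresholds are illustrative starting points; tune them to your
    risk tolerance and the ecosystem's normal release cadence.
    """
    flags = []
    if dep.commits_last_90d == 0:
        flags.append("no commits in 90 days (possible zombie component)")
    if dep.bus_factor <= 1:
        flags.append("bus factor of one: a single departure jeopardises the project")
    if dep.median_issue_response_days > 30:
        flags.append("issues going unanswered for over a month")
    if dep.days_since_last_release > 365:
        flags.append("no release in over a year")
    return flags

# Example: an actively maintained project vs. a deteriorating one
healthy = DependencyHealth(120, 4, 2.0, 45)
at_risk = DependencyHealth(0, 1, 60.0, 400)
print(viability_flags(healthy))  # []
print(len(viability_flags(at_risk)))  # 4
```

A dependency that trips multiple flags is a candidate for the Fork / Fund / Migrate decision rather than silent continued use.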
For the operational framework, see Adding Open-Source Maintainer Health to Your Software Supply Chain Risk Process.
What does the EU Cyber Resilience Act mean for your use of open source?
The EU Cyber Resilience Act (fully effective by 2027) requires software products with digital elements to maintain Software Bills of Materials (SBOMs) in machine-readable format. AI-generated code complicates SBOM production because its training-data provenance is opaque — models trained on copyleft code may produce output that inadvertently reproduces those licence obligations. The 2026 OSSRA report found licence conflicts in 68% of audited codebases, the largest year-over-year increase in the report’s history.
Even outside European markets, US and EU SBOM mandates are converging. Building SBOM generation into your release process now costs less than retrofitting it later.
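As a sketch of what a machine-readable licence check against an SBOM might look like — this assumes a CycloneDX-style JSON layout with a top-level `components` array; the `components_missing_licenses` helper and the sample document are hypothetical, and SPDX documents would need different key names:

```python
import json

def components_missing_licenses(sbom_json: str) -> list[str]:
    """Return component names in a CycloneDX-style SBOM with no declared licence.

    Components with an empty or absent "licenses" field are exactly the
    ones where opaque provenance (including AI-generated code) leaves
    your licence obligations unresolved.
    """
    sbom = json.loads(sbom_json)
    missing = []
    for comp in sbom.get("components", []):
        if not comp.get("licenses"):
            missing.append(f'{comp.get("name", "?")}@{comp.get("version", "?")}')
    return missing

# Hypothetical two-component SBOM for illustration
sample = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "left-pad", "version": "1.3.0",
         "licenses": [{"license": {"id": "WTFPL"}}]},
        {"name": "mystery-lib", "version": "0.2.1"},  # no licence declared
    ],
})
print(components_missing_licenses(sample))  # ['mystery-lib@0.2.1']
```

Running a check like this in CI at release time is cheaper than reconstructing provenance when a regulator or customer asks.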
For SBOM and maintainer health integration, see Adding Open-Source Maintainer Health to Your Software Supply Chain Risk Process.
Is there a feedback loop that makes this worse over time?
Yes. AI coding models are trained on public repositories. As those repositories fill with AI-generated code — which tends to be stylistically plausible but mechanically weaker — training data quality degrades. Future models produce lower-quality output, which generates more low-quality contributions, which degrades repositories further. This “model collapse” feedback loop has no natural circuit breaker and compounds with zombie component acceleration and licence laundering.
The vibe coding dynamic accelerates this: contributors who generate code without understanding it cannot maintain, debug, or defend it — so even AI-generated contributions that pass review may introduce maintenance liability downstream.
The best point for establishing good governance practices — both internally with your team’s AI usage policy and externally with your contribution posture toward upstream projects — is now, before the feedback loop has fully closed.
For the mechanism behind the feedback loop, see Why AI Pull Requests Cost More Than They Contribute to Open-Source Projects.
Does contributing back to open source actually reduce your supply chain risk?
A February 2026 Linux Foundation report found that active open-source contribution delivers 2 to 5 times return on investment, with 66% of organisations reporting faster upstream responses to security issues. For most engineering teams, this means targeted contributions to the two or three dependencies most critical to your product — not an Open Source Programme Office. Funding via Tidelift or the Open Source Pledge is the lowest-friction option if code contributions are not practical. Teams that contribute back also gain better visibility into the governance health of the projects they depend on, which is itself a risk management advantage.
Passive consumption is not a neutral default. Organisations that rely on internal workarounds rather than upstream fixes spend an average of $670,000 annually. The supply chain risk from AI contribution volume is real and growing, but it is manageable with the right operating posture.
For the full business case, see The Business Case for Contributing Back to the Open-Source Projects You Depend On.
Resource Hub: Open-Source Supply Chain Risk Library
Understanding the Problem
| Article | What it covers |
|---|---|
| Why AI Pull Requests Cost More Than They Contribute to Open-Source Projects | The cost asymmetry mechanism — why AI contribution volume harms even well-intentioned projects and what the push-based contribution model vulnerability means for maintainer workload |
| Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real | Documented incident record: curl, Node.js, Ghostty, tldraw, and Django — what happened, what each project did, and what the pattern means |
Governance and Platform Responses
| Article | What it covers |
|---|---|
| Three Open-Source Governance Orientations for Managing AI-Generated Contribution Volume | The three governance orientations (Prohibitionist, Boundary-and-Accountability, Quality-First) with project examples and a decision framework for evaluating a dependency’s governance stance |
| What GitHub and the OSS Ecosystem Are Building to Protect Maintainers from AI Slop | Platform-level controls at GitHub, the Vouch trust-gating tool, the Open Source Pledge, and where ecosystem infrastructure is and is not keeping up |
Managing Your Exposure
| Article | What it covers |
|---|---|
| Adding Open-Source Maintainer Health to Your Software Supply Chain Risk Process | Applying CHAOSS viability metrics to your dependency tree, identifying zombie components, and using the Fork / Fund / Migrate framework |
| The Business Case for Contributing Back to the Open-Source Projects You Depend On | The 2 to 5 times ROI evidence from the Linux Foundation, scoping upstream contribution for a small engineering team, and when funding beats code contributions |
FAQ Section
What is “AI slop” in the open-source context?
AI slop is the practitioner term for low-quality AI-generated contributions — pull requests, bug reports, and issue comments that are superficially plausible but incorrect or hallucinated. The defining characteristic is that slop looks genuine from a distance, demanding full review time before its quality can be assessed. In 2025, AI-generated reports grew to approximately 20% of curl’s bug bounty submissions, and by July 2025 only 5% of all submissions were genuine vulnerabilities. For the full documented incident record, see Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real.
Is this problem only relevant if my team contributes to open-source projects?
No. The risk lands in your dependency tree regardless of whether your team generates AI contributions. If AI volume strains the maintainers of your critical dependencies, the consequence is slower security patches, accumulated zombie components, and higher bus factor fragility — all of which affect your stack even if your team never submits a PR.
What is a zombie component and how do I identify one?
A zombie component is an open-source library with no development activity in the past two years — no commits, no issue responses, no releases. Start with your SCA tooling (Dependabot, Snyk, or Black Duck will flag components with no recent releases), then cross-reference the project’s GitHub activity and CHAOSS viability metrics for your highest-criticality dependencies. The maintainer health supply chain risk process covers how to build this check into a repeatable quarterly audit.
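The two-year test reduces to a date comparison once the activity dates are in hand. A minimal sketch, assuming those dates have already been pulled from your SCA tooling or the GitHub API; `is_zombie` and the example dates are illustrative:

```python
from datetime import datetime, timedelta

# The two-year inactivity window from the definition above.
TWO_YEARS = timedelta(days=730)

def is_zombie(last_commit: datetime, last_release: datetime,
              last_issue_response: datetime, now: datetime) -> bool:
    """True when a component shows no development activity in two years:
    no commits, no releases, and no issue responses."""
    latest_activity = max(last_commit, last_release, last_issue_response)
    return now - latest_activity > TWO_YEARS

# Illustrative audit: the most recent activity of any kind was 2023-06-20,
# well over two years before the audit date.
audit_date = datetime(2026, 3, 1)
print(is_zombie(datetime(2023, 5, 1), datetime(2023, 2, 10),
                datetime(2023, 6, 20), now=audit_date))  # True
```

The point of taking the maximum across all three signals is that a project can ship no releases yet still answer issues; only silence on every channel marks a true zombie.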
What is the bus factor and why does it matter for AI contribution risk?
The bus factor is the minimum number of contributors whose departure would jeopardise a project. CHAOSS formalises this as the Contributor Absence Factor. To check it for a dependency, look at the project’s contributor graph on GitHub: how many people have committed in the last 90 days, and how concentrated is the commit activity? A project where one person accounts for 80% or more of recent commits has a bus factor of one regardless of how many open PRs it has. The GitHub maintainer protection controls being rolled out in 2026 can help single-maintainer projects defend their review capacity without closing to contributions entirely.
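The 80%-concentration heuristic above can be sketched as follows, assuming you have tallied recent commit authors (for example from `git shortlog -sn --since="90 days ago"` or the contributor graph); the function name and the threshold default are illustrative:

```python
from collections import Counter

def effective_bus_factor_one(commit_authors: list[str],
                             concentration_threshold: float = 0.8) -> bool:
    """True when a single author accounts for at least the threshold
    share of recent commits -- an effective bus factor of one,
    regardless of how many open PRs the project has.
    """
    if not commit_authors:
        return True  # no recent activity at all is its own warning sign
    counts = Counter(commit_authors)
    top_author_commits = counts.most_common(1)[0][1]
    return top_author_commits / len(commit_authors) >= concentration_threshold

# One author wrote 85 of 100 recent commits: bus factor of one
print(effective_bus_factor_one(["alice"] * 85 + ["bob"] * 15))   # True
# Commits spread evenly across three authors: healthier
print(effective_bus_factor_one(["alice", "bob", "carol"] * 10))  # False
```

This is a coarse screen, not a replacement for the full Contributor Absence Factor, but it is cheap enough to run across an entire dependency tree.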
What does my team need to do differently when using AI tools to contribute to open source?
Require that anyone submitting to an external project using AI-generated code follows the project’s stated AI contribution policy — check for an AGENTS.md file or a CONTRIBUTING.md section on AI use. Treat AI as a drafting tool, not an authoring tool: the person submitting the PR should be able to explain, defend, and maintain what they are submitting. The three governance orientations framework provides a practical decision tree for reading what a project’s policy signals about its contribution health.
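A minimal pre-submission check along these lines, assuming a locally checked-out copy of the upstream project; the keyword list used to spot an AI section in CONTRIBUTING.md is an illustrative assumption, not a standard:

```python
from pathlib import Path

# Keywords suggesting CONTRIBUTING.md addresses AI-generated code.
# Illustrative assumption -- extend for the ecosystems you work in.
AI_KEYWORDS = ("ai-generated", "ai contribution", "llm", "copilot")

def ai_policy_signals(repo_root: str) -> list[str]:
    """Report where a checked-out project states its AI contribution policy.

    An empty result means no stated policy was found, in which case
    the safe default is to ask the maintainers before submitting.
    """
    root = Path(repo_root)
    signals = []
    if (root / "AGENTS.md").is_file():
        signals.append("AGENTS.md present")
    contributing = root / "CONTRIBUTING.md"
    if contributing.is_file():
        text = contributing.read_text(encoding="utf-8", errors="ignore").lower()
        if any(keyword in text for keyword in AI_KEYWORDS):
            signals.append("CONTRIBUTING.md mentions an AI policy")
    return signals
```

Wiring a check like this into your team's pre-PR checklist makes "read the project's AI policy first" a mechanical step rather than a norm that individual engineers may skip.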
How does the EU Cyber Resilience Act affect a company that is not selling into European markets?
The CRA has spillover effects. US and EU SBOM mandates are converging, making SBOM hygiene a baseline procurement requirement regardless of geography. Building SBOM generation into your release process now costs less than retrofitting it when a customer or regulator asks. See Adding Open-Source Maintainer Health to Your Software Supply Chain Risk Process for how to integrate SBOM generation and licence conflict scanning into a unified dependency health review.