Three Open-Source Governance Orientations for Managing AI-Generated Contribution Volume

Business | SaaS | Technology
Apr 1, 2026

AUTHOR

James A. Wondrasek

Open-source projects are scrambling to write formal AI contribution policies — but they’re not all arriving at the same answer. RedMonk’s survey of 77 OSS organisations found a fragmented landscape. Some projects ban AI contributions outright. Others require disclosure and accountability. Others don’t care how code was produced as long as it passes review.

The first systematic attempt to map all of this comes from arXiv 2603.26487 — “Beyond Banning AI” by Yang, He, and Zhou of Peking University — which analysed 67 highly visible OSS projects and derived three governance orientations and twelve operational strategies from what they found.

This article explains each orientation, walks through the LLVM, EFF, and Ghostty policies as concrete examples, and gives you a practical rubric for evaluating your dependencies and writing your own internal contribution rules. These governance choices sit at the upstream layer of the full context of open-source supply chain risk.

Why do open-source projects need formal AI contribution policies now?

AI tools have changed the economics of open-source contribution. Generating a pull request now takes seconds. Reviewing one still takes as long as it always did.

Before AI tools, a PR was a signal of genuine interest. Maintainers extended the benefit of the doubt because the effort required to send one was credible. AI has broken that signal. As GitHub put it in their “Welcome to the Eternal September of Open Source” post: “The cost to create has dropped but the cost to review has not.”

The canonical example of how bad this gets is the Node.js incident: a 19,000-line PR generated with Claude Code triggered a petition signed by over 80 developers calling for a project-wide ban. The cost asymmetry that makes this problem structural is documented in arXiv 2601.15494. The curl bug bounty shutdown is the logical endpoint — fabricated AI security reports were costing more to process than the programme was worth.

Without an explicit policy, maintainers are individually enforcing unwritten rules. That creates inconsistency, resentment, and burnout. Policy formalises what maintainers already know: the contribution economics have changed and something has to give.

What are the three governance orientations, and what does each one assume?

The arXiv 2603.26487 framework identifies three top-level orientations. These aren’t specific policies — they’re underlying stances about risk, trust, and what AI-generated contributions actually represent. Real projects often blend elements from more than one.

O1 — Prohibitionist: AI-generated contributions present structural risk — provenance uncertainty, licence contamination, or review-capacity overload — that normal review processes can’t reliably catch. Categorical exclusion or strict access control is the rational response.

O2 — Boundary-and-Accountability: AI-assisted contributions are fine if the contributor discloses AI tool use and demonstrates genuine understanding of what they submitted. The policy governs contributor behaviour, not the capability of the tool.

O3 — Quality-First / Tool-Agnostic: Contributions get evaluated on merit regardless of how they were produced. Existing quality gates — CI/CD, code review standards — are sufficient. AI-specific rules add friction without proportional benefit.

The key distinction: O1 and O2 govern AI inputs directly, while O3 governs only outputs. O1 assumes provenance is the primary risk. O2 assumes contributor accountability is. O3 assumes your review pipeline can catch anything that matters. No single orientation has won out — the right answer depends on your project’s specific bottleneck. Understanding AI contribution pressure reshaping OSS governance at the supply chain level helps make sense of why these orientations differ so sharply.

What does a Prohibitionist policy look like in practice? (LLVM)

LLVM’s policy is prohibitionist-adjacent rather than fully prohibitionist. It doesn’t ban AI tools outright, but it restricts them in ways that functionally exclude many AI-assisted workflows. The LLVM AI Tool Use Policy is the canonical example of this approach.

The foundation is a human-in-the-loop requirement: “Contributors must read and review all LLM-generated code or text before they ask other project members to review it… they should be able to answer questions about their work.”

Two specific prohibitions do the heavy lifting. First, LLVM bans AI agents acting without human approval — explicitly naming the GitHub @claude agent. Second, LLVM bans AI tools for “good first issues” — the primary entry point for low-effort, high-volume AI submissions. That removes the most obvious vector for turning the project into a spam target.

LLVM also formalises the concept of extractive contribution: “a contribution should be worth more to the project than the time it takes to review it.” Maintainers can apply an extractive label to off-track PRs, and persistent non-compliance escalates to moderation.

What LLVM explicitly permits: AI-assisted contributions where the contributor has reviewed the output and can defend design decisions. This is a standard, not a blanket ban. The policy relies on review culture rather than automated enforcement. The incidents that forced these policies into existence made LLVM’s formal approach necessary.

What does a Boundary-and-Accountability policy require from contributors? (EFF)

The Electronic Frontier Foundation published its LLM-assisted contribution policy in February 2026. The policy opens with a candid acknowledgement of the tension: “Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems.”

The EFF policy has a clean two-part structure.

Boundary (disclosure): Contributors must disclose when they use LLM tools.

Accountability (demonstrated understanding): Comments and documentation must be authored by a human. Where LLVM’s accountability is asserted at review time — can you defend this in conversation? — EFF’s is embedded in the submission artefact itself. Self-authored comments mean the human’s understanding has to be visible in what they submit.

EFF doesn’t ban LLMs. “Their use has become so pervasive a blanket ban is impractical to enforce.” Instead the policy creates constraints that make AI-assisted contributions viable only when the contributor genuinely understands what they’re submitting.

The disclosure requirement (A2 in arXiv 2603.26487) is the most widely adopted single strategy in the 67-project corpus. The tradeoff: the barrier is lower than LLVM’s, but it relies on contributor honesty. A bad-faith contributor can tick the disclosure box while submitting code they don’t understand. For cases where disclosure failed to prevent problems, see the curl bug bounty shutdown and the incidents that followed.

When does invitation-only become the right answer? (Ghostty)

Ghostty — Mitchell Hashimoto’s terminal emulator — implemented one of the most structurally restrictive governance responses in the OSS ecosystem. Pull requests are only accepted from contributors explicitly vouched for by existing trusted contributors.

The mechanism is the Vouch project: only vouched contributors can submit PRs, the trust graph is explicit and decentralised, and trusted contributors can endorse newcomers to grow the inner circle deliberately.

What drove Ghostty to this was simple: the cost-benefit ratio of accepting unsolicited contributions turned negative. tldraw arrived at the same endpoint through platform automation — automated closure of all unsolicited PRs. Steve Ruiz summed it up bluntly: “In a world of AI coding assistants, is code from external contributors actually valuable at all?” His project was receiving PRs that “claimed to solve a problem we didn’t have or fix a bug that didn’t exist.”

This is the extreme end of the O1 orientation: not rules about what AI contributions must include, but structural access control that prevents unsolicited contributions entirely.

The community costs are real. The contributor pool shrinks, feature development from outside slows, and the model can feel unwelcoming to skilled new contributors who happen to be unknown. The counterargument: many maintainers already use informal vouching. Vouch simply codifies what already happens.

And invitation-only isn’t the same as closing the project. New contributors can still submit patches by publishing them publicly and asking trusted contributors to pull them. The threshold question is economic: when the marginal cost of reviewing unsolicited contributions consistently exceeds their marginal value, this becomes defensible.

What do the twelve governance strategies tell us about the policy design space?

The twelve strategies from arXiv 2603.26487 are the operational implementations of the three orientations. They show that the same underlying stance can be implemented through very different mechanisms — and that you have more options than just picking an orientation and running with it.

The strategies break into four functional groups:

Function A — entry and input qualification: A1 Boundary Exclusion, A2 Transparency and Disclosure, A3 Compliance and Provenance Safeguarding.

Function B — responsibility and evidence restoration: B1 Accountability Reinforcement, B2 Verification and Evidence Gating, B3 AI Tooling Governance via AGENTS.md.

Function C — review burden and workflow protection: C1 Scope and Intentionality Control, C2 Capacity and Queue Control, C3 Moderation and Sanctions, C4 Security Channel Governance.

Function D — infrastructure and institutional adjustment: D1 Channel and Platform Reconfiguration, D2 Incentive Redesign.

Three strategies are worth looking at more closely.

A2 (Transparency and Disclosure) is the most commonly adopted strategy in the corpus. It’s the minimum viable policy — compatible with both O2 and O3, requires no structural access changes, and is the baseline for anything more sophisticated. A disclosure checkbox in your PR template is A2 in isolation. Better than nothing, but it doesn’t create accountability or reduce volume on its own.
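As a concrete illustration, an A2 disclosure checkbox might look like the following hypothetical PR template fragment (the wording and checklist items are illustrative, not drawn from any specific project):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md — illustrative A2 fragment -->
## AI tool disclosure

- [ ] I used AI tools (e.g. Copilot, Claude) to help produce this change
- [ ] I have read and can explain every line in this PR
- [ ] All comments and documentation in this PR were written by me

If you checked the first box, briefly list the tools used and how:
<!-- e.g. "Copilot autocomplete for boilerplate; I wrote the algorithm" -->
```

Note that the second and third checkboxes move beyond bare A2 toward an accountability standard — disclosure alone remains honour-system.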

B3 (AI Tooling Governance / AGENTS.md) is double-edged. The governance benefit is clear, but adding AGENTS.md also signals you’re AI-friendly, which can attract more submissions than you intended. Be cautious if you’re already at capacity.

B2 (Verification and Evidence Gating) includes criteria-based gating: GitHub Community Discussion #185387 proposes requiring a linked, triaged issue before a PR can be opened. GitHub has shipped some relief features, but criteria-based gating isn’t generally available yet.
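Projects don’t have to wait for the platform, though. A hedged sketch of self-serve criteria-based gating, using a hypothetical GitHub Actions workflow that fails any PR whose description doesn’t link an issue (file name, comment text, and regex are all assumptions, not an official GitHub feature):

```yaml
# .github/workflows/require-linked-issue.yml — hypothetical sketch
name: Require linked issue
on:
  pull_request_target:
    types: [opened, edited]
jobs:
  check-linked-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            const body = context.payload.pull_request.body || "";
            // Look for closing keywords linking an issue, e.g. "Fixes #123"
            const linked = /\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\s+#\d+/i.test(body);
            if (!linked) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.payload.pull_request.number,
                body: "Please link a triaged issue (e.g. `Fixes #123`) before this PR can be reviewed."
              });
              core.setFailed("No linked issue found in the PR description.");
            }
```

This approximates B2 with existing tooling: it can’t verify the issue was actually triaged, but it blocks the cheapest class of unsolicited PR.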

The key insight: strategies can be combined across orientations. Adopt A2 disclosure from O2, pair it with B2 criteria-based gating and D2 incentive redesign — you’re not locked to one orientation for everything. For platform tools that operationalise these strategies, see what GitHub and the OSS ecosystem are building to protect maintainers from AI slop. These governance orientations sit within the full context of AI contribution pressure reshaping OSS supply chain risk — a broader picture that spans the economic mechanism, the incident record, platform responses, and risk management frameworks.

How do you assess whether a project’s governance is adequate from the outside?

If you depend on OSS projects you don’t control, the question isn’t which orientation to adopt — it’s whether your dependency has adequate governance given your risk exposure.

Here’s a five-signal rubric.

1. Policy existence. Check CONTRIBUTING.md. Search for “LLM,” “AI,” “generative,” or “Copilot.” Also check PR templates (.github/PULL_REQUEST_TEMPLATE.md) and SECURITY.md. If nothing surfaces an AI policy, the project is effectively O3 by default.
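The policy-existence check is mechanical enough to script. A minimal sketch in Python, assuming you have a local clone and that commit-level activity lives in the standard file locations named above:

```python
import re
from pathlib import Path

# Files where an AI contribution policy is most likely to surface.
POLICY_FILES = [
    "CONTRIBUTING.md",
    ".github/PULL_REQUEST_TEMPLATE.md",
    "SECURITY.md",
]

# Keywords that usually indicate the policy addresses AI tooling.
AI_KEYWORDS = re.compile(r"\b(LLM|AI|generative|Copilot)\b", re.IGNORECASE)

def find_ai_policy_mentions(repo_root: str) -> dict[str, list[str]]:
    """Return the lines mentioning AI tooling in each candidate policy file."""
    hits: dict[str, list[str]] = {}
    for rel in POLICY_FILES:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue
        matching = [
            line.strip()
            for line in path.read_text(errors="ignore").splitlines()
            if AI_KEYWORDS.search(line)
        ]
        if matching:
            hits[rel] = matching
    return hits
```

An empty result is itself a signal: per the rubric, the project is effectively O3 by default.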

2. Policy specificity. Does the policy name specific required or prohibited behaviours? Vague language like “quality contributions only” is O3 by default, not genuine governance. LLVM’s policy is enforceable. “We value quality” is an aspiration.

3. Enforcement mechanism. Is there automated enforcement — CI quality gates, PR templates, criteria-based gating — or does policy rely entirely on reviewer discretion? Structural enforcement beats human judgement under volume pressure.

4. Contributor health signals. The CHAOSS Contributor Absence Factor measures the smallest number of committers who together account for 50% of project activity. If one or two people account for half of all activity, the governance framework depends on those individuals staying engaged — and AI contribution volume is a new stressor on exactly that vulnerability.
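The metric is easy to compute yourself. A minimal sketch, assuming per-author commit counts as the proxy for “activity” (CHAOSS permits other activity measures):

```python
def contributor_absence_factor(commit_counts: dict[str, int]) -> int:
    """Smallest number of committers whose commits cover at least 50% of activity.

    CHAOSS calls this the Contributor Absence Factor (informally, the bus
    factor). `commit_counts` maps author -> commit count.
    """
    total = sum(commit_counts.values())
    if total == 0:
        return 0
    covered = 0
    # Count contributors from most to least active until half of all
    # activity is accounted for.
    for i, count in enumerate(sorted(commit_counts.values(), reverse=True), start=1):
        covered += count
        if covered * 2 >= total:
            return i
    return len(commit_counts)

# One dominant committer -> factor of 1
print(contributor_absence_factor({"alice": 60, "bob": 25, "carol": 15}))  # → 1
```

A result of 1 or 2 on a critical dependency is exactly the fragility this rubric step is probing for.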

5. Recent governance activity. Has the project updated its contribution policies since 2025? Projects that haven’t updated since 2023 may not have addressed the AI contribution volume shift at all.

For critical dependencies, check recent PR history to confirm the policy is actually being applied. Prioritise scrutiny by: critical path, unavailable alternatives, and CVE exposure history. Using governance quality as a risk signal in your dependency audits is where this assessment becomes actionable.

What should your own upstream contribution policy say about AI tools?

For engineering teams that contribute upstream to OSS projects, an internal contribution policy on AI tool use is both a governance obligation and a reputation management tool. Bad AI-assisted contributions damage your standing with the maintainers whose goodwill you depend on. Here are five things your policy needs to address.

1. Disclosure requirement. State whether contributors must disclose AI tool use in PR descriptions or commit messages. Match the receiving project’s policy where one exists.

2. Accountability standard. Contributors must be able to explain every line they submit. Adopt the LLVM human-in-the-loop requirement as the default unless a project explicitly operates under O3.

3. Documentation and comments. Do not submit AI-generated explanatory comments or documentation. Write your own. If you can’t write the comments, you haven’t understood the code.

4. Context check before contributing. Verify whether the target project has an explicit AI contribution policy. If the project is O1 or prohibitionist-adjacent like LLVM, don’t use AI tools in that contribution regardless of quality.

5. Contribution type scope. Apply stricter standards to security-sensitive contributions. Django’s security team said it plainly: “Almost every report now is a variation on a prior vulnerability… Clearly, reporters are using LLMs to generate (initially) plausible variations.” Contributing fabricated security findings gets you blacklisted from the security channel.

Formalising this protects your team’s standing, reduces extractive contribution exposure, and gives contributors a clear standard to work to. Using governance quality as a risk signal in your dependency audits is where this framework becomes actionable — it’s part of how you manage supply-chain risk across the open-source dependencies your product depends on. For the complete picture of how AI-generated contributions are reshaping open-source supply chain risk across all dimensions, the pillar page maps each risk area and links to the full series.

FAQ

Which governance orientation should our open-source project adopt?

Match orientation to project profile. O1 suits small-core projects with high technical standards and limited maintainer bandwidth. O2 suits large contributor communities where AI-assisted contributions from skilled contributors have genuine value. O3 suits projects with robust CI/CD where the review pipeline can catch quality problems regardless of where the code came from.

If you’re unsure, start with O2’s minimum viable implementation: the A2 disclosure requirement. It’s the most commonly adopted approach, provides a clear accountability baseline, and doesn’t require structural access changes to implement.

What is AGENTS.md and should we use it?

AGENTS.md is a repository-level instruction file that gives AI coding agents project-specific constraints — what to avoid, how to format PRs, what testing is required. It’s Strategy B3 (AI Tooling Governance).
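To make the B3 mechanism concrete, here is a hypothetical AGENTS.md fragment; the headings and constraints are illustrative, not taken from any real project:

```markdown
# AGENTS.md — instructions for AI coding agents (illustrative example)

## Before opening a PR
- Only work on issues labelled `triaged`; do not invent new features.
- Run the full test suite and include the results in the PR body.

## Constraints
- Do not modify files under `vendor/` or generated code.
- Keep PRs under 300 changed lines; split larger work into linked issues.

## PR format
- Disclose which AI tools were used and how.
- All comments and documentation must be reviewed and owned by a human.
```

The file governs agents that choose to read it — it is a cooperation mechanism, not an enforcement one.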

The catch: adding AGENTS.md signals that you’re AI-friendly, which can attract more contributions than you intended. Use it if you’re O3 and want AI tools to work well with your project. Be cautious if you’re O1, or if the openness signal would create more review volume than you can absorb.

Does a quality-first policy actually work?

O3 works if your CI/CD is comprehensive and your reviewers can identify AI-generated code containing subtle logical errors or maintainability problems after it passes automated checks.

The structural risk is that it absorbs the full volume increase without reducing it. O3 is right for projects with well-funded, professional contributor bases where review capacity can scale. It’s risky for volunteer-maintained projects where maintainer time is fixed.

How do we know if a project has an AI contribution policy?

Check CONTRIBUTING.md. Search for “LLM,” “AI,” “generative,” or “Copilot.” Also check PR templates (.github/PULL_REQUEST_TEMPLATE.md) and SECURITY.md. If nothing surfaces, the project is effectively O3 by default.

What is the difference between the LLVM and EFF policies?

Both are O2 (Boundary-and-Accountability) but their accountability mechanism differs. LLVM’s accountability is demonstrated at review time: can you defend any line in discussion? EFF’s is embedded in the submission artefact itself: human-authored comments make understanding visible in what is submitted.

LLVM also prohibits specific use cases — AI agents acting autonomously, AI for “good first issues” — while EFF’s scope is narrower and framed through a civil liberties lens.

What is the arXiv 2603.26487 paper and why is it the reference for this framework?

“Beyond Banning AI: A First Look at GenAI Governance in Open Source Software Communities” by Yang, He, and Zhou (Peking University, March 2026) is the first systematic qualitative study of AI contribution governance across a large OSS project corpus — 67 highly visible projects — yielding the three orientations and twelve strategies. No equivalent framework exists in practitioner or analyst literature.

What is criteria-based gating and is it available now?

Criteria-based gating (Strategy B2) would require a PR to be linked to a pre-existing, triaged issue before it can be opened. The proposal is tracked in GitHub Community Discussion #185387. GitHub has shipped some relief features but criteria-based gating isn’t generally available yet.

Can I just add a disclosure checkbox to our PR template and call it done?

A disclosure checkbox is A2 in isolation — the minimum viable O2 implementation. It doesn’t create accountability, filter quality, or reduce volume on its own. Pair it with an explicit accountability statement and quality gates that apply regardless of AI disclosure status.

How does the Ghostty invitation-only model differ from just closing the project to outside contributors?

Invitation-only changes who can submit pull requests — not who can use, fork, or raise issues. New contributors can still submit patches by publishing them publicly and asking trusted contributors to pull them. The Vouch system makes the trust graph explicit so the inner circle can grow deliberately.

What happens when a project changes orientation mid-stream?

Policy changes create friction. Developers contributing under O3 norms may resist new O2 requirements. The most successful transitions include a clear public explanation of the reason, a transition period, and acknowledgement that the goal is sustainability, not restriction for its own sake. LLVM, EFF, and Ghostty all published explanatory posts alongside their policy changes.
