Curl Bug Bounty Shutdown and the Open-Source Incidents That Proved the Problem Is Real

Apr 1, 2026

AUTHOR

James A. Wondrasek

In January 2026, Daniel Stenberg shut down the curl bug bounty programme he’d been running since 2019. Not because the money ran out. Because the economics had become untenable.

87 confirmed vulnerabilities. Over $100,000 USD in rewards. Six and a half years. And it ended because AI-generated security reports had made triage unsustainable.

This is not curl’s bad luck. Node.js dealt with a 19,000-line AI-generated pull request that triggered a formal community petition. Ghostty closed its doors to outside contributors within months of going public. tldraw stopped accepting pull requests entirely. Django’s Security Team documented a new category of AI-generated vulnerability report that required expert evaluation to reject.

Each of these incidents shows what AI-generated contribution pressure as a supply-chain concern looks like on the ground. And the OpenSSL/AISLE case shows there is a better way: expert-guided AI analysis found 12 zero-days without a single invalid report reaching the maintainers. The difference is expert verification, not AI involvement.


Why did curl shut down its bug bounty programme?

curl’s HackerOne bug bounty programme ended on January 31, 2026. Once, the confirmed-vulnerability rate exceeded 15%. By 2025, it had fallen below 5%. Stenberg put it plainly: “Not only the volume goes up, the quality goes down. So we spend more time than ever to get less out of it than ever.”

Here is why that matters. A well-functioning bug bounty works because generating a credible report is expensive — the time and codebase knowledge required act as a natural quality filter. AI removes that cost on the submission side while leaving the maintainer’s triage cost completely unchanged. That is the underlying mechanism behind most of these incidents.
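The asymmetry can be made concrete with the rates from curl's own programme. Only the 15% and 5% confirmed-vulnerability rates come from the article; the triage minutes per report is an assumed figure for illustration:

```python
# Illustrative only: minutes_per_report is an assumed figure, not from
# curl's published data. The valid-report rates are from the article.
def triage_minutes_per_valid_report(valid_rate: float,
                                    minutes_per_report: float = 30) -> float:
    """Expected maintainer minutes spent to surface one confirmed vulnerability."""
    reports_reviewed_per_valid = 1 / valid_rate
    return reports_reviewed_per_valid * minutes_per_report

before = triage_minutes_per_valid_report(0.15)  # >15% confirmed rate, pre-AI
after = triage_minutes_per_valid_report(0.05)   # <5% confirmed rate by 2025
print(round(before), round(after))  # 200 600 -> cost per valid report triples
```

Note that the reward budget never enters the calculation: the submitter's cost fell, so the maintainer's cost per valid finding rose, regardless of how much was paid out.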

Stenberg coined “death by a thousand slops” in a July 2025 blog post (daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/). curl’s security.txt started including: “We will ban you and ridicule you in public if you waste our time on crap reports.” On the reports themselves: “You fire up ChatGPT and ask ‘please point out the security problem in the curl project and make it sound horrible’ and it’ll do that.” On the decision to shut it down: “We need to make moves to ensure our survival and intact mental health.”

The replacement is GitHub’s Private Vulnerability Reporting at github.com/curl/curl/security/advisories. No monetary reward. Maintainer-controlled intake. Stenberg documented his further experience at FOSDEM 2026 in “Open Source Security in spite of AI” (fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-of-ai/).


What happened when Claude Code generated a 19,000-line pull request for Node.js?

In late 2025, a Node.js TSC member submitted a 19,000-line pull request generated using Claude Code — a complete module refactor that reviewers estimated would take days to assess. The incident triggered a petition signed by over 80 Node.js developers calling for a project-wide ban on AI-assisted contributions — documented in arXiv 2603.26487 as the largest formal community response to a single AI contribution incident on record.

Scale amplified the cost asymmetry rather than demonstrating productivity. There is no accumulated track record behind a single AI-generated module refactor. Every line requires the same evaluation as if it came from an unknown contributor.

The TSC did not implement an outright ban. It implemented a minimum HackerOne Signal score requirement — requiring a track record of valid security submissions before participation is permitted. AI slop cannot have an accumulated Signal score.

Matteo Collina, TSC Chair: “My ability to ship is no longer limited by how fast I can code. It’s limited by my skill to review. And I think that’s exactly how it should be.” The moment review stops, accountability stops with it.

The Node.js incident also established that community-driven policy demands — not solely maintainer decisions — could trigger formal governance changes. That precedent matters for how governance responses developed.


Why did Ghostty close its doors to outside contributors?

Ghostty is a GPU-accelerated terminal emulator created by Mitchell Hashimoto, HashiCorp's founder. It launched publicly in December 2024 and within months had pivoted to an invitation-only contribution model.

The Ghostty AI policy is zero-tolerance: contributors who submit AI-generated code without adequate human review face bans, with permanent bans for repeat violations. Consequences are named explicitly.

The mechanism is Vouch — a tool Hashimoto built where only contributors vouched for by existing trusted members can submit pull requests. The community debate captures the tradeoff: “a necessary spam filter” versus “an insider’s club where social standing becomes a gatekeeping lever.”
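The mechanism can be modelled as a chain of vouches leading back to a trusted seed set. This is an illustrative sketch in the spirit of Vouch, not Hashimoto's actual implementation; all names and the data shapes are made up:

```python
# Illustrative model of a vouch-based gate: a PR is admitted for review
# only if its author's vouch chain reaches an already-trusted member.
trusted = {"maintainer"}            # seed set of trusted contributors
vouches = {"alice": "maintainer",   # alice was vouched for by maintainer
           "bob": "alice"}          # bob was vouched for by alice


def is_vouched(user: str) -> bool:
    """Walk the vouch chain back toward the trusted seed set."""
    seen = set()
    while user not in trusted:
        if user in seen or user not in vouches:
            return False  # no chain back to a trusted member
        seen.add(user)
        user = vouches[user]
    return True


print(is_vouched("bob"))       # True: bob -> alice -> maintainer
print(is_vouched("stranger"))  # False: drive-by PRs never reach review
```

This is what makes it a spam filter and a gatekeeping lever at once: the same chain that blocks AI slop also blocks any newcomer nobody has met.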

What makes the Ghostty case significant is the timing. It launched in December 2024 with well-resourced leadership, and the problem appeared within months. You do not need years of technical debt for this to happen. You just need an attractive enough target and a low enough submission barrier.

Ghostty’s approach is selective admission, not shutdown — open only to contributors who have been vouched for. The taxonomy of these governance orientations is worth examining against your own dependencies.


When tldraw stopped accepting PRs entirely: what the “nuclear option” looks like

tldraw is an open source infinite canvas and drawing SDK. In January 2026, founder Steve Ruiz announced it would begin automatically closing pull requests from external contributors.

Ruiz introduced the term “well-formed noise”: PRs “that claimed to solve a problem we didn’t have or fix a bug that didn’t exist.” Correct syntax, plausible commit messages, apparent codebase understanding — but detecting them as invalid requires the same review effort as a legitimate contribution. As Ruiz put it: “To an outsider, the result of my fire-and-forget ‘fix button’ might look identical to a professional, well-researched, intellectually serious bug report.”

There was a secondary signal worth noting: even large PRs were “abandoned, languishing because their authors had neglected to sign our CLA.” A human with skin in the game will sign a Contributor Licence Agreement. An AI-generated submission’s author frequently does not.

GitHub lacked adequate tools for controlling external contribution intake — a gap GitHub began addressing in February 2026. But turning off PRs without an alternative channel makes legitimate contributions invisible along with the noise.

Ruiz acknowledged the tradeoff honestly: when bad work is virtually indistinguishable from good, “the value of external contribution is probably less than zero.” When a project’s founder cannot sustain PR review, the downstream risk is project abandonment.


How did AI change the trust model for security vulnerability reports?

Django’s Security Team published an update on February 4, 2026 describing a new pattern: “Almost every report now is a variation on a prior vulnerability.” The mechanism was plain: “Clearly, reporters are using LLMs to generate (initially) plausible variations.” A specific example: CVE-2025-13473, patched February 3, 2026, was “a straightforward variation on CVE-2024-39329.” These reports require expert triage time to evaluate and reject.

This is the trust model shift. Security disclosure previously assumed submitting a report was costly enough to filter out noise. That assumption no longer holds. Bug bounty platforms were built for an environment where valid reports are rare and expensive to produce. AI removed the friction that provided the quality filter. Each of these incidents is a concrete data point in the broader risk management context that CTOs managing OSS dependencies now need to account for.

Node.js’s Signal score requirement is the platform-level response. The policy responses that LLVM and EFF developed are the governance layer worth examining next.


What does expert-guided AI bug analysis look like when it actually helps?

AISLE used AI-powered security analysis to discover 12 CVEs in OpenSSL — including a high-severity stack buffer overflow (CVE-2025-15467) enabling remote code execution, and multiple issues dating back to 1998. One of the most scrutinised codebases on the internet.

AISLE also reported over 30 valid security issues to the curl project. Stenberg’s assessment: “amazed by the quality and insights.” His formulation: “A clever person using a powerful tool.”

Their methodology uses context-aware detection with a priority-scoring system to reduce false positives, and human security experts verify every finding before disclosure. Result: 12 CVEs, zero invalid reports. The cost asymmetry mechanism runs in reverse — the expert team absorbs the false-positive filtering cost rather than transferring it to the maintainer.
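The shape of that pipeline is simple to sketch. Everything below — the scores, the `PRIORITY_FLOOR` threshold, the stand-in verification step — is assumed for illustration; AISLE has not published code matching this:

```python
# Sketch of the "expert absorbs the filtering cost" pipeline. Scores,
# threshold, and the verification stand-in are assumptions for illustration.
def ai_findings():
    # stand-in for an AI analysis pass: (description, priority score 0-1)
    return [("stack buffer overflow in parser", 0.93),
            ("possible off-by-one in header copy", 0.41),
            ("unlikely timing issue", 0.12)]


PRIORITY_FLOOR = 0.5  # low-confidence findings never leave the team


def expert_verified(finding):
    # stand-in for a human security expert reproducing the issue;
    # here we pretend only the overflow actually reproduces
    return finding[0].startswith("stack buffer overflow")


def disclose():
    """Only findings that clear both the score filter and human
    verification are ever sent to the maintainer."""
    return [f for f in ai_findings()
            if f[1] >= PRIORITY_FLOOR and expert_verified(f)]


print(len(disclose()))  # 1 verified report reaches the maintainer; 0 invalid ones do
```

The structural point is where the filter sits: every rejection happens inside the reporting team, before the maintainer's inbox, not after.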

As Drupal’s founder Dries Buytaert put it: “AISLE used AI to amplify deep knowledge. The low-quality reports used AI to replace expertise that wasn’t there.” Not AI versus no AI. Whether a qualified human verifies before the maintainer is burdened.


How widespread is the problem? What the 2026 data says about scale

The incidents above are not outliers. They are the visible surface of a pattern the 2026 data documents at scale.

Black Duck’s 2026 Open Source Security and Risk Analysis (OSSRA) report, based on 947 commercial codebases across 17 industries, found that 93% contained at least one “zombie component” — an open source dependency with no development activity in the past two years, receiving no patches, no bug fixes, no maintenance. When a vulnerability is discovered in a project that hasn’t been touched in years, there is often no maintainer left to fix it.
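OSSRA's two-year criterion is mechanical enough to check against your own dependency metadata. The dependency names and dates below are invented for illustration; in practice `last_activity` would come from an SBOM or a registry lookup:

```python
# Minimal zombie-component check mirroring OSSRA's two-year criterion.
# Dependency names and dates are made up; real input would come from
# your SBOM or package-registry metadata.
from datetime import date, timedelta

ZOMBIE_CUTOFF = timedelta(days=2 * 365)  # "no development activity in two years"


def is_zombie(last_activity: date, today: date) -> bool:
    return today - last_activity > ZOMBIE_CUTOFF


deps = {"left-pad-ish": date(2021, 3, 1),   # untouched for years
        "requests-ish": date(2026, 1, 10)}  # actively maintained

today = date(2026, 4, 1)
zombies = [name for name, last in deps.items() if is_zombie(last, today)]
print(zombies)  # ['left-pad-ish']
```

A check this cheap belongs in CI: the expensive part is not detection but deciding what to do with each hit — replace, fork, or fund.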

The vulnerability numbers are stark: open source vulnerabilities per codebase rose 107% year-over-year to an average of 581. 78% of audited codebases contained high-risk vulnerabilities; 44% contained critical-risk issues. 65% of organisations experienced a software supply chain attack in the past year.

arXiv 2601.15494 (“Vibe Coding Kills Open Source”) models how vibe coding severs the engagement loop through which maintainers previously earned returns, while accelerating downstream OSS usage. Its conclusion: “Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.” The Tidelift State of the Open Source Maintainer provides the human-scale evidence.

Each zombie component in your stack is the downstream product of a maintainer who ran out of capacity to continue. The incidents in this article document how that capacity runs out. For what this means for your dependency risk assessment — including how to identify which of your dependencies is at similar risk — the supply-chain risk process framework provides the operational next step. For the full AI-generated contribution pressure as a supply-chain concern across all six dimensions, the pillar guide maps the complete landscape.


Frequently Asked Questions

Is the curl bug bounty programme coming back?

No. The January 2026 shutdown was framed as permanent, with language pointing toward possible escalation, not reinstatement. curl now accepts security reports through GitHub’s Private Vulnerability Reporting at github.com/curl/curl/security/advisories and email to [email protected]. No monetary reward. The structural conditions that caused the shutdown have not changed.

How can a project protect itself from AI-generated bug report floods?

There is no single solution. The documented range includes: platform reputation gating (Node.js’s HackerOne Signal score), invitation-only access control (Ghostty), full external PR closure (tldraw), and replacement of bug bounty intake with maintainer-controlled private reporting (curl). GitHub announced partial platform-level mitigation in February 2026. The arXiv 2603.26487 paper provides a taxonomy of 12 governance strategies for a more systematic view.

What is the difference between AI slop and AI-assisted analysis?

Whether a qualified human verifies findings before they reach the maintainer. AI slop: generated and submitted directly, triage cost transferred to the maintainer. Expert-guided AI analysis: reviewed and verified by a domain expert before disclosure. Stenberg’s formulation: “A clever person using a powerful tool” versus volume of unreviewed output. The policy responses that LLVM and EFF developed put this distinction into practice.

Are small projects more vulnerable than large ones?

The real risk axis is maintainer review capacity versus inbound volume, not project size. Small projects with single maintainers have less triage capacity. But large, high-visibility projects are more attractive targets — higher bug bounty rewards, bigger reputation payoff from major CVE attribution. OSSRA 2026’s 93% zombie component figure cuts across project sizes.

What happened with the Node.js 19,000-line AI-generated pull request?

A Node.js TSC member submitted a 19,000-line pull request generated using Claude Code — a complete module refactor that reviewers estimated would take days to assess. Over 80 developers signed a petition calling for a project-wide AI contribution ban. The TSC implemented a minimum HackerOne Signal score requirement rather than an outright ban — filtering low-quality submissions without closing the project to all AI-assisted contributions.

What is a “zombie component” and why does it matter for my software stack?

A zombie component is OSSRA 2026’s term for an open source dependency with no development activity in the past two years — present in active commercial codebases but receiving no patches, no bug fixes, no maintenance. Found in 93% of 947 audited codebases. Any vulnerability discovered in a zombie component will remain unpatched indefinitely. They are the supply-chain artefact of maintainer burnout. arXiv 2601.15494 and the Tidelift State of the Open Source Maintainer document how we got here.

Why did tldraw stop accepting pull requests from outside contributors?

tldraw founder Steve Ruiz closed external PRs in January 2026 after an influx of “well-formed noise” AI-generated contributions made triage unsustainable — PRs that appeared formally correct but were based on incorrect premises or fabricated issues, requiring the same review effort as legitimate contributions to identify as invalid. Ruiz noted that GitHub lacked adequate tools for controlling external contribution intake; GitHub began addressing this with new PR controls in February 2026.

How does expert-guided AI security analysis find vulnerabilities without flooding maintainers?

Context-aware AI analysis, a priority-scoring system that filters out low-confidence findings, and mandatory expert verification before disclosure. Result: 12 CVEs in OpenSSL, some dating to 1998, without a single invalid report reaching the maintainer team. The expert team absorbs the false-positive filtering cost rather than transferring it to the maintainer.

Where can I find Daniel Stenberg’s original post on ending the curl bug bounty?

Stenberg’s blog at daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/ is the primary source. Current disclosure process: github.com/curl/curl/security/advisories and curl.se/.well-known/security.txt. His FOSDEM 2026 talk is at fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-of-ai/.

What did “death by a thousand slops” mean?

Stenberg coined the phrase in a July 2025 blog post (daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/) to describe the cumulative burden of AI-generated security reports. It adapts “death by a thousand cuts”: no single report is fatal, but the aggregate volume consumes maintainer time to the point of unsustainability. Widely cited across Ars Technica, Socket.dev, and The Register — because it names a countable category of harm rather than describing the problem abstractly.

