Your software supply chain risk tooling was built on an assumption: when a vulnerability turns up in a dependency, a patched version exists. That assumption breaks the moment the maintainer who would write the patch has burned out and walked away.
The 2026 OSSRA report from Black Duck found that 93% of commercial codebases contain zombie components — dependencies with no development activity in the past two years. Meanwhile, 92% contain components four or more versions behind, and 68% contain licence conflicts — the largest year-over-year increase in the report’s history. AI contribution pressure on open source is the structural driver: AI coding tools have made it trivially cheap to generate pull requests while doing nothing to reduce the cost of reviewing them.
Your existing risk process has a blind spot. This article gives you a five-step process to close it — no dedicated OSPO headcount required.
Why does maintainer burnout show up in your SBOM?
Your SBOM knows a component exists. It does not know the maintainer quit six months ago.
Standard SBOM tooling — CycloneDX, SPDX, and the SCA platforms built on them — captures component name, version, licence declaration, and known CVEs. None of those fields tells you whether anyone is still maintaining the code. Worse, a frozen component accumulates unpatched vulnerabilities that never get a CVE assignment, because no researcher bothers triaging a dead project.
The tooling gap is structural. Snyk, Sonatype, and Mend were architected around “find the patched version.” They are not designed to signal “there will be no patched version.”
OpenSSL was maintained by two overworked, underpaid people at the time of Heartbleed. The XZ compromise succeeded because a patient attacker targeted a single overworked maintainer. Maintainer burnout is not a community welfare concern — it is a supply-chain risk signal. The structural driver is cost asymmetry: AI tools reduce contribution cost to near zero while review cost stays constant and high, accelerating burnout in volunteer-maintained projects. For the broader problem this process addresses — why AI-generated contributions create supply-chain risk across all dimensions — the pillar guide provides the full context.
What is a zombie component, and how do you find yours?
A zombie component is an open source dependency with no development activity in the past two years. The OSSRA 2026 formal definition: no commits, no releases, no issue activity in the last 24 months. Present in 93% of audited commercial codebases. This is not an edge case.
So how do you tell a zombie apart from a mature, stable library that just hasn’t needed any recent commits? Check four signals:
- Last commit date — if more than 24 months have passed, the component meets the zombie definition
- Open security issues with no maintainer response — this is the signal that distinguishes mature from abandoned
- README or CHANGELOG notes indicating “feature complete, no further development” — some projects formally communicate stable-and-done status
- Fork activity — if the community has created and is actively maintaining a fork, the original project’s effective abandonment has already been acknowledged
A CSS reset library with no commits since 2021 and no open CVEs is not the same risk as an authentication library in the same state.
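The four signals above can be combined into a simple triage rule. A minimal sketch in Python — the function and field names are illustrative assumptions, not the schema of any specific SCA tool:

```python
from datetime import datetime, timezone
from typing import Optional

def classify_dormant(last_commit: datetime,
                     unanswered_security_issues: int,
                     declared_feature_complete: bool,
                     active_fork_exists: bool,
                     now: Optional[datetime] = None) -> str:
    """Distinguish a risky zombie from a stable-and-done library."""
    now = now or datetime.now(timezone.utc)
    months_idle = (now - last_commit).days / 30.44  # average month length
    if months_idle < 24:
        return "active"           # does not meet the zombie definition
    if unanswered_security_issues > 0:
        return "zombie"           # security reports with no maintainer response
    if active_fork_exists:
        return "zombie"           # community has already routed around the original
    if declared_feature_complete:
        return "stable-and-done"  # deliberate, communicated stability
    return "zombie"               # dormant with no stated intent
```

Under this rule, a feature-complete CSS reset with no open security issues classifies as stable-and-done, while the same inactivity on a library with unanswered security reports classifies as zombie.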
How to surface zombie components with your existing tooling:
Most SCA tools (Snyk, Black Duck, FOSSA) support filtering by last activity date. Enable this filter and set the threshold to 24 months — it is frequently not on by default. OpenSSF Scorecard provides a “Maintained” check scored 0–10; a score of 0 means no recent commits or issues have been handled.
Not all zombie components require the same response speed. Prioritise by function first (authentication, cryptography, network I/O before UI utilities), then direct versus transitive dependency, then whether a maintained fork exists.
The Kubernetes External Secrets Operator illustrates what happens when a dependency reaches this state: when its sole active maintainer took vacation, zero pull requests were merged and 20 new issues opened with no response. Recovery took at least six months.
How do you calculate the Contributor Absence Factor for a critical dependency?
The Contributor Absence Factor (CAF) is the smallest number of contributors whose departure would remove 50% of all commit activity.
If you have spent any time on Hacker News, you will have come across “bus factor” — the informal shorthand for the same concept. CHAOSS, the Linux Foundation project that maintains metrics for open source software health, formally renamed it Contributor Absence Factor. Use CAF in your risk reports and governance documents.
Why CAF beats total contributor count: A project with 40 contributors can still have a CAF of 1 if one core developer wrote 80% of the commits. Total contributor count is a vanity metric. CAF is the risk metric.
Worked calculation: Eight contributors with commit counts: 1,000; 433; 343; 332; 202; 90; 42; 33. Total: 2,475. The 50% threshold is 1,237.5. The first contributor accounts for 1,000 commits, the second adds 433 — cumulative total now 1,433, which exceeds the threshold. CAF = 2. Two people effectively control this project’s commit activity.
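The walk above is mechanical enough to script. A minimal sketch — the function name is ours, not from any CHAOSS tooling:

```python
def contributor_absence_factor(commit_counts: list) -> int:
    """Smallest number of top contributors whose commits reach 50% of the total."""
    counts = sorted(commit_counts, reverse=True)
    threshold = sum(counts) / 2
    cumulative = 0
    for rank, commits in enumerate(counts, start=1):
        cumulative += commits
        if cumulative >= threshold:
            return rank
    return len(counts)

# The worked example from the text:
print(contributor_absence_factor([1000, 433, 343, 332, 202, 90, 42, 33]))  # → 2
```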
Three ways to compute CAF:
- GitHub API (free): Pull the contributor list with commit counts and walk the list until cumulative commits reach 50% of total. About 30 minutes manually.
- OpenSSF Scorecard: The Contributors check scores 0–10 based on commit distribution. A score below 5 signals concerning concentration.
- Bitergia Risk Radar: Commercial platform that computes a Total Risk Score incorporating CAF directly — suitable for teams assessing many dependencies at once.
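For the GitHub API route, the repository contributors endpoint returns per-contributor commit totals in a `contributions` field. A sketch, assuming an unauthenticated call and ignoring pagination — real use needs a token and page handling:

```python
import json
import urllib.request

def counts_from_response(data: list) -> list:
    """Each item in a /contributors response carries a 'contributions' total."""
    return [c["contributions"] for c in data]

def contributor_counts(owner: str, repo: str) -> list:
    """Fetch per-contributor commit counts from GitHub's REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contributors?per_page=100"
    with urllib.request.urlopen(url) as resp:
        return counts_from_response(json.load(resp))
```

Feed the resulting list into the 50% walk from the worked calculation to get the CAF.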
CAF of 2 or lower on a Tier 3 or Tier 4 dependency that handles security-sensitive functions warrants escalation. CAF of 1 warrants an active contingency plan.
Elephant Factor is the organisational-concentration companion to CAF: the smallest number of organisations accounting for 50% of project activity. When one company employs all active committers, you face the Terraform/OpenTofu or Redis/Valkey scenario — the commercial backer makes a unilateral decision, and you scramble to work out whether you can continue using the project. Track it for your Tier 2 dependencies.
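Elephant Factor uses the same 50% walk, grouped by employer rather than individual. A sketch that maps committers to organisations by email domain — a common heuristic, not an exact attribution:

```python
from collections import Counter

def elephant_factor(committer_emails: list) -> int:
    """Smallest number of organisations accounting for 50% of commit activity."""
    by_org = Counter(email.rsplit("@", 1)[-1] for email in committer_emails)
    counts = sorted(by_org.values(), reverse=True)
    threshold = sum(counts) / 2
    cumulative = 0
    for rank, commits in enumerate(counts, start=1):
        cumulative += commits
        if cumulative >= threshold:
            return rank
    return len(counts)
```

An Elephant Factor of 1 on a Tier 2 dependency is the Terraform/OpenTofu warning sign.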
For evaluating governance quality as part of your dependency health assessment, CAF provides the quantitative complement to qualitative governance review.
What does the CHAOSS viability framework assess, and how do you use it?
The CHAOSS viability framework (a Linux Foundation project) evaluates an open source dependency across four categories: Compliance and Security, Governance, Community, and Strategy. It gives you a documented, reproducible methodology — which matters because you need to compare results quarter-over-quarter, not make ad-hoc judgements each time.
Compliance and Security: Does the project track CVEs? Are releases signed? Is there a security disclosure policy? OpenSSF Scorecard automates most of this — run it first.
Governance: Is there a CONTRIBUTING.md, a code of conduct, documented decision-making? Does the project have an active maintainer group with more than one person? How a project handles AI contribution inflow is now a governance quality signal — its governance orientation toward AI contributions maps directly onto this category.
Community: CAF and Elephant Factor are the primary metrics. Supplement them with commit frequency trend and issue response latency.
Strategy: Is the project foundation-backed (Apache, CNCF, Linux Foundation)? Is there commercial backing with paid contributors? Foundation-backed projects provide a structural buffer against AI contribution pressure.
For each category, assign Red/Amber/Green. Red in Compliance and Security, or Red in Community = escalation required. Red in Governance = monitor closely. Red in Strategy plus Red in Community = contingency plan required.
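The escalation rules reduce to a small decision function. A sketch — one reading of the rules above, with the contingency-plan combination checked before the single-category triggers:

```python
def chaoss_escalation(ratings: dict) -> str:
    """Map Red/Amber/Green category ratings to the responses defined above."""
    red = {category for category, rag in ratings.items() if rag == "Red"}
    if {"Strategy", "Community"} <= red:
        return "contingency plan required"
    if "Compliance and Security" in red or "Community" in red:
        return "escalation required"
    if "Governance" in red:
        return "monitor closely"
    return "no action"
```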
A full CHAOSS assessment takes 30–45 minutes manually. CHAOSS recommends quarterly reassessment — this gives you the trend data to make supply-chain risk arguments at the board level.
How do you classify your dependencies by AI contribution pressure exposure?
Existing SCA risk tiers are based on known vulnerability severity — backward-looking. AI contribution pressure exposure is forward-looking: it asks where future vulnerability discovery will slow down or stop. You need both.
Tier 1 — Foundation-backed, paid contributors (CNCF, Apache, Linux Foundation). Low AI pressure exposure — governance absorbs inflow. Annual CHAOSS check. Default: Monitor.
Tier 2 — Commercially-backed, company employs primary contributors. Medium exposure — risk is vendor strategy change, not burnout. Semi-annual CAF + Elephant Factor check. Default: Monitor Elephant Factor.
Tier 3 — High-star / small volunteer team, no formal governance, high-AI-use language ecosystem (Python, JavaScript, TypeScript). High exposure — high visibility invites AI slop inflow; small team with no institutional buffer. Quarterly CHAOSS viability assessment. Default: Contingency plan if CAF is 2 or lower.
Tier 4 — Single-maintainer, no governance docs, no foundation backing. Single point of failure; burnout = abandonment. Quarterly + contingency planning. Default: Active alternative identification required.
Classification decision logic — four sequential questions:
- Is this project backed by a neutral foundation (CNCF, Apache, Linux Foundation)? Yes → Tier 1.
- Does a company employ the primary contributors as part of their paid work? Yes → Tier 2.
- Is this a high-star project in a high-AI-use language ecosystem with fewer than five active committers and no formal governance? Yes → Tier 3.
- Otherwise: Tier 4.
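The four questions translate directly into a classifier. A sketch with illustrative parameter names — the boolean inputs are the judgement calls you record in the register:

```python
def classify_tier(foundation_backed: bool,
                  company_employs_primary_contributors: bool,
                  high_star: bool,
                  high_ai_use_ecosystem: bool,
                  active_committers: int,
                  formal_governance: bool) -> int:
    """Apply the four sequential classification questions in order."""
    if foundation_backed:
        return 1
    if company_employs_primary_contributors:
        return 2
    if (high_star and high_ai_use_ecosystem
            and active_committers < 5 and not formal_governance):
        return 3
    return 4
```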
“High-AI-use language ecosystem” means Python, JavaScript/TypeScript, and the Go tool ecosystem. A small volunteer-maintained Python utility library today faces meaningfully more contribution pressure than an equivalent Fortran library.
The practical output is a tiered dependency register — maintainable in your SBOM tooling or a simple spreadsheet. Classify all direct dependencies on initial setup; update classification when structural signals change.
Documented incidents at curl, Node.js, and Ghostty show what happens when dependencies reach this state, and provide the evidence for why Tier 3 and Tier 4 classification warrants proactive attention.
What is licence laundering, and why do you need SCA tooling to catch it?
Licence laundering is what happens when AI coding assistants generate code derived from copyleft-licensed sources — GPL, LGPL, AGPL — without retaining the original licence metadata. The result: undisclosed licence obligations embedded in your codebase, or in the codebases of the dependencies you rely on. Standard SCA tools miss this entirely.
OSSRA 2026 found licence conflicts in 68% of audited commercial codebases — up from 56% the previous year. Only 54% of organisations currently evaluate AI-generated code for IP and licensing risks. One audited codebase contained 2,675 distinct licence conflicts.
Here is the technical pathway: a developer uses an AI assistant to generate a function. The AI reproduces logic derived from GPL-licensed source without attribution. The output file has no licence header. Your SCA tool flags it as “unknown” — typically deprioritised — or misses it entirely because it entered as an inline snippet, not a declared dependency.
And if a dependency you rely on has licence-laundered code embedded in it, you inherit that problem. A copyleft snippet in a proprietary codebase can legally obligate you to release your entire proprietary source code.
Standard SCA tools (Snyk, Sonatype, Mend) check declared licence headers and SPDX identifiers. Two tools have moved ahead of the field for semantic fingerprinting:
- JFrog Xray: AI-Generated Code Validation using semantic matching — analysing code’s underlying logic, not just text similarity. If a developer attempts to merge a pull request containing an AI-generated snippet that violates licence policies, the merge is blocked.
- FOSSA Snippet Scanning: Designed specifically for AI coding tool risks. Most scans complete in under five minutes; generates SBOMs and licence attribution reports with traceability between revisions.
AI code validation features are typically not enabled by default in either tool. Explicitly enable them.
The EU Cyber Resilience Act places supply-chain liability on the downstream commercial manufacturer. Undisclosed licence obligations from AI-generated code therefore create both legal and security exposure — worth flagging in any company with EU regulatory exposure.
What does a quarterly OSS health review process look like?
CHAOSS recommends a quarterly cadence because it matches typical engineering governance rhythms — quarterly planning, board reporting — and is frequent enough to catch a project in early-stage decline rather than after it has gone fully dark.
Step 1: OpenSSF Scorecard sweep (~30 minutes, mostly automated). Run against all Tier 1 and Tier 2 dependencies via the CLI or GitHub Actions integration. Flag any dependency with a Maintained score below 5 or a Contributors score below 5. Treat these as escalation triggers for manual investigation.
Step 2: CAF and Elephant Factor check (~45 minutes). For all Tier 3 and Tier 4 dependencies, pull contributor commit data via the GitHub API or Bitergia. Flag any dependency with CAF of 2 or lower. For Tier 2 dependencies, flag any where a single organisation employs more than 50% of active committers.
Step 3: SCA licence scan (~30 minutes, mostly automated). Run FOSSA Snippet Scanning or JFrog Xray with AI code provenance scanning explicitly enabled. Flag any new “unknown” or AI-derived licence entries and add to a legal review queue.
Step 4: Zombie component delta review (~30 minutes). Compare this quarter’s SCA activity report against last quarter’s. Flag any component that moved from “active” to “no recent commits.” Check whether a maintained fork exists — if the community has coalesced around one, migration is a defined path.
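The delta comparison in Step 4 is a set operation once each quarter's report is reduced to a component-to-status mapping. A sketch, assuming you export each report in that simplified form:

```python
def newly_dormant(prev_status: dict, curr_status: dict) -> list:
    """Components that were 'active' last quarter but are no longer active now."""
    return sorted(name for name, status in curr_status.items()
                  if status != "active" and prev_status.get(name) == "active")
```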
Step 5: Escalation and recording (~15 minutes). Critical findings — CAF of 1 on a Tier 3 or Tier 4 dependency, new zombie component in a security-sensitive function, licence conflict — go to the next engineering governance meeting with a recommended action. Three standard responses: find an alternative or maintained fork, sponsor the maintainer or contribute engineering hours, vendor fork internally. The business case framework for the second option treats contributing back as proactive risk reduction.
For a stack of 50–100 direct dependencies, the full process takes approximately two to three hours per quarter.
Who owns this process at a 50–500 person company without an OSPO?
The OSPO (Open Source Programme Office) function is a set of responsibilities, not a team. The failure mode at most SMB tech companies is not “we don’t have an OSPO” — it is “nobody has explicit ownership.” Assigning it to existing roles costs nothing and eliminates the gap.
CTO owns policy and escalation decisions: setting the OSS dependency policy, approving contingency plans for Tier 4 dependencies, escalating licence findings to legal. For companies with EU regulatory exposure, the quarterly review output becomes the compliance evidence file.
Engineering Leads and Platform Engineers run quarterly review Steps 1–4. They own the tiered dependency register and make fork-vs-replace recommendations. At the smaller end of the range, this may be a single senior engineer or the CTO directly.
Security Function (whoever owns AppSec or SCA tooling) configures and maintains SCA tooling with AI code provenance scanning enabled, owns the licence scanning, and feeds findings into the quarterly review.
First-quarter bootstrap: Run a one-time audit across all direct dependencies; classify them by tier; create the tiered register; identify any immediate findings (CAF of 1, zombie components in security-sensitive functions, licence conflicts); run CHAOSS viability assessment on all Tier 3 and Tier 4 dependencies. This takes one to two days for a 50–100 dependency stack. After that, the quarterly review is just the delta.
For Tier 3 and Tier 4 dependencies, the most effective long-term risk reduction is upstream contribution — funding a maintainer, contributing engineering hours, or steering a dependency toward foundation governance. The cost comparison favours contributing back as a proactive risk reduction strategy over repeated emergency migrations. For a complete overview of how AI-generated contributions are reshaping open-source supply chain risk across all dimensions — mechanism, incidents, governance, and platform responses — see the full series.
Frequently Asked Questions
Does our SBOM currently capture maintainer health signals?
Almost certainly not. Standard SBOM formats — CycloneDX, SPDX — capture component name, version, licence declaration, and known CVEs. They do not capture commit frequency, contributor count, or CAF. Layer an OpenSSF Scorecard sweep on top of your SBOM output to get maintainer health signals. Some commercial SCA platforms (Black Duck, Mend) are beginning to add activity signals, but they are typically not enabled by default.
What SCA tools surface licence laundering from AI-generated code?
JFrog Xray is the most capable tool for semantic fingerprinting of AI-generated code provenance. FOSSA Snippet Scanning provides strong licence compliance scanning for AI-generated code contexts. Standard SCA tools (Snyk, Sonatype, Mend) rely on declared licence headers — they will flag “unknown” entries but do not perform semantic fingerprinting. Enable AI code validation features explicitly; they are not on by default.
How do I prioritise which dependencies to assess first?
Function filter first — any dependency handling authentication, cryptography, network protocols, or deserialisation is a priority regardless of tier. Then tier classification — assess Tier 3 and Tier 4 first. For a 50–100 dependency stack, this typically produces a list of 10–15 high-priority items for the first quarter.
Can I do this assessment without any commercial tooling?
Yes. OpenSSF Scorecard is free and open source; the GitHub API is free for public repositories; the CHAOSS viability framework documentation is publicly available at chaoss.community. The limitation is scale — manual CAF calculation becomes time-consuming above approximately 30 direct dependencies. Add commercial tooling when the manual process exceeds approximately half a working day per quarter.
What does the EU Cyber Resilience Act require specifically for OSS dependencies?
The CRA places supply-chain liability on the downstream commercial manufacturer, not the OSS maintainer. Companies shipping software to EU markets must demonstrate supply-chain due diligence — knowing what OSS dependencies they use, their licence status, and their security maintenance status. The quarterly OSS health review process described here produces the documentation that satisfies this requirement. Full CRA obligations phase in by December 2027; consult legal counsel for your jurisdiction.
What should I do when a Tier 4 dependency has no active maintainer?
Three options in order of preference: (1) Find a maintained fork — check GitHub for forks with recent activity; the community often coalesces around one; (2) Sponsor or contribute to restart maintenance — fund a developer or assign engineering hours; this is a supply-chain investment, not charity; (3) Vendor fork internally — fork the repository, assume maintenance, and apply security patches. The business case for upstream investment provides the framework for option 2.
How often should I re-tier a dependency after initial classification?
Re-tier when a structural signal changes: project transitions from independent to foundation-backed (Tier 3 to Tier 1); a commercial backer withdraws support (Tier 2 to Tier 3); CAF drops to 1 due to a core contributor departure; the project freezes. Between tier-change events, the quarterly review process provides sufficient signal to detect trend changes without requiring full re-classification.