How many third-party dependencies does your app use? If you’re running a Node app, probably dozens. A Python backend? Maybe hundreds when you count transitive dependencies. And when the EU Cyber Resilience Act comes into full force in December 2027, you’re going to need a machine-readable Software Bill of Materials (SBOM) for every single release.
This guide is part of our comprehensive software supply chain security resource, where we explore the full landscape from regulatory compliance to incident response. Here we focus specifically on automating SBOM generation in your CI/CD pipeline.
Manual SBOM generation is a pain. You’ll forget to do it. Or you’ll do it wrong. Or someone will update a dependency and the SBOM will be out of date before you even ship.
So in this article we’re going to give you copy-paste ready GitHub Actions workflows that generate, sign, version, and store SBOMs automatically on every build. We’ll cover Syft for SBOM generation, Cosign signing with Sigstore, multi-language support for Node.js, Python, Java, and Go, and VEX integration with Grype.
The basic setup takes 30-45 minutes. The complete multi-language implementation with VEX will take you 2-3 hours. You’ll need a GitHub repository with Actions enabled, basic familiarity with YAML, and an understanding of how your package manager works.
Let’s get into it.
Why automate SBOM generation in CI/CD pipelines?
The regulatory pressure is real. The EU’s Cyber Resilience Act (which has specific SBOM technical requirements you’ll need to meet), PCI-DSS v4.0, and FDA requirements all mandate SBOMs for software releases. And they want them machine-readable, which means no more Excel spreadsheets or Word documents.
When you generate SBOMs manually, you create audit gaps. Steps get skipped. Transitive dependencies – those dependencies of your dependencies – go undetected. And the format varies between releases because different developers handle it differently.
Build-time generation captures exact dependency versions at build time, eliminating the version drift between what you developed with and what actually ships to production. And cryptographic signing with Cosign gives you tamper-evident proof that your SBOM matches what you actually built.
The CRA’s Article 20 requires 10-year SBOM retention. Good luck maintaining that with manual processes. Automated workflows with systematic versioning and storage make this trivial.
And here’s the kicker – you don’t need to change your development workflow at all. The SBOMs just appear. No developer action required. Automation reduces errors by 80-95% compared to manual processes, and saves your developers 15-30 minutes per release they would have spent generating SBOMs by hand.
The compliance deadlines aren’t going away. CRA enforcement hits December 2027. PCI-DSS v4.0 is already in effect. You need to get this sorted.
What is Syft and why use it for SBOM generation?
Syft is Anchore’s open-source SBOM generation tool. It supports 15+ package managers including npm, pip, Maven, and Go modules, which covers pretty much every mainstream development stack.
It generates both SPDX and CycloneDX formats. This is important because different regulators prefer different formats. SPDX (currently version 2.3, with 3.0 recently released) excels in licensing, compliance, and provenance tracking. CycloneDX 1.7 is optimised for CI/CD integration and vulnerability management.
Syft detects transitive dependencies by analysing manifests and lock files like package-lock.json, requirements.txt, and go.sum – files that manual SBOM processes often miss entirely. When you scan container images, it extracts dependencies from multiple layers, catching all software components regardless of how they were installed.
GitHub Actions integration via the official anchore/sbom-action makes setup remarkably straightforward. You’re looking at minimal YAML configuration.
How do I set up basic SBOM generation with GitHub Actions and Syft?
You need to create .github/workflows/sbom-generation.yml in your repository root. That’s where GitHub Actions looks for workflow definitions.
The official action for Syft is anchore/sbom-action@v0. It handles automatic tool installation and caching, so you don’t need to mess around with downloading binaries or managing versions.
Configure the path parameter to tell Syft where to scan. Use . for the repository root, or point it at specific directories if you’ve got a monorepo situation.
Set the format parameter to spdx-json, cyclonedx-json, or both if you need multi-format compliance.
The upload-artifact action stores your generated SBOM as a build artefact. GitHub’s default retention is 90 days, which you can extend if needed.
Trigger the workflow on push events to your main branch and release events for production builds. Add pull_request triggers if you want to validate SBOMs on proposed changes before they get merged.
Here’s the complete working workflow:
name: Generate SBOM

on:
  push:
    branches: [ main ]
  release:
    types: [ published ]
  pull_request:
    branches: [ main ]

jobs:
  sbom-generation:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Generate SBOM with Syft
        uses: anchore/sbom-action@v0
        with:
          path: .
          format: spdx-json
          output-file: sbom.spdx.json

      - name: Upload SBOM artefact
        uses: actions/upload-artifact@v4
        with:
          name: sbom-spdx
          path: sbom.spdx.json
          retention-days: 90
To test it, commit this workflow and check the Actions tab in your repository. Download the artefact and open it to validate the SBOM structure looks right and that dependency detection actually worked.
Common errors you’ll hit include missing permissions – make sure you’ve got contents: read in there. Path issues crop up if your repository structure doesn’t match what you specified. And format validation failures happen if your format parameter has a typo.
How do I add cryptographic signing with Cosign and Sigstore?
Cosign provides cryptographic signing for SBOMs. This gives you tampering detection and provenance verification, which matters a lot when you’re shipping software to customers or need to prove compliance.
Sigstore offers keyless signing using OIDC identity. It leverages your GitHub Actions identity, which means you don’t need to manage private keys. No keys to rotate, no secrets to leak.
Install Cosign in your workflow using the sigstore/cosign-installer@v3 action before your SBOM generation step.
Sign your generated SBOM file using the cosign sign-blob command with the --yes flag for non-interactive execution in CI.
The GitHub Actions OIDC token requires id-token: write permission to authenticate the signing process without you needing to manually configure secrets.
The signing process generates a signature file (.sig) and certificate file (.cert) that get stored alongside your SBOM. These form the verification chain.
Here’s the extended workflow with signing:
name: Generate and Sign SBOM

on:
  push:
    branches: [ main ]
  release:
    types: [ published ]

jobs:
  sbom-generation:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      actions: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Generate SBOM with Syft
        uses: anchore/sbom-action@v0
        with:
          path: .
          format: spdx-json
          output-file: sbom.spdx.json

      - name: Sign SBOM with Cosign
        run: |
          cosign sign-blob --yes \
            --output-signature sbom.spdx.json.sig \
            --output-certificate sbom.spdx.json.cert \
            sbom.spdx.json

      - name: Upload SBOM and signatures
        uses: actions/upload-artifact@v4
        with:
          name: sbom-signed
          path: |
            sbom.spdx.json
            sbom.spdx.json.sig
            sbom.spdx.json.cert
          retention-days: 90
Recipients can validate your signed SBOM using cosign verify-blob --cert sbom.spdx.json.cert --signature sbom.spdx.json.sig sbom.spdx.json. The certificate contains identity claims that confirm the build provenance – proving it came from your official build process, not someone’s laptop.
How do I implement multi-language SBOM generation for polyglot repositories?
If you’ve got a polyglot repository – say a Node.js frontend, Python backend, and Go microservices – you need language-specific scanning strategies.
Syft auto-detects package managers, but explicit path configuration improves accuracy and speeds up your builds.
Use GitHub Actions matrix strategy to run parallel SBOM generation for each language or component. This keeps your CI time reasonable even with multiple services.
Language-specific considerations matter. Node.js needs package-lock.json present. Python requires requirements.txt or poetry.lock. Java looks for pom.xml or build.gradle. Go reads go.mod.
Here’s a complete multi-language workflow:
name: Multi-Language SBOM Generation

on:
  push:
    branches: [ main ]
  release:
    types: [ published ]

jobs:
  sbom-generation:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        component:
          - name: frontend
            path: frontend/
            language: nodejs
          - name: backend
            path: backend/
            language: python
          - name: api-service
            path: services/api/
            language: go
    permissions:
      contents: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Generate SBOM for ${{ matrix.component.name }}
        uses: anchore/sbom-action@v0
        with:
          path: ${{ matrix.component.path }}
          format: spdx-json
          output-file: sbom-${{ matrix.component.name }}.spdx.json

      - name: Sign SBOM
        run: |
          cosign sign-blob --yes \
            --output-signature sbom-${{ matrix.component.name }}.spdx.json.sig \
            --output-certificate sbom-${{ matrix.component.name }}.spdx.json.cert \
            sbom-${{ matrix.component.name }}.spdx.json

      - name: Upload SBOM artefacts
        uses: actions/upload-artifact@v4
        with:
          name: sbom-${{ matrix.component.name }}
          path: |
            sbom-${{ matrix.component.name }}.spdx.json
            sbom-${{ matrix.component.name }}.spdx.json.sig
            sbom-${{ matrix.component.name }}.spdx.json.cert
For monorepo optimisation, use change detection on your language-specific lock files to skip scanning unchanged components. This can cut your CI time by 40-60%.
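One low-effort form of change detection is a path-filtered trigger, so the workflow only fires when a dependency manifest actually changes. A minimal sketch, assuming the same illustrative frontend/backend/services layout as the matrix example; the paths are illustrative and should match your own lock files:

```yaml
# Sketch only: fire the SBOM workflow when dependency manifests change.
on:
  push:
    branches: [ main ]
    paths:
      - 'frontend/package-lock.json'
      - 'backend/poetry.lock'
      - 'backend/requirements.txt'
      - 'services/api/go.sum'
```

Note this skips the whole workflow when nothing matched; for per-component skipping you'd combine it with per-job conditions.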
What SBOM versioning strategy should I use for the 10-year retention requirement?
EU CRA Article 20 mandates 10-year SBOM retention with version correlation to product releases. You can’t just store them randomly and hope for the best.
Use semantic versioning alignment. Your SBOM version matches your software release version. So v1.2.3 of your software becomes sbom-v1.2.3.spdx.json.
Include the Git commit SHA in your SBOM metadata. This creates an audit trail linking the SBOM to the exact source code state that produced it.
For storage, use S3 or Azure Blob with lifecycle policies. Keep it in Standard storage for 90 days, then transition to Archive. This reduces costs by 70-90% while maintaining accessibility.
Your directory structure should be sboms/YYYY/MM/product-version-commit.spdx.json for chronological retrieval when auditors come knocking.
Here’s a workflow with S3 upload:
name: SBOM Generation with Long-term Storage

on:
  release:
    types: [ published ]

jobs:
  sbom-generation:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Get version and commit info
        id: version
        run: |
          echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
          echo "COMMIT=${GITHUB_SHA:0:8}" >> $GITHUB_OUTPUT
          echo "YEAR=$(date +%Y)" >> $GITHUB_OUTPUT
          echo "MONTH=$(date +%m)" >> $GITHUB_OUTPUT

      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          path: .
          format: spdx-json
          output-file: sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json

      - name: Sign SBOM
        run: |
          cosign sign-blob --yes \
            --output-signature sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.sig \
            --output-certificate sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.cert \
            sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - name: Upload to S3 with versioned path
        run: |
          aws s3 cp sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json \
            s3://${{ secrets.SBOM_BUCKET }}/sboms/${{ steps.version.outputs.YEAR }}/${{ steps.version.outputs.MONTH }}/sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json
          aws s3 cp sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.sig \
            s3://${{ secrets.SBOM_BUCKET }}/sboms/${{ steps.version.outputs.YEAR }}/${{ steps.version.outputs.MONTH }}/sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.sig
          aws s3 cp sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.cert \
            s3://${{ secrets.SBOM_BUCKET }}/sboms/${{ steps.version.outputs.YEAR }}/${{ steps.version.outputs.MONTH }}/sbom-${{ steps.version.outputs.VERSION }}-${{ steps.version.outputs.COMMIT }}.spdx.json.cert
Configure S3 lifecycle policies to automatically transition objects from Standard to Glacier after 90 days, then to Glacier Deep Archive after 365 days. Set it and forget it.
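The lifecycle rule itself is a short JSON document you apply once with aws s3api put-bucket-lifecycle-configuration. A sketch matching the transitions described above; the rule ID is illustrative, and the prefix assumes the sboms/ directory structure from earlier:

```json
{
  "Rules": [
    {
      "ID": "sbom-archival",
      "Status": "Enabled",
      "Filter": { "Prefix": "sboms/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

Apply it with aws s3api put-bucket-lifecycle-configuration --bucket your-sbom-bucket --lifecycle-configuration file://lifecycle.json (bucket name illustrative).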
How do I implement GitLab CI equivalent workflows?
GitLab CI uses .gitlab-ci.yml instead of the .github/workflows/ directory structure that GitHub uses.
Syft installation uses container images like anchore/syft:latest rather than marketplace actions. Same tool, different packaging.
The job stages are build, sbom, sign, and upload, with explicit dependencies to prevent steps running in the wrong order.
Artefact storage uses GitLab’s artifacts: directive with 30-day default retention that you can configure to unlimited if you need it.
Here’s the complete GitLab CI workflow:
stages:
  - build
  - sbom
  - sign
  - upload

variables:
  SBOM_FORMAT: "spdx-json"

generate-sbom:
  stage: sbom
  image: anchore/syft:latest
  script:
    - syft dir:. -o ${SBOM_FORMAT} > sbom.spdx.json
  artifacts:
    paths:
      - sbom.spdx.json
    expire_in: 90 days
  only:
    - main
    - tags

sign-sbom:
  stage: sign
  image:
    name: gcr.io/projectsigstore/cosign:latest
    entrypoint: [""]
  dependencies:
    - generate-sbom
  script:
    - cosign sign-blob --yes
      --output-signature sbom.spdx.json.sig
      --output-certificate sbom.spdx.json.cert
      sbom.spdx.json
  artifacts:
    paths:
      - sbom.spdx.json
      - sbom.spdx.json.sig
      - sbom.spdx.json.cert
    expire_in: 90 days
  only:
    - main
    - tags

upload-to-storage:
  stage: upload
  image: amazon/aws-cli:latest
  dependencies:
    - sign-sbom
  script:
    - export VERSION=${CI_COMMIT_TAG:-${CI_COMMIT_SHORT_SHA}}
    - export YEAR=$(date +%Y)
    - export MONTH=$(date +%m)
    - aws s3 cp sbom.spdx.json
      s3://${SBOM_BUCKET}/sboms/${YEAR}/${MONTH}/sbom-${VERSION}.spdx.json
    - aws s3 cp sbom.spdx.json.sig
      s3://${SBOM_BUCKET}/sboms/${YEAR}/${MONTH}/sbom-${VERSION}.spdx.json.sig
    - aws s3 cp sbom.spdx.json.cert
      s3://${SBOM_BUCKET}/sboms/${YEAR}/${MONTH}/sbom-${VERSION}.spdx.json.cert
  only:
    - tags
If you’re on GitLab Ultimate tier, you get SBOM display in the security dashboard built in.
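GitLab can also ingest the SBOM directly if you publish it as a CycloneDX report artifact, which feeds the dependency list on Ultimate. A hedged sketch; the job name is illustrative, and note this requires CycloneDX output rather than SPDX:

```yaml
# Sketch: publish a CycloneDX SBOM as a GitLab report artifact
sbom-cyclonedx:
  stage: sbom
  image: anchore/syft:latest
  script:
    - syft dir:. -o cyclonedx-json > sbom.cdx.json
  artifacts:
    reports:
      cyclonedx: sbom.cdx.json
```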
How do I integrate VEX (Vulnerability Exploitability eXchange) with SBOM generation?
VEX documents link SBOMs with vulnerability analysis. They tell you which CVEs are actually exploitable in your specific context versus just being present but harmless.
Grype is Anchore’s vulnerability scanner. It generates VEX-compatible output after scanning your SBOM against CVE databases. For a deeper dive into implementing comprehensive vulnerability scanning with reachability analysis, which can reduce false positives by up to 80%, check out our dedicated SCA guide.
The integration workflow is straightforward. Generate your SBOM with Syft. Scan it with Grype. Produce the VEX document. Store all three artefacts together.
VEX status values are not_affected (the vulnerable dependency is present but the code path is never invoked), affected (actually exploitable), fixed (you’ve deployed a patched version), and under_investigation (you’re still analysing it).
Reachability analysis, available in Grype via call-graph plugins, automates the not_affected determination. This reduces false positives by around 80%.
Here’s the complete SBOM plus VEX workflow:
name: SBOM Generation with VEX Integration

on:
  push:
    branches: [ main ]
  release:
    types: [ published ]

jobs:
  sbom-vex-generation:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      security-events: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Generate SBOM with Syft
        uses: anchore/sbom-action@v0
        with:
          path: .
          format: spdx-json
          output-file: sbom.spdx.json

      - name: Install Grype
        run: |
          curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

      - name: Scan SBOM for vulnerabilities
        run: |
          grype sbom:sbom.spdx.json -o json > vulnerabilities.json
          grype sbom:sbom.spdx.json -o sarif > grype.sarif

      - name: Generate VEX document
        run: |
          cat vulnerabilities.json | jq '{
            "@context": "https://openvex.dev/ns/v0.2.0",
            "@id": "https://example.com/vex/\(env.GITHUB_SHA)",
            "author": "GitHub Actions SBOM Pipeline",
            "timestamp": (now | strftime("%Y-%m-%dT%H:%M:%SZ")),
            "version": 1,
            "statements": [.matches[] | {
              "vulnerability": {
                "name": .vulnerability.id,
                "description": .vulnerability.description
              },
              "products": [{
                "name": env.GITHUB_REPOSITORY,
                "version": env.GITHUB_SHA
              }],
              "status": (if .vulnerability.severity == "Critical" or .vulnerability.severity == "High" then "affected" else "under_investigation" end),
              "justification": (if .vulnerability.severity == "Negligible" or .vulnerability.severity == "Low" then "vulnerable_code_not_in_execute_path" else null end)
            }]
          }' > vex.json

      - name: Upload Grype SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: grype.sarif

      - name: Sign SBOM
        run: |
          cosign sign-blob --yes \
            --output-signature sbom.spdx.json.sig \
            --output-certificate sbom.spdx.json.cert \
            sbom.spdx.json

      - name: Upload SBOM, VEX, and signatures
        uses: actions/upload-artifact@v4
        with:
          name: sbom-vex-complete
          path: |
            sbom.spdx.json
            sbom.spdx.json.sig
            sbom.spdx.json.cert
            vulnerabilities.json
            vex.json
            grype.sarif
          retention-days: 90
Security teams use VEX documents to prioritise remediation by filtering on affected versus not_affected status. You stop wasting time on vulnerabilities that don’t actually matter in your application.
FAQ Section
What SBOM format should I generate: SPDX or CycloneDX?
Generate both formats if you’re uncertain about regulatory requirements. SPDX (ISO/IEC 5962) is preferred for licensing compliance and government contracts including FDA submissions and federal procurement. CycloneDX (ECMA-424) is optimised for security use cases with better vulnerability correlation and faster CI/CD integration. For detailed field-by-field mapping of SPDX requirements for EU CRA compliance, refer to our regulatory implementation guide.
Syft supports concurrent generation of both formats with minimal performance impact. Add --output spdx-json --output cyclonedx-json to your commands. Most organisations generate CycloneDX for internal security workflows and SPDX for customer and regulator distribution.
How do I handle private dependencies and internal packages in SBOMs?
Syft detects private npm and pip packages from lock files automatically. For internal packages, just make sure version metadata is correctly specified in package.json or setup.py.
Redact sensitive information like internal repository URLs or proprietary component names using SPDX ExternalRefs or CycloneDX externalReferences with generic identifiers. Configure Syft --exclude flags to omit development dependencies or test fixtures that don’t ship to production.
For compliance purposes, document your redaction policy and provide auditors with access to full unredacted SBOMs under NDA if they require it.
Can I use this workflow for container images instead of source code?
Yes. Syft excels at container image analysis. Replace the path parameter with the image parameter in your anchore/sbom-action@v0 configuration, e.g. image: my-app:latest.
Syft scans all image layers, extracting packages installed via apt, yum, apk, and language package managers. For multi-stage builds, scan the final production image to exclude build-time dependencies.
Use docker/build-push-action@v4 to build your image in the workflow, then pass the image reference to the SBOM action. Container SBOMs capture OS packages like glibc and openssl that source code scanning misses entirely.
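Put together, the container-focused steps might look like this sketch, assuming a Dockerfile in the repository root and the illustrative tag my-app:latest; load: true keeps the image in the local Docker daemon so Syft can scan it:

```yaml
# Sketch: build the image locally, then scan the image instead of the source tree
- name: Build image
  uses: docker/build-push-action@v4
  with:
    context: .
    tags: my-app:latest
    push: false
    load: true

- name: Generate container SBOM
  uses: anchore/sbom-action@v0
  with:
    image: my-app:latest
    format: spdx-json
    output-file: sbom-container.spdx.json
```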
How long does SBOM generation add to build time?
A typical Node.js application takes 30-90 seconds. A large monorepo with 50+ microservices needs 3-8 minutes with parallel matrix jobs. Java or Maven projects with many dependencies take 1-3 minutes. Container image scanning needs 1-5 minutes depending on image size and layer count.
Optimise by caching the Grype vulnerability database (several hundred megabytes) so it isn't re-downloaded on every run – Syft itself needs no database, only Grype does. Use conditional workflow execution to run only on release tags. Run SBOM generation in parallel with your tests. This cost is minimal compared to the 15-30 minutes saved per release from eliminating manual generation.
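If your pipeline also runs Grype (as in the VEX workflow), caching its vulnerability database between runs avoids a large download each time. A sketch, assuming the default Linux cache location; the cache key is illustrative, and you'd typically add a date component to force periodic refreshes:

```yaml
# Sketch: cache Grype's vulnerability DB (default XDG cache path on Linux)
- name: Cache Grype vulnerability database
  uses: actions/cache@v4
  with:
    path: ~/.cache/grype/db
    key: grype-db-${{ runner.os }}
```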
Do I need to sign SBOMs if they’re stored in private repositories?
Yes, for compliance and incident response readiness. CRA Article 20 requires integrity and authenticity guarantees. Signing prevents tampering if your repository gets compromised or you face insider threat scenarios.
Sigstore keyless signing adds 15-30 seconds with zero ongoing maintenance since there’s no private key rotation to worry about. Verification enables customers to validate that the SBOM matches the software they received, which is essential for supply chain trust.
Even for internal use, signed SBOMs create an audit trail proving the SBOM was generated during the official build process, not manually created after an incident.
How do I verify a signed SBOM received from a vendor?
Download three files: the SBOM (.spdx.json), signature (.sig), and certificate (.cert).
Install Cosign using brew install cosign on macOS or download the binary for your platform.
Run verification with cosign verify-blob --cert sbom.cert --signature sbom.sig sbom.spdx.json.
Inspect the certificate using openssl x509 -in sbom.cert -text -noout to confirm the GitHub Actions identity and repository match what you expect.
Check the Sigstore transparency log. The certificate includes a Rekor log index for independent verification. If verification fails, that indicates tampering or an invalid signing process. Reject the SBOM and contact the vendor.
Can I automate SBOM generation for legacy applications without CI/CD?
Yes, using scheduled workflows or manual triggers. GitHub Actions supports workflow_dispatch for manual runs via the web UI and schedule for cron-based triggers.
For legacy apps, create a dedicated SBOM repository with a workflow that checks out your application code, generates the SBOM, and stores the result. Run it monthly or on-demand before releases.
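The trigger section of such a dedicated SBOM repository could look like this sketch; the cron schedule is illustrative, running at 06:00 UTC on the first of each month:

```yaml
# Sketch: manual trigger plus a monthly scheduled run for legacy estates
on:
  workflow_dispatch:
  schedule:
    - cron: '0 6 1 * *'
```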
An alternative approach is local generation using the Syft CLI with syft dir:./app -o spdx-json=sbom.spdx.json then manual upload to your compliance storage.
This isn’t ideal for 10-year retention compliance, but it enables SBOM adoption for your legacy estate before you’ve migrated everything to CI/CD.
What permissions does the GitHub Actions workflow need?
Minimum permissions are contents: read to check out repository code and id-token: write for Sigstore keyless signing via OIDC. For artefact upload, you need actions: write which is included by default. For release attachment, add contents: write.
Security best practice is using the permissions: block to explicitly grant minimal required permissions. This prevents workflow compromise from escalating privileges.
Example:

permissions:
  contents: read
  id-token: write
  actions: write
Self-hosted runners may require additional AWS or Azure credentials for storage upload, configured as repository secrets.
How do I handle SBOM generation for monorepos with 50+ microservices?
Use GitHub Actions matrix strategy to generate SBOMs in parallel for each service. Define the matrix with service paths: matrix: service: [api, frontend, worker, ...].
Each job runs Syft on a specific directory: path: services/${{ matrix.service }}.
Aggregate results by uploading individual SBOMs as separate artefacts or creating a composite SPDX document with relationships linking the services together.
Optimise using paths filter to trigger only affected service jobs when code changes: on: push: paths: 'services/api/**'. This reduces CI time by 70-90% for large monorepos by skipping unchanged components.
What’s the difference between SBOM generation and Software Composition Analysis (SCA)?
SBOM is the inventory of components – the list of dependencies and their versions. SCA is the analysis of that inventory for vulnerabilities, licences, and risks.
Syft generates SBOMs. Grype performs SCA by scanning the SBOM against CVE databases.
The workflow sequence is: Syft creates the SBOM, Grype scans for vulnerabilities, then the results inform your remediation efforts. For implementing a complete SCA workflow with PR blocking and reachability analysis, see our detailed integration guide.
Enterprise SCA platforms like Snyk and Mend combine both capabilities with reachability analysis and auto-remediation features. This tutorial focuses on SBOM generation. You’ll want to integrate with SCA tools for complete supply chain security.
How do I update SBOMs when dependencies change mid-release?
Re-run the SBOM generation workflow after dependency updates.
For hotfixes, create a patch release like v1.2.4 with a new SBOM version matching it.
For major dependency updates, generate a new SBOM and compare it with the previous version using diff tools. SPDX-Tools has comparison utilities built in.
Maintain a changelog documenting dependency changes between SBOM versions.
The automated approach is triggering the SBOM workflow on package-lock.json changes with the pull_request event. Generate a preview SBOM as a PR comment showing added and removed dependencies. When the changes are approved and merged, the final SBOM generates on release.
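That trigger can be expressed as a sketch like the following; the lock-file paths are illustrative and should be adjusted to your package managers:

```yaml
# Sketch: regenerate a preview SBOM when lock files change in a pull request
on:
  pull_request:
    paths:
      - '**/package-lock.json'
      - '**/poetry.lock'
      - '**/go.sum'
```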
Can I use this workflow for firmware or embedded systems?
Partially. Syft detects software dependencies like Linux packages and Python libraries on embedded Linux systems.
For firmware components including bootloader, RTOS, and proprietary blobs, you’ll need manual SBOM creation using SPDX editors.
A hybrid approach works. Use Syft to generate the software layer SBOM, then manually add firmware components in the SPDX Relationships section.
Tools are emerging from the NTIA SBOM Tooling working group for firmware SBOM extraction.
The CRA compliance challenge is that the "all integrated components" mandate includes hardware and firmware, which are currently underserved by automated tools. For regulated hardware products in medical devices or automotive, you might want to consider specialist firmware SBOM consultancy.
Next Steps: Now that you have automated SBOM generation in place, explore our complete supply chain security strategy covering regulatory compliance, incident response, and the full security tool stack required for comprehensive protection.