Guidelines for developing & contributing to Anchore Open Source projects
Welcome! We appreciate all contributions to Anchore’s open source projects.
Whether you’re fixing a bug, adding a feature, or improving documentation, your help makes these tools better for everyone.
Getting Help
The Anchore open source community is here to help. Use Discourse for questions, discussions, and troubleshooting.
Use GitHub for reporting bugs, requesting features, and submitting code contributions. See Issues vs Discussions for guidance on which channel to use.
General discussion: Ideas, use cases, and community chat
Help requests: Troubleshooting your specific setup
Best practices: Sharing knowledge and experiences
Why separate channels?
GitHub issues track work items that require code changes.
Each issue represents a potential task for the development team.
Discourse provides a better format for conversations, questions, and community support without cluttering the issue tracker.
If you’re unsure which to use, start with Discourse. The community can help identify if an issue should be created.
Security Issues
If you discover a security vulnerability, please report it privately rather than creating a public issue.
See our Security Policy for details on how to report security issues responsibly. This gives us time to fix the problem and protect users before details become public.
2 - Syft
Developer guidelines when contributing to Syft
Getting started
In order to test and develop in the Syft repo you will need the following dependencies installed:
Golang
Docker
Python (>= 3.9)
make
Initial setup
Run once after cloning to install development tools:
make bootstrap
Make sure you’ve updated your docker settings so the default docker socket path is available.
Go to docker → settings → advanced and ensure “Allow the default Docker socket to be used” is checked.
To use the default docker context, run: docker context use default
Useful commands
Common commands for ongoing development:
make help - List all available commands
make lint - Check code formatting and linting
make lint-fix - Auto-fix formatting issues
make unit - Run unit tests
make integration - Run integration tests
make cli - Run CLI tests
make snapshot - Build release snapshot with all binaries and packages
Testing
Levels of testing
unit (make unit): The default level of testing, distributed throughout the repo, is the unit test. Any _test.go file that
does not reside somewhere within the /test directory is a unit test. Other forms of testing should be organized in
the /test directory. These tests should focus on the correctness of functionality in depth. Test coverage metrics
only consider unit tests and no other forms of testing.
integration (make integration): located within cmd/syft/internal/test/integration, these tests focus on the behavior surfaced by the common library
entrypoints from the syft package and make light assertions about the results surfaced. Additionally, these tests
tend to make diversity assertions for enum-like objects, ensuring that as enum values are added to a definition,
the integration tests automatically fail if no test attempts to use that enum value. For more details see
the “Data diversity and freshness assertions” section below.
cli (make cli): located within test/cli, these tests verify the correctness of application behavior from a
snapshot build. Use this level in cases where a unit or integration test will not do, or if you are looking
for in-depth testing of code in the cmd/ package (such as testing the proper behavior of application configuration,
CLI switches, and glue code before syft library calls).
acceptance (make install-test): located within test/compare and test/install, these are smoke-like tests that ensure that application
packaging and installation works as expected. For example, during release we provide RPM packages as a download
artifact. We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the
output of a syft invocation matches canned expected output. New acceptance tests should be added for each release artifact
and architecture supported (when possible).
Data diversity and freshness assertions
It is important that tests against the codebase are flexible enough to begin failing when they do not cover “enough”
of the objects under test. “Cover” in this case does not mean that some percentage of the code has been executed
during testing, but instead that there is enough diversity of data input reflected in testing relative to the
definitions available.
For instance, consider an enum-like value like so:
type Language string

const (
    Java       Language = "java"
    JavaScript Language = "javascript"
    Python     Language = "python"
    Ruby       Language = "ruby"
    Go         Language = "go"
)
Say we have a test that exercises all the languages defined today:
func TestCatalogPackages(t *testing.T) {
    testTable := []struct {
        // ... the set of test cases that test all languages
    }{ /* ... */ }

    for _, test := range testTable {
        t.Run(test.name, func(t *testing.T) {
            // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
            // ...
        })
    }
}
Where each test case has an inputFixturePath that would result in packages from each language. This test is
brittle since it does not directly assert that all languages were exercised, and future modifications (such as
adding a new language) won’t be covered by any test case.
To address this, the enum-like object should have a definition of all objects that can be used in testing:
type Language string

// const ( Java Language = ..., ... )

var AllLanguages = []Language{
    Java,
    JavaScript,
    Python,
    Ruby,
    Go,
    Rust,
}
Allowing testing to automatically fail when adding a new language:
func TestCatalogPackages(t *testing.T) {
    testTable := []struct {
        // ... the set of test cases that (hopefully) covers all languages
    }{ /* ... */ }

    // new stuff...
    observedLanguages := strset.New()

    for _, test := range testTable {
        t.Run(test.name, func(t *testing.T) {
            // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
            // ...

            // new stuff...
            for _, actualPkg := range actual {
                observedLanguages.Add(string(actualPkg.Language))
            }
        })
    }

    // new stuff...
    for _, expectedLanguage := range pkg.AllLanguages {
        if !observedLanguages.Contains(string(expectedLanguage)) {
            t.Errorf("failed to test language=%q", expectedLanguage)
        }
    }
}
This is a better test since it will fail when someone adds a new language but does not write a test case that
exercises it. This method is ideal for integration-level testing, where testing correctness in depth
is not needed (that is what unit tests are for), but testing in breadth ensures that units are well integrated.
A similar case can be made for data freshness; if the quality of the results will be diminished when the input data
is not kept up to date, then a test should be written (when possible) to assert that the input data is not stale.
An example of this is the static list of licenses stored in internal/spdxlicense for use by the SPDX
presenters. This list is updated and published periodically by an external group, and syft can grab and update this
list by running go generate ./... from the root of the repo.
An integration test has been written that grabs the latest license list version externally and compares that version
with the version generated in the codebase. If they differ, the test fails, indicating that action is needed to
update it.
Key Takeaway
Try and write tests that fail when data assumptions change and not just when code changes.
Snapshot tests
The format objects make heavy use of “snapshot” testing, where the expected output bytes from a call are saved in the
git repository and, during testing, the actual bytes from the subject under test are compared with the golden
copy saved in the repo. The “golden” files are stored in the test-fixtures/snapshot directory relative to the Go
package under test and should always be updated by invoking go test on the specific test file with a specific CLI
update flag provided.
Many of the Format tests make use of this approach, where the raw SBOM report is saved in the repo and the test
compares that SBOM with what is generated from the latest presenter code. The following command can be used to
update the golden files for the various snapshot tests:
make update-format-golden-files
These flags are defined at the top of the test files that have tests that use the snapshot files.
Snapshot testing is only as good as the manual verification of the golden snapshot file saved to the repo! Be careful
and diligent when updating these files.
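The mechanics look roughly like the following (a simplified sketch of the common golden-file pattern, not Syft’s exact helpers; the real tests wire this up through shared test utilities):

package format_test

import (
    "bytes"
    "flag"
    "os"
    "path/filepath"
    "testing"
)

// the update flag regenerates golden files instead of asserting against them
var update = flag.Bool("update", false, "update the golden files for these tests")

func TestFormatSnapshot(t *testing.T) {
    actual := []byte(`{"example":"sbom"}`) // stand-in for the bytes produced by the encoder under test
    golden := filepath.Join("test-fixtures", "snapshot", t.Name()+".golden")

    if *update {
        if err := os.WriteFile(golden, actual, 0o644); err != nil {
            t.Fatal(err)
        }
    }

    expected, err := os.ReadFile(golden)
    if err != nil {
        t.Fatal(err)
    }
    if !bytes.Equal(expected, actual) {
        t.Errorf("actual output does not match golden file %s", golden)
    }
}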
Test fixtures
Syft uses a sophisticated test fixture caching system to speed up test execution. Test fixtures include pre-built test images,
language-specific package manifests, and other test data. Rather than rebuilding fixtures on every checkout, Syft can download
a pre-built cache from GitHub Container Registry.
Common fixture commands:
make fixtures - Intelligently download or rebuild fixtures as needed
make build-fixtures - Manually build all fixtures from scratch
make clean-cache - Remove all cached test fixtures
make check-docker-cache - Verify docker cache size is within limits
When to use each command:
First time setup: Run make fixtures after cloning the repository. This will download the latest fixture cache.
Tests failing unexpectedly: Try make clean-cache followed by make fixtures to ensure you have fresh fixtures.
Working offline: Set DOWNLOAD_TEST_FIXTURE_CACHE=false and run make build-fixtures to build fixtures locally without downloading.
Modifying test fixtures: After changing fixture source files, run make build-fixtures to rebuild affected fixtures.
The fixture system tracks input fingerprints and only rebuilds fixtures when their source files change. This makes the
development cycle faster while ensuring tests always run against the correct fixture data.
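Conceptually, the fingerprinting works like this (a simplified illustration, not the actual fixture tooling):

package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "io/fs"
    "os"
    "path/filepath"
)

// fingerprint hashes every file under dir (path plus contents) so that any
// change to the fixture sources yields a different value.
func fingerprint(dir string) (string, error) {
    h := sha256.New()
    err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, walkErr error) error {
        if walkErr != nil || d.IsDir() {
            return walkErr
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        fmt.Fprintln(h, path) // include the path so renames are detected too
        _, err = io.Copy(h, f)
        return err
    })
    return fmt.Sprintf("%x", h.Sum(nil)), err
}

func main() {
    fp, err := fingerprint("test-fixtures")
    if err != nil {
        panic(err)
    }
    fmt.Println(fp) // a fixture is rebuilt only if this differs from the recorded value
}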
Code generation
Syft generates several types of code and data files that need to be kept in sync with external sources or internal structures:
What gets generated:
JSON Schema - Generated from Go structs to define the Syft JSON output format
SPDX License List - Up-to-date list of license identifiers from the SPDX project
CPE Dictionary Index - Index of Common Platform Enumeration identifiers for vulnerability matching
When to regenerate:
Run code generation after:
Modifying the pkg.Package struct or related types (requires JSON schema regeneration)
SPDX releases a new license list
CPE dictionary updates are available
Generation commands:
make generate - Run all generation tasks
make generate-json-schema - Generate JSON schema from Go types
make generate-license-list - Download and generate latest SPDX license list
make generate-cpe-dictionary-index - Generate CPE dictionary index
After running generation commands, review the changes carefully and commit them as part of your pull request. The CI pipeline
will verify that generated files are up to date.
Adding a new cataloger
The task system orchestrates all catalogers through CreateSBOMConfig,
which manages task execution, parallelism, and configuration.
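For orientation, here is roughly how those entrypoints are driven from library code (a sketch against Syft’s public Go API; check the API Reference for the exact current signatures):

package main

import (
    "context"
    "fmt"

    "github.com/anchore/syft/syft"
)

func main() {
    ctx := context.Background()

    // resolve the scan target (image, directory, archive, ...) into a source;
    // a nil config uses the default source detection options
    src, err := syft.GetSource(ctx, "alpine:3.19", nil)
    if err != nil {
        panic(err)
    }
    defer src.Close()

    // CreateSBOMConfig selects and configures the cataloger tasks to run
    s, err := syft.CreateSBOM(ctx, src, syft.DefaultCreateSBOMConfig())
    if err != nil {
        panic(err)
    }

    fmt.Printf("cataloged %d packages\n", s.Artifacts.Packages.PackageCount())
}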
generic.NewCataloger is an abstraction Syft uses to make writing common catalogers easier (see the alpine cataloger for example usage).
It takes the following information as input:
A catalogerName to identify the cataloger uniquely among all other catalogers.
Pairs of file globs as well as parser functions to parse those files.
These parser functions return a slice of pkg.Package as well as a slice of artifact.Relationship to describe how the returned packages are related.
See the alpine cataloger parser function as an example; a rough sketch of the wiring follows.
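Putting that together, a cataloger looks roughly like this (the globs, names, and parser signature are illustrative and match recent Syft versions; mirror the alpine cataloger for the current form):

package exampledb

import (
    "context"

    "github.com/anchore/syft/syft/artifact"
    "github.com/anchore/syft/syft/file"
    "github.com/anchore/syft/syft/pkg"
    "github.com/anchore/syft/syft/pkg/cataloger/generic"
)

// NewExampleCataloger pairs a file glob with the parser function that
// understands files matching that glob.
func NewExampleCataloger() pkg.Cataloger {
    return generic.NewCataloger("example-cataloger").
        WithParserByGlobs(parseExampleDB, "**/lib/exampledb/installed")
}

// the parser reads the matched file and returns the discovered packages plus
// any relationships between them.
func parseExampleDB(_ context.Context, _ file.Resolver, _ *generic.Environment, reader file.LocationReadCloser) ([]pkg.Package, []artifact.Relationship, error) {
    var pkgs []pkg.Package
    var relationships []artifact.Relationship
    // ... read from reader and append to pkgs/relationships ...
    return pkgs, relationships, nil
}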
Identified packages share a common pkg.Package struct so be sure that when the new cataloger is constructing a new package it is using the Package struct.
If you want to return more information than what is available on the pkg.Package struct then you can do so in the pkg.Package.Metadata field, which accepts any type.
Metadata types tend to be unique for each pkg.Type but this is not required.
See the pkg package for examples of the different metadata types that are supported today.
When encoding to JSON, metadata type names are determined by reflection and mapped according to internal/packagemetadata/names.go.
Finally, package construction within the alpine cataloger looks roughly like the following sketch (see the alpine cataloger source for the full version):
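p := pkg.Package{
    Name:      entry.Package, // fields pulled from the parsed APK DB entry
    Version:   entry.Version,
    Type:      pkg.ApkPkg,
    Locations: file.NewLocationSet(reader.Location),
    Metadata:  entry, // an APK-specific metadata struct, surfaced via pkg.Package.Metadata
}
p.SetID() // compute the package identity once all fields are set

Here entry and reader come from the parser function shown earlier; the field names are representative rather than exact.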
If you have questions about implementing a cataloger, feel free to file an issue or reach out to us on discourse!
Troubleshooting
Cannot build test fixtures with Artifactory repositories
Some companies have Artifactory setup internally as a solution for sourcing secure dependencies.
If you’re seeing an issue where the unit tests won’t run because of the below error then this section might be relevant for your use case.
[ERROR] [ERROR] Some problems were encountered while processing the POMs
If you’re dealing with an issue where the unit tests will not pull/build certain java fixtures check some of these settings:
a settings.xml file should be available to help you communicate with your internal artifactory deployment
this can be moved to syft/pkg/cataloger/java/test-fixtures/java-builds/example-jenkins-plugin/ to help build the unit test-fixtures
you’ll also want to modify the build-example-jenkins-plugin.sh to use settings.xml
For more information on this setup and troubleshooting see issue 1895
Next Steps
Understanding the Codebase
Architecture - Learn about package structure, core library flow, cataloger design patterns, and file searching
API Reference - Explore the public Go API, type definitions, and function signatures
Contributing Your Work
Pull Requests - Guidelines for submitting PRs and working with reviewers
3 - Grype
Developer guidelines when contributing to Grype
Getting started
In order to test and develop in the Grype repo you will need the following dependencies installed:
Golang
Docker
Python (>= 3.9)
make
SQLite3 (optional – for database inspection)
Initial setup
Run once after cloning to install development tools:
make bootstrap
Make sure you’ve updated your docker settings so the default docker socket path is available.
Go to docker → settings → advanced and ensure “Allow the default Docker socket to be used” is checked.
To use the default docker context, run: docker context use default
Useful commands
Common commands for ongoing development:
make help - List all available commands
make lint - Check code formatting and linting
make lint-fix - Auto-fix formatting issues
make format - Auto-format source code
make unit - Run unit tests
make integration - Run integration tests
make cli - Run CLI tests
make quality - Run vulnerability matching quality tests
make snapshot - Build release snapshot with all binaries and packages
Testing
Levels of testing
unit (make unit): The default level of testing, distributed throughout the repo, is the unit test.
Any _test.go file that does not reside somewhere within the /test directory is a unit test.
Other forms of testing should be organized in the /test directory.
These tests should focus on the correctness of functionality in depth.
Test coverage metrics only consider unit tests and no other forms of testing.
integration (make integration): located within test/integration, these tests focus on the behavior surfaced by the Grype library entrypoints and make
assertions about vulnerability matching results.
The integration tests also update the vulnerability database and run with the race detector enabled to catch concurrency issues.
cli (make cli): located within test/cli, these tests verify the correctness of application behavior from a snapshot build.
Use this level in cases where a unit or integration test will not do, or if you are looking for in-depth testing of code in the cmd/ package (such as
testing the proper behavior of application configuration, CLI switches, and glue code before grype library calls).
quality (make quality): located within test/quality, these are tests that verify vulnerability matching quality by comparing Grype’s results against known-good results (quality gates).
These tests help ensure that changes to vulnerability matching logic don’t introduce regressions in match quality. The quality tests use a pinned database version to ensure consistent results.
See the quality gate architecture documentation for how the system works and the test/quality README for practical development workflows.
install (part of acceptance testing): located within test/install, these are smoke-like tests that ensure that application packaging and installation works as expected.
For example, during release we provide RPM packages as a download artifact.
We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the output of a grype invocation matches canned expected output.
Quality Gates
Quality gates validate that code changes don’t cause performance regressions in vulnerability matching. The system compares your PR’s matching results against a baseline using a pinned database to isolate code changes from database volatility.
What quality gates validate:
F1 score (a combination of true positives, false positives, and false negatives; see the formula after this list)
False negative count (should not increase)
Indeterminate matches (should remain below 10%)
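For reference, the F1 score is the harmonic mean of precision and recall, which in terms of raw counts works out to F1 = 2·TP / (2·TP + FP + FN), so the score drops when either false positives or false negatives increase.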
Common development workflows:
make capture - Download SBOMs and generate match results
make validate - Analyze output and evaluate pass/fail
yardstick label explore [UUID] - Interactive TUI for labeling matches
Grype uses Syft as a library for all things related to obtaining and parsing the given scan target (pulling container images, parsing container images,
indexing directories, cataloging packages, etc.). Releases of Grype should always use released versions of Syft (commits that are tagged and show up in the GitHub releases page).
However, continually integrating unreleased Syft changes into Grype incrementally is encouraged (e.g. go get github.com/anchore/syft@main) as long as by the time
a release is cut the Syft version is updated to a released version (e.g. go get github.com/anchore/syft@v<semantic-version>).
Inspecting the database
The v6 database is a highly normalized database with JSON data, making queries more difficult than the previous version.
The easiest way to understand what’s in the database is using the grype db search subcommand, e.g.:
go run ./cmd/grype db search CVE-2025-1234 -o json
If you need to inspect the database directly, the current database format is Sqlite3. Install sqlite3 in your system and ensure that the sqlite3 executable is available in your path.
Ask grype about the location of the database, which will be different depending on the operating system:
$ go run ./cmd/grype db status
Path: /Users/kzantow/Library/Caches/grype/db/6/vulnerability.db
Schema: v6.1.3
Built: 2025-12-01T16:28:25Z
From: https://grype.anchore.io/databases/v6/vulnerability-db_v6.1.3_2025-12-01T11:57:14Z_1764606505.tar.zst?checksum=sha256%3A8d34ad53aebced159559e767e1ccedddc41dfeb3f70492bdbb1b94df629def05
Status: valid
To retrieve basic information for a specific vulnerability, join the vulnerability_handles table with blobs, for example:
sqlite> select * from blobs b join vulnerability_handles h on b.id = h.blob_id where h.name = 'CVE-2025-1234';
            id = 1016723
         value = {"id":"CVE-2025-1234","assigner":["cve@gitlab.com"],"description":"Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.","refs":[{"url":"https://nvd.nist.gov/vuln/detail/CVE-2025-1234"}]}
            id = 287704
          name = CVE-2025-1234
        status = rejected
published_date = 2025-07-05 23:15:24.613+00:00
 modified_date = 2025-07-05 23:15:24.613+00:00
withdrawn_date =
   provider_id = nvd
       blob_id = 1016723
Next Steps
Understanding the Codebase
Architecture - Learn about package structure, core library flow, and matchers
API Reference - Explore the public Go API, type definitions, and function signatures
Contributing Your Work
Pull Requests - Guidelines for submitting PRs and working with reviewers
4 - Pull Requests
Guidelines for submitting PRs and working with reviewers
Before opening a pull request, make sure you have:
✓ Updated in-repo documentation if your changes affect user-facing behavior
✓ Written a clear PR title that describes the user-facing impact
✓ Followed existing code style and patterns in the project
Each of these items helps maintainers review your contribution more effectively and merge it faster.
PR Title
Your PR title is important—it becomes the changelog entry in release notes. Write titles that are meaningful to end users, not just developers.
Guidelines
Start with an action verb: “Add”, “Fix”, “Update”, “Remove”
Be specific: “Add support for Alpine 3.19” rather than “Update Alpine”
Keep it concise: Under 72 characters when possible
Focus on user impact: What changed for users, not implementation details
Examples
Good titles:
Add support for Python 3.12 package detection
Fix crash when parsing malformed RPM databases
Update documentation for custom template usage
Poor titles:
Updates (too vague—updates to what?)
Fixed bug (which bug?)
WIP: trying some things (not ready for review)
Refactor parseRPM function (implementation detail, not a user-facing change)
Note
We use chronicle to automatically generate changelogs from issue and PR titles, so a well-written title goes a long way.
PR Description
A clear description helps reviewers understand your changes quickly. Include these key sections:
What to include
Summary: Briefly describe what changed
Motivation: Explain why this change is needed or what problem it solves
Approach: If your solution isn’t obvious, explain your approach
Testing: Describe how you tested the changes
Related issues: Link to issues or discussions that provide context
Template
## Summary
Brief description of the change.
## Motivation
Why is this change needed? What problem does it solve?
## Changes
- Bullet point list of key changes
- Include any breaking changes or migration steps
## Type of change
<!-- Delete any that are not relevant -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (please discuss with the team first; Syft is 1.0 software and we won't accept breaking changes without going to 2.0)
- [ ] Documentation (updates the documentation)
- [ ] Chore (improve the developer experience, fix a test flake, etc, without changing the visible behavior of Syft)
- [ ] Performance (make Syft run faster or use less memory, without changing visible behavior much)
## Checklist
- [ ] I have added unit tests that cover changed behavior
- [ ] I have tested my code in common scenarios and confirmed there are no regressions
- [ ] I have added comments to my code, particularly in hard-to-understand sections
Closes #123
Commit History
We use squash merging for all pull requests, which means:
Your entire PR becomes a single commit on the main branch
You don’t need to maintain a clean commit history in your PR
Merge commits in your feature branch are perfectly fine
You can commit as frequently as you like during development
The PR title (not individual commit messages) becomes the changelog entry
This approach keeps the main branch clean and linear while reducing friction for contributors. Focus on code quality rather than commit structure—reviewers care about the changes, not how you got there.
Size Matters
Small PRs get reviewed faster. Here’s how to make your PR easier to review:
Keep changes focused: Try to address one concern per PR
Avoid mixing unrelated changes: Don’t combine bug fixes with new features
Split large PRs when possible: If a PR is unavoidably large, provide extra context in the description
Consider breaking work into multiple PRs if you’re making both refactoring changes and feature additions. Reviewers can process smaller, focused changes more quickly.
What to Expect
Review Feedback
It’s normal and expected for reviewers to have questions and suggestions:
Questions about your approach: Be prepared to explain your decisions
Code style adjustments: You may be asked to match existing project patterns
Additional tests: Reviewers might request more test coverage
Scope changes: You might be asked to split or narrow the PR
How to respond to feedback
Address feedback promptly: Respond when you can, even if just to acknowledge
Ask for clarification: If something isn’t clear, ask questions
Explain your reasoning: It’s okay to discuss alternatives respectfully
Make changes in new commits: This makes incremental review easier
Mark conversations as resolved: When you’ve addressed a comment
Remember that review feedback is about the code, not about you. Reviewers want to help make the contribution successful.
After Approval
Once approved, a maintainer will merge your PR. Depending on the project, you might be asked to:
Rebase on the latest main branch if there are conflicts
Update the PR title or description for clarity
Make final adjustments based on last-minute feedback
Check the project-specific contributing guide for any additional requirements
Contributing to open source can feel intimidating at first, but the community is here to support you. Don’t hesitate to ask questions.
5 - Grype DB
Developer guidelines when contributing to Grype DB
Getting started
Adding Data Sources
Grype DB is responsible for building the database used by Grype, aggregating data provided by Vunnel providers.
If you’re interested in adding a data source, you probably want to start with the Vunnel documentation.
This codebase is primarily Go, however, there are also Python scripts critical to the daily DB publishing process as
well as acceptance testing. You will require the following:
Python 3.11+ installed on your system (Python 3.11-3.13 supported). Consider using pyenv if you do not have a
preference for managing python interpreter installations.
zstd binary utility if you are packaging v6+ DB schemas
(optional) xz binary utility if you have specifically overridden the package command options
uv installed for Python package and virtualenv management
To download Go tooling used for static analysis, dependent Go modules, and Python dependencies run:
make bootstrap
Useful commands
Common commands for ongoing development:
make help - List all available commands
make lint - Check code formatting and linting
make lint-fix - Auto-fix formatting issues
make unit - Run unit tests (Go and Python)
make cli - Run CLI tests
make db-acceptance schema=<version> - Run DB acceptance tests for a schema version
make snapshot - Build release snapshot with all binaries and packages
make download-all-provider-cache - Download pre-built vulnerability data cache
Development workflows
Getting vulnerability data
In order to build a grype DB you will need a local cache of vulnerability data:
make download-all-provider-cache
This will populate the ./data directory locally with everything needed to run grype-db build (without needing to run grype-db pull).
This data being pulled down is the same data used in the daily DB publishing workflow, so it should be relatively fresh.
Creating a new DB schema
Create a new v# schema package in the grype repo (within grype/db)
Create a new v# schema package in the grype-db repo (use the bump-schema.py helper script) that uses the new changes from grype
Modify the manager/src/grype_db_manager/data/schema-info.json to pin the last-latest version to a specific version of grype and add the new schema version pinned to the “main” branch of grype (or a development branch)
Update all references in grype to use the new schema
Use the Staging DB Publisher workflow to test your DB changes with grype in a flow similar to the daily DB publisher workflow
Testing with staging databases
While developing a new schema version it may be useful to get a DB built for you by the Staging DB Publisher GitHub Actions workflow.
This code exercises the same code as the Daily DB Publisher, with the exception that only a single schema is built and is validated against a given development branch of grype.
When these DBs are published, you can point grype at the published listing file to use the staging database.
Testing
Levels of testing
unit (make unit): Unit tests for both Go code in the main codebase and Python scripts in the manager/ directory.
These tests focus on correctness of individual functions and components. Coverage metrics track Go test coverage.
cli (make cli): CLI tests for both Go and Python components. These validate that command-line interfaces work correctly with various inputs and configurations.
db-acceptance (make db-acceptance schema=<version>): Acceptance tests that verify a specific DB schema version works correctly with Grype.
These tests build a database, run Grype scans, and validate that vulnerability matches are correct and complete.
Running tests
To run unit tests for Go code and Python scripts:
make unit
To verify that a specific DB schema version interops with Grype:
make db-acceptance schema=<version>
# Note: this may take a while... go make some coffee.
Next Steps
Understanding the Codebase
Architecture - Learn about the ETL pipeline, schema support, and publishing workflow
Vunnel Documentation - Understand the vulnerability data provider system that feeds Grype DB
Contributing Your Work
Pull Requests - Guidelines for submitting PRs and working with reviewers
6 - Vunnel
Developer guidelines when contributing to Vunnel
Getting started
In order to develop in the Vunnel repo you will need Python and uv installed, as well as:
posix shell (bash, zsh, etc… needed for the make dev “development shell”)
Once you have python and uv installed, get the project bootstrapped by cloning grype, grype-db, and vunnel next to each other:
# clone grype and grype-db, which is needed for provider development
git clone git@github.com:anchore/grype.git
git clone git@github.com:anchore/grype-db.git

# note: if you already have these repos cloned, you can skip this step. However, if they
# reside in a different directory than where the vunnel repo is, then you will need to
# set the `GRYPE_PATH` and/or `GRYPE_DB_PATH` environment variables for the development
# shell to function. You can add these to a local .env file in the vunnel repo root.

# clone the vunnel repo
git clone git@github.com:anchore/vunnel.git
cd vunnel

# get basic project tooling
make bootstrap

# install project dependencies
uv sync --all-extras --dev
Pre-commit is used to help enforce static analysis checks with git hooks:
uv run pre-commit install --hook-type pre-push
Development environment
Development shell
The easiest way to develop providers is to use the development shell, selecting the specific provider(s) you’d like to focus your development workflow on:
# Specify one or more providers you want to develop on.
# Any provider from the output of "vunnel list" is valid.
# Specify multiple as a space-delimited list:
#   make dev providers="oracle wolfi nvd"

$ make dev provider="oracle"
Entering vunnel development shell...
• Configuring with providers: oracle ...
• Writing grype config: /Users/wagoodman/code/vunnel/.grype.yaml ...
• Writing grype-db config: /Users/wagoodman/code/vunnel/.grype-db.yaml ...
• Activating virtual env: /Users/wagoodman/code/vunnel/.venv ...
• Installing editable version of vunnel ...
• Building grype ...
• Building grype-db ...

Note: development builds of grype and grype-db are now available in your path.
To update these builds run 'make build-grype' and 'make build-grype-db' respectively.
To run your provider and update the grype database run 'make update-db'.
Type 'exit' to exit the development shell.
The development shell provides local builds of grype and grype-db from adjacent directories. You can configure custom paths using environment variables:
# example .env file in the root of the vunnel repo
GRYPE_PATH=~/somewhere/else/grype
GRYPE_DB_PATH=~/also/somewhere/else/grype-db
Example: Running make update-db
You can run the provider you specified in the make dev command, build an isolated grype DB, and import the DB into grype:
$ make update-db
• Updating vunnel providers ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
...
[0000]  INFO configured providers parallelism=1 providers=1
[0000] DEBUG   └── oracle
[0000] DEBUG all providers started, waiting for graceful completion...
[0000]  INFO running vulnerability provider provider=oracle
[0000] DEBUG oracle: 2023-03-07 15:44:13 [INFO] running oracle provider
[0000] DEBUG oracle: 2023-03-07 15:44:13 [INFO] downloading ELSA from https://linux.oracle.com/security/oval/com.oracle.elsa-all.xml.bz2
[0019] DEBUG oracle: 2023-03-07 15:44:31 [INFO] wrote 6298 entries
[0019] DEBUG oracle: 2023-03-07 15:44:31 [INFO] recording workspace state
• Building grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO reading all provider state
[0000]  INFO building DB build-directory=./build providers=[oracle] schema=5
• Packaging grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO packaging DB from="./build" for="https://toolbox-data.anchore.io/grype/databases"
[0000]  INFO created DB archive path=build/vulnerability-db_v5_2023-03-07T20:44:13Z_405ae93d52ac4cde6606.tar.gz
• Importing DB into grype ...
Vulnerability database imported
Example: Scanning with the dev database
You can now run grype that uses the newly created DB:
$ grype oraclelinux:8.4
 ✔ Pulled image
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [195 packages]
 ✔ Scanning image...       [193 vulnerabilities]
   ├── 0 critical, 25 high, 146 medium, 22 low, 0 negligible
   └── 193 fixed

NAME              INSTALLED           FIXED-IN              TYPE  VULNERABILITY   SEVERITY
bind-export-libs  32:9.11.26-4.el8_4  32:9.11.26-6.el8      rpm   ELSA-2021-4384  Medium
bind-export-libs  32:9.11.26-4.el8_4  32:9.11.36-3.el8      rpm   ELSA-2022-2092  Medium
bind-export-libs  32:9.11.26-4.el8_4  32:9.11.36-3.el8_6.1  rpm   ELSA-2022-6778  High
bind-export-libs  32:9.11.26-4.el8_4  32:9.11.36-5.el8      rpm   ELSA-2022-7790  Medium

# note that we're using the database we just built...
$ grype db status
Location:  /Users/wagoodman/code/vunnel/.cache/grype/6   # <--- this is the local DB we just built
...

# also note that we're using a development build of grype
$ which grype
/Users/wagoodman/code/vunnel/bin/grype
Rebuilding development tools
To rebuild the grype and grype-db binaries from local source, run:
make build-grype
make build-grype-db
Recommended development workflow
For most provider development, follow this iterative workflow:
Clone all three repos side-by-side: vunnel, grype, and grype-db
Enter the development shell: make dev provider="<your-provider>"
Make changes to your provider code in vunnel
Build and test: Run make update-db to build a database with your provider’s data. Run make build-grype or make build-grype-db if these tools have code changes.
Validate with grype: Scan test images to verify matching works correctly
Iterate: Adjust code and repeat steps 4-5
If you need to make changes to grype or grype-db during development, use make build-grype or make build-grype-db to rebuild with your changes.
Common commands
This project uses Make for running common development tasks:
make                  # run static analysis and unit testing
make static-analysis  # run static analysis
make unit             # run unit tests
make format           # format the codebase
make lint-fix         # attempt to automatically fix linting errors
To see all available commands:
make help
Snapshot tests
Many providers have snapshot tests, which assert that a fixed set of inputs will always produce the expected outputs. These tests provide end-to-end validation of the transformation logic within the vunnel provider.
Snapshot tests run as part of make unit.
To update snapshots, pass --snapshot-update to pytest:
uv run pytest ./tests/unit/providers/debian/test_debian.py -k test_provider_via_snapshot --snapshot-update
Quality gate tests
All vunnel providers are protected by a quality gate. A quality gate essentially does the following:
Use vunnel and grype-db to build a vulnerability database
Use Syft to create an SBOM
Use grype to scan the SBOM with the vulnerability database
Compare the resulting matches against known-good labels to decide pass/fail
Before implementing a provider, understand how the pieces fit together:
Vulnerability matching overview
Syft: Catalogs packages from images/filesystems with metadata (type, name, version, distro, etc.)
Vunnel: Provides vulnerability data from various sources
Grype DB: Transforms and stores vulnerability data with ecosystem metadata
Grype: Matches packages against vulnerabilities in the database built by grype-db
Affected vs. unaffected package handles
Grype uses two types of package records:
Affected: “If a package meets this version constraint, it IS vulnerable”
Unaffected: “If a package meets this version constraint, it is NOT vulnerable”
Most providers emit affected package records. Some providers (like AlmaLinux) emit unaffected records to filter matches from other sources (Red Hat in AlmaLinux’s case).
Examples in code:
Affected packages: Most distro providers (Red Hat, Debian, Ubuntu, etc.)
Unaffected packages: the AlmaLinux provider, which filters matches that would otherwise come from Red Hat data (see the sketch after this list)
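Schematically, the two record kinds interact like this during matching (illustrative Go only, not Grype’s actual types):

// a record asserts that versions satisfying versionMatches are (or are not)
// vulnerable to vulnID
type record struct {
    vulnID         string
    unaffected     bool
    versionMatches func(installed string) bool
}

// match collects affected hits, then lets unaffected records suppress them.
func match(installed string, records []record) []string {
    suppressed := map[string]bool{}
    for _, r := range records {
        if r.unaffected && r.versionMatches(installed) {
            suppressed[r.vulnID] = true // "this version is NOT vulnerable"
        }
    }
    var out []string
    for _, r := range records {
        if !r.unaffected && r.versionMatches(installed) && !suppressed[r.vulnID] {
            out = append(out, r.vulnID)
        }
    }
    return out
}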
Vulnerability data must conform to a structured schema. Vunnel supports several schemas including OSV, OpenVEX, NVD, and GitHub Security Advisory. Schema selection is covered in detail below.
Architecture details
For detailed information about Vunnel’s internal architecture, including provider abstraction, workspace conventions, and integration with Grype DB, see the Vunnel Architecture page.
Adding a new provider
Before you start: Understanding requirements
Schema selection
Choosing the right schema
Legacy Vunnel providers emit vulnerabilities in the OS schema, but generally, new providers should use an externally specified schema like OSV or OpenVEX.
Schema preference hierarchy for new providers:
OSV (strongly preferred) - Use for most vulnerability data
Other externally specified schemas - OpenVEX, CSAF VEX, etc.
Existing internal schemas - NVD, GitHub Security Advisory, OS (only if data naturally fits)
Custom schemas - Requires discussion with maintainers
Why we prefer externally specified schemas:
Based on open standards (OSV, OpenVEX, CSAF)
Better interoperability with other tools
Reduced maintenance burden
Broader ecosystem support
The OS schema is an internal format that exists primarily to support legacy providers. While it’s still supported, we encourage new providers to use externally specified schemas when possible.
Decision tree: What schema should my provider use?
Source already provides OSV format? → Use OSV
Easy to transform to OSV without data loss? → Use OSV
Source already uses an externally specified format (OpenVEX, CSAF VEX, etc.)? → Probably use that format (may need to add a transformer to grype-db; check with maintainers in an issue)
Not already in an external format and not easy to make into OSV? → Check with maintainers in an issue
Before implementing a provider, answer these questions to understand what changes will be needed:
1. Can Syft identify which packages should be matched against your data?
This is critical—Grype needs to know when to use your vulnerability data. Examples:
New Linux distro: Does the distro have something distinctive in /etc/os-release that Syft can detect? (Concretely, does syft -o json my-test-image | jq .distro produce something correct and specific to your vulnerability feed?)
Vendor-specific patches: Do patched packages have a distinctive version pattern (e.g., .<vendor_name> in dpkg versions)?
Language ecosystem: Does your data apply to all packages from a specific package manager?
If Syft can’t distinguish your packages, you may need changes to Syft first. See contributing to Syft.
2. Do you have public test artifacts?
You must have publicly accessible test images or artifacts that:
Contain packages your provider has data for
Can be scanned by Syft
Can be used in CI/CD for ongoing validation
Without test artifacts, we cannot validate that your provider works correctly.
3. Is your vulnerability feed comprehensive or supplementary?
Comprehensive: Contains both vulnerability disclosures AND fixes (e.g., most Linux distro security advisories, GHSA)
Supplementary: Contains only fixes that layer over another source (e.g., AlmaLinux provides fixes on top of Red Hat data; Alpine SecDB provides fixes on top of NVD CVE data)
Supplementary feeds typically require additional Grype changes to filter existing matches.
4. What schema is your vulnerability data available in?
Already OSV? Great—minimal work needed
Already in another external format (OpenVEX, CSAF, etc.)? May need to add a transformer to grype-db
Custom format? You’ll need to transform it to an external schema (preferably OSV)
See the schema selection section above for guidance.
5. How should Vunnel retrieve your data?
Important: Vunnel must be able to enumerate and fetch the entire vulnerability feed. APIs that only provide individual vulnerability lookups (e.g., GET /vuln-id without a GET /all-vulns endpoint) are very difficult to integrate.
Common patterns:
Downloadable archive (tar.gz, zip, etc. with vulnerability data)—provide the URL
Public HTTP API with enumeration support—provide the endpoint
Public Git repository (with JSON/YAML/XML files)—provide the repo URL
Requires authentication or special access? Discuss with maintainers in an issue
What these answers tell you:
Syft changes needed? Question 1
Grype changes needed? Questions 1 and 3
Grype DB changes needed? Question 4
Feasibility? Question 2 is a hard requirement
Complexity? All questions together determine overall complexity
If you’re unsure about any of these, open an issue to discuss with maintainers before starting implementation.
Initial prerequisites
“Vulnerability matching” is the process of taking a list of vulnerabilities and matching them against a list of packages. A provider in this repo is responsible for the “vulnerability” side of this process. The “package” side is handled by Syft. A prerequisite for adding a new provider is that Syft can catalog the package types that the provider is feeding vulnerability data for, so Grype can perform the matching from these two sources.
For a detailed example on the implementation details of a provider, see the “example” provider.
Understanding multi-repository coordination
Adding a new provider often requires PRs in multiple repositories.
Depending on your answers to the key questions above, you may need PRs in vunnel, grype-db, and grype. This is normal and expected.
Recommended approach:
Set up all three repos as siblings: Clone vunnel, grype, and grype-db in the same parent directory
Make changes across all needed repos: Create branches in each repo that needs changes
Test locally with the dev shell: Use make dev provider=<your-new-provider> in vunnel, which will use your local grype and grype-db branches
Validate end-to-end: Run make update-db and then grype <your-test-artifacts> to verify matching works correctly
Open PRs in all repos: When your local branches work together correctly, open PRs. If possible, make sure maintainers have permission to edit your PRs.
Add quality gate tests: Add a block to tests/quality/config.yaml that exercises your provider. You may need to temporarily make config.yaml point to your branches of Grype or Grype-DB in order for the validation to pass.
Maintainers coordinate merging: Once all PRs are approved, maintainers will coordinate getting them merged and update version references
Which PRs do you need?
Vunnel PR: Always needed—implements the provider and emits vulnerability data
Grype DB PR: Needed if adding a new schema transformer (may not be needed for OSV, OpenVEX, etc.)
Grype PR: Needed if adding new matching logic, distro types, or filtering behavior
Don’t be discouraged by the multi-repo requirement—this is a well-established workflow. Open draft PRs early and maintainers can help guide you through the process.
Step-by-step: Implementing your provider
Step 1: Prove Syft can find your artifacts
Before implementing anything, verify that Syft can catalog the packages you want to provide vulnerability data for:
syft -q <your-test-image> | grep <expected-pattern>
# You should see packages that your provider will have data for
For distro-specific providers, verify Syft detects the distro correctly:
syft -o json <your-test-image> | jq .distro
# Should show the correct distro name and version
If Syft can’t find your packages or detect your distro, you may need Syft changes before proceeding. Generally, we are happy for Syft to learn to parse new distros and package types. See contributing to Syft.
Step 2: Find or create test artifacts showing incorrect matching
Identify concrete test cases that your provider will fix. Run Grype on your test artifact:
grype <your-test-artifact>
Document what’s wrong:
Missing vulnerabilities: Grype should report CVE-X for package Y but doesn’t
False positives: Grype reports CVE-X for package Y but shouldn’t (e.g., the package is patched)
These incorrect matches are what you’ll use to validate your provider works correctly.
Step 3: Set up the three-repo workspace
Clone vunnel, grype, and grype-db as siblings in the same parent directory:
# Fork the repos on GitHub first, then:
git clone git@github.com:your-username/vunnel.git
git clone git@github.com:your-username/grype.git
git clone git@github.com:your-username/grype-db.git

# Or clone from upstream and add forks as remotes:
cd vunnel
git remote add fork git@github.com:your-username/vunnel.git
# (repeat for grype and grype-db)
Create branches in each repo where you’ll make changes:
cd vunnel
git checkout -b add-my-provider
cd ../grype
git checkout -b support-my-provider
cd ../grype-db
git checkout -b transform-my-provider-data
Step 4: Implement the Vunnel provider
Take a look at the example provider in the example directory. You are encouraged to copy it as a starting point:
# from the root of the vunnel repo
cp -a example/awesome src/vunnel/providers/YOURPROVIDERNAME
Create a provider class under /src/vunnel/providers/<name> that inherits from provider.Provider and implements:
name(): A unique and semantically-useful name for the provider
update(): Downloads and processes raw data, writing all results with self.results_writer()
Wire up your provider:
Add an entry to the dispatch table in src/vunnel/providers/__init__.py mapping your provider name to the class
Add provider configuration to src/vunnel/cli/config.py (specifically the Providers dataclass)
Validation:
vunnel list                     # Should show your provider
vunnel run <your-provider-name> # Should execute successfully
Need help with your provider?
At this point you can open a draft Vunnel PR and ask maintainers for guidance on the next steps.
Step 5: Implement Grype DB changes (if needed)
When needed:
If your provider uses a schema that doesn’t already have a transformer, add one in grype-db:
Add unmarshaling logic in pkg/provider/unmarshal
Add processing/transformation logic in pkg/process/v6 (the v5 data is only consumed by old versions of Grype, and new providers generally should not change v5 or older code in Grype DB)
Use the dev shell to test that your vulnerability data flows into Grype’s database:
cd vunnel
make dev provider="<your-provider-name>"
# This enters a shell with local builds of grype and grype-db
make update-db
# Check that data was imported (you can inspect the SQLite database if needed)
Step 6: Determine if Grype changes are needed
Test whether Grype automatically picks up your new data:
# In the dev shell from step 5
grype <your-test-artifact>
Compare against the incorrect matches you documented in step 2. If Grype now correctly reports previously missing vulnerabilities or filters out false positives, you’re done with Grype changes—skip to step 7.
Common reasons for needing Grype changes:
Grype does not support the distro type and it needs to be added. See the grype/distro/type.go file to add the new distro.
Grype supports the distro already, but matching is disabled. See the grype/distro/distro.go file to enable the distro explicitly.
If you’re using the developer shell (make dev ...) then you can run make build-grype to get a build of grype with your changes, then test again with grype <your-test-artifact>.
Note: Steps 5 and 6 are iterative—you may go back and forth between provider implementation, grype-db transformers, and grype matchers until everything works correctly.
Step 7: Add test configuration
Add your provider and test images to tests/quality/config.yaml:
These images are used to test the provider on PRs and nightly builds to verify the provider is working. Always use both the image tag and digest for all container image entries. Pick an image that has a good representation of the package types that your new provider is adding vulnerability data for.
Before continuing, validate your test images:
syft -q <your-test-image> | grep <expected-pattern>
# You should see packages that your provider has vulnerability data for
Common mistake: Test images that don’t contain relevant packages. Always verify before proceeding!
Step 8: Update quality gate configuration for your branches
If you have Grype or Grype DB changes, update the yardstick.tools[*] entries in tests/quality/config.yaml to use versions that point to your fork:
yardstick:
  tools:
    - name: grype
      version: your-username/grype@your-branch-name
      # ...
# (similar for grype-db if needed)
If you don’t have any grype or grype-db changes, you can skip this step.
Step 9: Add vulnerability match labels
In order to evaluate the quality of the new provider, we need to know what the expected results are. This is done by annotating Grype results with “True Positive” labels (good results) and “False Positive” labels (bad results). We’ll use Yardstick to do this:
cd tests/quality

# Capture results with the development version of grype (from your fork)
make capture provider=<your-provider-name>

# List your results
uv run yardstick result list | grep grype
d415064e-2bf3-4a1d-bda6-9c3957f2f71a  docker.io/anc...  grype@v0.58.0             2023-03...
75d1fe75-0890-4d89-a497-b1050826d9f6  docker.io/anc...  grype[custom-db]@bdcefd2  2023-03...

# Use the "grype[custom-db]" result UUID and explore the results and add labels to each entry
uv run yardstick label explore 75d1fe75-0890-4d89-a497-b1050826d9f6
In the Yardstick TUI:
Press T to label a row as a True Positive (correct match)
Press F to label a row as a False Positive (incorrect match)
Press Ctrl-Z to undo a label
Press Ctrl-S to save your labels
Press Ctrl-C to quit when you are done
Later we’ll open a PR in the vulnerability-match-labels repo to persist these labels. For the meantime we can iterate locally with the labels we’ve added.
Step 10: Run the quality gate
cd tests/quality

# Runs your specific provider to gather vulnerability data, builds a DB, and runs grype with the new DB
make capture provider=<your-provider-name>

# Evaluate the quality gate
make validate
This uses the latest Grype DB release to build a DB and the specified Grype version with a DB containing only data from the new provider.
You are looking for a passing run before continuing further.
Troubleshooting:
Quality gate failing? Check that labels are correctly applied
Matches not appearing? Verify your provider is writing data correctly
Images not scanning? Verify test image accessibility and digests
Step 11: Persist labels to vulnerability-match-labels repo
Vunnel uses the labels in the vulnerability-match-labels repo via a git submodule. We’ve already added labels locally within this submodule in an earlier step. To persist these labels we need to push them to a fork and open a PR:
# Fork the github.com/anchore/vulnerability-match-labels repo, but you do not need to clone it...

# From the Vunnel repo...
cd tests/quality/vulnerability-match-labels

git remote add fork git@github.com:your-fork-name/vulnerability-match-labels.git
git checkout -b 'add-labels-for-<your-provider-name>'
git status

# You should see changes from the labels/ directory for your provider that you added
git add .
git commit -m 'add labels for <your-provider-name>'
git push fork add-labels-for-<your-provider-name>
Note: You will not be able to open a Vunnel PR that passes PR checks until the labels are merged into the vulnerability-match-labels repo.
Once the PR is merged in the vulnerability-match-labels repo you can update the submodule in Vunnel to point to the latest commit in the vulnerability-match-labels repo:
cd tests/quality
git submodule update --remote vulnerability-match-labels
Step 12: Open PRs in all repos
Open PRs in all repos where you made changes:
Vunnel PR: Always needed
Grype DB PR: If you added transformer logic
Grype PR: If you added matching logic
The PR will also run all of the same quality gate checks that you ran locally.
In your PR descriptions:
Link to the related PRs in other repos
Describe what incorrect matching behavior this fixes
Reference your test artifacts
Note the test images you added to config.yaml
Before the Vunnel PR can merge:
Grype DB PR must be merged (if you have one)
Grype PR must be merged (if you have one)
Vulnerability-match-labels PR must be merged
Update tests/quality/config.yaml to point back to the latest versions (not branch names)
Getting help:
Open draft PRs early and ask maintainers for guidance. Maintainers are experienced with multi-repo coordination and can help you navigate the process. Maintainers may take over coordination and merge the PRs.
Adding a provider with a new schema
If you’re adding a provider that uses a completely new schema (not OSV, OpenVEX, etc.), follow the steps above with these additional requirements:
You will need to add the new schema to the Vunnel repo in the schemas directory
Grype DB will need to be updated to support the new schema in the pkg/provider/unmarshal and pkg/process/v* directories
The Vunnel tests/quality/config.yaml file will need to be updated to use development grype-db.version, pointing to your fork
The final Vunnel PR will not be able to be merged until the Grype DB PR is merged and the tests/quality/config.yaml file is updated to point back to the latest Grype DB version
Consider carefully: Adding a new schema is complex and increases maintenance burden. Prefer externally specified schemas like OSV whenever possible.
Troubleshooting
My test image doesn’t show any packages
# Verify the image contains expected packages
syft -q <image> | grep <pattern>

# Check the package type
syft -q <image> -o json | jq '.artifacts[] | select(.name=="<pkg>") | .type'

# Verify the image is accessible
docker pull <image>
If packages aren’t appearing, the image may not contain what you expect. Review your test image selection.
Quality gate is failing
Verify labels are correctly applied (T for true positive, F for false positive)
Check that test images are accessible and have correct digests
Ensure grype and grype-db versions in config.yaml are correct
Run make capture and manually inspect the results with uv run yardstick result list
Grype isn’t matching my vulnerabilities
Check your provider’s output: Use vunnel run <name> and inspect the generated data
Verify schema conformance: Ensure your data matches the schema you’ve chosen
Check Grype DB transformation: Inspect the generated SQLite database to see if data was transformed correctly
Add debug logging: Use the dev shell and add logging to Grype matchers to understand why matches aren’t happening
Verify package metadata: Ensure Syft is cataloging packages with the metadata your matcher needs
I’m not sure if I need Grype changes
Try running end-to-end in the dev shell first (make update-db, then scan an image)
If matching doesn’t work as expected, you likely need Grype changes
Look for similar providers and see what Grype changes they required
Ask a maintainer in a draft PR—they can help you determine what’s needed
Getting help
Open a draft PR with your progress so far
Include specific questions or blockers you’re encountering
Share test images so maintainers can reproduce issues
Maintainers are happy to help guide you through the process
Expect response within a few business days
Next Steps
Understanding the Codebase
Vunnel Architecture - Learn about provider abstraction, workspace conventions, and vulnerability schemas
Example Provider - Detailed walkthrough of creating a new provider
Contributing Your Work
Pull Requests - Guidelines for submitting PRs and working with reviewers
7 - Grant
Developer guidelines when contributing to Grant
Getting started
In order to test and develop in the Grant repo you will need the following dependencies installed:
Golang
Docker
make
Initial setup
Run once after cloning to install development tools:
make bootstrap
Make sure you’ve updated your docker settings so the default docker socket path is available.
Go to docker → settings → advanced and ensure “Allow the default Docker socket to be used” is checked.
To use the default docker context, run: docker context use default
Useful commands
Common commands for ongoing development:
make help - List all available commands
make lint - Check code formatting and linting
make lint-fix - Auto-fix formatting issues
make unit - Run unit tests
make test - Run all tests
make snapshot - Build release snapshot with all binaries and packages (also available as make build)
make generate - Generate SPDX license index and license patterns
Testing
Levels of testing
unit (make unit): The default level of test, distributed throughout the repo, is the unit test.
Any _test.go file that does not reside somewhere within the /tests directory is a unit test.
These tests focus on the correctness of functionality in depth. Percent test coverage metrics consider only unit tests and no other forms of testing.
integration (make test): located in tests/integration_test.go, these tests focus on policy loading, license evaluation, and core library behavior.
They test the interaction between different components like policy parsing, license matching with glob patterns, and package evaluation logic.
cli (part of make test): located in tests/cli/, these tests verify the correctness of application behavior from a snapshot build.
These tests execute the actual Grant binary and verify command output, exit codes, and behavior of commands like check, list, and version (see the example below).
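For instance, the kinds of invocations these tests exercise look roughly like this (arguments are illustrative):
grant version
grant list <image-or-sbom>
grant check <image-or-sbom>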
Testing conventions
Unit tests should focus on correctness of individual functions and components
Integration tests validate that core library components work together correctly (policy evaluation, license matching, etc.)
CLI tests ensure user-facing commands produce expected output and behavior
Current coverage threshold is 8% (see Taskfile.yaml)
Use table-driven tests where appropriate to test multiple scenarios
Linting
You can run the linter for the project by running:
make lint
This checks code formatting with gofmt and runs golangci-lint checks.
To automatically fix linting issues:
make lint-fix
Code generation
Grant generates code and data files that need to be kept in sync with external sources:
What gets generated:
SPDX License Index - Up-to-date list of license identifiers from the SPDX project for license identification and validation
License File Patterns - Generated patterns to identify license files in scanned directories
When to regenerate:
Run code generation after:
The SPDX license list has been updated
Adding new license file naming patterns
Contributing changes to license detection logic
Generation commands:
make generate - Run all generation tasks
make generate-spdx-licenses - Download and generate latest SPDX license list
make generate-license-patterns - Generate license file patterns (depends on SPDX license index)
After running generation commands, review the changes carefully and commit them as part of your pull request.
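A typical regeneration flow, using the commands above plus standard git:
make generate
# review the regenerated files carefully before committing
git diff
git add -A && git commit -s -m 'chore: regenerate SPDX license data'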
Package structure
Grant is organized into two main areas: the public library API and the CLI application. For detailed API documentation, see the Grant Go package reference.
grant/ - Public Library API
The top-level grant/ package is the public library that other projects can import and use. This is what you’d reference with import "github.com/anchore/grant/grant".
This package contains the core functionality:
License evaluation and matching
Policy loading and validation
Package analysis and filtering
Most contributions to core Grant functionality belong in this package.
cmd/grant/ - CLI Application
The CLI application is built on top of the grant/ library and contains application-specific code.
How to sign off commits with the Developer Certificate of Origin
Sign off your work
All commits require a simple sign-off line to confirm you have the right to contribute your code.
This is a standard practice in open source called the Developer Certificate of Origin (DCO).
How to sign off
The easiest way is to use the -s or --signoff flag when committing:
git commit -s -m "your commit message"
This automatically adds a sign-off line to your commit message:
Signed-off-by: Your Name <your.email@example.com>
Tip: You can configure Git to always sign off commits automatically:
git config --global format.signoff true
Verify your sign-off
To check that your commit includes the sign-off, look at the log output:
git log -1
You should see the Signed-off-by: line at the end of your commit message:
commit 37ce0170e4ab283bb73d958f2036ee5c07e7fde7
Author: Your Name <your.email@example.com>
Date: Sat Aug 1 11:27:13 2020 -0400
your commit message
Signed-off-by: Your Name <your.email@example.com>
Why we require sign-off
In plain English: By adding a sign-off line, you’re confirming that:
You wrote the code yourself, OR
You have permission to submit it, AND
You’re okay with it being released under the project’s open source license
This protects both you and the project. It’s a simple legal formality that takes just a few seconds to add to each commit.
If you’ve already committed without a sign-off (easy to do!), you can add it retroactively.
For your most recent commit
git commit --amend --signoff
This updates your last commit to include the sign-off line.
For older commits
If you need to add sign-off to commits further back in your history:
git rebase --signoff HEAD~N
Replace N with the number of commits you need to sign. For example, HEAD~3 signs off the last 3 commits.
Note: If you’ve already pushed these commits, you’ll need to force-push after rebasing:
git push --force-with-lease
If you’re new to rebasing
Rebasing rewrites commit history, which can be tricky if you’re not familiar with it. If you run into issues:
Ask for help in the PR comments
Or, create a fresh branch from the latest main and cherry-pick your changes
The maintainers can also help you fix sign-off issues during the review process
What the DCO means (technical details)
The Developer Certificate of Origin (DCO) is a legal attestation that you have the right to submit your contribution under the project’s license.
Here’s the full text:
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
The DCO protects both contributors and the project by creating a clear record of contribution rights and licensing terms.
9 - SBOM Action
Developer guidelines when contributing to sbom-action
Getting started
In order to test and develop in the sbom-action repo you will need the following dependencies installed:
Node.js (>= 20.11.0)
npm
Docker
Initial setup
Run once after cloning to install dependencies and development tools:
npm install
This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.
Useful commands
Common commands for ongoing development:
npm run build - Check TypeScript compilation (no output files)
npm run lint - Check code with ESLint
npm run format - Auto-format code with Prettier
npm run format-check - Check code formatting without changes
npm run package - Build distribution files with ncc (outputs to dist/)
npm test - Run Jest tests
npm run all - Complete validation suite (build + format + lint + package + test)
Testing
The sbom-action uses Jest for testing. To run the test suite:
npm test
The CI workflow handles any additional setup automatically (like Docker registries). For local development, you just need to install dependencies and run tests.
Test types
The test suite includes two main categories:
Unit tests (e.g., tests/GithubClient.test.ts, tests/SyftGithubAction.test.ts): Test individual components in isolation by mocking GitHub Actions context and external dependencies.
Integration tests (tests/integration/): Execute the full action workflow with real Syft invocations against test fixtures in tests/fixtures/ (npm-project, yarn-project). These tests use snapshot testing to validate SBOM output and GitHub dependency snapshot uploads.
Snapshot testing
Integration tests extensively use Jest’s snapshot testing to validate SBOM output. When you run integration tests, Jest compares the generated SBOMs against saved snapshots in tests/integration/__snapshots__/.
The tests normalize dynamic values (timestamps, hashes, IDs) before comparison to ensure consistent snapshots across runs.
Updating snapshots:
When you intentionally change SBOM output format or content, update the snapshots:
npm run test:update-snapshots
Important: Always manually review snapshot changes before committing. Snapshots capture expected behavior, so changes should be intentional and correct.
Development workflow
Pre-commit hooks
The sbom-action uses Husky to run automated checks before each commit: code formatting and a rebuild of the distribution files.
The hook is defined in .husky/pre-commit and runs the precommit npm script.
Why commit dist/?
GitHub Actions can’t install dependencies or compile code at runtime. The action must include pre-built JavaScript files in the dist/ directory. The ncc compiler bundles all TypeScript source and dependencies into standalone JavaScript files.
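For illustration, the bundling step amounts to something like the following (a sketch: the actual invocation lives in the npm package scripts):
npx @vercel/ncc build src/runSyftAction.ts -o dist/runSyftAction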
Code organization
The sbom-action consists of three GitHub Actions, each with its own entry point:
Main action (action.yml):
Entry point: src/runSyftAction.ts
Compiled to: dist/runSyftAction/index.js
Generates SBOMs and uploads as workflow artifacts and release assets
You can also test the action locally using act if you have it installed.
Action runtime
The sbom-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.
How to report security vulnerabilities in Anchore OSS projects
Security is a top priority for Anchore’s open source projects.
We appreciate the security research community’s efforts in responsibly disclosing vulnerabilities to help keep our users safe.
Supported Versions
Security updates are applied only to the most recent release of each project.
We strongly recommend staying up to date with the latest versions to ensure you have the most recent security patches and fixes.
If you’re using an older version and are concerned about a security issue, please upgrade to the latest release.
For questions about specific versions, reach out on Discourse.
Reporting a Vulnerability
Found a security vulnerability? Please report security issues privately by emailing security@anchore.com rather than creating a public GitHub issue.
This gives us time to fix the problem and protect users before details become public.
What to Include in Your Report
To help us understand and address the issue quickly, please include as much detail as you can:
Description: A clear description of the vulnerability and its potential impact
Steps to reproduce: Detailed steps to recreate the issue
Affected versions: Which versions of the tool are vulnerable
Proof of concept: If available, a minimal example demonstrating the issue
Suggested mitigation: If you have ideas for how to fix or mitigate the issue
Urgency level: Your assessment of the severity (Critical, High, Medium, or Low)
Don’t worry if you can’t provide every detail – partial reports are still valuable and welcome.
We’ll work with you to understand the issue.
What to Expect
After you submit a report:
Acknowledgment: You’ll receive an initial response confirming we’ve received your report
Assessment: The security team will investigate and assess the severity and impact
Updates: We’ll keep you informed of our progress and any questions we have
Resolution: Once a fix is developed, we’ll coordinate disclosure timing with you where appropriate
Credit: With your permission, we’ll acknowledge your responsible disclosure in release notes
Disclosure Policy
Anchore follows a coordinated disclosure process:
Security issues are addressed privately until a fix is available
Fixes are released as quickly as possible based on severity
Security advisories are published after fixes are released
Credit is given to security researchers who report responsibly
Thank you for helping keep Anchore’s open source projects and their users secure.
11 - Code of Conduct
Community standards and guidelines for respectful collaboration
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our Standards
Examples of behavior that contributes to a positive environment for our community include:
Demonstrating empathy and kindness toward other people
Being respectful of differing opinions, viewpoints, and experiences
Giving and gracefully accepting constructive feedback
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
The use of sexualized language or imagery, and sexual attention or advances of any kind
Trolling, insulting or derogatory comments, and personal or political attacks
Public or private harassment
Publishing others’ private information, such as a physical or email address, without their explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opensource@anchore.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Warning
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: The original post will be edited or removed and a warning issued to the offender.
2. Temporary Ban
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time.
No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
3. Permanent Ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
Developer guidelines when contributing to scan-action
Getting started
In order to test and develop in the scan-action repo you will need the following dependencies installed:
Node.js (>= 20.11.0)
npm
Docker
Initial setup
Run once after cloning to install dependencies and development tools:
npm install
This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.
Useful commands
Common commands for ongoing development:
npm run build - Bundle with ncc and normalize line endings
npm run lint - Check code with ESLint
npm run prettier - Auto-format code with Prettier
npm test - Complete test suite (lint + install Grype + build + run tests)
npm run run-tests - Run Jest tests only
npm run test:update-snapshots - Update test expectations (lint + install Grype + run tests with snapshot updates)
npm run audit - Run security audit on production dependencies
npm run update-deps - Update dependencies with npm-check-updates
Testing
Tests require Grype to be installed locally and a Docker registry for integration tests. Set up your test environment:
Install Grype locally:
npm run install-and-update-grype
Start local Docker registry:
docker run -d -p 5000:5000 --name registry registry:2
Tests automatically disable Grype database auto-update and validation to ensure consistent test results.
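Concretely, this corresponds to Grype settings along these lines, shown as environment variables (illustrative: the test harness sets the equivalent options for you):
GRYPE_DB_AUTO_UPDATE=false
GRYPE_DB_VALIDATE_BY_HASH_ON_START=false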
CI environment:
The GitHub Actions test workflow automatically:
Starts a Docker registry service on port 5000
Tests on Ubuntu, Windows, and macOS
Validates across multiple configurations (image/path/sbom sources, output formats)
Test types
The scan-action uses Jest for testing with several categories:
Unit tests (e.g., tests/action.test.js, tests/grype_command.test.js): Test individual functions in isolation by mocking GitHub Actions context and external dependencies.
Integration tests: Execute the full action workflow with real Grype invocations. These tests validate end-to-end functionality including downloading Grype, running scans, and generating output files.
SARIF validation tests (tests/sarif_output.test.js): Validate SARIF report structure and content using the @microsoft/jest-sarif library to ensure consistent output format and compliance with the SARIF specification.
Distribution tests (tests/dist.test.js): Verify that the committed dist/ directory is up-to-date with the source code (a manual equivalent is sketched below).
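You can run roughly the same check by hand (a sketch: the committed test is authoritative):
npm run build
# a non-empty diff means dist/ is stale
git diff --exit-code dist/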
Test fixtures:
The tests/fixtures/ directory contains sample projects and files for testing:
npm-project/ - Sample npm project for directory scanning
yarn-project/ - Sample yarn project for directory scanning
test_sbom.spdx.json - Sample SBOM file for SBOM scanning tests
SARIF output testing
The SARIF output tests validate report structure using the @microsoft/jest-sarif library. Tests normalize dynamic values (versions, fully qualified names) before validation to ensure consistent results across test runs.
The tests validate that:
Generated SARIF reports are valid according to the SARIF specification
Expected vulnerabilities are detected in test fixtures
Output structure remains consistent across runs
If you need to update test expectations, run:
npm run test:update-snapshots
Important: Always manually review test changes before committing. Tests capture expected behavior, so changes should be intentional and correct.
Development workflow
Pre-commit hooks
The scan-action uses Husky to run automated checks before each commit:
Code formatting - lint-staged runs Prettier on staged JavaScript files
Distribution rebuild - Runs npm run precommit to rebuild dist/ directory
The hook is defined in .husky/pre-commit and ensures that distribution files are always synchronized with source code.
Why commit dist/?
GitHub Actions can’t install dependencies or compile code at runtime. The action must include pre-built JavaScript files in the dist/ directory. The ncc compiler bundles all source code and dependencies into standalone JavaScript files.
Code organization
The scan-action has a straightforward single-file architecture.
You can also test the action locally using act if you have it installed.
Action runtime
The scan-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.
This style guide is for the Anchore OSS documentation.
The style guide helps contributors to write documentation that readers can understand quickly and correctly.
The Anchore OSS docs aim for:
Consistency in style and terminology, so that readers can expect certain structures and conventions. Readers don’t have to keep re-learning how to use the documentation or questioning whether they’ve understood something correctly.
Clear, concise writing so that readers can quickly find and understand the information they need.
Capitalize only the first letter of each heading within the page. (That is, use sentence case.)
Capitalize (almost) every word in page titles. (That is, use title case.)
The little words like “and”, “in”, and so on don’t get a capital letter.
In page content, use capitals only for brand names, like Syft, Anchore, and so on.
See more about brand names below.
Don’t use capital letters to emphasize words.
Spell out abbreviations and acronyms on first use
Always spell out the full term for every abbreviation or acronym the first time you use it on the page.
Don’t assume people know what an abbreviation or acronym means, even if it seems like common knowledge.
Example: “To run Grype locally in a virtual machine (VM)”
Use contractions if you want to
For example, it’s fine to write “it’s” instead of “it is”.
Use full, correct brand names
When referring to a product or brand, use the full name.
Capitalize the name as the product owners do in the product documentation.
Do not use abbreviations even if they’re in common use, unless the product owner has sanctioned the abbreviation.
Use this
Instead of this
Anchore
anchore
Kubernetes
k8s
GitHub
github
Be consistent with punctuation
Use punctuation consistently within a page.
For example, if you use a period (full stop) after every item in a list, then use a period on all other lists on the page.
Check the other pages if you’re unsure about a particular convention.
Examples:
Most pages in the Anchore OSS docs use a period at the end of every list item.
There is no period at the end of the page subtitle and the subtitle need not be a full sentence.
(The subtitle comes from the description in the front matter of each page.)
Use active voice rather than passive voice
Passive voice is often confusing, as it’s not clear who should perform the action.
Use active voice
Instead of passive voice
You can configure Grype to
Grype can be configured to
Add the directory to your path
The directory should be added to your path
Use simple present tense
Avoid future tense (“will”) and complex syntax such as the conditional mood (“would”, “should”).
Use simple present tense
Instead of future tense or complex syntax
The following command provisions a virtual machine
The following command will provision a virtual machine
If you add this configuration element, the system is open to the Internet
If you added this configuration element, the system would be open to the Internet
Exception: Use future tense if it’s necessary to convey the correct meaning. This requirement is rare.
Address the audience directly
Using “we” in a sentence can be confusing, because the reader may not know whether they’re part of the “we” you’re describing.
For example, compare the following two statements:
“In this release we’ve added many new features.”
“In this tutorial we build a flying saucer.”
The words “the developer” or “the user” can be ambiguous.
For example, if the reader is building a product that also has users,
then the reader does not know whether you’re referring to the reader or the users of their product.
Address the reader directly
Instead of "we", "the user", or "the developer"
Include the directory in your path
The user must make sure that the directory is included in their path
In this tutorial you build a flying saucer
In this tutorial we build a flying saucer
Use short, simple sentences
Keep sentences short. Short sentences are easier to read than long ones.
Below are some tips for writing short sentences.
Use fewer words instead of many words that convey the same meaning
Use this
Instead of this
You can use
It is also possible to use
You can
You are able to
Split a single long sentence into two or more shorter ones
Use this
Instead of this
You do not need a running GKE cluster. The deployment process creates a cluster for you
You do not need a running GKE cluster, because the deployment process creates a cluster for you
Use a list instead of a long sentence showing various options
Use this
Instead of this
To scan a container for vulnerabilities:
Package the software in an OCI container.
Upload the container to an online registry.
Run Grype with the container name as a parameter.
To scan a container, you must package the software in an OCI container, upload the container to an online registry, and run Grype with the container name as a parameter.
Avoid too much text styling
Use bold text when referring to UI controls or other UI elements.
Use code style for:
filenames, directories, and paths
inline code and commands
object field names
Avoid using bold text or capital letters for emphasis.
If a page has too much textual highlighting, it becomes confusing and even annoying.
Use angle brackets for placeholders
For example:
export SYFT_PARALLELISM=<number>
--email <your email address>
Style your images
The Anchore OSS docs support Bootstrap classes to style images and other content.
The following code snippet shows the typical styling that makes an image show up nicely on the page:
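For example (an illustrative snippet: the image path, alt text, and exact Bootstrap utility classes are placeholders rather than fixed requirements):
<img src="/images/<your-image>.png" alt="<description of the image>" class="mt-3 mb-3 border border-info rounded">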
The Google Developer Documentation Style Guide contains detailed information about specific aspects of writing clear, readable, succinct documentation for a developer audience.
Next steps
Take a look at the documentation README for guidance on contributing to the Anchore OSS docs.