Snyk helps you find, fix and monitor for known vulnerabilities in your dependencies, both on an ad hoc basis and as part of your CI (Build) system.
Snyk API project importer. This script is intended to help import projects into Snyk at a controlled pace, utilizing available Snyk APIs, to avoid rate limiting from Github/Gitlab/Bitbucket etc and to provide a stable import. The script will kick off imports in batches, wait for completion and then keep going. Any failed requests will be retried before they are considered a failure and logged.
If you need to adjust concurrency you can stop the script, change the concurrency variable and start again. It will skip repos/targets that have previously been requested for import.
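For example, a minimal sketch of restarting with lower concurrency, assuming the CLI is invoked as `snyk-api-import` and the concurrency level is read from an environment variable (shown here as `CONCURRENT_IMPORTS`; run the `help` command to confirm the exact option for your version):

```bash
# Stop the running import first, then restart with fewer concurrent imports.
# CONCURRENT_IMPORTS is an assumed variable name - verify it via `help`.
export CONCURRENT_IMPORTS=5
snyk-api-import import
```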
What you will need to have set up in advance:

- your Snyk organizations should be set up before running an import
- your Snyk organizations configured with some connection to SCM (Github/Gitlab/Bitbucket etc) as you will need the `integrationId` to generate the import files (an illustrative import file entry is sketched after this list)
- Recommended: have notifications disabled for emails etc to avoid receiving import notifications
- Recommended: have the Fix PRs and PR checks disabled until import is complete to avoid sending extra requests to SCMs (Github/Gitlab/Bitbucket etc)
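For context, each entry in the import file pairs an `orgId` and `integrationId` with a target repo to import. The shape below is only illustrative (field names and placeholders are an assumption based on the file the `import:data` util generates):

```json
{
  "targets": [
    {
      "orgId": "<public-org-id>",
      "integrationId": "<public-integration-id>",
      "target": {
        "name": "example-repo",
        "owner": "example-github-org",
        "branch": "main"
      }
    }
  ]
}
```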
By default the `import` command will run if no command is specified.
- `import` - kick off an API-powered import of repos/targets into existing Snyk orgs defined in the import configuration file.
- `help` - show help & all available commands and their options.
- `orgs:data` - util to generate the data required to create Orgs via API.
- `orgs:create` - util to create the Orgs in Snyk based on the data file generated with the `orgs:data` command.
- `import:data` - util to generate the data required to kick off an import.
- `list:imported` - util to generate data to help skip previously imported targets during import.
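As a minimal usage sketch (assuming the binary name `snyk-api-import` and the `SNYK_TOKEN` / `SNYK_LOG_PATH` environment variables; use the `help` command to confirm how the import file path is supplied in your version):

```bash
# API token for a Snyk account with access to the target orgs
export SNYK_TOKEN=<your-snyk-api-token>
# Directory where the tool writes its JSON logs
export SNYK_LOG_PATH=~/snyk-import-logs

# Runs the default command: kicks off the batched import described above
snyk-api-import import
```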
The logs can be explored using the Bunyan CLI.
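For example (assuming the log files under `SNYK_LOG_PATH` are Bunyan-format JSON; the file name below is a placeholder):

```bash
# Install the Bunyan CLI once
npm install -g bunyan
# Pretty-print one of the JSON log files
cat "$SNYK_LOG_PATH"/<log-file>.log | bunyan
```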
If you see `Error: ENFILE: file table overflow, open` or `Error: EMFILE, too many open files` then you may need to bump the ulimit to allow more open file operations. In order to keep operations performant, the tool logs as soon as it is convenient rather than waiting until the very end of a loop to log a huge data structure. This means that, depending on the number of concurrent imports set, the tool may exceed the system's default ulimit.
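For example, on Linux/macOS the per-shell limit can be checked and raised before starting the import (the value shown is only illustrative):

```bash
# Show the current open-file limit for this shell
ulimit -n
# Raise it for the current session, then start the import from the same shell
ulimit -n 65536
```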
Some of these resources may help you bump the ulimit: