The StackRox Kubernetes Security Platform performs a risk analysis of the container environment, delivers visibility and runtime alerts, and provides recommendations to proactively improve security by hardening the environment. StackRox integrates with every stage of the container lifecycle: build, deploy, and runtime.
Note: the StackRox Kubernetes Security Platform is built on the foundation of the product formerly known as Prevent, which itself was called Mitigate and Apollo. You may find references to these previous names in code or documentation.
You can reach out to us through Slack (#stackrox).
For other ways to get in touch, stop by our Community Hub: stackrox.io.
To quickly deploy the latest development version of StackRox to your Kubernetes cluster in the `stackrox` namespace:

```bash
git clone git@github.com:stackrox/stackrox.git
cd stackrox
MAIN_IMAGE_TAG=latest ./deploy/k8s/deploy.sh
```

If you are using Docker Desktop or Minikube, use:

```bash
./deploy/k8s/deploy-local.sh
```

For OpenShift:

```bash
./deploy/openshift/deploy.sh
```
When the deployment has completed, a port-forward should exist so you can connect to https://localhost:8000/. Credentials for the 'admin' user can be found in `./deploy/k8s/central-deploy/password` (`./deploy/openshift/central-deploy/password` in the OpenShift case).
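As a quick sanity check once the port-forward is up, you can hit the metadata endpoint with the generated admin credentials (a minimal sketch; it assumes the default password file written by the k8s deploy script):

```bash
# Query Central's metadata API using the generated admin password
# (use the openshift password path instead for OpenShift deployments)
curl -sk -u "admin:$(cat deploy/k8s/central-deploy/password)" https://localhost:8000/v1/metadata
```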
UI Dev Docs: please refer to ui/README.md
E2E Dev Docs: please refer to qa-tests-backend/README.md
The following tools are necessary to test code and build image(s):

- Make
- Go: get the version specified in `EXPECTED_GO_VERSION`.
- Various Go linters and RocksDB dependencies that can be installed using `make reinstall-dev-tools`.
- UI build tooling as specified in `ui/README.md`.
- Docker (make sure you `docker login` to your company DockerHub account)
- RocksDB (follow the Mac or Linux guide)
- Xcode command line tools (macOS only)
- Bats, which is used to run certain shell tests. You can obtain it with `brew install bats` or `npm install -g bats`.
- `oc`, the OpenShift CLI tool
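Once these are installed, a quick sanity check such as the following (illustrative, not exhaustive) confirms the main tools are on your PATH:

```bash
# Verify that the core build tooling is available
make --version | head -n 1
go version            # should match the version pinned in EXPECTED_GO_VERSION
docker version --format '{{.Client.Version}}'
bats --version
oc version --client
```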
Usually you will already have the Xcode command line tools installed via brew. However, if you get an error when building the golang x/tools, first make sure the Xcode EULA has been agreed to by:

- starting Xcode
- building a new blank app project
- starting the blank project app in the emulator
- closing both the emulator and Xcode, then
- running the following commands:

```bash
xcode-select --install
sudo xcode-select --switch /Library/Developer/CommandLineTools # Enable command line tools
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
```

For more info, see nodejs/node-gyp#569.
```bash
# Create a GOPATH: this is the location of your Go "workspace".
# (Note that it is not – and must not – be the same as the path Go is installed to.)
# The default is to have it in ~/go/ or ~/development, but anything you prefer goes.
# Whatever you decide, create the directory, set GOPATH, and update PATH:
export GOPATH=$HOME/go # Change this if you choose to use a different workspace.
export PATH=$PATH:$GOPATH/bin
# You probably want to permanently set these by adding the following commands to your shell
# configuration (e.g. ~/.bash_profile)

cd $GOPATH
mkdir -p bin pkg
mkdir -p src/github.com/stackrox
cd src/github.com/stackrox
git clone git@github.com:stackrox/stackrox.git
```
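To make these settings permanent, one option (assuming bash; adjust the file for your shell, e.g. `~/.zshrc`) is:

```bash
# Append the workspace variables to your shell profile (example only)
echo 'export GOPATH=$HOME/go' >> ~/.bash_profile
echo 'export PATH=$PATH:$GOPATH/bin' >> ~/.bash_profile
```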
To sweeten your experience, install the workflow scripts beforehand. First install RocksDB (follow the Mac or Linux guidelines), then:

```bash
$ cd $GOPATH/src/github.com/stackrox/stackrox
$ make install-dev-tools
$ make image
```
Now you need to bring up a Kubernetes cluster yourself before proceeding. Development can happen either in GCP or locally with Docker Desktop or Minikube. Note that Docker Desktop is better suited for macOS development, because the cluster will have access to images built locally with `make image` without additional configuration. Also, the collector has better support for Docker Desktop than for Minikube, where drivers may not be available.
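If you do use Minikube, one common (non-StackRox-specific) workaround is to build images directly against Minikube's Docker daemon, for example:

```bash
# Illustrative only: point the local docker CLI at Minikube's daemon so that
# images built with `make image` are visible inside the cluster
eval $(minikube docker-env)
make image
```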
```bash
# To keep the StackRox central's rocksdb state between restarts, set:
$ export STORAGE=pvc

# To save time on rebuilds by skipping UI builds, set:
$ export SKIP_UI_BUILD=1

# When you deploy locally, make sure your kube context points to the desired kubernetes cluster,
# for example Docker Desktop.
# To check the current context you can call a workflow script:
$ roxkubectx

# To deploy locally, call:
$ ./deploy/k8s/deploy-local.sh

# Now you can access the StackRox dashboard at https://localhost:8000,
# or simply call another workflow script:
$ logmein
```
See the deployment guide for further reading. To read more about the environment variables, see `deploy/README.md`.
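For example, the variables above can be combined with a local deploy in a single invocation:

```bash
# Persist central's rocksdb state and skip the UI build for this deploy
STORAGE=pvc SKIP_UI_BUILD=1 ./deploy/k8s/deploy-local.sh
```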
```bash
# Build the image. This will create `stackrox/main` with a tag defined by `make tag`.
$ make image

# Compile all binaries
$ make main-build-dockerized

# Display the docker image tag which would be generated
$ make tag

# Note: there are integration tests in some components, and we currently
# run those manually. They will be re-enabled at some point.
$ make test

# Apply and check style standards in Go and JavaScript
$ make style

# Enable pre-commit hooks for style checks
$ make init-githooks

# Compile and restart only central
$ make fast-central

# Compile only sensor
$ make fast-sensor

# Only compile protobuf
$ make proto-generated-srcs
```
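As an example of combining these targets, a typical inner loop after changing a `.proto` file and some Central code might look like this:

```bash
# Regenerate protobuf sources, then rebuild and restart only Central
$ make proto-generated-srcs
$ make fast-central
```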
The workflow repository contains some helper scripts which support our development workflow. Explore more commands with `roxhelp --list-all`.
```bash
# Change directory to the rox root
$ cdrox

# Handy curl shortcut for your StackRox central instance
# Uses https://localhost:8000 by default or the ROX_BASE_URL env variable
# Also uses the admin credentials from your last deployment via deploy.sh
$ roxcurl /v1/metadata

# Run quickstyle checks, faster than rox's "make style"
$ quickstyle

# The workflow repository includes some tools for supporting
# working with multiple inter-dependent branches.
# Examples:
$ smart-branch <branch-name>    # create new branch
... work on branch ...
$ smart-rebase                  # rebase from parent branch
... continue working on branch ...
$ smart-diff                    # check diff relative to parent branch
... git push, etc.
```
If you're using GoLand for development, the following can help improve the experience.

Make sure the Protocol Buffer Editor plugin is installed. If it isn't, use Help | Find Action..., type Plugins, hit Enter, then switch to Marketplace, type the plugin's name and install it.

This plugin does not know where to look for `.proto` imports by default in GoLand, so you need to explicitly configure the paths for it. See https://github.com/jvolkman/intellij-protobuf-editor#path-settings.

- Go to File | Settings | Languages & Frameworks | Protocol Buffers.
- Uncheck Configure automatically.
- Click on the + button, then navigate to and select the `./proto` directory in the root of the repo.
- Optionally, also add `$HOME/go/pkg/mod/github.com/gogo/googleapis@1.1.0` and `$HOME/go/pkg/mod/github.com/gogo/protobuf@v1.3.1/`.
- To verify: use the menu Navigate | File..., type any `.proto` file name, e.g. `alert_service.proto`, and check that all import strings are shown green, not red.
With GoLand, you can naturally use breakpoints and the debugger when running unit tests in the IDE.
If you would like to debug a local or even remote deployment, follow the procedure below.
Kubernetes debugger setup
- Create a debug build locally by exporting `DEBUG_BUILD=yes`:

  ```bash
  $ DEBUG_BUILD=yes make image
  ```

  Alternatively, a debug build will also be created when the branch name contains the `-debug` substring. This works locally with `make image` and in CircleCI.
- Deploy the image using the instructions from this README file. This works both with `deploy-local.sh` and `deploy.sh`.
- Start the debugger (and port forwarding) in the target pod using the `roxdebug` command from the `workflow` repo:

  ```bash
  # For central
  $ roxdebug
  # For sensor
  $ roxdebug deploy/sensor
  # See usage help
  $ roxdebug --help
  ```
- Configure GoLand for remote debugging (this should be done only once):
  - Open Run | Edit Configurations..., click on the + icon to add a new configuration, and choose the Go Remote template.
  - Choose Host: localhost and Port: 40000. Give this configuration a name.
  - Select On disconnect: Leave it running (this prevents GoLand from forgetting breakpoints on reconnect).
- Attach GoLand to the debugging port: select Run | Debug... and choose the configuration you've created. If all is done right, you should see a "Connected" message in the Debug | Debugger | Variables window at the lower part of the screen.
- Set some code breakpoints, trigger the corresponding actions, and happy debugging!
See Debugging go code running in Kubernetes for more info.
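Putting these steps together, an end-to-end debugging session for Central might look like the following sketch (it assumes the workflow scripts are installed and a local cluster is running):

```bash
# Build a debug image, deploy it locally, then start the debugger for Central
$ DEBUG_BUILD=yes make image
$ ./deploy/k8s/deploy-local.sh
$ roxdebug
# Finally, attach GoLand's "Go Remote" configuration to localhost:40000
```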
Deployment configurations are under the `deploy/` directory, organized per orchestrator.
The deploy script will:

- Launch Central.
- Create a cluster configuration and a service identity.
- Deploy the cluster sensor using that configuration and those credentials.
You can set the environment variable `MAIN_IMAGE_TAG` in your shell to ensure that you get the version you want.
If you check out a commit, the scripts will launch the image corresponding to
that commit by default. The image will be pulled if needed.
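For example, to deploy the latest development image rather than the one matching your checked-out commit (shown here with the Kubernetes script):

```bash
# Override the image tag used by the deploy scripts
export MAIN_IMAGE_TAG=latest
./deploy/k8s/deploy.sh
```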
Further steps are orchestrator-specific.
Kubernetes
```bash
./deploy/k8s/deploy.sh
```
To avoid typing in the docker registry username and password each time you deploy, set the credentials in the `REGISTRY_USERNAME` and `REGISTRY_PASSWORD` environment variables, or, more securely, configure the credentials in the docker credentials store for your OS as suggested here.
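For example (placeholder values; avoid committing real credentials to your shell history or dotfiles):

```bash
# Export registry credentials for the current shell session only
export REGISTRY_USERNAME=<your-dockerhub-username>
export REGISTRY_PASSWORD=<your-dockerhub-password>
./deploy/k8s/deploy.sh
```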
OpenShift
Before deploying on OpenShift, ensure that you have `oc`, the OpenShift command line tool, installed. On macOS, you can install it via the following command:

```bash
brew install openshift-cli
```

Afterwards, deploy rox:

```bash
# Automatically creates an OpenShift route for exposing all relevant services
export LOAD_BALANCER=route
./deploy/openshift/deploy.sh
```
To avoid typing in the docker registry username and password each time you deploy, set the credentials in the `REGISTRY_USERNAME` and `REGISTRY_PASSWORD` environment variables, or, more securely, configure the credentials in the docker credentials store for your OS as suggested here.
Kubernetes
```bash
docker run -i --rm stackrox.io/main:<tag> interactive > k8s.zip
```

This will run you through an installer and generate a `k8s.zip` file.

```bash
unzip k8s.zip -d k8s
bash k8s/central.sh
```
Now Central has been deployed. Use the UI to deploy Sensor.
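To confirm Central came up, you can check the pods (the `stackrox` namespace matches the deploy scripts above; the interactive installer may let you choose a different one):

```bash
# List StackRox pods and wait for Central to become Ready
kubectl -n stackrox get pods
```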
OpenShift
Note: if using a host mount, you need to allow the container to access it by using

```bash
sudo chcon -Rt svirt_sandbox_file_t <full volume path>
```

Take the image-setup.sh script from this repo and run it to do the pull/push to the local OpenShift registry. This is a prerequisite for every new cluster.

```bash
bash image-setup.sh
```

```bash
docker run -i --rm stackrox.io/main:<tag> interactive > openshift.zip
```

This will run you through an installer and generate an `openshift.zip` file.

```bash
unzip openshift.zip -d openshift
bash openshift/central.sh
```