The Carbon Black Cloud Container Operator runs within a Kubernetes cluster. The Container Operator is a set of controllers which deploy and manage the VMware Carbon Black Cloud Container components.
## Capabilities
- Deploy and manage the Container Essentials product bundle (including configuration and image scanning for Kubernetes security)
- Automatically fetch and deploy the Carbon Black Cloud Container private image registry secret
- Automatically register the Carbon Black Cloud Container cluster
- Manage the Container Essentials validating webhook - dynamically manage the admission control webhook to avoid possible downtime
- Monitor and report agent availability to the Carbon Black console
The Carbon Black Cloud Container Operator is built with the operator-framework as a Go operator, and is responsible for managing and monitoring the Cloud Container components deployment.
Kubernetes 1.13+ is supported.
```sh
export OPERATOR_VERSION=v4.0.0
export OPERATOR_SCRIPT_URL=https://setup.containers.carbonblack.io/operator-$OPERATOR_VERSION-apply.sh
curl -s $OPERATOR_SCRIPT_URL | bash
```

`{OPERATOR_VERSION}` is of the format `v{VERSION}`.
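As a quick sanity check before piping anything to bash, you can expand the version tag into the script URL locally and inspect it (using the same v4.0.0 tag as above):

```sh
# Compose the install-script URL from a version tag of the form "v{VERSION}".
OPERATOR_VERSION=v4.0.0
OPERATOR_SCRIPT_URL="https://setup.containers.carbonblack.io/operator-${OPERATOR_VERSION}-apply.sh"
echo "$OPERATOR_SCRIPT_URL"
```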
Versions list: Releases
Clone the git project and deploy the operator from the source code
By default, the operator utilizes CustomResourceDefinitions v1, which requires Kubernetes 1.16+. Deploying an operator with CustomResourceDefinitions v1beta1 (deprecated in Kubernetes 1.16, removed in Kubernetes 1.22) can be done - see the relevant section below.
```sh
make docker-build docker-push IMG={IMAGE_NAME}
make deploy IMG={IMAGE_NAME}
```
- View the Developer Guide to see how to deploy the operator without using an image
```sh
kubectl create secret generic cbcontainers-access-token \
  --namespace cbcontainers-dataplane --from-literal=accessToken=\
{API_Secret_Key}/{API_ID}
```
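Alternatively, the same secret can be declared as a manifest and applied with kubectl (a sketch; the `accessToken` value format `{API_Secret_Key}/{API_ID}` is taken from the command above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cbcontainers-access-token
  namespace: cbcontainers-dataplane
type: Opaque
stringData:
  accessToken: "{API_Secret_Key}/{API_ID}"
```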
The operator implements controllers for the Carbon Black Container custom resource definitions.
Full Custom Resources Definitions Documentation
`cbcontainersagents.operator.containers.carbonblack.io`

This is the custom resource you need to deploy in order to trigger the operator to deploy the data plane components.
```yaml
apiVersion: operator.containers.carbonblack.io/v1
kind: CBContainersAgent
metadata:
  name: cbcontainers-agent
spec:
  account: {ORG_KEY}
  clusterName: {CLUSTER_GROUP}:{CLUSTER_NAME}
  version: {AGENT_VERSION}
  gateways:
    apiGateway:
      host: {API_HOST}
    coreEventsGateway:
      host: {CORE_EVENTS_HOST}
    hardeningEventsGateway:
      host: {HARDENING_EVENTS_HOST}
    runtimeEventsGateway:
      host: {RUNTIME_EVENTS_HOST}
```
- Note that without applying the API token secret, the operator will return the error:

```
couldn't find access token secret k8s object
```
```sh
make undeploy
```

- Note that the above command also deletes the Carbon Black Container custom resource definitions and their instances.
The operator metrics are protected by kube-auth-proxy.
You must grant your Prometheus server permission to scrape the protected metrics.
You can create a ClusterRole and bind it with a ClusterRoleBinding to the service account that your Prometheus server uses.
If you don't have such a ClusterRole and ClusterRoleBinding configured, you can use the following:
Cluster Role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cbcontainers-metrics-reader
rules:
- nonResourceURLs:
  - /metrics
  verbs:
  - get
```
ClusterRoleBinding creation:

```sh
kubectl create clusterrolebinding metrics --clusterrole=cbcontainers-metrics-reader --serviceaccount=<prometheus-namespace>:<prometheus-service-account-name>
```
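If you prefer a declarative manifest over the imperative command, an equivalent ClusterRoleBinding looks like this (same placeholders as above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cbcontainers-metrics-reader
subjects:
- kind: ServiceAccount
  name: <prometheus-service-account-name>
  namespace: <prometheus-namespace>
```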
The resources of each agent component can be adjusted in the CBContainersAgent CR:

```yaml
spec:
  components:
    basic:
      monitor:
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 30m
            memory: 64Mi
      enforcer:
        resources:
          #### DESIRED RESOURCES SPEC - for hardening enforcer container
      stateReporter:
        resources:
          #### DESIRED RESOURCES SPEC - for hardening state reporter container
    runtimeProtection:
      resolver:
        resources:
          #### DESIRED RESOURCES SPEC - for runtime resolver container
      sensor:
        resources:
          #### DESIRED RESOURCES SPEC - for node-agent runtime container
    clusterScanning:
      imageScanningReporter:
        resources:
          #### DESIRED RESOURCES SPEC - for image scanning reporter pod
      clusterScanner:
        resources:
          #### DESIRED RESOURCES SPEC - for node-agent cluster-scanner container
```
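As an illustration, a filled-in `resources` section for the hardening enforcer container might look like the following (the values are placeholders, not sizing recommendations):

```yaml
spec:
  components:
    basic:
      enforcer:
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
```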
Use the following ServiceMonitor to start scraping metrics from the CBContainers operator:
- Make sure the service monitor selectors in your Prometheus custom resource match it.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    control-plane: operator
  name: cbcontainers-operator-metrics-monitor
  namespace: cbcontainers-dataplane
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    path: /metrics
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      control-plane: operator
```
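For reference, a Prometheus custom resource that would pick up this ServiceMonitor could carry a selector like the one below (a sketch; all other Prometheus fields are omitted):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      control-plane: operator
```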
The Carbon Black Cloud Container services can be configured to use an HTTP proxy by setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.

To configure these environment variables on the Operator, patch the Operator deployment with the following command:

```sh
kubectl set env -n cbcontainers-dataplane deployment cbcontainers-operator HTTP_PROXY="<proxy-url>" HTTPS_PROXY="<proxy-url>" NO_PROXY="<kubernetes-api-server-ip>/<range>"
```
To configure these environment variables for the basic, Runtime, and Image Scanning components, update the CBContainersAgent CR with the proxy environment variables (`kubectl edit cbcontainersagents.operator.containers.carbonblack.io cbcontainers-agent`):
```yaml
spec:
  components:
    basic:
      enforcer:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>"
      stateReporter:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>"
    runtimeProtection:
      resolver:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>"
      sensor:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>,cbcontainers-runtime-resolver.cbcontainers-dataplane.svc.cluster.local"
    clusterScanning:
      clusterScanner:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>,cbcontainers-image-scanning-reporter.cbcontainers-dataplane.svc.cluster.local"
      imageScanningReporter:
        env:
          HTTP_PROXY: "<proxy-url>"
          HTTPS_PROXY: "<proxy-url>"
          NO_PROXY: "<kubernetes-api-server-ip>/<range>"
```
It is very important to configure the NO_PROXY environment variable with the value of the Kubernetes API server IP. Find the API server IP with:

```sh
kubectl -n default get service kubernetes -o=jsonpath='{..clusterIP}'
```
When using a non-transparent HTTPS proxy, you will need to configure the agent to use the proxy's certificate authority:
```yaml
spec:
  gateways:
    gatewayTLS:
      rootCAsBundle: <Base64 encoded proxy CA>
```
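One way to produce the base64 value, assuming the proxy CA certificate is saved as `proxy-ca.pem` (a hypothetical path; the placeholder certificate below only stands in for a real one):

```sh
# Write a placeholder CA file; in practice this is your proxy's real CA certificate.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----' > proxy-ca.pem
# -w 0 (GNU coreutils) emits a single-line base64 string for the rootCAsBundle field.
base64 -w 0 proxy-ca.pem
```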
Another option is to allow the agent to communicate without verifying the certificate. This option is not recommended, as it exposes the agent to man-in-the-middle (MITM) attacks.
```yaml
spec:
  gateways:
    gatewayTLS:
      insecureSkipVerify: true
```
The operator supports Kubernetes clusters from v1.13+. The CustomResourceDefinition APIs were in beta in those clusters and were later promoted to GA in v1.16; the beta versions are no longer served as of Kubernetes v1.22.
To maintain compatibility, this operator offers 2 sets of CustomResourceDefinitions - one under the `apiextensions/v1beta1` API and one under `apiextensions/v1`.

By default, all operations in the repository like `deploy` or `install` work with the v1 version of the `apiextensions` API. Utilizing `v1beta1` is supported by passing the `CRD_VERSION=v1beta1` option when running `make`.

Note that both `apiextensions/v1` and `apiextensions/v1beta1` versions of the CRDs are generated and maintained by `make` - only commands that use the final output work with one version at a time.

For example, this command deploys the operator resources on the current cluster, but uses the `apiextensions/v1beta1` API version for them:

```sh
make deploy CRD_VERSION=v1beta1
```