# Nexus Operator
A Nexus OSS Kubernetes Operator based on the Operator SDK.
You can find us at OperatorHub or at the "Operators" tab in your OpenShift 4.x web console, just search for "Nexus". If you don't have access to OLM, try installing it manually following our quick installation guide.
If you have any questions please either open an issue or send an email to the mailing list: nexus-operator@googlegroups.com.
- `kubectl` installed
- Kubernetes (1.16+) or OpenShift (4.5+) cluster available (minikube or crc also supported)
- Cluster admin credentials to install the Operator
Note: since version 0.6.0 we do not support OpenShift 3.11 or Kubernetes 1.11 anymore. If you need to install in these clusters, please use version 0.5.0 instead.
The installation procedure will create a Namespace named nexus-operator-system and install every resource needed for the operator to run:
# requires python and kubectl
bash <(curl -s https://raw.githubusercontent.com/m88i/nexus-operator/main/hack/install.sh)
Alternatively, you can manually elect a released version:
VERSION=<version from GitHub releases page>
kubectl apply -f https://github.com/m88i/nexus-operator/releases/download/${VERSION}/nexus-operator.yaml
You can choose any flavor of the Nexus 3.x server from our examples directory and apply the YAML in any namespace in your cluster. Use these examples as a starting point to customize the server to meet your requirements.
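As a quick starting point, a minimal, volatile (non-persistent) server might look like the sketch below; the `apiVersion`, `kind`, and `persistence` fields match the examples elsewhere in this document, while the resource name is only illustrative:

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  replicas: 1
  persistence:
    # volatile: data is lost when the pod is removed; fine for evaluation
    persistent: false
```

Apply it with `kubectl apply -f <file>.yaml` in the namespace of your choice.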
If you're running the Operator on OpenShift (4.5+) and you're not using the Red Hat image with persistence enabled (that is, anything other than `spec.useRedHatImage: true` and `spec.persistence.persistent: true`), it's also necessary to configure a Security Context Constraints (SCC) resource. This is necessary because the Nexus image requires its container to run as UID 200. Using the default `restricted` SCC in OpenShift results in a failure when starting the pods, as seen in Issue #41 and Issue #51 (see this issue for more details on why the Operator can't handle this for you as things are now). Valid SCC resources can be found in the `examples/` directory. You must associate the SCC with the `ServiceAccount` in use.
For persistent configurations:
$ oc apply -f examples/scc-persistent.yaml
For volatile configurations:
$ oc apply -f examples/scc-volatile.yaml
Note: you must choose one or the other; applying both will result in using the one applied last. Note: these have changed with the introduction of Nexus Operator version 0.6.0 to include `configMap` volumes.
Once the SCC has been created, run:
$ oc adm policy add-scc-to-user allow-nexus-userid-200 -z <ServiceAccountName>
This command will bind the SCC we just created to the `ServiceAccount` being used to create the Pods. If you're using a custom `ServiceAccount`, replace `<ServiceAccountName>` with the name of that account. If you're not, the operator has created a default one with the same name as your Nexus CR; replace `<ServiceAccountName>` with that.
Assuming you ran the install command above, to remove the operator completely from your cluster just run:
make uninstall
The Nexus Operator is capable of conducting automatic updates within a minor version (the `y` in `x.y.z`) when using the default community image (`docker.io/sonatype/nexus3`). In the future, Red Hat images will also be supported by this feature.
Note: custom images will not be supported, as there is no guarantee that they follow semantic versioning; as such, updates within the same minor may be disruptive.
Two fields within the Nexus CR control this behavior:
- `spec.automaticUpdate.disabled` (boolean): whether the Operator should perform automatic updates. Defaults to `false` (auto updates are enabled). It is set to `true` if `spec.image` is not empty and is different from the default community image.
- `spec.automaticUpdate.minorVersion` (integer): the Nexus image minor version the deployment should stay in. If left blank and automatic updates are enabled, the latest minor is set.
Note: if you wish to set a specific tag when using the default community image you must first disable automatic updates.
Important: a change of minors will not be monitored or acted upon as an automatic update. Changing the minor is a manual process initiated by the human operator and as such must be monitored by the human operator.
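For instance, to keep a deployment within the 3.26.x stream while leaving automatic updates enabled, the relevant fields could be set as in this sketch (values are illustrative):

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  automaticUpdate:
    disabled: false
    # stay within 3.26.z; patch releases are applied automatically
    minorVersion: 26
```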
The state of ongoing updates is written to `status.updateConditions`, which can be easily accessed with `kubectl`:
$ kubectl describe nexus
# (output omitted)
Update Conditions:
  Starting automatic update from 3.26.0 to 3.26.1
  Successfully updated from 3.26.0 to 3.26.1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdateSuccess 59s nexus3 Successfully updated to 3.26.1
Note: do not modify these conditions manually; the Operator reconstructs the update state from them.
Once an update finishes successfully, an Event is raised. You may view the events from a particular Nexus CR by describing it:
$ kubectl describe <Nexus CR>
Or you may query all events:
$ kubectl get events
A successful update event looks like:
apiVersion: v1
count: 1
eventTime: null
firstTimestamp: "2020-08-26T13:56:16Z"
involvedObject:
  apiVersion: apps.m88i.io/v1alpha1
  kind: Nexus
  name: nexus3
  namespace: update
  resourceVersion: "66087"
  uid: f017e60f-21b5-4b14-b67c-341e029afae3
kind: Event
lastTimestamp: "2020-08-26T13:56:16Z"
message: Successfully updated to 3.26.1
# (output omitted)
reason: UpdateSuccess
reportingComponent: ""
reportingInstance: ""
source:
  component: nexus3
type: Normal
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
12m Normal UpdateSuccess nexus/nexus3 Successfully updated to 3.26.1
# (output omitted)
When an update fails there is no disruption, since the Deployments produced by the Operator use a rolling update strategy and the previous version remains available. The Operator will then:
- disable automatic updates
- set `spec.image` to the version that was set before the update began
- raise a failure event
A failed update event looks like:
apiVersion: v1
count: 1
eventTime: null
firstTimestamp: "2020-08-21T18:29:11Z"
involvedObject:
  apiVersion: apps.m88i.io/v1alpha1
  kind: Nexus
  name: nexus3
  namespace: update
  resourceVersion: "51602"
  uid: 2e9ef49a-7d37-4c96-bfae-0642a9487c95
kind: Event
lastTimestamp: "2020-08-21T18:29:11Z"
message: Failed to update to 3.26.1. Human intervention may be required
# (output omitted)
reason: UpdateFailed
reportingComponent: ""
reportingInstance: ""
source:
  component: nexus3
type: Warning
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
9m45s Warning UpdateFailed nexus/nexus3 Failed to update to 3.26.1. Human intervention may be required
# (output omitted)
Starting with version 0.6.0, the operator mounts a `ConfigMap` with the contents of the `nexus.properties` file at the path `$NEXUS_DATA/etc/nexus.properties`. The Nexus Operator fills this file with the contents of the field `spec.properties`, using the Java properties format. If you change this field, the operator will immediately deploy a new pod to reflect the changes applied to the `ConfigMap`. Don't update the managed `ConfigMap` directly; otherwise the operator will replace its contents with the `spec.properties` field. Always use the Nexus CR as the only source of truth. See this example to learn how to properly set your properties directly in the CR.
Beware! Since we don't support HA yet, the server will be unavailable until the next pod comes up. Only update the configuration when you can afford to have the server unavailable.
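A sketch of what setting properties in the CR might look like; the property key below is only an illustration of the Java properties format:

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  properties:
    # illustrative key/value pair; each entry becomes a line in nexus.properties
    nexus.scripts.allowCreation: "true"
```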
There are three flavours for exposing the Nexus server deployed with the Nexus Operator: `NodePort`, `Route` (for OpenShift) and `Ingress` (for Kubernetes).
You can expose the Nexus server via `NodePort` by setting the following parameters in the CR:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # (...)
  networking:
    expose: true
    exposeAs: "NodePort"
    nodePort: 31031
It's not the recommended approach, but it fits whatever Kubernetes flavour you have.
On OpenShift, the Nexus server can be exposed via Routes. Set the following parameters in the CR:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # (...)
  networking:
    expose: true
On Kubernetes, we leverage an `Ingress` to expose the Nexus service:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # (...)
  networking:
    expose: true
    exposeAs: "Ingress"
    host: "nexus.example.com"
Please note that `host` is a required parameter when exposing via `Ingress`. Just make sure that the host resolves to your cluster. If you're running on Minikube, take a look at the article "Set up Ingress on Minikube with the NGINX Ingress Controller".
If you've deployed the NGINX Ingress controller, you might see `413 ERROR - Entity too large` when uploading artifacts to the Nexus server. You need to set the maximum allowed request body size in the controller's `configMap`. If you've deployed the Ingress controller in Minikube, it'll be available in the `kube-system` namespace:
$ kubectl get deploy -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 47h
ingress-nginx-controller 1/1 1 1 47h
To check the name of the `configMap`, you can run:
$ kubectl get deploy/ingress-nginx-controller -o yaml -n kube-system | grep "\--configmap"
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
Now edit the `configMap`:
$ kubectl edit configmaps nginx-load-balancer-conf -n kube-system
At the root of the opened YAML file, add:
data:
  proxy-body-size: 10m
Note: if you want no limit on the data packet size, set `proxy-body-size: 0m`.
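Putting it together, the edited `configMap` would look roughly like this (name and namespace as in the Minikube example above; the 10m limit is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  namespace: kube-system
data:
  # maximum allowed request body size for uploads through the Ingress
  proxy-body-size: 10m
```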
Route and Ingress resources are highly configurable, and the need to change them often arises. For example, further configuration can be performed by webhooks, but these changes get undone by the Operator as soon as it detects them.
Starting at version 0.6.0 you may specify that the Operator should ignore external changes made to `Ingress` and `Route` resources. This is controlled by the `spec.networking.ignoreUpdates` boolean field in the Nexus resource. It defaults to `false`, meaning the Operator will change the Ingress/Route specification to match its state as defined by the Nexus resource. Set it to `true` to prevent the Operator from undoing external changes to the resources' configuration.
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  networking:
    ignoreUpdates: true
For details about TLS configuration check out our TLS guide.
You may provide custom labels and annotations to Route/Ingress resources by setting them on `.spec.networking.annotations` and `.spec.networking.labels`. For example:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  networking:
    annotations:
      my-cool-annotation: "even-cooler-value"
      my-other-cool-annotation: "not-as-cool-value"
    labels:
      my-cool-label: "even-cooler-value"
Starting at version 0.6.0 you may specify extra volumes to be mounted on the pod running Nexus, which comes in handy for migrating existing blob stores, for example. These volumes are controlled by the `spec.persistence.extraVolumes` field.
For example, if you wanted to mount an AWS EBS volume, some PVC of yours and an EmptyDir volume:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  persistence:
    extraVolumes:
      - name: "my-cool-ebs-vol"
        mountPath: "/path/for/AWS-EBS/"
        # This AWS EBS volume must already exist.
        awsElasticBlockStore:
          volumeID: "<volume id>"
          fsType: ext4
      - name: "my-cool-claim-vol"
        mountPath: "/path/for/persistent-vol-claim/"
        # This PVC must exist in the same namespace.
        persistentVolumeClaim:
          claimName: "my-cool-claim"
      - name: "my-cool-empty-dir-vol"
        mountPath: "/path/for/emptyDir/"
        emptyDir: {}
Each item of this `extraVolumes` array provides:
- `mountPath`: a string representing the path at which this volume should be mounted
- a Kubernetes `Volume` specification

For more information about Kubernetes Volumes, refer to their documentation and each specific plugin's documentation. For additional details about Persistent Volumes and using Claims as volumes, refer to the documentation.
Important: updating the `spec.persistence.extraVolumes` field may lead to temporary unavailability while the new deployment with the new volume configuration rolls out.
On Minikube, dynamic PV creation might fail. If this happens in your environment, create a PV with this template before creating the Nexus server: examples/pv-minikube.yaml. Then give the correct permissions to the directory in the Minikube VM:
$ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ sudo chown 200:200 -R /data/pv0001/
$ ls -la /data/
total 8
drwxr-xr-x 3 root root 4096 Apr 26 15:42 .
drwxr-xr-x 19 root root 500 Apr 26 20:47 ..
drwxr-xr-x 2 200 200 4096 Apr 26 15:42 pv0001
It is possible to use a custom `ServiceAccount` to perform your Deployments with the Nexus Operator via:
- `spec.serviceAccountName` (string): the name of the `ServiceAccount` used to run the Pods. If left blank, a default `ServiceAccount` is created with the same name as the Nexus CR.

Important: the Operator handles the creation of the default resources necessary to run. If you choose to use a custom `ServiceAccount`, be sure to also configure `Role` and `RoleBinding` resources.
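A sketch of the relevant CR fragment, assuming a pre-existing account named `my-custom-sa` (an illustrative name):

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # this ServiceAccount (and its Role/RoleBinding) must be created beforehand
  serviceAccountName: my-custom-sa
```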
By default, from version 0.3.0 the Nexus Operator does not generate a random password for the `admin` user. This means that you can log in to the server right away with the default administrator credentials (admin/admin123). This comes in handy for development purposes, but consider changing this password right away in production environments.
To enable random password generation, set the attribute `generateRandomAdminPassword` in the Nexus CR spec to `true`. The Nexus service will then create a random password in the file system. You have to grab the password from a file inside the Nexus server container in order to log in to the web console:
$ kubectl exec <nexus server pod name> -- cat /nexus-data/admin.password
Use this password to log in to the web console with the username `admin`.
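The corresponding CR fragment is a single flag, as in this sketch:

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # a random admin password is written to /nexus-data/admin.password
  generateRandomAdminPassword: true
```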
If you have access to the Red Hat Catalog, you might change the flag `spec.useRedHatImage` to `true`. You'll have to set your Red Hat credentials in the namespace where Nexus is deployed to be able to pull the image. In future versions the Operator will handle this step for you.
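A sketch of a CR using the Red Hat image with persistence enabled (both fields appear earlier in this document, in the SCC section):

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  # requires Red Hat registry credentials in the deployment namespace
  useRedHatImage: true
  persistence:
    persistent: true
```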
You can control the pods' image pull policy using the `spec.imagePullPolicy` field. It accepts one of the following values:
- `Always`
- `IfNotPresent`
- `Never`

If this field is set to an invalid value, the configuration is omitted, deferring to the Kubernetes default behavior: `Always` if the image's tag is "latest" and `IfNotPresent` otherwise. Leaving this field blank will also defer to the Kubernetes default behavior.
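For example, to avoid re-pulling an image that is already present on the node (a sketch; the value is one of the three listed above):

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  imagePullPolicy: IfNotPresent
```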
From version 0.3.0, the Operator will try to create an administrator user to be used for internal operations, such as creating community Maven repositories. The default Nexus user `admin` is used to create the `nexus-operator` user, whose credentials are then stored in a secret with the same name as the Nexus CR.
It's possible to disable the operator user creation by setting `spec.serverOperations.disableOperatorUserCreation` to `true`. In this case, the `admin` user will be used instead. This configuration is not recommended, since with a dedicated user you can track all the operations, change the operator user's permissions, and enable or disable it if you need. By disabling the operator user creation, the Operator will use the default `admin` credentials to perform all server operations, which will fail if you change the default credentials (something that must be done when aiming for a secure environment).
The Operator will also create three Maven repositories by default. All of these repositories will also be added to the `maven-public` group. This group gathers the vast majority of jars needed by the most common use cases out there. If you don't need them, just disable this behavior by setting the attribute `spec.serverOperations.disableRepositoryCreation` to `true` in the Nexus CR.
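For example, to opt out of both the operator user and the default repository creation (a sketch of the two flags described above):

```yaml
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  serverOperations:
    # not recommended: the default admin credentials will be used instead
    disableOperatorUserCreation: true
    # skip creating the default Maven repositories and the maven-public group entries
    disableRepositoryCreation: true
```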
All of these operations are disabled if the attribute `spec.generateRandomAdminPassword` is set to `true`, since the default credentials are needed to create the `nexus-operator` user. You can safely change the default credentials after this user has been created.
For now, the Nexus Operator won't accept a number higher than `1` in the `spec.replicas` attribute. This is because the Nexus server can't share its mounted persistent volume with other pods. See #191 for more details.
Horizontal scaling will only work once we add HA support to the operator (see #61). If you need to scale the server, take the vertical approach and increase the resource limits of the Nexus server. For example:
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  replicas: 1
  # Set the resources requests and limits for Nexus pods. See: https://help.sonatype.com/repomanager3/system-requirements
  resources:
    limits:
      cpu: "4"
      memory: "8Gi"
    requests:
      cpu: "1"
      memory: "2Gi"
  persistence:
    persistent: true
    volumeSize: 10Gi
We are working to support HA in the future.
Please read our Contribution Guide.