- What is it?
- Prerequisites
- How to run it
- How to access the VM's Docker daemon
- How to access the OpenShift registry
- OpenShift Logins
- Known issues
- Misc
This repository contains a Vagrant setup to start a virtual machine running a containerized version of OpenShift Enterprise using CDK 2 (Beta3).
The following prerequisites need to be met prior to creating and provisioning the virtual machine:
- RHEL employee subscription credentials available
- Active VPN connection during the creation and provisioning of the VM
- VirtualBox installed
- Vagrant installed
- vagrant-registration plugin (>=1.0.0) installed
  - Run `vagrant plugin install vagrant-registration` to install the plugin
- vagrant-adbinfo plugin (>=0.0.9) installed
  - Run `vagrant plugin install vagrant-adbinfo` to install the plugin
- On Windows:
  - Ensure PuTTY utilities, including pscp, are installed and on the Path. See also vagrant-adbinfo issue #20
  - Ensure Cygwin is installed with rsync AND openssh. The default installation does not include these packages.
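If you are unsure whether the plugins are present, you can list the installed Vagrant plugins; the versions shown below are just examples:
$ vagrant plugin list
vagrant-adbinfo (0.0.9)
vagrant-registration (1.0.0)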
To create and provision the VM, run:
$ cd cdk-v2
$ export SUB_USERNAME=<your-subscription-username>
$ export SUB_PASSWORD=<your-subscription-password>
$ vagrant up
This will start and provision the VM, as well as start an all-in-one OpenShift
Enterprise instance. There are currently no scripts to start/stop OpenShift.
To restart OpenShift after a `vagrant halt`, run `vagrant up && vagrant provision`.
Provisioning steps which have already occurred will be skipped.
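A full stop/start cycle therefore looks like this:
$ vagrant halt
$ vagrant up && vagrant provision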
Run `vagrant adbinfo`:
$ eval "$(vagrant adbinfo)"
Due to an issue with adbinfo, the first execution of `vagrant adbinfo` will currently kill your OpenShift container. You need to run `vagrant provision` to restart it. This only occurs on the first call to adbinfo.
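Once the adbinfo output has been evaluated, your local Docker client talks to the Docker daemon inside the VM. A quick sanity check (adbinfo exports variables such as DOCKER_HOST; the exact set may vary between versions):
$ echo $DOCKER_HOST
$ docker ps   # lists the containers running inside the VM, including the OpenShift container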
The OpenShift registry is by default exposed as hub.cdk.10.1.2.2.xip.io. You can push to this registry directly after logging in. Assuming one logs in as user 'foo':
$ oc login 10.1.2.2:8443
$ docker login -u foo -p `oc whoami -t` -e foo@bar.com hub.cdk.10.1.2.2.xip.io
Once up and running, the OpenShift console is accessible at https://10.1.2.2:8443/console/.
The OpenShift instance is set up with no authentication, so you can choose any username you like. If the username does not exist, a user is created. The password can be arbitrary (and may differ on each login).
There is one pre-configured user: test-admin. This user has view permissions for the default namespace, which can be handy, since the docker-registry and the router run in this namespace.
However, test-admin has no permission to change anything in the default namespace!
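For example, logging in as test-admin lets you inspect the default namespace, but not modify it (a minimal sketch):
$ oc login 10.1.2.2:8443 -u test-admin
$ oc get pods -n default          # works: view permission
$ oc delete pod <some-pod> -n default   # fails: no edit permission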
To make any administrative changes to the system, one has to be cluster admin and run the appropriate oc/oadm commands. To do so, log into the Vagrant VM and use the command line tools with the --config option referencing the system configuration.
$ vagrant ssh
$ oadm --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig <whatever oadm command>
$ oc --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig <whatever oc command>
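For example, to grant the user foo view access to the default namespace (a hypothetical policy change, just to illustrate the pattern):
$ oadm --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig policy add-role-to-user view foo -n default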
Alternatively you can set the KUBECONFIG environment variable and skip the --config option.
$ export KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
However, be careful: if you then log in as a different user, OpenShift will attempt to overwrite admin.kubeconfig. It is probably better to just define an alias.
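A minimal sketch of such aliases (the names are arbitrary):
$ alias oc-admin='oc --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig'
$ alias oadm-admin='oadm --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig'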
- Causes of failure on Windows
  - Ensure VAGRANT_DETECTED_OS=cygwin is set
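In a Cygwin shell, for example:
$ export VAGRANT_DETECTED_OS=cygwin
$ vagrant up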
Assuming a user foo, you can do the following to run, for example, the Node.js based blogging framework Ghost.
$ oc login 10.1.2.2:8443
Authentication required for https://10.1.2.2:8443 (openshift)
Username: foo
Password:
Login successful.
$ oc new-project my-ghost
Now using project "my-ghost" on server "https://10.1.2.2:8443".
$ docker pull ghost
$ docker tag ghost hub.cdk.10.1.2.2.xip.io/my-ghost/ghost
$ docker login -u foo -p `oc whoami -t` -e foo@bar.com hub.cdk.10.1.2.2.xip.io
$ docker push hub.cdk.10.1.2.2.xip.io/my-ghost/ghost
$ oc new-app --image-stream=ghost --name=ghost
$ oc expose service ghost --hostname=my-ghost-blog.10.1.2.2.xip.io
Then visit http://my-ghost-blog.10.1.2.2.xip.io/ with your browser.
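To verify that the deployment succeeded, you can check the pod status (the pod name and timings below are illustrative):
$ oc get pods
NAME            READY     STATUS    RESTARTS   AGE
ghost-1-abcde   1/1       Running   0          2m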
To move a project to another OpenShift instance, the first step is to export the configuration from the existing project:
$ oc export is,bc,dc,svc,route -o json > project-config.json
At this stage you probably want to edit the JSON and change the route. You can also do this after the import via `oc edit route`.
Then on the second instance, create a new project, import the resources and trigger a new build:
$ oc new-project foo
$ oc create -f project-config.json
$ oc new-build <build-config-name>
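If you did not change the route before exporting, you can still adjust it after the import, as mentioned above:
$ oc edit route <route-name>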
The OpenShift HAProxy is configured to expose some statistics about the routes. This can sometimes be helpful when debugging problems or just monitoring traffic. To access the statistics, use http://10.1.2.2:1936/.
The username is 'admin' and the password gets generated during the creation of the router pod. You can run the following to find the password:
$ eval "$(vagrant adbinfo)"
$ docker ps # You want the container id of the ose-haproxy-router container
$ docker exec <container-id-of-router> cat /var/lib/haproxy/conf/haproxy.config | grep "stats auth"
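With the password in hand, you can also fetch the statistics page from the command line instead of the browser (a sketch):
$ curl -s -u admin:<password-from-above> http://10.1.2.2:1936/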
Since the created VM is only visible on the host, GitHub webhooks won't work: GitHub cannot reach the VM. You can, of course, just trigger the build via oc:
$ oc start-build <build-config-name>
If you want to ensure that the actual webhooks work though, you can trigger them via curl as well. First determine the GitHub and generic webhook URLs:
$ oc describe bc <build-config-name>
To trigger the generic hook run:
$ curl -k -X POST <generic-hook-url>
To trigger the GitHub hook run:
$ curl -k \
-H "Content-Type: application/json" \
-H "X-Github-Event: push" \
-X POST -d '{"ref":"refs/heads/master"}' \
<github-hook-url>
The GitHub payload is quite extensive, but the only thing which matters from an OpenShift perspective at the moment is that the ref matches.
This is based on the jboss-eap-6/eap-openshift:6.4 image from registry.access.redhat.com. This image is, for example, used by the eap6-basic-sti template.
The startup script standalone.sh of the EAP instance within this image checks the variable DEBUG to determine whether to enable remote debugging on port 8787.
# Get the name of the deployment config.
$ oc get dc
NAME      TRIGGERS      LATEST VERSION
eap-app   ImageChange   1
# Check the current environment variables (optional)
$ oc env dc/eap-app --list
OPENSHIFT_DNS_PING_SERVICE_NAME=eap-app-ping
OPENSHIFT_DNS_PING_SERVICE_PORT=8888
HORNETQ_CLUSTER_PASSWORD=mVxpNmqt
HORNETQ_QUEUES=
HORNETQ_TOPICS=
# Set the DEBUG variable
$ oc env dc/eap-app DEBUG=true
# Double check the variable is set
$ oc env dc/eap-app --list
OPENSHIFT_DNS_PING_SERVICE_NAME=eap-app-ping
OPENSHIFT_DNS_PING_SERVICE_PORT=8888
HORNETQ_CLUSTER_PASSWORD=mVxpNmqt
HORNETQ_QUEUES=
HORNETQ_TOPICS=
DEBUG=true
# Redeploy the latest image
$ oc deploy eap-app --latest -n eap
# Get the name of the running pod using the deployment config name as selector
$ oc get pods -l deploymentConfig=eap-app
NAME              READY     STATUS    RESTARTS   AGE
eap-app-3-rw4ko   1/1       Running   0          1h
# Port forward the debug port
$ oc port-forward eap-app-3-rw4ko 8787:8787
Once the `oc port-forward` command is executed, you can attach a remote debugger to port 8787 on localhost.
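For example, the JDK's jdb can attach to the forwarded port (on platforms where its default attach connector uses sockets); any IDE remote-debug configuration pointing at localhost:8787 works just as well:
$ jdb -attach localhost:8787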
When trying to run arbitrary Docker images on OpenShift, it can be hard to track down deployment errors. If the deployment of a pod fails, OpenShift will try to reschedule a deployment and the original pod won't be available anymore. In this case you can try accessing the logs of the failing container directly via Docker commands against the Docker daemon running within the VM (the Docker daemon of the VM is used by the OpenShift instance itself as well).
To view the Docker logs:
$ vagrant ssh
# Find the container id of the failing container (looking for the latest created container)
$ docker ps -l -q
5b37abf17fb6
$ docker logs 5b37abf17fb6
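If you need more than the logs, docker inspect can additionally show the exit code of the failed container (using the container id from above):
$ docker inspect --format '{{.State.ExitCode}}' 5b37abf17fb6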
To explore the OpenShift REST API via Swagger, try this:
- Open this link in a browser
- Paste the URL of the OpenShift instance "https://10.1.2.2:8443/swaggerapi/oapi/v1" into the input field
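Alternatively, you can fetch the raw Swagger document directly:
$ curl -k https://10.1.2.2:8443/swaggerapi/oapi/v1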