Stan's Robot Shop is a sample microservice application you can use as a sandbox to test and learn containerised application orchestration and monitoring techniques. It is not intended to be a comprehensive reference example of how to write a microservices application, although you will better understand some of those concepts by playing with Stan's Robot Shop. To be clear: the error handling is patchy, and there is no security built into the application.
You can get more detailed information from my blog post about this sample microservice application.
This sample microservice application has been built using these technologies:
- NodeJS (Express)
- Java (Spark Java)
- Python (Flask)
- Golang
- PHP (Apache)
- MongoDB
- Redis
- MySQL (Maxmind data)
- RabbitMQ
- Nginx
- AngularJS (1.x)
The various services in the sample application already include all required Instana components, installed and configured. The Instana components provide automatic instrumentation for complete end-to-end tracing, as well as complete visibility into time series metrics for all the technologies.
To see the application performance results in the Instana dashboard, you will first need an Instana account. Don't worry, a trial account is free.
To build from source (optional; you will need a reasonably recent version of Docker), use Docker Compose. Optionally edit the .env file to specify an alternative image registry and version tag; see the official documentation for more information.
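As a sketch, the .env file is a set of simple KEY=value pairs. The variable names below are illustrative only; check the repository's own .env file for the names it actually uses.

```
# Hypothetical example; variable names may differ in the real .env
REPO=myregistry.example.com/robotshop
TAG=2.1.0
```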
$ docker-compose build
If you modified the .env file and changed the image registry, you may need to push the images to that registry:
$ docker-compose push
You can run it locally for testing.
If you did not build from source, don't worry: all the images are available on Docker Hub. Just pull them down first using:
$ docker-compose pull
Fire up Stan's Robot Shop with:
$ docker-compose up
If you are running it locally on a Linux host, you can also run the Instana agent locally. Unfortunately, the agent is currently not supported on Mac, and there is only limited support for ARM architectures at the moment.
The manifests for Robot Shop are in the DCOS/ directory. They were built against a fresh install of DC/OS 1.11.0 and should work on a vanilla HA or single-instance install.
You may install Instana via the DC/OS package manager; instructions are here: https://github.com/dcos/examples/tree/master/instana-agent/1.9
You can run Kubernetes locally using minikube or on one of the many cloud providers.
The Docker container images are all available on Docker Hub. The deployment and service definition files that use these images are in the K8s directory; use these to deploy to a Kubernetes cluster. If you pushed your own images to your own registry, update the deployment files to pull from that registry.
$ kubectl create namespace robot-shop
$ kubectl -n robot-shop apply -f K8s/descriptors
To deploy the Instana agent to Kubernetes, just use the Helm chart:
$ helm install --name instana-agent --namespace instana-agent \
--set agent.key=INSTANA_AGENT_KEY \
--set agent.endpointHost=HOST \
--set agent.endpointPort=PORT \
--set zone.name=CLUSTER_NAME \
stable/instana-agent
If you are having difficulty getting Helm running with your Kubernetes install, it is most likely due to RBAC: most Kubernetes distributions now have RBAC enabled by default, so Helm requires a service account with the appropriate permissions.
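One common fix, sketched here for Helm v2 (where the server-side Tiller component needs its own service account), is to create a service account and bind it to a role. The cluster-admin binding below is the simplest option for a sandbox; use a narrower role in anything production-like.

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Apply it with kubectl apply -f, then initialise Helm with helm init --service-account tiller.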
If you are running the store locally via docker-compose up, the store front is available on localhost port 8080: http://localhost:8080
If you are running the store on Kubernetes via minikube, make the store front accessible by editing the web service definition: change the type to NodePort and add a port entry nodePort: 30080.
$ kubectl -n robot-shop edit service web
Snippet:
spec:
  ports:
  - name: "8080"
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    service: web
  sessionAffinity: None
  type: NodePort
The store front is then available on the IP address of minikube, port 30080. To find the IP address of your minikube instance:
$ minikube ip
If you are using a managed Kubernetes, OpenShift, or Mesosphere service, the store front will be available via that system's load balancer.
A separate load generation utility is provided in the load-gen directory. This is not automatically run when the application is started. The load generator is built with Python and Locust. The build.sh script builds the Docker image, optionally taking push as the first argument to also push the image to the registry. The registry and tag settings are loaded from the .env file in the parent directory. The script load-gen.sh runs the image; it takes a number of command line arguments. You could also run the container inside an orchestration system such as Kubernetes; an example descriptor is provided in K8s/autoscaling. For more details, see the README in the load-gen directory.
To enable End User Monitoring (EUM), see the official documentation for how to create a configuration. There is no need to inject the JavaScript fragment into the page; this is handled automatically. Just make a note of the unique key and set the environment variable INSTANA_EUM_KEY for the web image; see docker-compose.yaml for an example.
If you are running the Instana backend on premises, you will also need to set the reporting URL to your local instance. Set the environment variable INSTANA_EUM_REPORTING_URL as above. See the Instana EUM API reference.
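A sketch of the relevant docker-compose.yaml fragment is below; both values are placeholders, and the service layout should be checked against the repository's own docker-compose.yaml.

```
services:
  web:
    environment:
      INSTANA_EUM_KEY: "<your-eum-key>"
      INSTANA_EUM_REPORTING_URL: "https://eum.example.com"
```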
The cart and payment services both have Prometheus metric endpoints, accessible on /metrics. The cart service provides:
- Counter of the number of items added to the cart
The payment service provides:
- Counter of the number of items purchased
- Histogram of the total number of items in each cart
- Histogram of the total value of each cart
To test the metrics use:
$ curl http://<host>:8080/api/cart/metrics
$ curl http://<host>:8080/api/payment/metrics
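The endpoints return metrics in the Prometheus text exposition format. The sample below is hypothetical (the real metric names depend on the service code), but it shows the general shape of a counter and how to strip the HELP/TYPE comment lines from the payload:

```shell
# Hypothetical Prometheus text-format payload; real metric names will differ.
cat <<'EOF' > /tmp/sample-metrics.txt
# HELP cart_items_total Total number of items added to the cart
# TYPE cart_items_total counter
cart_items_total 42
EOF

# Drop the HELP/TYPE comment lines, leaving just the sample values.
grep -v '^#' /tmp/sample-metrics.txt
# prints: cart_items_total 42
```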