This repository contains various artifacts to create Infinispan server images.
To get started with the Infinispan server on your local machine, simply execute:
docker run -p 11222:11222 infinispan/server
By default the image has authentication enabled on all exposed endpoints. When the above command is executed, the image automatically generates a username/password combination, prints the values to stdout and then starts the Infinispan server with the authenticated HotRod and REST endpoints exposed on port 11222. You must therefore use the printed credentials when accessing the exposed endpoints from your clients.
It's also possible to provide a username/password combination via environment variables like so:
docker run -p 11222:11222 -e USER="Titus Bramble" -e PASS="Shambles" infinispan/server
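For example, with the container above running, you can check that the credentials work against the REST endpoint. This is a quick sanity check; it assumes digest authentication, which the image enables by default:

# Query the health endpoint using the credentials supplied via USER/PASS
curl --digest -u "Titus Bramble:Shambles" \
  http://localhost:11222/rest/v2/cache-managers/clustered/health/status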
We recommend utilising the auto-generated credentials or USER & PASS env variables for initial development only. Providing authentication and authorization configuration via an identities yaml file allows for much greater control.
When connecting a HotRod client to the image, the following SASL properties must be configured on your client, for example in a hotrod-client.properties file (with the username and password properties changed as required):
infinispan.remote.auth-realm=default
infinispan.remote.auth-server-name=infinispan
infinispan.remote.auth-username=Titus Bramble
infinispan.remote.auth-password=Shambles
The Infinispan image utilises two yaml configuration files. The identities file provides all identity information, such as user credentials, role mappings, oauth service etc. The configuration yaml is optional; it supplies configuration required by Infinispan during server startup, and can be used to configure JGroups, endpoints etc.
Below shows how a docker volume can be created and mounted in order to run the Infinispan image with user-provided identity and configuration files.
docker volume create example-vol
cp config.yaml identities.yaml /var/lib/docker/volumes/example-vol/_data
docker run -v example-vol:/user-config -e IDENTITIES_PATH="/user-config/identities.yaml" -e CONFIG_PATH="/user-config/config.yaml" infinispan/server
Below is an example identities yaml that provides a list of user credentials. All of the users specified in this file are loaded by the server and their credentials can then be used to access the configured endpoints, e.g. HotRod.
credentials:
  - username: Alan Shearer
    password: striker9
    roles:
      - admin
  - username: Nolberto Solano
    password: winger7
    roles:
      - dev
It's also possible to provide pre-digested user passwords by setting preDigestedPassword: true on each of the provided credential objects.
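For example, a minimal sketch; the password value below is a placeholder for a pre-digested hash, not a real digest:

credentials:
  - username: Alan Shearer
    password: e1d78ab7... # placeholder: supply your pre-digested password hash here
    preDigestedPassword: true
    roles:
      - admin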
Below is an example configuration file which shows the default values used by the image when they are not provided in the user's configuration yaml.
infinispan:
  clusterName: infinispan
endpoints:
  hotrod:
    auth: true
    enabled: true
    qop: auth
    serverName: infinispan
  memcached:
    enabled: false
  rest:
    auth: true
    enabled: true
jgroups:
  diagnostics: false
  encrypt: false
  transport: tcp
  dnsPing:
    address: ""
    recordType: A
keystore:
  alias: server
  selfSignCert: false
xsite:
  masterCandidate: true
logging:
  console:
    level: trace
    pattern: '%K{level}%d{HH\:mm\:ss,SSS} %-5p [%c] (%t) %s%e%n'
  file:
    level: trace
    path: server/log
    pattern: '%d{yyyy-MM-dd HH\:mm\:ss,SSS} %-5p [%c] (%t) %s%e%n'
  categories:
    com.arjuna: warn
    org.infinispan: info
    org.jgroups: warn
However, it is not necessary to provide all of these fields when configuring your image. Instead you can provide just the relevant parts. For example, to utilise udp for transport and enable the memcached endpoint, your config would be as follows:
endpoints:
  memcached:
    enabled: true
jgroups:
  transport: udp
The default JGroups stack for the image is currently tcp.
When running in a managed environment such as Kubernetes, it is not possible to utilise multicasting for initial node discovery, therefore we must utilise the JGroups DNS_PING protocol to discover cluster members. To enable this, we must provide the jgroups.dnsPing.query element in the configuration yaml.
This causes the default discovery protocol of either the udp or tcp stacks to be overridden by the DNS_PING protocol.
For example, to utilise the tcp stack with DNS_PING, the following config is required:
jgroups:
  transport: tcp
  dnsPing:
    query: infinispan-dns-ping.myproject.svc.cluster.local
The JGroups encryption protocol ASYM_ENCRYPT can be enabled by defining the following in the yaml:
jgroups:
  encrypt: true
The Infinispan image exposes both the REST and HotRod endpoints on a single port, 11222. The memcached endpoint is also available on port 11221; however, it currently does not support authentication and is therefore disabled by default. It must be explicitly enabled in the config yaml as shown below:
---
endpoints:
  memcached:
    enabled: true
Similarly, it's also possible to disable the HotRod and/or REST endpoints by setting enabled: false on the respective endpoint's configuration element.
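For example, a minimal config that leaves HotRod active but disables the REST endpoint:

---
endpoints:
  hotrod:
    enabled: true
  rest:
    enabled: false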
By default encryption is disabled on our endpoints, however it can be enabled in one of two ways.
First, it's possible to provide a path to a directory accessible to the image that contains a certificate/key pair named tls.crt and tls.key respectively. This results in a pkcs12 keystore being created and loaded by the server to enable endpoint encryption.
---
keystore:
  crtPath: /var/run/secrets/openshift.io/serviceaccount
  password: customPassword # Optional field, which determines the keystore's password, otherwise a default is used.
This is ideal for managed environments such as OpenShift/Kubernetes, as we can simply pass the certificates of the service CA, i.e. /var/run/secrets/kubernetes.io/serviceaccount.
Alternatively, an existing pkcs12 keystore can be utilised by providing the absolute path of the keystore:
---
keystore:
  path: /user-config/keystore.p12
  password: customPassword # Required in order to be able to access the keystore
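For example, reusing the docker volume approach shown earlier, the keystore can be mounted alongside the configuration yaml (the file names here are illustrative):

docker volume create keystore-vol
cp keystore.p12 config.yaml /var/lib/docker/volumes/keystore-vol/_data
docker run -p 11222:11222 -v keystore-vol:/user-config -e CONFIG_PATH="/user-config/config.yaml" infinispan/server

Here config.yaml would contain the keystore stanza above, pointing at /user-config/keystore.p12.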
To configure logging you can add the following to your config yaml:
logging:
  categories:
    org.infinispan.factories: trace
    org.infinispan.commons.marshall: warn
By default, all specified log levels will be output to both the console and log file (/opt/infinispan/server/log/server.log). If you require different log levels for the console and log file, this is possible by explicitly setting the required levels like so:
logging:
  console:
    level: info
  file:
    level: trace
  categories:
    org.infinispan.factories: trace
It's also possible to specify the formatting of a log by providing a pattern string for the console and/or file element:
logging:
  file:
    pattern: '%K{level}%d{HH\:mm\:ss,SSS} %-5p [%c] (%t) %s%e%n'
Finally, if you require your log file to be located at a specific location, such as a mounted volume, it's possible to specify the path of the directory in which it will be stored via:
logging:
  file:
    path: some/example/path
It's possible to configure the CORS rules for the REST endpoint as follows:
endpoints:
  rest:
    cors:
      - name: restrict-host1
        allowedOrigins:
          - http://host1
          - https://host1
        allowedMethods:
          - GET
      - name: allow-all
        allowCredentials: true
        allowedOrigins:
          - '*'
        allowedMethods:
          - GET
          - OPTIONS
          - POST
          - PUT
          - DELETE
        allowedHeaders:
          - X-Custom-Header
          - Upgrade-Insecure-Requests
        exposeHeaders:
          - Key-Content-Type
        maxAgeSeconds: 1
The name, allowedOrigins and allowedMethods keys are mandatory.
The rules are evaluated sequentially based on the "Origin" header set by the browser; in the example above, if the origin is either "http://host1" or "https://host1" the rule "restrict-host1" will apply, otherwise the next rule will be tested. Since the rule "allow-all" permits all origins, any script coming from a different origin will be able to perform the methods specified and use the headers supplied. Detailed information about the different configuration parameters can be found in the Infinispan REST guide.
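For instance, assuming the configuration above, a cross-origin request from http://host1 can be simulated with curl; the cache name and key below are hypothetical, and the server should reply with the appropriate Access-Control-Allow-* headers:

# Simulate a browser request originating from http://host1
curl --digest -u "Titus Bramble:Shambles" -H "Origin: http://host1" \
  -i http://localhost:11222/rest/v2/caches/mycache/somekey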
In order to configure the image for xsite replication, it's necessary to provide the external address and port of the local site, as well as the external address and port of all remote sites, as part of the config.yaml at startup. Below shows the expected format:
---
xsite:
  address: # Externally accessible IP address of local site
  name: LON
  port: 7200
  backups:
    - address: # Externally accessible IP address of NYC site
      name: NYC
      port: 7200
If you require more control over the server's configuration, then it is also possible to configure the Infinispan server directly using XML. To do this, it is necessary to set the entrypoint of the docker image to /opt/infinispan/bin/server.sh and to copy the custom Infinispan/JGroups xml files to a mounted docker volume like so:
docker volume create example-vol
cp custom-infinispan.xml custom-jgroups.xml /var/lib/docker/volumes/example-vol/_data
docker run -it -v example-vol:/user-config --entrypoint "/opt/infinispan/bin/server.sh" infinispan/server -b SITE_LOCAL -c /user-config/custom-infinispan.xml
The image consists of two Cekit modules, modules/dependencies and modules/runtimes. The dependencies module is a simple yaml file used for installing all dependencies required by the image, whereas the runtimes module contains all scripts required by the server during image execution. Files at the root of the runtimes module are used to extract the default server distribution, and files/directories in the added directory are copied to the extracted server's root in order to add to, or overwrite, existing files in the distribution. The entrypoint for the image is modules/runtimes/added/bin/launch.sh, which is a minimal bash script that calls the ConfigGenerator program to generate the server configuration based upon the user-supplied yaml files, before then launching the server.
It's recommended to utilise Infinispan's REST endpoint in order to determine if the server is ready/live. To do this, you can utilise the Kubernetes httpGet probes as follows:
livenessProbe:
  httpGet:
    path: /rest/v2/cache-managers/clustered/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /rest/v2/cache-managers/clustered/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
All of our images are created using the Cekit tool. Installation instructions can be found here.
The exact dependencies that you will require depends on the "builder" that you want to use in order to create your image. For example OSBS has different requirements to Docker.
We recommend pulling stable image releases from Quay.io or Docker Hub, however it is also possible to recreate stable releases of an image.
To recreate a given release, it's necessary to checkout the corresponding git tag and build using cekit build <build-engine>. For example:
git checkout 10.0.0.CR1-4
cekit build docker
In order to create the image using a local SNAPSHOT version of the Infinispan server, execute the following command, updating the path attribute to be equal to the local path of your SNAPSHOT distribution zip.
cekit build --overrides '{"version": "SNAPSHOT", "artifacts": [{"name": "server.zip", "path": "infinispan-server-10.0.0-SNAPSHOT.zip"}]}' docker
Similarly, in order to build an image using a SNAPSHOT of the config generator, issue the following command:
cekit build --overrides '{"version": "SNAPSHOT", "artifacts": [{"name": "config-generator.jar", "path": "config-generator-1.0.0-SNAPSHOT.jar"}]}' docker
We also pass the 'version' as part of the override to prevent the existing tag of the image being used for the created snapshot image.
In order to create an image using the Red Hat Data Grid server, it's necessary to have an active Red Hat kerberos session. The image can then be created using the following command:
cekit build --overrides-file dg-override.yaml docker
See License.