How to apply custom configuration to outputs using the universalforwarder image
FutureSharks opened this issue · 12 comments
Hi,
I have these env variables to suit our environment:
SPLUNK_START_ARGS="--accept-license"
SPLUNK_FORWARD_SERVER=splunk_server:9997
SPLUNK_USER=root
SPLUNK_ADD_1='monitor /var/log/containers -sourcetype docker_json'
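For context, these are consumed by the image's entrypoint, roughly equivalent to starting the container like this (illustrative only; the image tag and the log volume mount are assumptions about my setup):
docker run -d \
  -v /var/log/containers:/var/log/containers:ro \
  -e SPLUNK_START_ARGS="--accept-license" \
  -e SPLUNK_FORWARD_SERVER=splunk_server:9997 \
  -e SPLUNK_USER=root \
  -e SPLUNK_ADD_1='monitor /var/log/containers -sourcetype docker_json' \
  splunk/universalforwarder:latest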
After the container starts, this is what the /opt/splunk/etc/system/local/outputs.conf file looks like:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunk_server:9997
[tcpout-server://splunk_server:9997]
But I need to make changes for our environment like this, otherwise the forwarder doesn't connect properly to the indexer:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunk_server:9997
[tcpout-server://splunk_server:9997]
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslVerifyServerCert = false
sslPassword = some_password
How could I achieve this?
I don't think it's possible to configure sslCertPath, sslRootCAPath, etc. using the Splunk CLI, so I cannot use the SPLUNK_CMD env variable.
I thought I could override ENTRYPOINT and CMD with something like this:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["mkdir -p /opt/splunk/etc/system/local; echo '[tcpout]' > /opt/splunk/etc/system/local/outputs.conf; echo 'sslCertPath = /opt/splunk/etc/auth/server.pem' >> /opt/splunk/etc/system/local/outputs.conf; echo 'sslRootCAPath = /opt/splunk/etc/auth/cacert.pem' >> /opt/splunk/etc/system/local/outputs.conf; echo 'sslVerifyServerCert = false' >> /opt/splunk/etc/system/local/outputs.conf; /sbin/entrypoint.sh start-service"]
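In plain docker run terms (keeping the stock image, nothing baked in), that override would look roughly like this; the image tag is assumed and the heredoc simply replaces the chain of echo commands:
# same idea as the ENTRYPOINT/CMD override above, but applied at run time against the stock image
docker run -d \
  -e SPLUNK_START_ARGS="--accept-license" \
  -e SPLUNK_FORWARD_SERVER=splunk_server:9997 \
  --entrypoint /bin/bash \
  splunk/universalforwarder:latest \
  -c 'mkdir -p /opt/splunk/etc/system/local
cat >> /opt/splunk/etc/system/local/outputs.conf <<EOF
[tcpout]
sslCertPath = /opt/splunk/etc/auth/server.pem
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem
sslVerifyServerCert = false
EOF
exec /sbin/entrypoint.sh start-service'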
But this is really hacky, and I also don't know how to find the sslPassword parameter, because it's generated randomly in /opt/splunk/etc/system/local/server.conf after Splunk has started.
Am I missing something? Is there a simple way to do this? I'd rather not have to create my own custom Docker image.
Thanks,
Max
@FutureSharks you are correct, the image won't currently help with this configuration. Your workaround of updating the outputs.conf file at container creation should work. You'll want to specify sslPassword in the same place, and at the next restart Splunk will scrub the password from the .conf file and store it encrypted.
Quoting from the docs:
Note that when you save the file in $SPLUNK_HOME/etc/system/local/outputs.conf, Splunk encrypts and overwrites the clear-text server certificate password when splunkd restarts.
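In other words, you put the clear-text value in outputs.conf and let Splunk rewrite it, roughly like this (the encrypted value below is just a placeholder):
# before splunkd restarts (clear text, as you wrote it)
sslPassword = some_password
# after splunkd restarts (Splunk replaces the value with opaque ciphertext)
sslPassword = <encrypted value>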
@mchene how are we tracking enhancement requests? We could add more ENV parameters to help configure these settings. I think the way some have handled this is by automating pre- and post-image-build steps with out-of-band Python or make scripts.
@halr9000 I actually explored the simplest way to make this work and came up with a script that I have been using in my cluster-aware image: https://github.com/outcoldman/docker-splunk-cluster
You can take a look at an example of how I configure images using this Python script: https://github.com/outcoldman/docker-splunk-cluster/tree/master/examples/docker
In the end I had to use my workaround, but changed it slightly. I am running it on Google Container Engine (Kubernetes).
I used this basic config:
https://gist.github.com/FutureSharks/ece4dbd233a421b3b2581eab92745697
And then had to change the command and args as follows:
command: [ "/bin/bash", "-c" ]
args: [
"timeout 10 /sbin/entrypoint.sh start-service; \
echo '[tcpout]' >> /opt/splunk/etc/system/local/outputs.conf && \
echo 'sslCertPath = /opt/splunk/etc/auth/server.pem' >> /opt/splunk/etc/system/local/outputs.conf && \
echo 'sslRootCAPath = /opt/splunk/etc/auth/cacert.pem' >> /opt/splunk/etc/system/local/outputs.conf && \
echo 'sslVerifyServerCert = false' >> /opt/splunk/etc/system/local/outputs.conf && \
grep sslPassword /opt/splunk/etc/system/local/server.conf >> /opt/splunk/etc/system/local/outputs.conf && \
/sbin/entrypoint.sh start-service"
]
You can see I had to start Splunk first, let it initialise to create all the files and directories, then kill it, then apply our config, and then start it again. This is pretty hacky, I think.
Anyway, thanks for the input. Feel free to close.
@FutureSharks help me to understand what is happening in your entrypoint:
- You are invoking "/sbin/entrypoint.sh start-service" but giving it only 10 seconds to execute, which means that bash and Splunk are going to be killed.
- After that you modify all the configurations and start it again.
I see two problems here:
- What if Splunk takes more than 10 seconds to start?
- What if you restart the container or upgrade? You will execute this logic again.
@outcoldman, yes you are correct.
To answer your questions:
- This could be a problem but hasn't happened in my testing.
- If the container (or Kubernetes pod) is restarted or upgraded, then the whole process is done again, fresh. There is no config left over from previous containers; they are effectively stateless.
The problem is that Splunk needs to start up in order to create the directories and files correctly, and adjustments can only happen after that. Hence start, kill, add config, restart. Do you have a better idea or method to achieve this?
@FutureSharks the one I suggested above :) Or you can write your own logic using BEFORE_START_CMD, BEFORE_START_CMD_1, ..., something like:
BEFORE_START_CMD=splunk version --accept-license
BEFORE_START_CMD_1=splunk cmd ....
The first command will initialize all the default configurations; the second will let you invoke anything you want from $SPLUNK_HOME/bin, which can be your own bash script, for example.
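For example, that second command could point at a small bash script of your own that writes the extra settings (just a sketch; the script name and how it gets into the container, e.g. a mounted volume, are up to you):
#!/bin/bash
# write-outputs.sh (hypothetical name): append the custom SSL settings before Splunk starts
set -e
OUTPUTS=/opt/splunk/etc/system/local/outputs.conf
mkdir -p "$(dirname "$OUTPUTS")"
# quoted heredoc keeps $SPLUNK_HOME literal so Splunk expands it, as in the desired outputs.conf above
cat >> "$OUTPUTS" <<'EOF'
[tcpout]
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslVerifyServerCert = false
EOF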
@FutureSharks no need to kill Splunk to make config changes: start, add config, restart is a common flow. However, in your case you're not using the CLI, so you don't need Splunk to be up, and you can make config changes before start (the Splunk package is already extracted before Splunk starts up). So you can add (or better yet, symlink) custom config files using BEFORE_START_CMD as suggested by @outcoldman.
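As a sketch, assuming you mount a prepared outputs.conf into the container at /mnt/config (the mount path is an assumption), the pre-start step could be as small as:
# run before Splunk starts, e.g. from a script invoked via BEFORE_START_CMD
ln -sf /mnt/config/outputs.conf /opt/splunk/etc/system/local/outputs.conf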
Thanks for the replies.
You can take a look at an example of how I configure images using this Python script: https://github.com/outcoldman/docker-splunk-cluster/tree/master/examples/docker
Unless I have misunderstood, this solution requires me to create my own image.
I'd rather not have to create my own custom Docker image.
Or you can write your own logic using BEFORE_START_CMD, BEFORE_START_CMD_1
I didn't actually try this, because it looks like these environment variables are just prepended with ${SPLUNK_HOME}/bin/splunk in the entrypoint.sh file, and I can't create all of my configuration with the Splunk CLI. For example, I can't see how I could retrieve the sslPassword in order to pipe it into outputs.conf.
@FutureSharks no need to kill Splunk to make config changes: start, add config, restart is a common flow
I'm not sure how this would work. If I start Splunk as normal, then /sbin/entrypoint.sh is the main process running and it cannot exit. How would I start the container from splunk/universalforwarder:latest and then add configuration?
you're not using the CLI, so you don't need Splunk to be up, and you can make config changes before start (the Splunk package is already extracted before Splunk starts up). So you can add (or better yet, symlink) custom config files
I know I can do all of this if I create my own image but like I said in the beginning I'm trying to avoid that.
@FutureSharks another option is to use a Deployment Server to deliver configurations to the forwarders.
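On the forwarder side that only needs a small deploymentclient.conf pointing at the deployment server, which could be dropped in with the same echo approach used earlier (the host and management port here are placeholders):
echo '[target-broker:deploymentServer]' > /opt/splunk/etc/system/local/deploymentclient.conf
echo 'targetUri = deployment-server.example.com:8089' >> /opt/splunk/etc/system/local/deploymentclient.conf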
In my opinion, it is a good thing to have infrastructure with your own registry and a way to build your own custom images.
We do have our own container registry and many custom images, but I was hoping in this case I could avoid it. But OK, no worries. I'll see if I can use a deployment server.
Thanks for the suggestions @outcoldman and @rarsan