which: no javac when installing local gem file
Closed this issue · 19 comments
I have an issue when installing any local gem file in the 5.4.0 logstash container.
An example is this plugin: https://github.com/lukewaite/logstash-input-cloudwatch-logs If I build the gem locally, e.g. gem build logstash-input-cloudwatch-logs.gemspec, then copy the resulting gem file into my dockerfile, and attempt to install with:
bin/logstash-plugin install logstash-input-cloudwatch-logs.gem
It will fail with:
which: no javac in (/usr/share/logstash/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
It doesn't seem to matter which plugin I install. But the logstash image will build without errors if I'm using 5.3.2.
Here's a fragment of the Dockerfile I'm using:
FROM docker.elastic.co/logstash/logstash:5.4.0
ENV PATH_CONFIG=/usr/share/logstash/pipeline/prod/
COPY *.gem /usr/share/logstash/
RUN cd /usr/share/logstash && ls *.gem | xargs bin/logstash-plugin install
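For what it's worth, a slightly more defensive variant of that fragment is sketched below. The loop and the explicit failure handling are my additions, not part of the original report; the base image and paths are unchanged:

```dockerfile
FROM docker.elastic.co/logstash/logstash:5.4.0
ENV PATH_CONFIG=/usr/share/logstash/pipeline/prod/
COPY *.gem /usr/share/logstash/
# Install each copied gem explicitly; failing the build on a bad install
# avoids shipping an image with a silently missing plugin.
RUN cd /usr/share/logstash && \
    for gem in *.gem; do bin/logstash-plugin install "$gem" || exit 1; done
```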
I have also posted to the ES forum, but have not yet had a response:
https://discuss.elastic.co/t/which-no-javac-for-local-plugin-installation/86181
Thanks for the report.
Previously, we were installing the CentOS package java-1.8.0-openjdk-headless, which counter-intuitively does not contain the Java compiler, despite being a "JDK" package rather than a "JRE" package. We now install java-1.8.0-openjdk-devel, which includes the compiler, and an updated image for :5.4.0 has been pushed to the registry.
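To catch the headless/devel split up front, it can help to verify that javac is actually on the PATH before attempting a plugin install. A minimal sketch; the `require_javac` helper is hypothetical, not something shipped in the image:

```shell
#!/bin/sh
# Hypothetical helper: fail fast if the JDK compiler is missing, instead of
# hitting "which: no javac" mid-way through a plugin install. On CentOS,
# java-1.8.0-openjdk-headless ships no javac; java-1.8.0-openjdk-devel does.
require_javac() {
  if command -v javac >/dev/null 2>&1; then
    echo "javac found: $(command -v javac)"
  else
    echo "no javac on PATH; install java-1.8.0-openjdk-devel" >&2
    return 1
  fi
}
# Typical use at the top of a build script or Dockerfile RUN step:
#   require_javac
```

Called early, this turns the obscure `which: no javac` message into an actionable one.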
@lgarvey wrote:
Hi Jarpy,
Happy to move to 5.4.0, but I've just tested it and see that whilst the javac error has disappeared, it's now not installing any local gem plugins. It just hangs.
Well, that's interesting. It doesn't actually hang, but it does take an extraordinarily long time:
$ time docker run --rm -it -v $PWD:/mnt:ro docker.elastic.co/logstash/logstash:5.4.0 logstash-plugin install /mnt/logstash-input-cloudwatch_logs-0.10.3.gem
Validating /mnt/logstash-input-cloudwatch_logs-0.10.3.gem
Installing logstash-input-cloudwatch_logs
Installation successful
real 7m26.305s
user 0m0.044s
sys 0m0.020s
Hi Jarpy,
Apologies - I got this working yesterday, but didn't update the ticket. The first few times I tried it, it seemed to be hanging for > 10 minutes with no output beyond "installing plugins..". Then I tried again and applied some patience, and the docker image eventually built.
So it's all working fine for me.
L
No problem at all. I'm still interested in why the container is taking 7 minutes while the host takes 3 seconds(!). However, I'm really glad you can continue.
Here's a strace of the Java processes:
$ docker exec -u root -it 0e9ee2de45d3 bash
[root@0e9ee2de45d3 /]# ps -ef
UID PID PPID C STIME TTY TIME CMD
logstash 1 0 73 09:43 ? 00:00:08 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti
logstash 47 1 84 09:43 ? 00:00:03 /usr/bin/java -classpath /usr/share/logstash/vendor/bundle/jruby/1.9/gems/ruby-maven-li
root 78 0 0 09:43 ? 00:00:00 bash
root 92 78 0 09:43 ? 00:00:00 ps -ef
[root@0e9ee2de45d3 /]# strace -p 1,47
Process 1 attached
Process 47 attached
[pid 1] futex(0x7fd69226a9d0, FUTEX_WAIT, 31, NULL <unfinished ...>
[pid 47] futex(0x7f552b39b9d0, FUTEX_WAIT, 64, NULL
docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0
I exec into the container and then try to install a plugin:
bash-4.2$ logstash-plugin install logstash-output-influxdb
Validating logstash-output-influxdb
Installing logstash-output-influxdb
Error: JAVA_HOME is not defined correctly.
We cannot execute java
Error Bundler::InstallError, retrying 1/10
An error occurred while installing logstash-core (5.6.0), and Bundler cannot continue.
Make sure that `gem install logstash-core -v '5.6.0'` succeeds before bundling.
Is it not possible to install plugins in the current docker image?
It seems OK to me:
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 logstash-plugin install logstash-output-influxdb
Validating logstash-output-influxdb
Installing logstash-output-influxdb
Installation successful
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64
Is something in your environment changing $JAVA_HOME?
This is from inside the container:
bash-4.2$ /usr/lib/jvm/java-8-openjdk-amd64
bash: /usr/lib/jvm/java-8-openjdk-amd64: No such file or directory
I don't get it. Since we use the same image, how come the binary/dir is missing for me?
No magic. If we made a container that depended on the Java of the host OS, I would consider that an epic fail!
Are we using the same image?
$ docker images docker.elastic.co/logstash/logstash:5.6.0
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/logstash/logstash 5.6.0 5266d98518da 2 days ago 591MB
@jarpy yeah, I deleted that thought, embarrassed.
docker images docker.elastic.co/logstash/logstash:5.6.0
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/logstash/logstash 5.6.0 5266d98518da 2 days ago 591MB
I have tried this on 3 different hosts, same on all. One was a fresh VM just cloned, started, docker installed. No idea what could have failed in that scenario...
No need for embarrassment, it was just an idea. :)
I'm really struggling to reproduce this (we should switch GitHub avatars). Would you be able to provide a step-by-step repro? Here's the last thing I tried:
$ docker run --rm -d --name=repro docker.elastic.co/logstash/logstash:5.6.0
5c6098af9076fa1b31a2af03ffdcef4823a503c98de3ed2f4469551bda5d3239
$ docker exec -it repro bash
bash-4.2$ logstash-plugin install logstash-output-influxdb
Validating logstash-output-influxdb
Installing logstash-output-influxdb
Installation successful
That works. Even though my $JAVA_HOME is empty and I'm missing that binary, the installation succeeded today.
Yesterday it failed.
docker exec -it repro bash
bash-4.2$ logstash-plugin install logstash-output-influxdb
Validating logstash-output-influxdb
Installing logstash-output-influxdb
Installation successful
bash-4.2$ echo $JAVA_HOME
bash-4.2$ ls /usr/lib/jvm/java-8-openjdk-amd64
ls: cannot access /usr/lib/jvm/java-8-openjdk-amd64: No such file or directory
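As an aside, when $JAVA_HOME is unset inside a container you can usually recover a usable value from whatever java binary is on the PATH. A sketch; the `java_home_from` helper is hypothetical, and the path-stripping assumes the usual `<home>/bin/java` layout:

```shell
#!/bin/sh
# Hypothetical helper: derive a JAVA_HOME-style path from a java binary.
# Resolves symlinks first, then strips the trailing /bin/java components.
java_home_from() {
  real=$(readlink -f "$1")         # e.g. /usr/lib/jvm/jre-1.8.0/bin/java
  dirname "$(dirname "$real")"     # -> /usr/lib/jvm/jre-1.8.0
}
# Typical use inside the container:
#   export JAVA_HOME=$(java_home_from "$(command -v java)")
```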
Is the installation done by fetching remote content, I guess? Could that have been updated since yesterday? The logstash-output-influxdb installer?
This is so strange. I'm sorry for the trouble.
Interesting... The $JAVA_HOME is definitely a little strange, even if it wasn't the source of your problem. I'll look into that.
No worries at all. You may well have found a bug anyway, just not exactly the one we first suspected.
Oh, and yes. Absolutely remote content.
You can't guarantee the success of anything anymore because you'd have to guarantee the state of the entire Internet! </😡👴>
Thanks for the help! Now I can finally get back to what I was trying to do, sending data to influx :)
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64
@jarpy I think you are displaying the $JAVA_HOME that happens to be set on your host here. I don't think the image sets JAVA_HOME as an env var (we do that in the elasticsearch image, though). I also don't think we need $JAVA_HOME for logstash, but we could double-check. Nevertheless, java is on the PATH, as required:
$ unset JAVA_HOME
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 echo $JAVA_HOME
$ export JAVA_HOME="dummy/path"
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 echo $JAVA_HOME
dummy/path
$ docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 bash -c 'which java && java -version'
/usr/bin/java
openjdk version "1.8.0_141"
OpenJDK Runtime Environment (build 1.8.0_141-b16)
OpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)
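That host-side expansion is easy to demonstrate without docker at all, using `env -i sh` as a stand-in for the container's clean environment. A sketch; `DEMO_VAR` is a made-up variable for illustration:

```shell
#!/bin/sh
export DEMO_VAR="host-value"

# Double quotes: the OUTER shell expands $DEMO_VAR before the inner shell
# (our stand-in "container") ever starts, so the host value leaks through:
env -i sh -c "echo $DEMO_VAR"     # prints: host-value

# Single quotes: expansion is deferred to the inner shell, whose env -i
# environment is clean, so the variable is genuinely empty:
env -i sh -c 'echo "$DEMO_VAR"'   # prints an empty line
```

The same single-quoting trick (`docker run … bash -c 'echo $JAVA_HOME'`) reports what the container actually sees.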
That bash completion is annoying when looking for env vars. What I end up doing is the following (using ELASTIC_CONTAINER as the test var to grab):
$ sudo docker run --rm -it docker.elastic.co/logstash/logstash:5.6.0 env|grep ELASTIC_CONTAINER
ELASTIC_CONTAINER=true
I think you are displaying the $JAVA_HOME that happens to be set on your host here.
That is, of course, precisely what I'm doing. How hilarious!
This is why it's so good to work in the open. Even if it's a little embarrassing (for me), dumb errors like this just get seen and fixed faster.