sky-uk/kafka-message-scheduler

Temurin JRE image isn't working in Jenkins/Travis

bcarter97 opened this issue · 0 comments

Description

As per the title, v0.26.0 changed to a base Docker image that supports both arm and amd64, which meant swapping from Alpine to something else. For some reason the Temurin JRE images don't seem to like running in Jenkins/Travis, and fail with an odd Java error on startup (see the Travis logs below).
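
For reference, the base-image change in v0.26.0 was along these lines (a minimal sketch assuming the project's sbt-native-packager setup; the previous image name and the exact Temurin tag are illustrative, not taken from the actual build):

// build.sbt (sketch, DockerPlugin from sbt-native-packager enabled)
// before: an Alpine-based JDK image, e.g.
// dockerBaseImage := "openjdk:11-jdk-alpine"

// after: a multi-arch Temurin JRE image
dockerBaseImage := "eclipse-temurin:11-jre"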

Travis logs
$ docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:47 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ docker run skyuk/kafka-message-scheduler@sha256:0f3edd6fd517b7e9270015a1e56087f6ee2fde38e2b9d1563bd7c06cfdfe27ac
Status: Downloaded newer image for skyuk/kafka-message-scheduler@sha256:0f3edd6fd517b7e9270015a1e56087f6ee2fde38e2b9d1563bd7c06cfdfe27ac
No java installations was detected.
Please go to http://www.java.com/getjava/ and download

This is different behaviour from running the image locally on the exact same Docker Engine version. As the logs below show, the application still crashes (on a missing-config error), but the JVM clearly starts up.

Local logs
$ docker version
Client:
 Cloud integration: 1.0.17
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.16.4
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:22 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ docker run skyuk/kafka-message-scheduler@sha256:0f3edd6fd517b7e9270015a1e56087f6ee2fde38e2b9d1563bd7c06cfdfe27ac

16:09:19,950 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.4.5
16:09:19,981 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
16:09:19,985 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/opt/docker/lib/com.sky.scheduler-0.26.0.jar!/logback.xml]
16:09:19,993 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@4d23015c - URL [jar:file:/opt/docker/lib/com.sky.scheduler-0.26.0.jar!/logback.xml] is not of type file
16:09:20,073 |-WARN in IfNestedWithinSecondPhaseElementSC - <if> elements cannot be nested within an <appender>, <logger> or <root> element
16:09:20,073 |-WARN in IfNestedWithinSecondPhaseElementSC - See also http://logback.qos.ch/codes.html#nested_if_element
16:09:20,081 |-WARN in IfNestedWithinSecondPhaseElementSC - Element <appender> at line 2 contains a nested <if> element at line 3
16:09:20,121 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [STDOUT]
16:09:20,121 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
16:09:20,376 |-INFO in ch.qos.logback.core.model.processor.conditional.IfModelHandler - Condition [isDefined("KMS_LOGGING_LOGSTASH")] evaluated to false on line 3
16:09:20,381 |-INFO in ch.qos.logback.core.model.processor.ImplicitModelHandler - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
16:09:20,411 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting level of logger [com.sky] to INFO
16:09:20,413 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting additivity of logger [com.sky] to false
16:09:20,414 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [STDOUT] to Logger[com.sky]
16:09:20,414 |-INFO in ch.qos.logback.classic.model.processor.RootLoggerModelHandler - Setting level of ROOT logger to WARN
16:09:20,414 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [STDOUT] to Logger[ROOT]
16:09:20,414 |-INFO in ch.qos.logback.core.model.processor.DefaultProcessor@383f1975 - End of configuration.
16:09:20,415 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@441cc260 - Registering current configuration as safe fallback point

16:09:20.500 [main] INFO  com.sky.kms.Main$ - Kafka Message Scheduler scheduler 0.26.0 starting up...
Exception in thread "main" pureconfig.error.ConfigReaderException: Cannot convert configuration to a com.sky.kms.config.AppConfig. Failures are:
  at 'scheduler.reader.schedule-topics':
    - (application.conf @ jar:file:/opt/docker/lib/com.sky.scheduler-0.26.0.jar!/application.conf: 2) Empty collection found when trying to convert to scala.collection.immutable.List.

	at pureconfig.ConfigSource.loadOrThrow(ConfigSource.scala:81)
	at pureconfig.ConfigSource.loadOrThrow$(ConfigSource.scala:78)
	at pureconfig.ConfigObjectSource.loadOrThrow(ConfigSource.scala:92)
	at com.sky.kms.Main$.delayedEndpoint$com$sky$kms$Main$1(Main.scala:15)
	at com.sky.kms.Main$delayedInit$body.apply(Main.scala:12)
	at scala.Function0.apply$mcV$sp(Function0.scala:42)
	at scala.Function0.apply$mcV$sp$(Function0.scala:42)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
	at scala.App.$anonfun$main$1(App.scala:98)
	at scala.App.$anonfun$main$1$adapted(App.scala:98)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
	at scala.App.main(App.scala:98)
	at scala.App.main$(App.scala:96)
	at com.sky.kms.Main$.main(Main.scala:12)
	at com.sky.kms.Main.main(Main.scala)
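
To be clear, the local crash itself is just the expected configuration failure when no schedule topics are provided, not a JVM problem. The sketch below is illustrative only (the case class and field names simply follow the path in the error message, they are not the real AppConfig); it shows the kind of pureconfig + pureconfig-cats setup that produces exactly this error when the list resolves to empty:

import cats.data.NonEmptyList
import pureconfig._
import pureconfig.generic.auto._
import pureconfig.module.cats._ // readers for cats non-empty collections

// Illustrative shapes only, mirroring 'scheduler.reader.schedule-topics'
// from the error message above.
final case class ReaderConfig(scheduleTopics: NonEmptyList[String])
final case class SchedulerConfig(reader: ReaderConfig)
final case class AppConfig(scheduler: SchedulerConfig)

object ConfigCrashSketch extends App {
  // With `scheduler.reader.schedule-topics = []` in application.conf this
  // throws a ConfigReaderException: "Empty collection found when trying to
  // convert to scala.collection.immutable.List.", the same failure as above.
  val config = ConfigSource.default.loadOrThrow[AppConfig]
  println(config)
}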

I've done some digging and can't find any information about this kind of error. It seems to have started only after we moved to the multi-arch build and switched from a JDK base image to a JRE one.

I've tried reverting from the JRE image back to the JDK image, and still no luck. That image also adds roughly 150 MB.

Outcome

I think for now we should revert to a plain Alpine base image and install Java ourselves. The Temurin image seems unstable, and its Alpine variant doesn't support multi-arch yet either, so we can't simply switch to that. This can be revisited at a later date.
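
A rough sketch of what that revert could look like, again assuming sbt-native-packager (the Alpine tag, Java version and apk package name are assumptions, and note that a plain Alpine base also needs bash for the generated start script):

// build.sbt (sketch)
import com.typesafe.sbt.packager.docker.Cmd

dockerBaseImage := "alpine:3.16" // illustrative tag

// Install bash (needed by the generated start script) and a JRE,
// inserted right after each FROM line so the ENTRYPOINT stays last.
// In the default multi-stage Dockerfile this runs in both stages.
dockerCommands := dockerCommands.value.flatMap {
  case from @ Cmd("FROM", _*) =>
    Seq(from, Cmd("RUN", "apk add --no-cache bash openjdk11-jre-headless"))
  case other => Seq(other)
}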