gocd-contrib/docker-swarm-elastic-agent-plugin

Plugin settings: Agent auto-register Timeout (in minutes) should be (in seconds)

emmeblm opened this issue · 4 comments

In the plugin settings view there is a field to configure the agent auto-register timeout.
Its label says the number in the field should be entered in minutes, but the plugin actually treats the number as seconds.

ketan commented

Hi @emmeblm

From this code it seems like we are converting to minutes.
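
For reference, a minimal sketch of what such a minutes-based conversion might look like, assuming Joda-Time's `Period` for the duration type; the class and field names here are illustrative, not copied from the plugin:

```java
import org.joda.time.Period;

public class PluginSettings {
    // Raw value from the settings form; the label says minutes.
    private String autoRegisterTimeout = "10";

    public Period getAutoRegisterPeriod() {
        // withMinutes() makes the unit explicit. If this were
        // withSeconds() instead (or the raw int were compared against
        // seconds elsewhere), the reported behaviour would follow.
        return new Period().withMinutes(Integer.parseInt(autoRegisterTimeout));
    }
}
```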

The code to terminate instances that do not register within the timeout period is here along with a bunch of tests which seem to do the right thing. Do you have a set of logs that demonstrate that a container is being terminated before it is expected to be?
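
The termination logic in question would look roughly like the following; this is a hedged sketch with illustrative interfaces, not the plugin's actual API:

```java
import java.util.List;
import org.joda.time.DateTime;
import org.joda.time.Period;

public class UnregisteredInstanceTerminator {
    // Illustrative stand-ins for the plugin's container and Docker types.
    interface ContainerInfo { DateTime createdAt(); String id(); }
    interface DockerClient { void removeContainer(String id); }

    public void terminateExpired(List<ContainerInfo> unregistered,
                                 Period timeout, DockerClient docker) {
        for (ContainerInfo container : unregistered) {
            DateTime deadline = container.createdAt().plus(timeout);
            // If `timeout` was built with withMinutes(), containers get
            // the advertised grace period; if it were seconds, they would
            // be removed 60x too early, matching the reported symptom.
            if (deadline.isBefore(DateTime.now())) {
                docker.removeContainer(container.id());
            }
        }
    }
}
```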

emmeblm commented

Hi @ketan

Let me explain what we are seeing so we can work out whether we are using (or expecting) the correct behaviour from this setting.

We have a public image for the elastic profile that takes several minutes to pull from the repository. When an agent is created for the first time, the image starts to download. We first configured the auto-register timeout setting to 5, and we saw new services being created every five seconds without replicating properly. This resulted in 33 pulls of the image before one of them finished downloading. When we changed the setting to 50, we saw new services being created every 50 seconds. We changed it several times, and each time the period the plugin waits before creating a new service corresponded to this setting interpreted in seconds.
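
For what it's worth, the numbers line up with a seconds interpretation. Here is a back-of-the-envelope check, with an assumed pull time since the exact duration was not measured:

```java
public class TimeoutMath {
    public static void main(String[] args) {
        int timeoutSetting = 5;         // configured value; label says minutes
        int pullDurationSeconds = 165;  // assumed image pull time (~2.75 min)
        // If the setting is read as seconds, a new service is started every
        // 5 seconds while the first image is still downloading:
        System.out.println(pullDurationSeconds / timeoutSetting); // 33
    }
}
```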

Is anything wrong with our expectation? Is this the proper behaviour? Or should we be changing some other setting instead? We don't want the plugin to pull the same image that many times for the same job.

ketan commented

Thank you for the explanation. Let me double-check the behavior and get back to you.

@emmeblm We were not able to reproduce the issue, so I am closing it for now. Please feel free to reopen it if you are still facing the problem.