Killing and restarting break pre-initialized containers.
I have two use cases for restarting containers. One, I use watchtower to automatically update apps that developers are working on. Two, I need to restart apps periodically to refresh data.
The issue is that when a container is killed, a new container is started; however, this new container doesn't appear to be recognized as a seat, and ShinyProxy doesn't notice that it needs to start a new seat. After containers are killed, ShinyProxy reverts to launching a new container only once a user connects.
A simple Docker command to reproduce the issue (it finds all default-named ShinyProxy containers and kills them):
docker container kill $( docker ps -f name=sp-ser -q )
Thanks for opening this issue. We are currently looking for ways to restart the underlying containers. On Kubernetes this can be done by updating the configuration, which will launch a new ShinyProxy, which then takes care of restarting the containers. We are aware that using Kubernetes is not possible for all deployments, and we will provide a way to restart the containers on Docker and Docker Swarm.
The containers seem to restart fine; they just don't get recognized by ShinyProxy, and ShinyProxy doesn't launch a new seat. I feel it may be a slightly broader issue, e.g. what happens when a container fails for other reasons (an error in the R code, a lost DB connection, etc.)? ShinyProxy should at least realize at some point that it has to launch a container from scratch and then reload a seat.
This functionality is already available to some degree in Docker; e.g. it's possible to configure health checks for the containers, so that the web servers can be pinged periodically to determine whether they are up.
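For reference, a rough sketch of what such a health check looks like with the Docker CLI; the port (3838, the usual Shiny port), the image name my-shiny-app, and the assumption that curl is available inside the image are placeholders, not anything specific to ShinyProxy:
# start the app with a health check that pings the web server inside the container
docker run -d \
  --health-cmd='curl -fs http://localhost:3838/ || exit 1' \
  --health-interval=30s \
  --health-timeout=5s \
  --health-retries=3 \
  my-shiny-app
The container then reports a healthy/unhealthy status in docker ps, which an external watcher can act on.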
We just released ShinyProxy 3.1.1, which includes an API endpoint to restart the (physical) containers used by pre-initialization. This endpoint can only be called by admin users. See: https://shinyproxy.io/downloads/swagger/#/ShinyProxy/stopDelegateProxies
Note that this is an asynchronous endpoint: it responds immediately and replaces the containers in the background.
For example, to restart all containers:
curl -X DELETE -u jack:password 'https://k8s.h.ledfan.be/none/admin/delegate-proxy'
or, to restart only the containers of the 01_hello spec:
curl -X DELETE -u jack:password 'http://localhost:8080/admin/delegate-proxy?specId=01_hello'
If you wish, you can enable the Swagger UI by adding the following configuration:
springdoc:
api-docs:
enabled: true
swagger-ui:
enabled: true
Next, you can go to http://localhost:8080/swagger-ui/index.html, scroll to the /admin/delegate-proxy endpoint, click on Try it out and then click Execute.
We are planning on integrating this endpoint into the admin panel. The admin panel will then also list all physical containers and how they are being used by users.
Since I believe this should fix your problem, I'm going to close this issue. Nevertheless, feedback is welcome here or in a new issue.
Hi @LEDfan. Thanks a lot for this!
I'm wondering whether it's possible to trigger this programmatically from the machine ShinyProxy is running on, and how that works with authenticating as an admin? (I'm using an Auth0 OpenID setup.)
Example use case: I have a scheduled task running on the server that updates the data for a ShinyProxy app. Once this process is done, I want to trigger the stop-delegate-proxy API call with the specId of the app as part of the scheduled task, without my input. Is this possible?
@LEDfan, @PaulC91 my use case is essentially the same: basically, refresh the containers nightly. If I understand correctly, this could be done via a crontab entry, run either from a shell script at launch or in another Docker container (see the sketch after this comment).
I see you have used localhost. Will this work with internal-networking: true, or do we need to get the network ID?
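For my own notes, a rough sketch of such a crontab entry; the credentials (jack:password), the host, and the 01_hello spec are just the placeholders from the example above, and this assumes the admin user can authenticate with basic auth, which may not hold for an OpenID setup:
# refresh the 01_hello containers every night at 03:00
0 3 * * * curl -fsS -X DELETE -u jack:password 'http://localhost:8080/admin/delegate-proxy?specId=01_hello' >> /var/log/shinyproxy-refresh.log 2>&1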
I've gotten around to testing this. The changes to pre-initialized containers are substantial, and ShinyProxy handles killed/dead containers much more gracefully for sure. However, the REST API doesn't address the need to programmatically restart seats very well, because the admin has to log in. There may be a way around that with tokens, I'm not sure, but we aren't using the admin panel at all. I feel like it would be easier just to restart the entire ShinyProxy stack after the data are processed (sketched below).
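Something along these lines, assuming ShinyProxy itself runs as a Docker container named shinyproxy (the name and orchestration will differ per setup):
# run after the nightly data refresh completes
docker restart shinyproxy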