Restarting pre-initialised containers

Hi, I’ve been testing out the new container pre-initialisation feature and it’s great!

In my use case, I often pass data files (a local SQLite or DuckDB file, for example) to an app via a mounted volume. These files can be updated several times per day with new data, but I noticed that containers which were already running before a data update do not pick up the latest data. And since the original pre-initialised container is always running, it never updates.
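For illustration, my setup is roughly equivalent to something like this (the image name and paths are just placeholders for my actual setup):

```
# Roughly how the app container sees the data (placeholder image name and
# paths); the host file is replaced with new data several times per day.
docker run -d \
  -v /srv/data/app.duckdb:/data/app.duckdb \
  my-shiny-app:latest
# A container that was already running when the host file was updated keeps
# serving whatever its app loaded at startup, so it never sees the new data.
```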

So is there a programmatic way to restart all pre-initialised containers related to a specific app after some sort of update takes place to ensure any new users visiting the app will see the most up-to-date version?

Thanks!

Hi, good to know you like the feature!

At the time of writing, if you want the most seamless behaviour, we advise using Kubernetes with the ShinyProxy Operator (https://github.com/openanalytics/shinyproxy-operator). When the spec of an app changes, the operator starts a new ShinyProxy instance; ShinyProxy is then aware that the spec of a pre-initialised app has changed and spins up new containers to replace the old ones (without affecting existing users/connections).
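Very roughly, that flow looks like this (the file name, namespace and image tag below are only placeholders; the exact custom resource layout is described in the operator README):

```
# Edit the ShinyProxy custom resource that the operator watches, e.g. bump
# the image tag of one of the apps in its spec (placeholder file name).
vim shinyproxy.yaml

# Re-apply it; the operator detects the changed spec, starts a new ShinyProxy
# instance, and the pre-initialised containers are replaced without
# interrupting existing sessions.
kubectl apply -f shinyproxy.yaml

# Watch the replacement pods come up (placeholder namespace).
kubectl get pods -n shinyproxy -w
```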

We understand that using Kubernetes is not always an option. If you are using plain Docker with ShinyProxy, the only way to force an update of the apps is to restart ShinyProxy. Assuming you are not using Redis, you will also have to remove the existing running containers yourself. Note that this will cause downtime and break all existing connections.
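As a rough sketch of that procedure (the container and image names are placeholders, and filtering by image assumes all containers of the app were started from the same image):

```
# Remove the containers that ShinyProxy started for the app (placeholder
# image name); this breaks any sessions still using them.
docker rm -f $(docker ps -q --filter ancestor=my-shiny-app:latest)

# Restart ShinyProxy itself (placeholder container name); on startup it
# pre-initialises fresh containers again.
docker restart shinyproxy
```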

We are planning to explore a way to trigger an update of the existing pre-initialised containers. In principle all the code is there; we just need to add a trigger.

Hey, thanks for your response.

Yes, I’m running ShinyProxy with plain Docker and noticed that relaunching the ShinyProxy container launches new versions of the pre-initialised containers, but the old versions continue to run and need to be stopped manually.

I was wondering: if I identified the container ID(s) of any pre-initialised containers associated with an app and restarted them myself, would ShinyProxy still recognise them as pre-initialised containers of that app and continue to send new users to them?

Otherwise, I think an option to trigger an update of a pre-initialised app would be very welcome!

Watchtower is good for automatically updating containers.

Actually, I am running into similar issues with restarting containers (Docker Swarm containers/services). We need our apps to restart nightly, and it really isn’t clear what the best practices are here. There are several options that come to mind, and I have done some tests.

The options generally come down to terminating the process in some way. However, this is where it becomes a little convoluted. For example, if I manually remove the container in our Docker UI (Portainer), Docker restarts the container, but ShinyProxy won’t connect to that seat and instead launches a new container each time I connect (it shows the launching spinner).

If I kill the container instead, the container restarts and ShinyProxy then uses it as one of the seats, as it is supposed to. So I presume that is the better way to get the container relaunched, though it perplexes me that ShinyProxy neither uses the restarted container nor launches a new seat when I remove the container.

So the question is how the process should be killed. This is where I hesitate. Ideally this would be configurable in the application.yml. Other options I can think of are using a health-check status in the Dockerfile, or having the app be aware of the time and shut itself down. Maybe running a cron job in yet another container that sends kill signals (rough sketch below)?
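For reference, the cron idea I have in mind is roughly the following (the image name is a placeholder, and whether ShinyProxy reattaches to the restarted containers afterwards is exactly the part I’m unsure about):

```
# crontab entry in a small sidecar container with access to the Docker
# socket: kill the app containers every night at 03:00 so Docker/Swarm
# restarts them (placeholder image name).
0 3 * * * docker kill $(docker ps -q --filter ancestor=my-shiny-app:latest)
```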

So, using docker container kill directly isn’t feasible either: the container restarts, but ShinyProxy doesn’t seem to recognise the new one and launches a new container once the user connects.

Hi all, we just released ShinyProxy 3.1.1 that includes an API endpoint to restart the (physical) containers used by pre-initialization. See https://github.com/openanalytics/shinyproxy/issues/502#issuecomment-2180529602 for more information.
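Calling it looks roughly like the snippet below; the exact path, method and required permissions are described in the linked issue, so the URL here is only a placeholder:

```
# Hypothetical call to the restart endpoint introduced in ShinyProxy 3.1.1;
# replace the placeholder path with the one documented in issue #502.
curl -X POST -u admin:password \
  https://shinyproxy.example.com/<endpoint-from-issue-502>
```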