ShinyProxy: Swarm mode vs single host difference


If I’m deploying ShinyProxy on a Swarm host, what exact differences would there be versus deploying on just a single Docker host?

I understand Swarm can connect multiple hosts and run containers across them, which I think is a good way to scale up our application. However, I don’t think ShinyProxy uses Docker Stack; as a result, we won’t get features like continuous integration when we update or roll back applications. Is Swarm load balancing the containers in this case?

The other concern is that ShinyProxy always spins up a new container for every new user that accesses our application. If we’re using Swarm, what happens if a host fails? I know that services deployed by Docker Stack will be rescheduled on healthy nodes. But does ShinyProxy automatically unregister the failed containers (the user-to-port-to-container mapping) and spawn new containers for those users?

Also, I think ShinyProxy uses a database, so to run ShinyProxy as a service on a Swarm cluster, we need a dedicated machine that has access to both the database and the Docker daemon. It would be great if instructions on how to configure ShinyProxy with a Swarm cluster could be included in the documentation.

In general, I guess my question is what the best practice is for deploying ShinyProxy in a production environment, so that we can easily add/upgrade/roll back applications, scale up the infrastructure, and have basic load balancing and high availability.

There are two posts in this forum that touch on this topic:

Load Balancing of Shiny Apps
Running ShinyProxy in Kubernetes



Hi @Keqiang_Li,

The idea behind running ShinyProxy on a docker swarm is indeed to enable:

  1. horizontal scaling, by adding more nodes to the swarm
  2. load balancing, via the routing mesh
  3. failover and HA, by having the service re-allocate nodes automatically should a node fail

The current release of shinyproxy supports swarm mode, though we have experienced some stability issues with the routing mesh. This is being investigated at the moment.
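For reference, switching ShinyProxy to the swarm backend is a configuration change rather than a code change. A minimal `application.yml` sketch might look like the following (the exact property names and the demo image are assumptions from memory and may differ between ShinyProxy versions, so check the docs for your release):

```yaml
proxy:
  port: 8080
  # tell ShinyProxy to schedule app containers as Swarm services
  # instead of plain containers on the local Docker daemon
  container-backend: docker-swarm
  docker:
    # let ShinyProxy reach app containers over an overlay network
    internal-networking: true
  specs:
    - id: demo-app
      container-image: openanalytics/shinyproxy-demo
```

With this backend, each user session becomes a Swarm service, so the routing mesh and node placement are handled by Swarm rather than by the local daemon.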

Regarding continuous integration and ‘seamless updating’ of apps, shinyproxy currently has no specific features for that.
It is an interesting topic, though, and we have had some internal talks about it already.
If you have any thoughts or ideas on this, please feel free to share them; I’d be happy to discuss them with you.



Hi @fmichielssen,

Thanks a lot for the explanation.

Any suggestions on quick recovery if ShinyProxy itself crashes, or if the host that ShinyProxy runs on fails?

Currently, I’m planning a setup with two physical Swarm nodes, one manager and one worker (though it might be better to have at least two manager nodes), with ShinyProxy running on the manager node (or on a third host). As I remember, when ShinyProxy stops, all its containers are killed, so in that case I need to restart ShinyProxy. Is there a better way to recover? I’m wondering if we can use Docker Stack to replicate ShinyProxy and have it run on at least two physical nodes at the same time. I guess there would be some problems keeping track of the routing.
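To make the setup concrete, here is a sketch of what I have in mind as a stack file (image name and port are assumptions). A single replica with a restart policy would at least make Swarm restart ShinyProxy automatically after a crash, though as noted, the running app containers and routing table would still be lost:

```yaml
version: "3.3"
services:
  shinyproxy:
    image: openanalytics/shinyproxy
    ports:
      - "8080:8080"
    deploy:
      # only one instance, since ShinyProxy keeps routing state in memory
      replicas: 1
      placement:
        constraints:
          # run where the Docker API for scheduling is available
          - node.role == manager
      restart_policy:
        # restart the service after any failure, including host crash
        condition: any
```

Going to `replicas: 2` here is exactly where the routing problem I mentioned would show up, since the two instances would not share their user-to-container mappings.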



I would be interested in a similar question regarding a Kubernetes-based deployment. We would like to ensure that if the node running the ShinyProxy service fails, Kubernetes can re-spawn it in such a way that running containers on other nodes are not affected.

Is this possible currently in some form? If not, would you consider contributions in this direction?
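As a starting point, I imagine something like a single-replica Deployment, so Kubernetes reschedules the ShinyProxy pod onto a healthy node when its node fails (image name and port are assumptions; as discussed below, the in-memory proxy table means existing user sessions would still be lost on restart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shinyproxy
spec:
  # one replica: ShinyProxy holds its routing table in memory,
  # so multiple replicas would not share session state
  replicas: 1
  selector:
    matchLabels:
      app: shinyproxy
  template:
    metadata:
      labels:
        app: shinyproxy
    spec:
      containers:
        - name: shinyproxy
          image: openanalytics/shinyproxy
          ports:
            - containerPort: 8080
```

The open question is whether ShinyProxy could re-attach to, or at least cleanly ignore, app containers that survived on other nodes.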




Hi @Andrew_Sali,

Failover of the shinyproxy process is not supported. Contributions are always appreciated, but this one may involve significant effort. The proxy table is currently maintained in-memory, along with some other state that will have to be shared in a failover scenario.

Yes, I imagine it would be a significant undertaking. However, I think this will be an absolutely necessary step if people are to rely on shinyproxy in production settings requiring high availability.

Given that shinyproxy is shaping up to be such a nice product, it would be a real loss not to be able to use it under demanding production settings.

If this becomes at some point part of the roadmap, I would be really curious to hear about it!
