I realize this is a long shot, but is there any way to run ShinyProxy in a Docker container and have multiple Shiny instances load-balanced inside that same container, as opposed to one Docker container per Shiny instance? I think this goes against ShinyProxy's model, but I figured it was worth asking at least.
The upcoming release, 1.0.3, will support running ShinyProxy inside a container via this PR:
This is advantageous in a swarm setup, because then containers can be routed to via the swarm’s ingress network, and ShinyProxy will not have to allocate TCP ports for each session.
However, it will not spawn multiple R processes inside the container; it will still use the docker API to spawn multiple containers. That is indeed a design choice, and not easy to move away from.
I’ve been looking at this new feature for a long time. It’s great! Do you guys have an ETA for the 1.0.3 release?
For completeness: this was released as part of ShinyProxy 1.1.0 and is documented at https://www.shinyproxy.io/shinyproxy-containers/, with example configurations available in a dedicated GitHub repository: https://github.com/openanalytics/shinyproxy-config-examples
Great! I’m testing out the new version. I saw this on the shinyproxy.io page:
If you have multiple ShinyProxy containers and want to put a new configuration online, you can perform a ‘rolling update’ without causing any downtime for your users.
This would be a great improvement for updating shinyproxy configuration.
Can you explain how this works? Does this mean I need to specify port-range-start and port-range-max for each ShinyProxy instance? If so, how should I deploy multiple instances? Should I just run them as separate containers? And how should the traffic be routed?
By the way, I tried to configure a Docker stack, but it looks like a user may be connected to different ShinyProxy instances, so the page won’t load completely. The compose file I’m using is below:
```yaml
version: '3'
services:
  shinyproxy:
    image: myrepo/shinyproxy
    ports:
      - "8080:8080"
    networks:
      - shinyproxy_net
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
      replicas: 2
      restart_policy:
        condition: on-failure
networks:
  shinyproxy_net:
    external: true
```
In application.yml, the Shiny apps use the same network as ShinyProxy, and internal networking is enabled.
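As a sketch, the matching application.yml settings could look something like the following (the app id and image name are placeholders; the network name matches the compose stack above):

```yaml
proxy:
  docker:
    # let ShinyProxy talk to app containers over a shared Docker network
    # instead of publishing a port per container
    internal-networking: true
  specs:
    - id: my-app                          # placeholder app id
      container-image: myrepo/my-shiny-app  # placeholder image
      container-network: shinyproxy_net     # same network as the ShinyProxy service
```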
Thanks a lot for this great feature.
The idea behind that setup is indeed to run multiple ShinyProxy instances in a Docker swarm or Kubernetes cluster. If your ShinyProxy instance is running inside the swarm/cluster, you don’t have to expose any ports on the Shiny nodes, so you can omit the port-range-start and port-range-max settings.
To route traffic to a ShinyProxy instance, a load balancer is needed. I think HAProxy can be used with Docker (but I haven’t tried this myself). Kubernetes has built-in support via ‘LoadBalancer’ type services.
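A minimal haproxy.cfg along those lines could look roughly like this (the instance addresses are assumptions); it uses cookie-based sticky sessions so that each user keeps hitting the same ShinyProxy instance:

```
frontend http-in
    bind *:80
    default_backend shinyproxy

backend shinyproxy
    balance roundrobin
    # insert a cookie so a given user sticks to one ShinyProxy instance
    cookie SRV insert indirect nocache
    server sp1 10.0.0.11:8080 check cookie sp1
    server sp2 10.0.0.12:8080 check cookie sp2
```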
Please note that while this setup allows you to update the ShinyProxy service without downtime, it will still interrupt existing user sessions (i.e. user gets a grey screen and has to refresh). This is because the websocket channel is closed, and performing failover on a websocket channel requires a more complex brokering architecture.
Thanks for the reply. I checked out HAProxy. I feel like ShinyProxy itself already acts like the load balancer in HAProxy, especially when it runs on Swarm, since Swarm load-balances. Also, if we use HAProxy to balance traffic across different ShinyProxy servers, there is still the problem of sticky sessions, because we don’t want the same user to be routed to different ShinyProxy instances (it seems HAProxy has a solution for that, though). As a result, since we still can’t avoid interrupting current users’ sessions, I think a simple solution is just to start a new ShinyProxy container on a different Docker network and forward HTTP traffic to this new instance. Or a floating-IP solution. However, these aren’t really different from just restarting ShinyProxy with the new configuration.
I’m sorry, I thought about it a little more, and HAProxy does make sense. ShinyProxy plays the role of a load balancer, but putting it behind HAProxy makes zero downtime possible (individual users will still experience a grey screen). I’m referencing this image in a DigitalOcean tutorial. Replacing the load balancers in that diagram with ShinyProxy instances will do the trick: we can update the passive ShinyProxy, test it, and make it the active one before updating the other instance.
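The active/passive idea can be sketched in HAProxy with a backup server: traffic normally goes only to the active instance, and the passive one (marked `backup`) receives traffic only if the active instance is down, which lets you update and test it first. Addresses are placeholders:

```
backend shinyproxy
    # sp1 is the active instance; sp2 only receives traffic if sp1 is down
    server sp1 10.0.0.11:8080 check
    server sp2 10.0.0.12:8080 check backup
```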
I will post back if I make any progress actually deploying HAProxy.