ShinyProxy Swarm service breaks after restart

First of all, thank you for your effort and dedication!

I’m trying to set up a CentOS-based OpenStack cluster running ShinyProxy as a Docker Swarm service. A fresh installation (done with Terraform+Ansible) works fine, but when I interfere with ShinyProxy in any way (running docker service update, restarting the container, removing the service and registering it again), it crashes at startup with an exception:

Backend is not a Docker Swarm

It’s not even possible to set it up again with the same Ansible script that was used to install it.
I believe this is specific to ShinyProxy, since Jenkins installed in the same way “wakes up” easily after a restart.
Is this a bug or a configuration issue on my side?

Side question (closely related to my problem): Have you established an elegant, out-of-the-box way to update apps bound to ShinyProxy running in a swarm (or are you planning to do so)? It’s easy to perform updates when the same tag is used across all versions/images of the app, but in a production environment it would be nice to have a more sophisticated mechanism.
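
For reference, the app binding in my application.yml looks roughly like this (a sketch; the app id, image name and tag are placeholders, and I may not have every property name exactly right):

proxy:
  specs:
    - id: my-shiny-app
      display-name: My Shiny App
      # re-pulling the same tag after a docker service update is easy,
      # but a version bump means editing this file and redeploying shinyproxy
      container-image: registry.example.com/my-shiny-app:latest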

Thanks!

I had the same issue after a swarm patch / service update run through Ansible.

I didn’t figure out the root cause, but managed to get shinyproxy working again by forcing it to run on manager nodes only (by adding

  placement:
    constraints:
      - node.role == manager

to the docker-compose.yml file, under the deploy section), and by changing

docker:
  url: http://manager-url:2375
  internal-networking: true

to

docker:
  internal-networking: true

in the application.yml file (i.e. letting it fall back to its default; http://localhost:2375 is what I read somewhere, although it crashes when I try to set that explicitly).
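
Putting it together, the relevant parts looked roughly like this afterwards (a sketch from memory, so names, image and version are placeholders):

In docker-compose.yml:

services:
  shinyproxy:
    image: openanalytics/shinyproxy:latest
    deploy:
      placement:
        constraints:
          - node.role == manager

In application.yml:

proxy:
  container-backend: docker-swarm
  docker:
    internal-networking: true
    # no explicit url, so shinyproxy falls back to its default Docker API endpoint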

Hello Erik,

did you find out why it happened? I had the same issue. I currently have one manager and two workers, and this setup makes little sense for me if I can’t run ShinyProxy on a worker node :-(. Please let me know if anything worked for you here 🙂

Hi Markus,

At least we have a theory! I’m going from memory now, so it might not make 100% sense, but I believe the swarm patch we did blocked communication on port 2375, which is where the unsecured Docker API listens. The shinyproxy service was presumably trying to use it to talk to a manager node. I guess switching to 2377 and making the appropriate adjustments would have worked too, but we have three managers, so this was easier.

I assumed disabling 2375 was done by us internally, so it seems unlikely this would help you?
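
For completeness: if the plain-text API on 2375 stays blocked on your side, I believe shinyproxy can also talk to a TLS-protected Docker API (conventionally on 2376) through its cert-path setting. Something roughly like this, assuming the daemon is set up for TLS and the client certificates live in the given directory (I have not tested this exact config):

proxy:
  docker:
    url: https://manager-url:2376
    cert-path: /opt/shinyproxy/certs
    internal-networking: true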

Hi, @erik.thornblad, @MarkusL1987 and @Cosi.

Any news about this?

I have the same issue when I add a server as a worker. When I use just one manager, shinyproxy works well, but if a worker joins the cluster, the container fails to start.

Thank you.

The problem with shinyproxy is that it is not stateless. In practice this means that your web server loses track of which shinyproxy instance it is connected to. This is at least the case if you use it in combination with Keycloak.

To solve that you need “sticky sessions”. NGINX Plus offers that. I tried it with Traefik, but the Docker image I used seemed to be compromised: it was seriously making GET requests to a ton of porn websites, so please check your logs if you want to try Traefik.
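
If you still want to try sticky sessions with a reverse proxy in front of ShinyProxy, in Traefik v2 this boils down to enabling a sticky cookie on the service. A rough sketch of the docker-compose labels (the router/service names, host rule and cookie name are placeholders, and this assumes Traefik is already running as the swarm's reverse proxy; 8080 is ShinyProxy's default port):

services:
  shinyproxy:
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.shinyproxy.rule=Host(`shiny.example.com`)"
        - "traefik.http.services.shinyproxy.loadbalancer.server.port=8080"
        # sticky cookie so a browser keeps hitting the same shinyproxy replica
        - "traefik.http.services.shinyproxy.loadbalancer.sticky.cookie=true"
        - "traefik.http.services.shinyproxy.loadbalancer.sticky.cookie.name=sp_sticky"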