Has anyone set up containerised ShinyProxy in Rancher with Cattle orchestration?


#1

Hi,

If you have, I’d really like to see the network configuration in the docker-compose.yml and rancher-compose.yml of the running service, as well as the relevant section of application.yml. The idea is to run all in a single Rancher host.
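To make the question concrete, here is roughly the shape I have in mind — an untested sketch, with the service name and settings as placeholders:

```yaml
# docker-compose.yml (sketch; untested)
version: '2'
services:
  shinyproxy:
    image: openanalytics/shinyproxy:latest
    ports:
      - "8080:8080"
    volumes:
      # Mount the host's Docker socket so ShinyProxy can launch app containers
      - /var/run/docker.sock:/var/run/docker.sock
```

What I cannot figure out is the network-related part of this file, the matching rancher-compose.yml, and the corresponding application.yml entries.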

By mapping the host’s Docker unix socket into the container, I can get ShinyProxy to launch the corresponding app container, but:

  • As the app container is not launched from inside Rancher, I cannot manipulate its “labels” section, and so far I have failed to get ShinyProxy to talk to the Shiny app container. If only there were a docker-labels: option for the app…

  • Attempts to use “container:” networking (see here) also fail, because exposing a port is incompatible with that networking mode, and ShinyProxy forces exposure of port 3838 (or the configured one) even if the image was built without an EXPOSE line.
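For context, my app definitions look roughly like this (ShinyProxy 1.x-style keys; the app name and image are placeholders):

```yaml
shiny:
  proxy:
    port: 8080
    docker:
      # Talking to the host's Docker daemon through the mapped socket
      url: unix:///var/run/docker.sock
  apps:
  - name: example_app                 # placeholder
    docker-image: myorg/example-app   # placeholder
    docker-network: bridge            # the setting I cannot get right
```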

If it is not possible, or too complex, then I guess I’ll have to convince the sysadmins to allow using Kubernetes orchestration internally.

Thanks,
–c


#2

Hey, I created an account in the hopes of saving someone else…

We were struggling with this for many hours, and finally got the openanalytics/shinyproxy:latest image up in Rancher, with a caveat.

  1. Spin up the image on the Managed network, with /var/run/docker.sock:/var/run/docker.sock mounted and port 8080:8080 mapped
  2. Execute a shell in the proxy container and change application.yml so that every “docker-network” tag points to “host”
  3. Restart the container
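Step 2 is a one-liner once you have the shell; here’s a demonstration on a throwaway file (inside the container, run the sed line against the real application.yml — the path varies by image):

```shell
# Throwaway copy standing in for the real application.yml
printf '  docker-network: my-rancher-net\n' > application.yml

# Step 2: point every "docker-network" tag at the host network
sed -i 's/^\( *docker-network:\).*/\1 host/' application.yml

cat application.yml   # ->   docker-network: host
```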

Now, you’ll still get a 500 error when you click an app. Why? Because ShinyProxy only uses cached Docker images ( https://github.com/openanalytics/shinyproxy-config-examples/issues/2 )… so until ShinyProxy actively performs a docker pull before it attempts to spin up a Shiny app, we’re out of luck. You COULD manually go onto each host and pull the image down, but the next time your janitor rolls around to clean up unused images you might be SOL. It seems ShinyProxy isn’t built to run standalone, which is a real shame.
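If you do go the manual route, here’s a sketch that generates the pull commands from application.yml itself (assuming 1.x “docker-image:” keys; newer versions use “container-image:”) — pipe the output to sh on each host to execute it:

```shell
# Sample application.yml standing in for your real config
cat > application.yml <<'EOF'
shiny:
  apps:
  - name: demo
    docker-image: openanalytics/shinyproxy-demo
EOF

# Emit one "docker pull" per app image; pipe the output to sh to execute
grep -E '^ *docker-image:' application.yml | awk '{print "docker pull " $2}'
# -> docker pull openanalytics/shinyproxy-demo
```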


#3

Hi @SudoBrendan,

ShinyProxy does support pulling images on Kubernetes via the proxy.kubernetes.image-pull-policy field. See https://www.shinyproxy.io/configuration/#kubernetes
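In application.yml that is a one-line setting, e.g.:

```yaml
proxy:
  kubernetes:
    image-pull-policy: Always   # or IfNotPresent / Never
```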

There is also an example configuration for running ShinyProxy containerized on Kubernetes at https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes

If you can provide detailed feedback on specific differences between this and deployment on Rancher, we can work together to update our documentation and/or support Rancher.

Hope this helps!

Best,
Tobias


#4

Hey @tverbeke, I’d like to apologize for the unwarranted snarkiness in my last post…

Unfortunately, I don’t have any experience with Kubernetes, so I don’t know how much help I could be in comparing Rancher and Kubernetes side by side (anyone else on here want to weigh in?). All I know is that they’re different platforms for orchestrating Docker containers. I’m also not familiar with the ShinyProxy source, so I don’t know which differences would be useful for adding Rancher support and which wouldn’t.

However, I think that mainstreaming proxy.kubernetes.image-pull-policy would likely be useful for ALL versions of ShinyProxy, not only when running on Kubernetes. Adding it as a top-level server configuration option (pull all hosted images when ShinyProxy starts up) would make the image truly standalone on any Docker-running platform and prevent a lot of undesired 500s. It would also ease one of our major use cases: CI/CD on ShinyProxy (all we need to do is push to ‘latest’ and restart our dev ShinyProxy instance; prod would then use specifically versioned Docker images). This feature would also benefit from old-image cleanup on the host. Is that something that would be manageable, or not really?