Concurrent ShinyProxy apps are really slow compared to individual apps running in concurrent Docker containers without ShinyProxy

Dear all,

First of all, I’d like to thank you for the nice work OpenAnalytics has done in developing ShinyProxy. As it is really easy to set up and install, I was able to get it running rapidly on our production server (32 cores) for our roughly 15 users.

However, it appears that when several users are using it at the same time, it becomes extremely slow.

We have run several tests:

  • Run the app in concurrent separate R sessions -> normal speed
  • Run the app in concurrent separate containerized R sessions (whether or not limiting the number of cores to 2) -> normal speed
  • Run concurrent apps with ShinyProxy -> slow
  • Run the app (concurrently or not) with only 2 cores with ShinyProxy -> slow

It seems there might be an issue at the level of our Docker or ShinyProxy server configuration.

We would thus greatly appreciate any idea of what is happening and how to solve it.

With kind regards,

Sylvain Brohee

Hello again,

I did not find any elegant solution but I kind of found a workaround.

After having spent more than a day struggling with what appeared to be inconsistencies in Docker, which used all 32 cores of my server for every single app, I copied the following file into each of the Docker images I am using (and made it executable).

#!/bin/bash
# Pick a random core number between 0 and 31 (the server has 32 cores)
var=$(shuf -i 0-31 -n 1)
echo "$var"
# Pin R (and thus the Shiny app) to that single core
taskset -c "$var" R -e "shiny::runApp('/root/prod/myapp')"

This small script picks a random number between 0 and 31 (my server has 32 cores) and forces R to use only that processor.
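For anyone adapting this, here is a quick sanity check of the random core pick before wiring it into taskset (a sketch; the 0-31 range matches the 32-core server described above):

```shell
#!/bin/bash
# Sketch: verify that shuf always yields a core number in the valid range.
# On a 32-core machine the valid taskset core IDs are 0 through 31.
var=$(shuf -i 0-31 -n 1)
if [ "$var" -ge 0 ] && [ "$var" -le 31 ]; then
  echo "core $var is in range"
else
  echo "core $var is OUT of range"
fi
```

If the script runs inside the container, the picked core can also be confirmed afterwards with `taskset -cp <pid>` on the R process.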

The command in the application.yml file is thus

container-cmd: ["./"]

I am pretty sure there are other solutions but as I am not a system administrator, this is the only one I have found at the moment. If any of you have a better idea, don’t hesitate to share.

Kind regards,



My 2 cents: would it be possible to make the --cpus option of docker run available from the application.yml file? That would solve everything without my awful workaround.

Hi @Sylvain_Brohee,

Thanks for your feedback. This pull request might be of interest to you:
We hope to be able to merge it soon.

Hi @fmichielssen,

I am afraid this pull request will only be useful for Kubernetes users. I don’t think it applies to my case, as my ShinyProxy is not running inside a Docker container itself.
I think this answer from Tobias is more related to my issue.
Thanks for your feedback,

Hello @Sylvain_Brohee,

Thanks for sharing the feedback and workaround. We will add the --cpu support in an upcoming release.



Thanks for your answer and thanks again for all the effort you put into developing and supporting ShinyProxy.


You’re welcome, @Sylvain_Brohee

ShinyProxy 2.2.2 now has container-memory-request, container-memory-limit and container-cpu-limit fields for the Docker back-end.

Hope this helps!
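For anyone looking for a concrete illustration, here is a sketch of how these fields might sit inside an app definition in application.yml (the app id, image and values below are placeholders, not from this thread):

```yaml
proxy:
  specs:
    - id: myapp                       # placeholder app id
      container-image: myorg/myapp    # placeholder image name
      container-memory-request: 200M  # soft memory reservation
      container-memory-limit: 500M    # hard memory cap
      container-cpu-limit: 2          # cap the container at 2 CPUs
```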


and container-memory-request, container-memory-limit, container-cpu-request and container-cpu-limit fields for the Kubernetes back-end.


Hi Tobias,

Thank you for the update. I will test that as soon as possible (my awful hack works quite well so it is not THE priority at the moment).

Thanks again for your responsiveness and for ShinyProxy, which is a really great tool.


Hi Tobias,

For novices, could you give an example of what these fields might look like? E.g.,

  container-memory-request: 200M
  container-memory-limit: 500M
  container-cpu-limit: .1

I have a server with 8 CPUs and 32 GB of RAM, and an app that takes 2 minutes to run a simulation. However, if a second user also tries to run a simulation, it takes 12 minutes for each of them. I tried adding the above to the .yml but nothing changed. It would be great to have an example out there for someone like me!
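One thing worth checking (this is an assumption on my part, not a confirmed diagnosis): these fields need to sit inside the individual app entry under proxy.specs, not at the top level of the file. A sketch using the values above (the app id and image are placeholders):

```yaml
proxy:
  specs:
    - id: simulation-app               # placeholder app id
      container-image: myorg/sim-app   # placeholder image name
      container-memory-request: 200M
      container-memory-limit: 500M
      container-cpu-limit: .1
```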

I should add that I have authentication set to none because this is a public-facing website. I imagine that ShinyProxy is reading this as the same user accessing the app each time, which means it isn’t spawning a new container for each user. Is there a way around this?