Scaling Shiny apps and reducing RAM usage

I’d like to try to scale a Shiny app using ShinyProxy so that several hundred concurrent users can access it, while keeping RAM usage as low as possible.

My understanding of ShinyProxy is that it creates a new Docker container, with a new R process, for each visitor to a Shiny app URL. Given that a running Docker container plus Shiny app can be quite large (e.g., 200 MB+), I was wondering whether there is any way to allow a fixed number of users (e.g., 5) to share each Docker container, and to create a new container only when a sixth concurrent user visits the site. In other words, a new container would be created only once every running container is already serving 5 users. A more refined approach might be to let users join a running container until it reaches a maximum memory threshold, and to spin up a new container once that limit has been exceeded.
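For what it’s worth, even under ShinyProxy’s one-container-per-user model you can at least cap how much RAM each container may consume, which bounds the worst case. A minimal `application.yml` sketch of what I have in mind (the app id, image name, and exact property names here are my assumptions, so please check them against the ShinyProxy documentation for your version):

```yaml
proxy:
  port: 8080
  docker:
    url: http://localhost:2375
  specs:
    - id: my-shiny-app                      # hypothetical app id
      display-name: My Shiny App
      container-image: myorg/my-shiny-app   # hypothetical image name
      container-memory-limit: 256m          # cap each container's RAM (property name may vary by version)
      heartbeat-timeout: 60000              # reclaim containers from idle users after 60 s
```

A shorter heartbeat timeout also helps indirectly: containers abandoned by idle users get reclaimed sooner, so fewer sit around holding memory.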

I’ve looked at Docker swarm, and as far as I can tell it is tailored to scaling out across multiple machines (i.e., pooling resources from several machines rather than reducing RAM usage). However, it would be good to know if it could be adapted to reduce memory usage as suggested above.
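To make the swarm idea concrete, here is roughly the shape of what sharing containers between users would look like with swarm alone, outside ShinyProxy. This is only a sketch under assumptions: a swarm is already initialised, the image name `myorg/my-shiny-app` is a placeholder, and the app listens on port 3838:

```
# A single-node swarm is enough to experiment with this
docker swarm init

# Run a fixed pool of 3 app containers, each capped at 256 MB,
# with incoming connections spread across them
docker service create \
  --name shiny \
  --replicas 3 \
  --limit-memory 256m \
  --publish 3838:3838 \
  myorg/my-shiny-app
```

One caveat: Docker’s ingress routing mesh load-balances connections round-robin, but Shiny relies on websockets, so in practice you would likely need a sticky-session proxy in front of the service to keep each user pinned to one replica.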

I’d be grateful for any suggestions.

Thanks in advance.

Hi RichardT,

I am having a similar issue, and I was going to pose this question in a different thread. Although this does not answer your question, my main concern is this: since R is single-threaded, each Docker container spins up a single R process, and we are both aiming for over 100 concurrent users, wouldn’t the maximum number of concurrent users be bounded first by the number of available threads rather than by RAM?
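To illustrate why this worries me: one R process can hold many Shiny sessions, but any long computation in one session blocks every other session served by that same process. A toy app that demonstrates the effect (illustrative only; run it and open two browser tabs, then click the button in one):

```r
library(shiny)

ui <- fluidPage(
  actionButton("go", "Run slow task"),
  textOutput("done")
)

server <- function(input, output, session) {
  output$done <- renderText({
    req(input$go)
    Sys.sleep(10)  # while this runs, every other user on this R process is frozen
    "finished"
  })
}

shinyApp(ui, server)
```

So whether the practical ceiling is threads or RAM seems to depend on how compute-heavy each session is, not just on how many processes you can fit in memory.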