Autoscaling a ShinyProxy Kubernetes cluster

Running Shiny dashboards on a Kubernetes cluster seems like a superb way to ensure availability and scalability of your apps. ShinyProxy helps a great deal in achieving these two goals. However, I wonder whether it takes full advantage of what Kubernetes has to offer.

I am relatively new to Kubernetes, but as far as I understand, ShinyProxy creates the pods that run the actual R/Shiny apps without setting any resource request parameters, so the pods simply pick up whatever defaults the namespace provides. If this is correct, does it mean there is currently no way to specify resource requests (memory and CPU) for each Shiny app separately? One could certainly tune the namespace defaults, of course. But if the goal were to run two apps, one with huge resource demands and one with minimal ones, how would one optimize the resource requests? This is relevant, for example, when configuring a cluster autoscaler (which adds or removes nodes based on the pods' resource requests), to ensure there is always enough hardware available to serve all users. Autoscaling based on namespace defaults alone (rather than on per-app pod requests) would, as far as I can tell, result either in excess capacity or in insufficient resources.
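To make the contrast concrete, here is a minimal sketch (names and values are purely illustrative, and I am assuming a namespace called `shinyproxy`): a namespace-wide LimitRange gives every container the same default request, whereas what the cluster autoscaler really needs is a per-pod request that reflects each app's actual demands.

```yaml
# Namespace-wide defaults: every container that omits its own
# resources section gets these values, regardless of which app it runs.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-requests
  namespace: shinyproxy
spec:
  limits:
    - type: Container
      defaultRequest:       # default requests (what the autoscaler sees)
        cpu: 250m
        memory: 512Mi
      default:              # default limits
        cpu: "1"
        memory: 1Gi
---
# What one would actually want: per-app requests on the pod itself,
# so the cluster autoscaler can size the node pool correctly.
apiVersion: v1
kind: Pod
metadata:
  name: heavy-shiny-app               # hypothetical app name
  namespace: shinyproxy
spec:
  containers:
    - name: shiny-app
      image: example/heavy-shiny-app:latest   # hypothetical image
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 6Gi
```

With only the first in place, a heavy app and a light app look identical to the autoscaler.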

How would you comment on this? Does this functionality already exist or is it perhaps something on the roadmap? Thanks!

Hi @autarkie,

You are absolutely right, this is an area where many improvements can be made. You may want to keep an eye on this PR, which I think is already a good step in that direction: https://github.com/openanalytics/containerproxy/pull/10
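For what it's worth, if that PR (or something along those lines) lands, I would expect per-app settings to look roughly like this in application.yml. The property names below are my guess at the eventual syntax, not documented configuration:

```yaml
proxy:
  container-backend: kubernetes
  specs:
    - id: heavy-app                                   # hypothetical app id
      container-image: example/heavy-shiny-app:latest # hypothetical image
      container-cpu-request: "2"                      # assumed property name
      container-memory-request: 4Gi                   # assumed property name
    - id: light-app
      container-image: example/light-shiny-app:latest
      container-cpu-request: 250m
      container-memory-request: 512Mi
```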


Indeed! Thanks for the suggestion!

Being able to define CPU/memory requests is definitely a needed feature for the Kubernetes backend. Is there a schedule for reviewing this pull request and including it in a release anytime soon?
Thanks!