ShinyProxy autoscaling on AWS EKS

Hi guys,

I hope I am in the right place and someone can help me.

The following was done:

  • We have created an AWS EKS cluster.
  • ShinyProxy installed there.
    Everything is working so far. But when users start working and run different queries on the web page, the CPU usage increases to 100%, which we expected and is why we use EKS.
    The problem is that a new pod (sp-pod) is created for each new user session, and I don’t know how to scale these pods with the Horizontal Pod Autoscaler (HPA). I have tested the HPA with php-apache and it works: the pods scale up and down, and the nodes do too. Unfortunately, I currently can’t find a solution for making the newly created ShinyProxy pods scale so that new pods, and when needed new nodes, are created.
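For reference, the php-apache HPA test mentioned above looks roughly like the standard Kubernetes walkthrough; the names and thresholds here are illustrative, not from this thread:

```yaml
# HPA that keeps average CPU of the php-apache deployment around 50%,
# scaling between 1 and 10 replicas (autoscaling/v2 API).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Note that this targets a Deployment, which is why it does not map directly onto ShinyProxy's one-pod-per-session pods.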

I had also found the following GitHub entry:
However, it does not describe exactly how to use the HPA or autoscaling.

I hope someone has had the same problem and can help me.

Thanks a lot and kind regards


Any ideas or suggestions?

I do not have a solution for it, but I am in the same boat, so I can try to help you with my limited knowledge.
How did you set up your cluster? Using Terraform or eksctl?
The idea is that we should give the service account in EKS the permissions to create new resources in AWS; that is possible via an OIDC provider.
We can sync once I understand a bit more about the setup of your project.
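The OIDC route mentioned above is usually done with IAM Roles for Service Accounts (IRSA): you associate an OIDC provider with the cluster, then annotate the service account with an IAM role that trusts it. A minimal sketch, where the role name and account ID are placeholders:

```yaml
# ServiceAccount for the cluster-autoscaler, annotated with an IAM role
# ARN via IRSA. The role itself must carry the autoscaling permissions
# (e.g. autoscaling:SetDesiredCapacity, autoscaling:DescribeAutoScalingGroups).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cluster-autoscaler-role
```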

We could achieve autoscaling with EKS using the latest Terraform EKS module, version 18.20.1.
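If it helps, a minimal sketch of that module (terraform-aws-modules/eks/aws) at version 18.20.1; the cluster name, VPC/subnet references, sizes, and instance type are placeholders:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.20.1"

  cluster_name    = "shinyproxy-cluster" # placeholder name
  cluster_version = "1.22"

  vpc_id     = module.vpc.vpc_id # assumes a companion VPC module
  subnet_ids = module.vpc.private_subnets

  # Managed node group the cluster-autoscaler can grow and shrink.
  eks_managed_node_groups = {
    default = {
      min_size       = 1
      max_size       = 5
      desired_size   = 2
      instance_types = ["m5.large"]
    }
  }
}
```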


The idea of ShinyProxy is that each time a user starts an app, a dedicated pod is started for that user and app. So if you have 5 users who each start one app, 5 pods will be created by ShinyProxy. If all of them start two different apps, you will have 10 pods running.
Therefore there is no need to use the HPA; it is not possible to use multiple pods for a single (user, app) combination.
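For context, this one-pod-per-(user, app) behaviour comes from ShinyProxy's Kubernetes backend; a minimal application.yml sketch, where the namespace, app id, and image are placeholders:

```yaml
# ShinyProxy application.yml (excerpt): with the kubernetes backend,
# ShinyProxy launches one pod per user per app spec listed below.
proxy:
  container-backend: kubernetes
  kubernetes:
    namespace: shinyproxy # placeholder namespace
    internal-networking: true
  specs:
    - id: demo-app # placeholder app id
      container-image: example/shiny-demo:latest
      container-cpu-request: "0.5"     # resource requests let the cluster
      container-memory-request: 512Mi  # autoscaler plan node capacity
```

Setting CPU/memory requests on the app pods is what allows the cluster-autoscaler to decide when a new node is needed.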

In order for the cluster to scale when more apps are started, two things are needed:

Thanks @tdekoninck for the response.
We are using the autoscaler in the EKS cluster and everything is fine but we are facing one problem. As you said, SP starts a new pod whenever a new user logs in into an app, when the autoscaler realises it cannot fit the new container into the node it tries to spin up a new node but this is where we are too late. The node spin up takes a bit time and during that time the apps timeout 2-3 times and then everything works perfectly fine.
I think this is a problem according to user experience, the user will not have the patience or interest to wait for 3-4 refresh. Can you suggest some way to circumvent it, like there is always enough capacity left for at least 1 extra container?
We checked the documentation of the autoscaler and it says that “it makes sure that no node is unused”, we do not mind an extra node running even if it is not utilized.


In such cases we typically enable over-provisioning in the autoscaler, see:

In our experience this makes a big improvement.
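The usual over-provisioning pattern is a low-priority placeholder deployment: "pause" pods reserve capacity, and when a real ShinyProxy pod needs room the scheduler evicts them, which makes the autoscaler add a node ahead of demand. A sketch, where the names and sizes are placeholders:

```yaml
# Negative-priority class so placeholder pods are always evicted first.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10
globalDefault: false
description: "Priority class for over-provisioning placeholder pods."
---
# Placeholder deployment reserving roughly one extra app's worth of capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve
          image: registry.k8s.io/pause
          resources:
            requests:
              cpu: "1"     # placeholder sizing: match your largest app pod
              memory: 1Gi
```

Increase `replicas` or the requests to keep more headroom free.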