ShinyProxy Kubernetes autoscaling, reducing # of nodes

Hi there,

It has been great working with ShinyProxy and Kubernetes so far :slight_smile:

I have been wondering how well Kubernetes autoscaling works together with ShinyProxy. Since each user is assigned their own pod, a node should not be shut down until all ShinyProxy-related pods on it have terminated; otherwise the affected users will see their sessions crash.

Is there a way to handle scale-down gracefully with ShinyProxy and Kubernetes?

Does ShinyProxy prioritise certain nodes when allocating new pods? If not, it might be impossible to automatically scale down the number of nodes without crashing users' sessions.
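One idea I had is to mark the app pods as not safe to evict, so the cluster autoscaler never drains a node that still hosts an active user session. A rough sketch of what I mean (assuming ShinyProxy's `kubernetes-pod-patches` option can be used to add annotations; the app id and image are just placeholders):

```yaml
proxy:
  specs:
    - id: my-shiny-app                      # placeholder app id
      container-image: example/shiny-app:latest  # placeholder image
      kubernetes-pod-patches: |
        # JSON Patch applied to each user pod ShinyProxy creates
        - op: add
          path: /metadata/annotations
          value:
            # tells the cluster autoscaler not to evict this pod,
            # so a node with active user sessions is not scaled down
            cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```

With something like this, the node would only become a scale-down candidate once all user pods on it have ended naturally. But I am not sure whether this is the intended approach, or whether it interacts badly with pod scheduling.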

Best regards,

Michael