Setting Kubernetes pod fields and using multiple replica sets

Hello,

I have two questions about the Kubernetes configuration for ShinyProxy:

1:
Is there a way to set pod/deployment fields other than the namespace? I am using node labels and node selectors elsewhere in the cluster, but there is no way to set them for the pods that ShinyProxy creates, so those pods can land on any of my AWS auto scaling groups. We would like to constrain all ShinyProxy-launched pods to particular nodes by setting a nodeSelector.

One set of fields I was able to work around was resource requests and limits: I used a LimitRange on the namespace to make sure the created sp-pods had at least some default resource management.
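For reference, the LimitRange I used looks roughly like this (the namespace name and resource values below are placeholders, not what I actually run):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: sp-pod-defaults
  namespace: shinyproxy        # the namespace ShinyProxy creates sp-pods in
spec:
  limits:
    - type: Container
      default:                 # default limits applied when a pod specifies none
        cpu: "1"
        memory: 1Gi
      defaultRequest:          # default requests applied when a pod specifies none
        cpu: 250m
        memory: 256Mi
```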

2:
Is ShinyProxy designed to run with a replica count greater than 1? I was getting odd behavior when I tried 2. I had two nodes, each running one ShinyProxy pod. When I launched an app, an sp-pod was created and everything worked fine. But once I refreshed the page, the other ShinyProxy pod picked up the request and created a new sp-pod, and I lost the previously existing pod and whatever was running on it.

I was trying to make ShinyProxy highly available in case a node goes down, but simply increasing the replica count doesn't seem to be the way to do it.

Thanks,
Michael

Hi @MichaelCal,

For point 1, there is indeed no support for that currently, but it sounds like a useful addition to me. Please feel free to submit a feature request or pull request on https://github.com/openanalytics/shinyproxy

For point 2, since ShinyProxy is a stateful application, I think the solution is to enable sticky sessions by setting sessionAffinity on the ShinyProxy Service:

Client-IP based session affinity can be selected by setting service.spec.sessionAffinity to “ClientIP” (the default is “None”), and you can set the max session sticky time by setting the field service.spec.sessionAffinityConfig.clientIP.timeoutSeconds if you have already set service.spec.sessionAffinity to “ClientIP” (the default is “10800”).

(from https://kubernetes.io/docs/concepts/services-networking/service/)
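A minimal sketch of what that could look like on the ShinyProxy Service (the name, labels and ports below are examples and will need to match your actual deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shinyproxy
spec:
  selector:
    app: shinyproxy            # must match the labels on your ShinyProxy pods
  ports:
    - port: 80
      targetPort: 8080         # ShinyProxy listens on 8080 by default
  sessionAffinity: ClientIP    # route each client IP to the same ShinyProxy pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # the default sticky time mentioned above
```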

Hi @fmichielssen,

Thanks for the reply. There is already a pull request for point 1: https://github.com/openanalytics/containerproxy/pull/2.

I will check out SessionAffinity and see if that helps.

@MichaelCal: Following the pull request, ShinyProxy 2.1.0 now supports setting a node selector for a Kubernetes cluster using proxy.kubernetes.node-selector - see https://www.shinyproxy.io/downloads/
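For reference, a minimal application.yml fragment could look roughly like this (the label key and value are only examples; please check the ShinyProxy documentation for the exact syntax):

```yaml
proxy:
  container-backend: kubernetes
  kubernetes:
    namespace: shinyproxy                 # namespace for ShinyProxy-launched pods
    # constrain launched pods to nodes carrying this label (example values)
    node-selector: role=shinyproxy-apps
```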

Best,
Tobias

Hi @tverbeke,

Thank you for adding support for the mentioned features to ShinyProxy; they are very useful!

Related to this: I am trying to debug failures when setting pod resource requests and limits from the ShinyProxy application.yml. I’ve posted a more detailed description here: https://github.com/openanalytics/shinyproxy/issues/183

Briefly, I am not seeing any CPU or memory resources being set on pods launched by ShinyProxy. I’d like to autoscale my cluster nodes in response to ShinyProxy-launched pods, so I was wondering if you have a working example for this, or if I need to create a workaround similar to what @MichaelCal describes.
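For reference, this is roughly the kind of per-app configuration I am trying (the app spec below is a simplified example rather than my real one, and the property names are the ones documented for the Kubernetes backend, so they may differ between versions):

```yaml
proxy:
  container-backend: kubernetes
  specs:
    - id: example-app                           # example app spec, not my real one
      container-image: openanalytics/shinyproxy-demo
      container-cpu-request: 500m               # expected to become the pod's CPU request
      container-cpu-limit: "1"                  # expected to become the pod's CPU limit
      container-memory-request: 512Mi           # expected to become the pod's memory request
      container-memory-limit: 1Gi               # expected to become the pod's memory limit
```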

I appreciate any advice you might have.

Thank you,
-Dmitry

Hi @dgrapov, nice to hear from you. This is probably a bug in ShinyProxy and we will investigate.

Hi @dgrapov and @tverbeke

Regarding AWS EKS, it might be worth exploring the recently added support for running Kubernetes pods on AWS Fargate (instead of managing your own nodes, Fargate provisions the compute for each pod).

It is probably one of the easier and cheaper ways to set up autoscaling with ShinyProxy. Here is a GitHub issue where someone tries to get it working: https://github.com/openanalytics/shinyproxy/issues/182.

Hi @michaelhogersnplm,

I agree that for AWS users the new Kubernetes support for Fargate should make things both cheaper and simpler. I also suspect that the lack of pod-reported resources in ShinyProxy could be an issue for scheduling on Fargate.

-Dmitry