ShinyProxy on Kubernetes giving shiny app Pod status of Completed on page refresh

Hello, we are having the following issue:

When we refresh our Shiny app, which runs via ShinyProxy on a Kubernetes cluster, the pod the app is running in gets a status of ‘Completed’, and when the app then reloads we get 503 errors when it tries to call our landing page (i.e. our-domain.com/app_direct/our-app/). If we wait about 30 seconds, the pod running the app either goes to CrashLoopBackOff and then Running, or sometimes straight to Running, at which point refreshing the page again brings everything up without problems.

I assume that the pod is somehow receiving a SIGTERM signal. Is there some way to avoid that? What I also find confusing is that if we close the tab, the pod doesn’t get a status of Terminating until the heartbeat-timeout set in our ShinyProxy config is reached, as expected. Why would refreshing the page behave differently from closing it?
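For context, the heartbeat settings in our application.yml look roughly like this (the values shown are illustrative, not our exact configuration):

```yaml
proxy:
  # Interval (ms) at which the browser sends heartbeats to ShinyProxy.
  heartbeat-rate: 10000
  # Time (ms) without a heartbeat after which ShinyProxy releases the app.
  heartbeat-timeout: 60000
```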

Thanks!

Is it possible for you to share the pod’s log?

You can use kubectl get pods to find the pod name and then kubectl logs to obtain it.
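For example (the pod name below is just a placeholder; use the name shown by the first command):

```bash
# List the pods to find the one running the Shiny app
kubectl get pods

# Print that pod's logs
kubectl logs sp-pod-xxxxx
```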

Best regards,

Michael Hogers
NPL Markets Ltd.

Actually, we now see what is happening:

When the app is closed or refreshed, a session$onSessionEnded callback runs and executes stopApp(). This is what causes the app’s pod to show ‘Completed’; the pod then simply restarts, and the 503s occur in the window between the app calling stopApp() and the page trying to load again.
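Roughly, the relevant part of our server function looks like this (a simplified sketch, not our exact code):

```r
library(shiny)

server <- function(input, output, session) {
  # Runs when the browser session ends; this fires on a page refresh
  # as well as when the tab is closed.
  session$onSessionEnded(function() {
    # Stops the whole R process, so the container exits, the pod shows
    # 'Completed', and Kubernetes restarts it.
    stopApp()
  })
}
```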

Is there a way to send a terminate signal to the pod when the app is refreshed or the page is closed, so that a new pod is launched when the page is reloaded? I have seen that there is a proxy REST API, but I can’t find anything in the documentation about this.
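To be clear about what I mean, something along these lines is what I was imagining; the endpoint, id and authentication here are my assumption of how the proxy API might be called, not something I have confirmed in the docs:

```bash
# Assumption: a DELETE on the proxy API stops a running proxy (and its pod).
# The exact path, proxy id and auth would need to be checked against the
# ShinyProxy documentation for the version in use.
curl -X DELETE -u user:password https://our-domain.com/api/proxy/<proxy-id>
```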
