stopApp on proxyRelease & scale-down

Hi,
I am using ShinyProxy 3.1.1 and I have an app configured with a minimum of 2 seats:
specs:
- id: myapp
  minimum-seats-available: 2

All works as expected: 2 containers are created on start-up, and when a user logs in, one container seat is claimed and a new container is created to maintain the minimum of 2 seats.

When the user logs out, the ShinyProxy log shows the proxy being released. As expected, it does not scale down immediately, and I see log lines like:
: Not scaling down because last scaleUp was 1 minutes ago (1 proxies to remove, delay is 2)

Eventually, the delay expires, scale-down occurs, and the container is removed:
: [specId=mapapp delegateProxyId=82a2cb60-9763-4878-8469-0910aa48de5c] Selected DelegateProxy for removal during scale-down
: [specId=mapapp delegateProxyId=82a2cb60-9763-4878-8469-0910aa48de5c] Stopping DelegateProxy

However, at this point the database connection that the app created is still left dangling on the DB server. Is there any way to clean this up? For instance, can I arrange for a Shiny stopApp() call to the app, where I could release the DB connection?

To be clear: I get a sessionEnded callback when the user logs out, but I don't want to release the DB connection there, in case the app container gets reused afterwards. I'd like to release the connection only when the whole container is removed. Is there any way to accomplish this?

Thanks,

Best regards,
Anand.

Hi, happy to hear the feature is working well for you!
Regarding your issue, I understand the situation is not optimal, but I think the database connection will automatically time out and close.

In principle you should be able to add code that runs when the container is stopped, but depending on your container backend this might not work. E.g. on Kubernetes the container is killed without being allowed to shut down gracefully. This is something we want to improve in ShinyProxy, but we don't have a concrete plan for it yet.
For the time being, I guess you just have to wait for the connection to time out.
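For what it's worth, a minimal sketch of what such shutdown code could look like, assuming you manage the connection through the pool package (the driver, database name, and pool object here are placeholders for your actual setup). The key distinction is that `shiny::onStop()` registered outside the server function runs when the whole app process shuts down, whereas `session$onSessionEnded()` fires for each individual user session:

```r
# global.R -- runs once per R process, i.e. once per container seat
library(shiny)
library(pool)

# Hypothetical pool; substitute your real driver and credentials.
db_pool <- dbPool(RPostgres::Postgres(), dbname = "mydb")

# Registered at app start-up, outside the server function, so the
# callback runs when the whole app process exits, not when an
# individual user session ends. Note this only fires if the
# container gets a graceful shutdown (e.g. SIGTERM reaching R),
# which, as mentioned above, is not guaranteed on every backend.
onStop(function() {
  poolClose(db_pool)
})
```

Per-user cleanup, by contrast, would go in `session$onSessionEnded()` inside the server function, which is what you are already seeing with the sessionEnded callback.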

Hi,
OK, understood, thank you. Indeed, the connection is eventually cleaned up on the DB side; I just wanted something cleaner, but I can live with this for now. Thanks,

Best regards,
Anand.