Connect to localhost:2375 failed: Connection refused

Hi,

I have an error with port 2375:

Caused by: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1] failed: Connection refused (Connection refused)

But I don’t understand it, because I followed the ShinyProxy documentation and the Docker documentation. I upgraded Docker before doing this; I have the following version:

Client:
Version: 18.09.6
API version: 1.39
Go version: go1.10.8
Git commit: 481bc77156
Built: Sat May 4 02:34:58 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.6
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 481bc77
Built: Sat May 4 02:02:43 2019
OS/Arch: linux/amd64
Experimental: false

I have a docker.service file with the following lines:

 [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:// -D -H tcp://127.0.0.1:2375
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

But, as you said: “but these settings will be lost upon updating Docker on your system and are therefore not recommended”. So I read the Docker documentation and created the override file with this command:

sudo systemctl edit docker

And added these lines:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -D -H tcp://127.0.0.1:2375
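
To double-check that the drop-in is actually picked up, I also look at the merged unit. This is just a verification step on my side, not something from the documentation:

# A manual edit of the unit file would additionally need "systemctl daemon-reload";
# "systemctl edit" handles the reload by itself when the editor closes.
systemctl cat docker    # shows the merged unit, including override.conf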

Then:

systemctl restart docker

And:
systemctl status docker

 docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─override.conf
   Active: active (running) since mar. 2019-05-07 09:59:44 CEST; 9s ago
     Docs: https://docs.docker.com
 Main PID: 1683 (dockerd)
    Tasks: 10
   Memory: 31.7M
   CGroup: /system.slice/docker.service
           └─1683 /usr/bin/dockerd -H unix:// -D -H tcp://127.0.0.1:2375

mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373431038+02:00" level=debug msg="Registering GET, /networks"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373476065+02:00" level=debug msg="Registering GET, /networks/"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373518179+02:00" level=debug msg="Registering GET, /networks/{id:.+}"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373564400+02:00" level=debug msg="Registering POST, /networks/create"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373603510+02:00" level=debug msg="Registering POST, /networks/{id:.*}/connect"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373662840+02:00" level=debug msg="Registering POST, /networks/{id:.*}/disconnect"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373722709+02:00" level=debug msg="Registering POST, /networks/prune"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.373762904+02:00" level=debug msg="Registering DELETE, /networks/{id:.*}"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.374175309+02:00" level=info msg="API listen on 127.0.0.1:2375"
mai 07 09:59:44 vm-pkg-29 dockerd[1683]: time="2019-05-07T09:59:44.374208188+02:00" level=info msg="API listen on /var/run/docker.sock"

So Docker is listening on port 2375, and if I type:

netstat -tl

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:2375 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp 0 0 localhost:smtp 0.0.0.0:* LISTEN
tcp 0 0 localhos:x11-ssh-offset 0.0.0.0:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
tcp6 0 0 localhos:x11-ssh-offset [::]:* LISTEN
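
To be sure that the daemon really answers on that port (and not only that something is listening), I can also query it directly from the host; these are standard Docker CLI / Engine API calls:

# Ask the daemon over TCP with the regular CLI
docker -H tcp://127.0.0.1:2375 version
# Or hit the HTTP API itself; _ping should answer "OK"
curl http://127.0.0.1:2375/_ping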

It seems to be OK… So, I have my ShinyProxy image and my Shiny app image:
docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
shinyproxy latest b74cb2697855 10 days ago 502MB
avelt/great latest 09168111f65a 11 days ago 4GB

I launch ShinyProxy:

docker run -d -v /var/run/docker.sock:/var/run/docker.sock -v /var/Shinyproxy/ShinyProxy-config-examples/02-containerized-docker-engine:/logs --net sp-example-net -p 80:80 shinyproxy

I connect to ShinyProxy, click on my app, and get the following error:

Caused by: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
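
In case it helps, this is how I pull the full stack trace out of the running ShinyProxy container (the container id below is just a placeholder):

# Find the running ShinyProxy container, then dump its log
docker ps --filter ancestor=shinyproxy
docker logs <container_id>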

My application was working before I changed the Docker settings for port 2375. The problem was that when two people connected to the application, the first session was closed, because every application was served on port 3838. To make the port number increment automatically, you advised me to configure Docker as in the documentation, with port 2375. So the problem really comes from this configuration and not from my Shiny app image.

PS: I found a similar topic on the forum, but updating Docker solved that problem, which is not my case…

PS 2: My Linux distribution:

cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Thank you for your help.
Best,
Amandine

Just for future reference: as mentioned in another topic, you don’t need to change the Docker settings to expose port 2375 when you are running ShinyProxy inside a container, and you say yourself that it worked before…
Since the test applications work (as you answered in the other topic), I don’t think the problem is in the ShinyProxy configuration.
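
If the TCP listener is not needed for anything else, the override can simply be removed again so that dockerd only listens on the Unix socket; roughly (assuming the drop-in path shown in your systemctl status output):

# Remove the drop-in that added -H tcp://127.0.0.1:2375, then reload and restart
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart docker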