Using your host network as the network for your containers, via --net=host (or network_mode: host in docker-compose), is one option, but it has undesirable side effects: (a) your containers' ports are now exposed directly on your host system, and (b) Docker's port mapping no longer applies, so containers that rely on mapped ports can no longer be reached that way.
In your case, a quicker and cleaner solution would be to make your SSH tunnel accessible to your Docker containers (for example, by binding the tunnel to the docker0 bridge) instead of exposing your Docker containers on your host network (as the accepted answer suggests).
Tunnel Setup:
For this to work, find the IP address assigned to your docker0 bridge by running:
ifconfig
you will see something like this:
docker0   Link encap:Ethernet  HWaddr 03:41:4a:26:b7:31
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
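If you want to grab that address in a script rather than reading it off by eye, something like the following works. The sample line below is a stand-in for the live ifconfig output shown above, so the block stays runnable anywhere; only the parsing approach is my own suggestion:

```shell
# Parse the docker0 IPv4 address out of ifconfig-style output.
# On a live host you would pipe the real command instead, e.g.:
#   ifconfig docker0 | grep -oE 'addr:[0-9.]+' | cut -d: -f2
sample="docker0   inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0"
bridge_ip=$(echo "$sample" | grep -oE 'addr:[0-9.]+' | cut -d: -f2)
echo "$bridge_ip"   # prints 172.17.0.1
```

Note that newer distributions print `inet 172.17.0.1` without the `addr:` prefix, so adjust the pattern to whatever your ifconfig actually emits.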
Now tell ssh to bind to this IP when listening for traffic directed to port 9000 (replace user@remote-host with your actual SSH destination):
ssh -L 172.17.0.1:9000:host-ip:9999 user@remote-host
Without setting the binding address, :9000 will only be available on your host's loopback interface, and not to your Docker containers.
Note: you can also bind your tunnel to 0.0.0.0, which makes ssh listen on all interfaces.
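Putting the pieces together, a full invocation might look like the sketch below. The -f and -N flags background the tunnel without running a remote command; remote-user, remote-host, and host-ip:9999 are placeholders matching the example above. The echo keeps the block runnable without a real SSH server; drop it on your own machine to open the actual tunnel:

```shell
# Forwarding spec: bind locally on the docker0 address, port 9000,
# and forward to host-ip:9999 as seen from the remote SSH server.
bridge_ip=172.17.0.1
fwd="${bridge_ip}:9000:host-ip:9999"
# -f: go to background after authentication; -N: tunnel only, no remote command.
# remote-user@remote-host is a placeholder; remove the echo to really connect.
echo ssh -f -N -L "$fwd" remote-user@remote-host
```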
Setting up your application:
In your containerized application, use the same docker0 IP to connect to the server: 172.17.0.1:9000. Traffic sent to your docker0 bridge will now also reach your SSH tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote database listening on :9999 (the tunnel's remote end), your "ConnectionString" would contain "server=172.17.0.1,9000;".
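One refinement worth mentioning (my own suggestion, not something the question requires): on the default bridge network, docker0's address is also the container's default gateway, so the application can discover it at runtime instead of hard-coding 172.17.0.1. The sample route line below stands in for what `ip route` prints inside a container, so the block runs without Docker:

```shell
# Inside a container you would run:  ip route | awk '/^default/ {print $3}'
# A sample line is used here so the extraction is verifiable without Docker.
sample="default via 172.17.0.1 dev eth0"
gateway=$(echo "$sample" | awk '/^default/ {print $3}')
echo "$gateway"   # prints 172.17.0.1
```

Your app can then read this value from an entrypoint script and inject it into the connection string, which keeps the image portable if the bridge subnet ever changes.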