Root user in Elasticsearch 2.4.0 in Docker container

I am running an ELK stack with Docker for log management, currently with ES 1.7, Logstash 1.5.4 and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0 using the tar.gz distribution from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz with Docker. Since ES 2.x does not allow running as root, I pass

 -Des.insecure.allow.root=true 

when starting the Elasticsearch service, but my container does not start. The logs show no errors:

   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                  Dload  Upload   Total   Spent    Left  Speed
 100   874  100   874    0     0   874k      0 --:--:-- --:--:-- --:--:--  853k
 //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found
 Scheduler@0.0.0 start /opt/log-management/Scheduler
 node scheduler-app.js
 ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
 node app.js
 Jobs are registered
 [2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
 [2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
 Native thread-sleep not available. This will result in much slower performance, but it will still work. You should re-install spawn-sync or upgrade to the lastest version of node if possible.
 Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
 [2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
 [2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
 Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
 Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
 Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
 [2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
 [2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
 [2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true]
 [2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
 [2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
 [2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
 [2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
 [2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA

Any pointers would be appreciated.

EDIT 1: Since //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found was an error and the Docker image does not have the hostname utility, I switched to uname -n to get the hostname in ES. It no longer throws the hostname error, but the problem remains the same: it does not start. Is uname -n the right thing to use here?
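
To illustrate what I mean, this is the kind of substitution I tried (a sketch only; it assumes the startup script needs hostname just to derive a default node name, which I have not verified):

 # instead of relying on the missing `hostname` binary, pass the node name explicitly
 /opt/log-management/elasticsearch/bin/elasticsearch -Des.insecure.allow.root=true -Des.node.name="$(uname -n)"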

Another doubt: ES 1.7, which is currently running fine, uses the same image without the hostname utility, yet it works without any problems. Very confusing. Logs after switching to uname -n:

   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                  Dload  Upload   Total   Spent    Left  Speed
 100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k
 > ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
 > node app.js
 > Scheduler@0.0.0 start /opt/log-management/Scheduler
 > node scheduler-app.js
 Jobs are registered
 [2016-09-30 10:10:37,785][INFO ][bootstrap ] max_open_files [1048576]
 [2016-09-30 10:10:37,822][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
 Native thread-sleep not available. This will result in much slower performance, but it will still work. You should re-install spawn-sync or upgrade to the lastest version of node if possible.
 Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
 [2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
 [2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] initializing ...
 Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
 Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
 Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
 [2016-09-30 10:10:38,435][INFO ][plugins ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
 [2016-09-30 10:10:38,455][INFO ][env ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
 [2016-09-30 10:10:38,456][INFO ][env ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true]
 [2016-09-30 10:10:38,483][WARN ][threadpool ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
 [2016-09-30 10:10:40,151][INFO ][node ] [Helleyes] initialized
 [2016-09-30 10:10:40,152][INFO ][node ] [Helleyes] starting ...
 [2016-09-30 10:10:40,278][INFO ][transport ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
 [2016-09-30 10:10:40,283][INFO ][discovery ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
 [2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:41,798][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:41,800][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:43,302][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:43,303][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
 java.lang.NullPointerException
     at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
     at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
     at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 [2016-09-30 10:10:44,807][INFO ][cluster.service ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
 [2016-09-30 10:10:44,852][INFO ][http ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
 [2016-09-30 10:10:44,852][INFO ][node ] [Helleyes] started
 [2016-09-30 10:10:44,984][INFO ][gateway ] [Helleyes] recovered [32] indices into cluster_state

Error after failed deployment

 failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"} 

EDIT 2: Even with the hostname utility installed and working fine, the containers do not start. The logs are the same as in EDIT 1.

EDIT 3: The container now starts, but it is not reachable at http://nodeip:9200. Of the 3 nodes, only 1 runs 2.4; the other 2 still run 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl localhost:9200 returns the Elasticsearch response, but it is not accessible from outside.
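
One way to check what the node actually binds to (a diagnostic sketch, assuming the container is named elasticsearch; not a fix):

 docker exec elasticsearch curl -s localhost:9200                      # answers, as described above
 docker logs elasticsearch | grep -E 'publish_address|bound_address'   # only shows 127.0.0.1 / [::1] here
 # if nothing is bound to the container's external interface, the published 9200 port
 # has nothing to forward to, which would explain "works inside, unreachable outside"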

EDIT 4: I tried a basic installation of ES 2.4 on the cluster where ES 1.7 works fine with the same setup. I also ran the ES migration plugin to check whether the cluster was ready for the 2.4 upgrade, and it reported green. The basic installation follows.

Dockerfile

 #Pulling SLES12 thin base image
 FROM private-registry-1

 #Author
 MAINTAINER XYZ

 # Pre-requisite - Adding repositories
 RUN zypper ar private-registry-2
 RUN zypper --no-gpg-checks -n refresh

 #Install required packages and dependencies
 RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1

 #Downloading elasticsearch executable
 ENV ES_VERSION=2.4.0
 ENV ES_DIR="//opt//log-management//elasticsearch"
 ENV ES_CONFIG_PATH="${ES_DIR}//config"
 ENV ES_REST_PORT=9200
 ENV ES_INTERNAL_COM_PORT=9300
 WORKDIR /opt/log-management
 RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
 RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
     && rm ${ES_DIR}-${ES_VERSION}.tar.gz \
     && mv ${ES_DIR}-${ES_VERSION} ${ES_DIR}

 #Exposing elasticsearch server container port to the HOST
 EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}

 #Removing binary files which are not needed
 RUN zypper -n rm wget

 # Removing zypper repos
 RUN zypper rr caspiancs_common

 #Running elasticsearch executable
 WORKDIR ${ES_DIR}
 ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

Build with

 docker build -t es-test . 

1) When starting with docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as suggested in one of the comments, curl localhost:9200 works both inside the container and on the node running the container. I still cannot reach the other cluster nodes on port 9200.

2) When starting with docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test, curl localhost:9200 works fine inside the container, but on the node it fails with

 curl: (56) Recv failure: Connection reset by peer 

I still cannot communicate with other cluster nodes on port 9200.
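
One way to narrow down whether the reset comes from the port mapping or from the bind address inside the container is something like the following (a sketch; it assumes the default bridge network and the container name elasticsearch):

 # IP of the container on the docker bridge
 CONTAINER_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' elasticsearch)
 # from the docker host: does ES answer on the container's bridge IP at all?
 curl -s "http://${CONTAINER_IP}:9200"
 # if this also fails while curl inside the container works, ES is only listening on
 # loopback inside the container, so -p 9200:9200 has nothing reachable to forward to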

EDIT 5: Using this answer to this question, I got all three containers running ES 2.4, but ES cannot form a cluster across the three containers. The network configuration is network.host: 0.0.0.0, http.port: 9200, and

 #configure elasticsearch.yml for clustering
 echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS]' >> ${ES_CONFIG_PATH}/elasticsearch.yml
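
For completeness, the clustering-related part of the resulting elasticsearch.yml should end up looking roughly like this (the three IPs are the node addresses seen in the logs; a sketch of the intent, not a verified dump of the file):

 network.host: 0.0.0.0
 http.port: 9200
 discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]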

The logs obtained using docker logs elasticsearch are as follows:

 [2016-10-06 12:31:28,887][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
 [2016-10-06 12:31:29,080][INFO ][node ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
 [2016-10-06 12:31:29,081][INFO ][node ] [Screech] initializing ...
 [2016-10-06 12:31:29,652][INFO ][plugins ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
 [2016-10-06 12:31:29,684][INFO ][env ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
 [2016-10-06 12:31:29,684][INFO ][env ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true]
 [2016-10-06 12:31:29,720][WARN ][threadpool ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
 [2016-10-06 12:31:31,387][INFO ][node ] [Screech] initialized
 [2016-10-06 12:31:31,387][INFO ][node ] [Screech] starting ...
 [2016-10-06 12:31:31,456][INFO ][transport ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
 [2016-10-06 12:31:31,465][INFO ][discovery ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
 [2016-10-06 12:31:34,500][WARN ][discovery.zen ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
 ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
     at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
     at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
     at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
     at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
     at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
     at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
     at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
     at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Whenever I set network.host to the IP address of the host the container runs on, I am back in the old situation: only one container runs ES 2.4 and the other two still run 1.7.

I also just noticed that docker-proxy is listening on 9300, or at least I think it is:

 elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
 tcp        0      0 :::9300        :::*       LISTEN      6656/docker-proxy
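
One way to read this: the port is published on the host (hence docker-proxy on 9300), but the node advertises its bridge address 172.17.0.16:9300 as publish_address, which the other hosts cannot route to. A rough connectivity check from one of the other nodes (a sketch; plain TCP checks only, since 9300 is the binary transport port, not HTTP):

 curl -v telnet://10.240.118.68:9300   # host-published port: should accept the connection
 curl -v telnet://172.17.0.16:9300     # advertised bridge address: likely unreachable from another host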

Any thoughts on this?

3 answers

I managed to create a cluster with the following settings

network.publish_host=CONTAINER_HOST_ADDRESS (i.e. the address of the node the container is running on)
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS

transport.publish_port is important when you run ES behind a proxy or load balancer such as nginx or HAProxy.
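
To make that concrete, here is a sketch of how these settings could be passed via the Dockerfile from the question (setting names as above; CONTAINER_HOST_ADDRESS must be replaced per node, e.g. 10.240.118.68 on the first host; I have not verified this exact invocation):

 ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true \
     -Des.network.bind_host=0.0.0.0 \
     -Des.network.publish_host=CONTAINER_HOST_ADDRESS \
     -Des.transport.publish_host=CONTAINER_HOST_ADDRESS \
     -Des.transport.publish_port=9300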


According to the Elasticsearch 2.x documentation, network.host binds to localhost by default.

You will need to explicitly set network.host: 0.0.0.0, as indicated in this answer:

Example:

 ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true -Des.network.host=0.0.0.0 
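
With that change, rebuilding and re-running with published ports should make the node answer from the Docker host as well, not only from inside the container (same build and run commands as in the question):

 docker build -t es-test .
 docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test
 curl -s http://localhost:9200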

Try mapping your ports when starting the container using the -p flag.

Neither EXPOSE nor --expose depend on the host in any way; by default these rules do not make ports accessible from the host. Given this limitation of the EXPOSE instruction, as a Dockerfile author you should often include an EXPOSE rule only as a hint about which ports will provide services. It is up to the operator of the container to specify further networking rules.

Try mapping your ports when running docker run, for example docker run -p 9200:9200 -p 9300:9300 <image>:<tag>
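
To confirm the mappings actually took effect, you can ask Docker directly (a sketch; use whatever name or ID your container has):

 docker port elasticsearch       # lists published mappings, e.g. 9200/tcp -> 0.0.0.0:9200
 curl -s http://localhost:9200   # from the docker host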

