Hadoop 2.7.3 cluster namenode not starting

Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
Hello

I have a 3-node cluster running Hadoop 2.7.3. I have updated the core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml, and hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
************************************************************/



I have changed the port number multiple times; every time I get the same error. How do I get past this?
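For reference, the address being bound comes from fs.defaultFS in core-site.xml; in a basic setup like this one it would be set along these lines (port matching the log above):

        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:51150</value>
        </property>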



Thanks
Bhushan Pathak
RE: Hadoop 2.7.3 cluster namenode not starting

Brahma Reddy Battula

Are you sure that you are starting it on the same machine (master)?

Please share your /etc/hosts and the configuration files.

Regards

Brahma Reddy Battula

 

Re: Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
Yes, I'm running the command on the master node.

Attached are the config files and the hosts file. I have masked the IP addresses in them, as per company policy the original addresses cannot be shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak

Attachments: core-site.xml (1K), hadoop-env.sh (5K), hdfs-site.xml (1K), hosts (380 bytes), mapred-site.xml (1K), slaves (36 bytes), yarn-site.xml (2K)
Re: Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak


Re: Hadoop 2.7.3 cluster namenode not starting

Vinayakumar B-3
I think you might need to change the IP itself. 

Try something similar to 192.168.1.20
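For example, an /etc/hosts along these lines on every node (addresses and slave hostnames here are only illustrative; use the ones actually assigned to your machines):

        192.168.1.20    master
        192.168.1.21    slave1
        192.168.1.22    slave2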

-Vinay

Re: Hadoop 2.7.3 cluster namenode not starting

Hilmi Egemen Ciritoğlu
Can you check whether port 51150 is in use by another process:

sudo netstat -tulpn | grep '51150' 
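If netstat is not installed, ss from iproute2 reports the same information:

        sudo ss -tulpn | grep '51150'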

Regards,
Egemen

RE: Hadoop 2.7.3 cluster namenode not starting

Brahma Reddy Battula

Please check “hostname -i”.

1) What’s configured in the “master” file? (You shared only the slaves file.)

2) Are you able to “ping master”?

3) Can you configure it like this and check once?

                1.1.1.1 master
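
Concretely, these checks could look like this on the master node (1.1.1.1 stands for the masked address from the attached hosts file):

        hostname -i               # should print the address that "master" resolves to
        ping -c 3 master          # should reach the master's own address
        grep master /etc/hosts    # expect a line like: 1.1.1.1 master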

 

 

Regards

Brahma Reddy Battula

 


Re: Hadoop 2.7.3 cluster namenode not starting

Lei Cao
Hi Mr. Bhushan,

Have you tried to format the namenode?
Here's the command:
hdfs namenode -format
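(One caution, as an aside: formatting re-initializes the NameNode metadata under dfs.namenode.name.dir, so it is only safe on a cluster that does not hold data yet.)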

I've encountered the same problem where the namenode could not be started, and this command fixed it for me.

Hope this can help you.

Sincerely,
Lei Cao


Re: Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
Hello All,

1. The slave & master nodes can ping each other and can use passwordless SSH.
2. The actual IPs start with 10.x.x.x; I have put masked addresses (1.1.1.1 etc.) in the config files, as I cannot share the actual IPs.
3. The namenode is formatted. I executed 'hdfs namenode -format' again just to rule out that possibility.
4. I did not configure anything in the master file. I don't think Hadoop 2.7.3 has a master file to be configured.
5. The netstat command [sudo netstat -tulpn | grep '51150'] does not give any output.

Even if I change the port number to a different one, say 52220 or 50000, I still get the same error.
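
Since 'Cannot assign requested address' means the process is trying to bind to an IP that no local interface owns, one more check worth running on the master (hostname as in the masked hosts file):

        ip addr show              # the addresses actually assigned to this machine
        getent hosts master       # the address 'master' resolves to should be one of them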

Thanks
Bhushan Pathak

Re: Hadoop 2.7.3 cluster namenode not starting

Sidharth Kumar
Can you check whether the ports are open by running the telnet command? Run the command below from the source machine to the destination machine and check if this helps:

$telnet <IP address> <port number>
Ex: $telnet 192.168.1.60 9000


Let's Hadooping....!

Bests
Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
Apologies for the delayed reply; I was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response 'Name or service not known'.

Thanks
Bhushan Pathak


Re: Hadoop 2.7.3 cluster namenode not starting

Sidharth Kumar
Hi,

The error you mentioned, 'Name or service not known', means the hostname cannot be resolved, so the servers are not able to communicate with each other. Check the network configuration.
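
For example, from the node where telnet failed (hostname as used in this thread):

        getent hosts master       # does 'master' resolve at all?
        ping -c 3 master          # and is the resolved address reachable?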

Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

On 17-May-2017 12:13 PM, "Bhushan Pathak" <[hidden email]> wrote:
Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response that 'Name or service not known'

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <[hidden email]> wrote:
Can you check if the ports are opened by running telnet command.
Run below command from source machine to destination machine and check if this help

$telnet <IP address> <port number>
Ex: $telnet 192.168.1.60 9000


Let's Hadooping....!

Bests
Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <[hidden email]> wrote:
Hello All,

1. The slave & master can ping each other as well as use passwordless SSH
2. The actual IP starts with 10.x.x.x, I have put in the config file as I cannot share  the actual IP
3. The namenode is formatted. I executed the 'hdfs namenode -format' again just to rule out the possibility
4. I did not configure anything in the master file. I don;t think Hadoop 2.7.3 has a master file to be configured
5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not give any output. 

Even if I change  the port number to a different one, say 52220, 50000, I still get the same error.
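
Worth noting: the error is 'Cannot assign requested address' rather than 'Address already in use', so the port itself is unlikely to be the problem. That particular BindException usually means the hostname being bound resolves to an IP that is not assigned to any interface on the machine. A quick check, assuming the hostname is 'master' as configured:

$ getent hosts master            # what 'master' resolves to
$ hostname -i                    # what the host reports as its own address
$ ip addr show | grep 'inet '    # addresses actually assigned to interfaces

If the address from the first command does not appear in the interface list, the bind will fail no matter which port is used.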

Thanks
Bhushan Pathak

On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <[hidden email]> wrote:
Hi Mr. Bhushan,

Have you tried formatting the namenode? Here's the command:
hdfs namenode -format

I've run into this problem of the namenode not starting before, and this command fixed it for me.
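
One caveat worth adding: 'hdfs namenode -format' erases all HDFS metadata, so it is only safe on a brand-new cluster with no data to preserve. Under that assumption, a minimal sequence would be:

$ stop-dfs.sh
$ hdfs namenode -format
$ start-dfs.sh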

Hope this can help you.

Sincerely,
Lei Cao


On Apr 27, 2017, at 12:09, Brahma Reddy Battula <[hidden email]> wrote:

Please check "hostname -i".

1) What is configured in the "masters" file? (You shared only the slaves file.)

2) Are you able to "ping master"?

3) Can you configure it like this and check once?

                1.1.1.1 master
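
For illustration, a complete /etc/hosts along those lines might look like the following on all three nodes; the slave hostnames here are made up, so substitute the real ones:

1.1.1.1   master
1.1.1.2   slave1
1.1.1.3   slave2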

 

 

Regards

Brahma Reddy Battula

 

From: Bhushan Pathak [[hidden email]]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: [hidden email]
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

 

Some additional info -

OS: CentOS 7

RAM: 8GB

 

Thanks

Bhushan Pathak

 

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <[hidden email]> wrote:

Yes, I'm running the command on the master node.

 

Attached are the config files & the hosts file. As per company policy I have changed only the IP addresses, so that the original IPs are not shared.

 

The same config files & hosts file exist on all 3 nodes.

 

Thanks

Bhushan Pathak

 

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <[hidden email]> wrote:

Are you sure that you are starting it on the same machine (master)?

Please share "/etc/hosts" and the configuration files.

 

 

Regards

Brahma Reddy Battula

 


Re: Hadoop 2.7.3 cluster namenode not starting

Bhushan Pathak
What configuration do you want me to check? Each of the three nodes can reach the others via password-less SSH and can ping the others' IPs.
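
One way to see the values the namenode actually resolves at runtime, rather than what the XML files are assumed to contain, is 'hdfs getconf'; for example:

$ hdfs getconf -confKey fs.defaultFS
$ hdfs getconf -namenodes

If fs.defaultFS names a host that does not resolve to an address on one of the master's own interfaces, that alone explains the bind failure.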

Thanks
Bhushan Pathak


Re: Hadoop 2.7.3 cluster namenode not starting

Donald Nelson

Hello Everyone,

I am planning to upgrade our Hadoop from v1.0.4 to 2.7.3, together with HBase from 0.94 to 1.3. Does anyone know of some steps that can help me?
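
Not a tested procedure, but the rough HDFS-side outline for a 1.x to 2.x metadata upgrade is: stop the old cluster, back up the namenode metadata, start the new namenode with the upgrade flag, verify, then finalize. Note that HBase 0.94 cannot jump straight to 1.3; it has to pass through 0.96 first, so check the HBase reference guide for that part.

$ stop-all.sh                                # stop the old 1.0.4 cluster
# back up the contents of dfs.name.dir before going any further
$ hadoop-daemon.sh start namenode -upgrade   # start the 2.7.3 namenode in upgrade mode
$ hdfs dfsadmin -finalizeUpgrade             # finalize only after everything checks out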

Thanks in advance,

Donald Nelson

