Hadoop File System not working


Ishan Jain
Hello,

I had configured Hadoop to run on two nodes, one master and one slave, with the Ignite File System layered on top. Everything was working: I was able to store and retrieve data.
Then, because of an error in an Apache Hive MapReduce job, I had to change the owner of a file under /tmp to the hadoop user, which caused the jps command to report "process information unavailable".
I changed the owner back, but that did not help. Following a recovery suggestion on Stack Overflow, I deleted the hsperfdata_<user> directory in /tmp, which caused all the Hadoop NameNode and DataNode processes to shut down. To get the cluster running again, I had to format the NameNode and delete the DataNode's data directory.
Now, after restarting everything, I am trying to run the same code again. It inserted the first tuple, after which it has continuously shown the same error:
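For reference, the recovery steps above were roughly the following commands (a sketch only: the script locations and the DataNode data path are assumptions from a default install, so check dfs.datanode.data.dir in hdfs-site.xml, and note that the format step erases all HDFS metadata):

```shell
# Stop all HDFS daemons first (script location assumed from a
# default Hadoop install; adjust to your HADOOP_HOME).
$HADOOP_HOME/sbin/stop-dfs.sh

# Re-format the NameNode -- this wipes the filesystem metadata.
hdfs namenode -format

# Clear the DataNode's data directory so its stored cluster ID no
# longer conflicts with the freshly formatted NameNode (the path here
# is an assumption; use the value of dfs.datanode.data.dir).
rm -rf /usr/local/hadoop/hdfs/datanode/*

# Start everything again.
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
```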
 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hive/warehouse/yt.db/tickers/000000_0 is closed by DFSClient_NONMAPREDUCE_-115883595_174
2017-06-12 15:20:31,449 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: x.x.x.x:50010 is added to blk_1073741825_1001 size 34
2017-06-12 15:20:32,555 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.append: Failed to APPEND_FILE /user/hive/warehouse/yt.db/tickers/000000_0 for DFSClient_NONMAPREDUCE_-115883595_174 on x.x.x.x because DFSClient_NONMAPREDUCE_-115883595_174 is already the current lease holder.
2017-06-12 15:20:32,556 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2017-06-12 15:20:32,557 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.append from x.x.x.x:39906 Call#15 Retry#0: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to APPEND_FILE /user/hive/warehouse/yt.db/tickers/000000_0 for DFSClient_NONMAPREDUCE_-115883595_174 on x.x.x.x because DFSClient_NONMAPREDUCE_-115883595_174 is already the current lease holder.

To summarize the facts:
- I am trying to run the same code that worked earlier.
- I have just formatted the NameNode and cleared the DataNode data directory, so there is no cluster-ID mismatch, and all the daemons start again: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.
- The error above is appearing in the NameNode logs.

Please help me out; any help would be appreciated.