Upload data to HDFS running in Amazon EC2 from a local non-Hadoop machine


I set up a Hadoop cluster of 2 nodes on Amazon EC2, and it works well. I can upload data to HDFS from the master node, or from other instances in the same Amazon zone as the Hadoop cluster, using the Hadoop API (Java program attached below).

However, when I try the same upload from my local non-Hadoop machine, it throws the exceptions below.

I logged in to the Hadoop NameNode and checked from the command line: the folder "testdir" is created, but the size of the uploaded file "myfile" is 0.

================================================================

These are the exceptions:

Apr 18, 2013 10:40:47 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream createBlockOutputStream
INFO: Exception in createBlockOutputStream 10.196.153.215:50010 java.net.ConnectException: Connection timed out
Apr 18, 2013 10:40:47 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream nextBlockOutputStream
INFO: Abandoning block blk_560654195674249927_1002
Apr 18, 2013 10:40:47 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream nextBlockOutputStream
INFO: Excluding datanode 10.196.153.215:50010
Apr 18, 2013 10:41:09 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream createBlockOutputStream
INFO: Exception in createBlockOutputStream 10.195.171.154:50010 java.net.ConnectException: Connection timed out
Apr 18, 2013 10:41:09 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream nextBlockOutputStream
INFO: Abandoning block blk_1747509888999401559_1002
Apr 18, 2013 10:41:10 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream nextBlockOutputStream
INFO: Excluding datanode 10.195.171.154:50010
Apr 18, 2013 10:41:10 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer run
WARNING: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/ubuntu/testdir/myfile could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

Apr 18, 2013 10:41:10 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream processDatanodeError
WARNING: Error Recovery for block blk_1747509888999401559_1002 bad datanode[0] nodes == null
Apr 18, 2013 10:41:10 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream processDatanodeError
WARNING: Could not get block locations. Source file "/user/ubuntu/testdir/myfile" - Aborting...
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/ubuntu/testdir/myfile could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

================================================================

Here is the Java code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

Path output = new Path("testdir");
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://ec2-23-22-12-173.compute-1.amazonaws.com:9000");
conf.set("hadoop.job.user", "ubuntu");

// Create the target directory, then copy the local file into it.
FileSystem.mkdirs(FileSystem.get(conf), output, FsPermission.valueOf("drwxr-xr-x"));
FileSystem fs = FileSystem.get(conf);
fs.copyFromLocalFile(new Path("./myfile"), output);
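
For reference, this is a minimal verification sketch (assuming the same NameNode URI as above) that lists "testdir" from the client and prints each file's length, to confirm from the client side whether "myfile" really ended up with 0 bytes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListTestDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same NameNode URI as in the upload code above.
        conf.set("fs.default.name", "hdfs://ec2-23-22-12-173.compute-1.amazonaws.com:9000");
        FileSystem fs = FileSystem.get(conf);
        // Print each entry under "testdir" with its length.
        for (FileStatus status : fs.listStatus(new Path("testdir"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}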

================================================================

P.S. I have already opened ports 9000 and 50010 in the security group and turned off the Linux firewall.
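
The timeouts in the log come from the client trying to reach the DataNodes directly at 10.x addresses on port 50010. This is a minimal connectivity sketch from the local machine (assuming the private IP 10.196.153.215 taken from the log above; a private 10.x address is normally not routable from outside EC2), just to see whether that port is reachable at all:

import java.net.InetSocketAddress;
import java.net.Socket;

public class CheckDataNodePort {
    public static void main(String[] args) throws Exception {
        // Hypothetical check: the IP below is the DataNode address from the log output.
        String host = "10.196.153.215";
        int port = 50010;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 10000); // 10 s timeout
            System.out.println("DataNode port is reachable from this machine");
        } catch (Exception e) {
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e);
        }
    }
}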

Does anyone have any thoughts?

Thanks.

There can be several reasons behind this error:

1- The DataNodes are not up and running. Make sure that is not the case; if you are not sure, dig into the DN logs on each server (see the sketch after this list).

2- The free space on the machines where the DNs are running is less than the space reserved through the "dfs.datanode.du.reserved" property.

3- There is no space left on the DN machines.

4- The path specified by "dfs.data.dir" in the hdfs-site.xml file has no space left (perhaps the disk serving dfs.data.dir has run out of space).

5- The DNs are not able to send heartbeats/block reports to the NN. Make sure there is no network-related issue.
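
For point 1, a minimal sketch (assuming the NameNode URI from the question) that asks the NameNode which DataNodes are currently registered, using DistributedFileSystem.getDataNodeStats():

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListLiveDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode URI, same as in the question.
        conf.set("fs.default.name", "hdfs://ec2-23-22-12-173.compute-1.amazonaws.com:9000");
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        // Print each DataNode known to the NameNode and its remaining capacity.
        for (DatanodeInfo dn : dfs.getDataNodeStats()) {
            System.out.println(dn.getName() + "  remaining=" + dn.getRemaining() + " bytes");
        }
    }
}

If this report comes back empty, the DNs never registered with the NN; if it lists the nodes but the remaining space is near zero, points 2-4 are the likelier cause.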

HTH

