MapReduce job is stuck on a multi-node Hadoop 2.7.1 cluster


I have set up Hadoop 2.7.1 on a multi-node cluster (1 namenode, 4 datanodes), but when I run a MapReduce job (the WordCount example from the Hadoop website), it gets stuck at this point:

[~@~ hadoop-2.7.1]$ bin/hadoop jar wordcount.jar wordcount /user/inputdata/ /user/outputdata
15/09/30 17:54:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/30 17:54:57 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/09/30 17:54:58 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/09/30 17:54:59 INFO input.FileInputFormat: Total input paths to process : 1
15/09/30 17:55:00 INFO mapreduce.JobSubmitter: number of splits:1
15/09/30 17:55:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1443606819488_0002
15/09/30 17:55:00 INFO impl.YarnClientImpl: Submitted application application_1443606819488_0002
15/09/30 17:55:00 INFO mapreduce.Job: The url to track the job: http://~~~~:8088/proxy/application_1443606819488_0002/
15/09/30 17:55:00 INFO mapreduce.Job: Running job: job_1443606819488_0002

Do I have to specify any memory settings for YARN?

Note: the datanode hardware is old (each machine has only 1 GB of RAM).

I appreciate any help. Thank you.

The datanodes' memory (1 GB each) is too scarce to set up even one container to run a mapper, reducer, or ApplicationMaster in it.

You can try lowering the container memory allocation values below in yarn-site.xml; with lower values, containers small enough for those nodes can be created (a sketch follows the list).

yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
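For example, a minimal yarn-site.xml sketch for these two properties; the specific values (128 MB minimum, 1024 MB maximum container size) are only assumptions sized for nodes with about 1 GB of RAM:

    <!-- yarn-site.xml: illustrative low-memory settings, assumed for ~1 GB datanodes -->
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>128</value>   <!-- smallest container YARN will hand out (assumed value) -->
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>1024</value>  <!-- largest container a single request may ask for (assumed value) -->
    </property>

Restart the ResourceManager and NodeManagers after changing yarn-site.xml so the new limits take effect.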

Also try reducing the values of the following properties in the job configuration (a sketch follows the list):

mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
mapreduce.map.java.opts
mapreduce.reduce.java.opts
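A sketch of how these could look in mapred-site.xml (or in the per-job configuration); the values are assumptions for 1 GB nodes, with the Java heap (-Xmx) kept below the container size so the container is not killed for exceeding its limit:

    <!-- mapred-site.xml: illustrative values, assumed for ~1 GB datanodes -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>256</value>
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>256</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx200m</value>   <!-- heap kept below the 256 MB container size -->
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx200m</value>
    </property>

These could also be passed per job as -D options (e.g. -Dmapreduce.map.memory.mb=256), but note the WARN in your log: the job does not implement the Tool interface, so generic -D options would not be parsed until it is run through ToolRunner.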
