
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, : hadoop streaming failed with error code 1 #225

Open
veereshthotigar opened this issue May 6, 2015 · 0 comments

This is my program code, and below is the error I get when running R inside Hadoop (an integrated environment):

```r
library(rmr2)  # rmr2 provides to.dfs(), mapreduce(), and keyval()

small.ints = to.dfs(1:10)
```

```
15/05/05 22:01:05 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/05/05 22:01:05 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
15/05/05 22:01:05 INFO compress.CodecPool: Got brand-new compressor
```

```r
mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r) {
      x <- runif(v[[r]])
      keyval(r, c(max), min(x))  # as posted; see the note below
    })
  })
```
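A note on the map function as posted: rmr2's `keyval()` takes exactly two arguments, a key and a value, and `c(max)` wraps the function `max` itself rather than a number, so the R worker should fail as soon as it evaluates this call. A minimal sketch of what was presumably intended (assuming the goal was to emit the max and min of each `runif()` sample):

```r
map = function(k, v) {
  lapply(seq_along(v), function(r) {
    x <- runif(v[[r]])
    keyval(r, c(max(x), min(x)))  # keyval(key, value): both extrema go in the value
  })
}
```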

```
packageJobJar: [/app/hadoop/tmp/hadoop-unjar6376116358821475638/] [] /tmp/streamjob2302049389214578652.jar tmpDir=null
15/05/05 22:01:22 INFO mapred.FileInputFormat: Total input paths to process : 1
15/05/05 22:01:22 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
15/05/05 22:01:22 INFO streaming.StreamJob: Running job: job_201505051844_0004
15/05/05 22:01:22 INFO streaming.StreamJob: To kill this job, run:
15/05/05 22:01:22 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201505051844_0004
15/05/05 22:01:22 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201505051844_0004
15/05/05 22:01:23 INFO streaming.StreamJob: map 0% reduce 0%
15/05/05 22:03:30 INFO streaming.StreamJob: map 100% reduce 100%
15/05/05 22:03:30 INFO streaming.StreamJob: To kill this job, run:
15/05/05 22:03:30 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201505051844_0004
15/05/05 22:03:30 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201505051844_0004
15/05/05 22:03:30 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201505051844_0004_m_000000
15/05/05 22:03:30 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce,  :
  hadoop streaming failed with error code 1
```
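Not part of the original report, but the generic "hadoop streaming failed with error code 1" usually hides an R-level error inside the map task. A common way to surface it is rmr2's local backend, which runs the same map function in the current R session so the real error prints to the console, a minimal sketch:

```r
library(rmr2)

rmr.options(backend = "local")  # execute map/reduce in-process, without Hadoop
small.ints = to.dfs(1:10)
out = mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r) {
      x <- runif(v[[r]])
      keyval(r, c(max), min(x))  # the posted call; fails here with a readable R error
    })
  })
from.dfs(out)

rmr.options(backend = "hadoop")  # switch back once the map function runs cleanly
```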

And the following is the log file:

```
User: hduser
Job Name: streamjob2302049389214578652.jar
Job File: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/.staging/job_201505051844_0004/job.xml
Submit Host: ubuntu
Submit Host Address: 127.0.1.1
Job-ACLs: All users are allowed
Job Setup: Successful
Status: Failed
Failure Info: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201505051844_0004_m_000000
Started at: Tue May 05 22:01:22 PDT 2015
Failed at: Tue May 05 22:03:30 PDT 2015
```

The log ends with the full job.xml configuration dump, abridged here to the entries relevant to the streaming job:

```
fs.default.name            = hdfs://localhost:54310
mapred.job.tracker         = localhost:54311
mapred.map.tasks           = 2
mapred.reduce.tasks        = 0
mapreduce.map.java.opts    = -Xmx400M
stream.map.input           = typedbytes
stream.map.output          = typedbytes
stream.map.streamprocessor = Rscript --vanilla ./rmr-streaming-map1d1323add177
mapred.input.dir           = hdfs://localhost:54310/tmp/file1d132a548e17
mapred.output.dir          = hdfs://localhost:54310/tmp/file1d133e0f03dc
mapred.cache.files         = rmr-local-env1d131320826f, rmr-global-env1d1316b761ed, rmr-streaming-map1d1323add177 (from the job's HDFS staging directory)
```
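Also not in the original report, but worth noting given the `stream.map.streamprocessor` entry above: each map task launches `Rscript --vanilla`, so another frequent cause of this error is that `Rscript` or the rmr2 package is not visible to the user the TaskTracker runs tasks as. A quick check, assuming it is run as that user on the worker node:

```r
Sys.which("Rscript")  # must resolve to a real path; the task runs "Rscript --vanilla ..."
library(rmr2)         # must load without error in a fresh --vanilla session
```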
