
MapReduce error when running a Hive query

Posted: 2014-12-13
hive> select count(*) from testkkk;                                                                            
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1418293347253_0077, Tracking URL = http://dapserver3:8088/proxy/application_1418293347253_0077/
Kill Command = /opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop/bin/hadoop job  -kill job_1418293347253_0077
Hadoop job information for Stage-1: number of mappers: 5; number of reducers: 1
2014-12-13 11:35:27,202 Stage-1 map = 0%,  reduce = 0%
2014-12-13 11:35:42,672 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_1418293347253_0077 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1418293347253_0077_m_000001 (and more) from job job_1418293347253_0077
Examining task ID: task_1418293347253_0077_m_000003 (and more) from job job_1418293347253_0077

Task with the most failures(4):
-----
Task ID:
  task_1418293347253_0077_m_000001

URL:
  http://dapserver3:8088/taskdetails.jsp?jobid=job_1418293347253_0077&tipid=task_1418293347253_0077_m_000001
-----
Diagnostic Messages for this Task:
Application application_1418293347253_0077 initialization failed (exitCode=1) with output: main : command provided 0
main : user is nobody
main : requested yarn user is hdfs
EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)
        at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1063)
        at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:157)
        at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:721)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:717)
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
        at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:717)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createDir(ContainerLocalizer.java:383)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.initDirs(ContainerLocalizer.java:369)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:129)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)

Below is the MR (YARN application) log:
Application application_1418293347253_0054 failed 2 times due to AM Container for appattempt_1418293347253_0054_000002 exited with exitCode: -1000 due to: Application application_1418293347253_0054 initialization failed (exitCode=1) with output: main : command provided 0
main : user is nobody
main : requested yarn user is hdfs
EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)
        at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1063)
        at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:157)
        at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:721)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:717)
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
        at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:717)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createDir(ContainerLocalizer.java:383)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.initDirs(ContainerLocalizer.java:369)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:129)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)
.Failing this attempt.. Failing the application.
Posted: 2014-12-13
This looks like MR trying to create a directory and being denied permission to do so. I don't know which directory MR needs to create; could someone more experienced point me in the right direction?
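Judging from the stack trace, ContainerLocalizer.initDirs is creating per-application directories under the NodeManager's local directories (yarn.nodemanager.local-dirs), and the chmod fails with EPERM: the container is being initialized as user nobody while the requested YARN user is hdfs. That combination usually points at usercache directories left behind with the wrong owner, or at the LinuxContainerExecutor's yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user setting (default: nobody). Below is a hedged diagnostic sketch only, not a confirmed fix; the path /yarn/nm is just the CDH default for yarn.nodemanager.local-dirs and is an assumption, so substitute the value actually configured on your cluster.

# Hedged sketch, assuming yarn.nodemanager.local-dirs = /yarn/nm (CDH default); verify against your config first.
# Run on the NodeManager host that reported the failing task.
ls -ld /yarn/nm /yarn/nm/usercache /yarn/nm/usercache/*
# If the usercache entries are owned by 'nobody' rather than the submitting user,
# stop the NodeManager role (via Cloudera Manager on this parcel-based install),
# remove the stale caches so they are recreated with the correct owner, then restart:
rm -rf /yarn/nm/usercache/*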