Error when starting Hadoop on Ubuntu

 
Error output:

I deleted the tmp directory, reformatted the NameNode, and restarted, but the problem is still not resolved!
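For reference, the delete-tmp / reformat / restart cycle described above corresponds roughly to the commands below. This is only a sketch and assumes the defaults visible in the log: hadoop.tmp.dir left at /tmp/hadoop-${USER} and the standard scripts under $HADOOP_HOME/bin of a 0.20.x release.

# Stop all daemons before touching the storage directories
$HADOOP_HOME/bin/stop-all.sh

# Remove the default storage location (only safe if all HDFS data may be lost)
rm -rf /tmp/hadoop-${USER}

# Re-create the HDFS namespace
$HADOOP_HOME/bin/hadoop namenode -format

# Bring everything back up
$HADOOP_HOME/bin/start-all.sh

Note that as long as hadoop.tmp.dir stays under /tmp, the OS may clear it on reboot and the NameNode/DataNode state is lost again; pointing hadoop.tmp.dir (in core-site.xml) at a persistent directory avoids having to repeat this cycle.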

2011-08-03 23:15:28,053 WARN org.apache.hadoop.conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
2011-08-03 23:15:28,134 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/61.140.3.66
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
2011-08-03 23:15:28,329 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2011-08-03 23:15:28,356 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2011-08-03 23:15:28,386 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2011-08-03 23:15:28,386 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2011-08-03 23:15:28,507 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2011-08-03 23:15:28,511 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2011-08-03 23:15:28,522 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2011-08-03 23:15:28,523 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2011-08-03 23:15:28,593 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2011-08-03 23:15:28,594 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2011-08-03 23:15:28,594 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2011-08-03 23:15:28,594 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2011-08-03 23:15:28,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2011-08-03 23:15:28,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-08-03 23:15:28,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-08-03 23:15:28,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2011-08-03 23:15:28,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2011-08-03 23:15:29,072 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2011-08-03 23:15:29,101 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2011-08-03 23:15:29,117 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2011-08-03 23:15:29,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2011-08-03 23:15:29,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2011-08-03 23:15:29,124 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2011-08-03 23:15:29,129 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2011-08-03 23:15:29,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2011-08-03 23:15:29,169 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2011-08-03 23:15:29,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 472 msecs
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 28 msec
2011-08-03 23:15:29,199 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2011-08-03 23:15:29,200 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2011-08-03 23:15:29,200 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2011-08-03 23:15:29,209 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2011-08-03 23:15:29,209 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2011-08-03 23:15:29,209 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2011-08-03 23:15:29,210 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2011-08-03 23:15:29,210 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2011-08-03 23:15:29,217 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2011-08-03 23:15:29,247 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2011-08-03 23:15:29,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2011-08-03 23:15:29,251 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2011-08-03 23:15:29,252 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
2011-08-03 23:15:34,377 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-08-03 23:15:34,479 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2011-08-03 23:15:34,493 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2011-08-03 23:15:34,495 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2011-08-03 23:15:34,496 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2011-08-03 23:15:34,496 INFO org.mortbay.log: jetty-6.1.26
2011-08-03 23:15:34,847 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2011-08-03 23:15:34,847 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2011-08-03 23:15:34,848 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-08-03 23:15:34,848 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2011-08-03 23:15:34,849 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2011-08-03 23:15:34,850 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2011-08-03 23:15:34,850 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2011-08-03 23:15:34,850 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2011-08-03 23:15:34,851 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2011-08-03 23:15:34,851 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2011-08-03 23:15:34,851 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2011-08-03 23:15:34,851 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2011-08-03 23:15:34,851 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2011-08-03 23:15:34,852 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2011-08-03 23:15:38,973 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/tmp/hadoop-hadoop/mapred/system/jobtracker.info, DFSClient_-295207535) from 127.0.0.1:45940: error: java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
2011-08-03 23:15:49,012 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000, call addBlock(/tmp/hadoop-hadoop/mapred/system/jobtracker.info, DFSClient_-295207535) from 127.0.0.1:45940: error: java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
2011-08-03 23:15:52,865 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-59185525-59.37.71.86-50010-1312384552855
2011-08-03 23:15:52,873 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2011-08-03 23:15:52,905 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 3 msecs
2011-08-03 23:15:59,032 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-hadoop/mapred/system/jobtracker.info. blk_-7994876160759765790_1003
2011-08-03 23:15:59,189 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-7994876160759765790_1003 size 4
2011-08-03 23:15:59,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hadoop/mapred/system/jobtracker.info is closed by DFSClient_-295207535
2011-08-03 23:20:37,332 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2011-08-03 23:20:37,332 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 17 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 12 SyncTimes(ms): 5
2011-08-03 23:20:38,057 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
2011-08-03 23:20:38,057 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 5
2011-08-03 23:29:25,830 WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 127.0.0.1:39934 got version 3 expected version 4
2011-08-03 23:31:28,248 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/61.140.3.66
************************************************************/
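Unrelated to the failure itself, the DEPRECATED warning at the top of this log means the settings are still being read from conf/hadoop-site.xml; for this release they are expected to live in core-site.xml, hdfs-site.xml and mapred-site.xml instead. A sketch, assuming the standard conf/ directory of the tarball release:

# After copying its properties into core-site.xml / hdfs-site.xml / mapred-site.xml,
# take the old file off the classpath so the warning goes away
mv $HADOOP_HOME/conf/hadoop-site.xml $HADOOP_HOME/conf/hadoop-site.xml.bak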









------------------------------------

The same error has also come up on Windows, but there it was fixed simply by deleting the tmp folder!
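In the logs above and below, "could only be replicated to 0 nodes, instead of 1" generally means that no live DataNode was registered with the NameNode at the moment the write was attempted (in the first log a DataNode registers a few seconds later and the write of jobtracker.info then succeeds). A quick way to check the cluster state, sketched here for a 0.20.x single-node setup with $HADOOP_HOME/bin on the PATH:

# A healthy single-node setup shows NameNode, DataNode, SecondaryNameNode,
# JobTracker and TaskTracker
jps

# Ask the NameNode how many DataNodes it can see and their reported capacity
hadoop dfsadmin -report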

2011-08-04 10:29:35,703 WARN org.apache.hadoop.conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
2011-08-04 10:29:35,765 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = meimer/130.51.38.218
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-04 10:29:35,921 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2011-08-04 10:29:36,000 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to meimer-computer/130.51.38.100:9001 : Cannot assign requested address: bind
at org.apache.hadoop.ipc.Server.bind(Server.java:190)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1595)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
Caused by: java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:188)
... 8 more

2011-08-04 10:29:36,000 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at meimer/130.51.38.218
************************************************************/
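The FATAL BindException above ("Problem binding to meimer-computer/130.51.38.100:9001 : Cannot assign requested address") usually means mapred.job.tracker points at a host or IP that is not bound on the machine running the JobTracker. Below is a sketch of a single-node mapred-site.xml that matches the hdfs://localhost:9000 NameNode address visible in these logs; the file path and values are assumptions, not taken from the original configuration.

# Write a minimal single-node mapred-site.xml (adjust the conf/ path to your layout)
cat > $HADOOP_HOME/conf/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <!-- must resolve to an address that is local to the JobTracker machine -->
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

Separately, the NameNode log reports version 0.20.203.0 while this JobTracker log reports 0.20.2, and the NameNode also logged "Incorrect header or version mismatch ... got version 3 expected version 4". That combination suggests daemons from two different Hadoop releases are talking to each other; all daemons and clients should come from the same release.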
2011-08-04 10:33:01,781 WARN org.apache.hadoop.conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
2011-08-04 10:33:01,843 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = meimer/130.51.38.218
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-04 10:33:02,000 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2011-08-04 10:33:02,109 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=9001
2011-08-04 10:33:02,203 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-08-04 10:33:03,109 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
2011-08-04 10:33:03,109 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2011-08-04 10:33:03,109 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2011-08-04 10:33:03,109 INFO org.mortbay.log: jetty-6.1.14
2011-08-04 10:33:03,562 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2011-08-04 10:33:03,562 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2011-08-04 10:33:03,562 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2011-08-04 10:33:03,562 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2011-08-04 10:33:03,968 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2011-08-04 10:33:04,375 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
2011-08-04 10:33:04,546 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

2011-08-04 10:33:04,546 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2011-08-04 10:33:04,546 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-Administrator/mapred/system/jobtracker.info" - Aborting...
2011-08-04 10:33:04,546 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://localhost:9000/tmp/hadoop-Administrator/mapred/system/jobtracker.info failed!
2011-08-04 10:33:04,546 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2011-08-04 10:33:04,593 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
2011-08-04 10:33:14,593 WARN org.apache.hadoop.mapred.JobTracker: Retrying...
2011-08-04 10:33:14,703 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

2011-08-04 10:33:14,703 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2011-08-04 10:33:14,703 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-Administrator/mapred/system/jobtracker.info" - Aborting...
2011-08-04 10:33:14,703 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://localhost:9000/tmp/hadoop-Administrator/mapred/system/jobtracker.info failed!
2011-08-04 10:33:14,703 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2011-08-04 10:33:14,734 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
2011-08-04 10:33:24,734 WARN org.apache.hadoop.mapred.JobTracker: Retrying...
2011-08-04 10:33:24,812 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

2011-08-04 10:33:24,812 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2011-08-04 10:33:24,812 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-Administrator/mapred/system/jobtracker.info" - Aborting...
2011-08-04 10:33:24,812 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://localhost:9000/tmp/hadoop-Administrator/mapred/system/jobtracker.info failed!
2011-08-04 10:33:24,812 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2011-08-04 10:33:24,843 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
2011-08-04 10:33:34,843 WARN org.apache.hadoop.mapred.JobTracker: Retrying...
2011-08-04 10:33:34,906 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

2011-08-04 10:33:34,906 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2011-08-04 10:33:34,906 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-Administrator/mapred/system/jobtracker.info" - Aborting...
2011-08-04 10:33:34,906 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://localhost:9000/tmp/hadoop-Administrator/mapred/system/jobtracker.info failed!
2011-08-04 10:33:34,906 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2011-08-04 10:33:34,921 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-Administrator/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy4.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)