1. Install the JDK (it bundles a JRE, but a standalone JRE is not enough: both writing MapReduce programs and compiling Hadoop depend on the JDK). The JDK must be 1.6 or later; this tutorial uses jdk1.6.0_24.
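A quick sanity check (a minimal sketch, assuming java and javac are already on your PATH) that a full JDK, not just a JRE, is installed:
$ java -version    # should report 1.6.0_24 or later
$ javac -version   # javac ships only with the JDK; this fails on a bare JRE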
2. Install Cygwin (download from http://www.cygwin.com; this tutorial uses 1.7.9).
Required packages: Net category: openssh and openssl
Optional packages: Editors category: vim (convenient for editing configuration files)
Devel category: subversion
3. Configure Cygwin's environment variables
Start Cygwin from the desktop icon and run:
$ vim /etc/profile
Append the following lines at the end:
export JAVA_HOME=/cygdrive/d/Java/jdk1.6.0_24
export HADOOP_HOME=/cygdrive/d/hadoop-0.21.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
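To verify the exports without closing and reopening Cygwin, you can reload the profile by hand (a quick check, assuming the paths above match your actual install locations):
$ source /etc/profile
$ echo $JAVA_HOME     # expect /cygdrive/d/Java/jdk1.6.0_24
$ echo $HADOOP_HOME   # expect /cygdrive/d/hadoop-0.21.0
$ which hadoop        # should resolve to $HADOOP_HOME/bin/hadoop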
4. Install sshd in Cygwin
Start Cygwin from the desktop icon and run:
$ ssh-host-config
When asked yes/no questions, answer no. Seeing "Have fun" at the end generally means the sshd service installed successfully.
5. Start the sshd service
Right-click "My Computer" on the desktop, choose "Manage", select "Services and Applications" in the left pane of the window, then right-click the sshd row in the list on the right and choose "Start".
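If you prefer the command line to the Services console, the service can also be started from a prompt (assuming ssh-host-config registered it under the default service name sshd):
$ net start sshd      # via Windows service control
$ cygrunsrv -S sshd   # or via Cygwin's service runner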
6. Configure SSH login
Start Cygwin from the desktop icon and run the following:
# generate the key pair
$ ssh-keygen
(just press Enter at every prompt)
# register the public key as trusted
$ cd ~/.ssh
$ cp id_rsa.pub authorized_keys
$ exit
Start Cygwin again from the desktop icon:
$ ssh localhost
If you are no longer prompted for a password, the setup succeeded.
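The same key setup can be done non-interactively in one pass (an equivalent sketch, assuming there is no existing key at ~/.ssh/id_rsa you want to keep):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost   # should now log in without a password prompt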
7. Download the Hadoop 0.21.0 release archive, hadoop-0.21.0.tar.gz.
8. Install Hadoop
Unpack hadoop-0.21.0.tar.gz, for example to d:\hadoop-0.21.0.
Edit hadoop-env.sh under conf:
the only change needed is to set JAVA_HOME to the JDK install path (note the path must be Linux-style, /cygdrive/d/Java/jdk1.6.0_24, not the Windows-style directory d:\Java\jdk1.6.0_24).
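Concretely, the line in conf/hadoop-env.sh should end up looking like this (using the same install path as step 3; adjust to wherever your JDK actually lives):
# conf/hadoop-env.sh -- Cygwin-style path, not d:\Java\jdk1.6.0_24
export JAVA_HOME=/cygdrive/d/Java/jdk1.6.0_24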
Edit mapred-site.xml under conf and add the following:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
9. Format a new distributed filesystem:
$ bin/hadoop namenode -format
10. Start the Hadoop daemons:
$ bin/start-all.sh
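To confirm the daemons actually came up, jps (shipped with the JDK) is handy; roughly, you should see all five Hadoop processes, though PIDs will differ:
$ jps
# expected: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker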
11. Browse the web interfaces for the NameNode and the JobTracker; by default they are available at:
NameNode: http://localhost:50070
JobTracker: http://localhost:50030
12. Common exceptions
12.1 java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName when starting Hadoop on Windows. When starting Hadoop 0.21.0 on Windows, you will see an error like this:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: org.apache.hadoop.util.PlatformName. Program will exit.
After much searching for the cause and experimenting, a fix was found: modify the following content at line 190 of the ${HADOOP_HOME}/bin/hadoop-config.sh file:
JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} -Xmx32m ${HADOOP_JAVA_PLATFORM_OPTS} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
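The replacement itself is not shown above; one workaround commonly reported for this Cygwin classpath problem (an assumption on my part, not something stated by the original author) is to skip the Java probe entirely and hard-code the platform string:
# hypothetical replacement for line 190 of ${HADOOP_HOME}/bin/hadoop-config.sh;
# the exact platform string may differ on your machine
JAVA_PLATFORM="Linux-i386-32"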
12.2 Hadoop fails to start (1)
After running $ bin/start-all.sh, the daemons fail to start.
Exception 1:
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:214)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:135)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:119)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:481)
Solution:
This happens because conf/mapred-site.xml has not been configured. On 0.21.0 this configuration goes in mapred-site.xml; in earlier versions it went in core-site.xml. On 0.20.2, configuring mapred-site.xml has no effect and only core-site.xml works:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Hadoop fails to start (2)
Exception 2:
starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out
Solution:
Same cause as exception 1: conf/mapred-site.xml has not been configured. Apply the same configuration shown above (on 0.20.2 and earlier, put it in core-site.xml instead).
Hadoop fails to start (3)
Exception 3:
starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: Error: JAVA_HOME is not set.
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Error: JAVA_HOME is not set.
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out
localhost: Error: JAVA_HOME is not set.
Solution:
Configure the JDK environment variables in the $HADOOP_HOME/conf/hadoop-env.sh file:
JAVA_HOME=/home/xixitie/jdk
CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME CLASSPATH
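Before restarting the daemons, a quick check that the setting is picked up (a small sketch; bin/hadoop fails fast with the same "JAVA_HOME is not set" error if the variable is still missing):
$ grep JAVA_HOME $HADOOP_HOME/conf/hadoop-env.sh
$ bin/hadoop version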
Hadoop fails to start (4)
Exception 4: use hdfs://localhost:9001 in the mapred-site.xml configuration, not localhost:9001.
The exception output looks like this:
11/04/20 23:33:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
Solution:
Use hdfs://localhost:9000 in the mapred-site.xml configuration, not localhost:9000:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:9001</value>
</property>
Hadoop fails to start (5)
Exception 5: fixing the "no namenode to stop" problem.
The exception output looks like this:
11/04/20 21:48:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/04/20 21:48:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/04/20 21:48:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/04/20 21:48:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/04/20 21:48:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/04/20 21:48:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/04/20 21:48:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/04/20 21:48:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/04/20 21:48:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
Solution:
The namenode never started in the first place, which is also why there is "no namenode to stop"; data left over from a previous run is likely interfering with it.
You need to run:
$ bin/hadoop namenode -format
and then
$ bin/start-all.sh
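If the namenode still does not come up after reformatting, its log usually names the root cause (assuming the default log location under $HADOOP_HOME/logs):
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log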
Hadoop fails to start (6)
Exception 6: fixing the "no datanode to stop" problem.
Sometimes corrupted on-disk state prevents the datanode from starting.
Reformatting with hadoop namenode -format alone does not help, because the files under /tmp are not actually cleared.
You also need to remove the /tmp/hadoop* files.
Steps (collected into a single script after this list):
1. First remove hadoop:///tmp:
hadoop fs -rmr /tmp
2. Stop Hadoop:
stop-all.sh
3. Delete /tmp/hadoop*:
rm -rf /tmp/hadoop*
4. Reformat Hadoop:
hadoop namenode -format
5. Start Hadoop:
start-all.sh
This resolves the datanode startup failure.
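For convenience, here are the five steps above as one script (a destructive sketch: it wipes HDFS /tmp, local /tmp/hadoop*, and reformats the namenode, so only run it on a throwaway single-node setup):
#!/bin/sh
hadoop fs -rmr /tmp        # 1. remove HDFS /tmp (old-style -rmr; newer releases use "hadoop fs -rm -r")
stop-all.sh                # 2. stop all daemons
rm -rf /tmp/hadoop*        # 3. clear local scratch data
hadoop namenode -format    # 4. reformat the filesystem (answer Y when prompted)
start-all.sh               # 5. bring the cluster back up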
http://blog.csdn.net/dream8062/article/details/7281744