I. Preparations before deploying Hadoop
1. Hadoop depends on Java and SSH
Java 1.5.x or later must be installed.
ssh must be installed and sshd must be kept running, so that the Hadoop scripts can manage the remote Hadoop daemons.
2. Create a common Hadoop account
All nodes should use the same user name; it can be added with the following commands:
useradd hadoop
passwd hadoop
vi /etc/sudoers
and add the line
hadoop ALL=(ALL) ALL
3. Configure hostnames in /etc/hosts
tail -n 3 /etc/hosts
192.168.1.114 namenode
192.168.1.115 datanode1
192.168.1.116 datanode2
192.168.1.117 datanode3
4. The settings above must be identical on every node (namenode and datanodes).
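Rather than editing /etc/hosts by hand on every machine, the file can be pushed out from the namenode once it is correct there. This is only a convenience sketch; it assumes root can still log in over SSH with a password, since key-based login is set up in the next part:
for h in datanode1 datanode2 datanode3; do
    scp /etc/hosts root@$h:/etc/hosts
done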
II. SSH configuration
1. Generate the private key id_rsa and the public key id_rsa.pub
[hadoop@hadoop1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
d6:63:76:43:e2:5b:8e:85:ab:67:a2:7c:a6:8f:23:f9 hadoop@hadoop1.test.com
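The interactive prompts can also be skipped by passing the file name and an empty passphrase directly; this is just an equivalent shortcut to the dialogue above:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa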
2. Append the public key to authorized_keys
[hadoop@hadoop ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop ~]$ ls .ssh/
authorized_keys id_rsa id_rsa.pub
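sshd refuses keys when the permissions on these files are too open, so if a password prompt still appears later it is worth tightening them; typical values are:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys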
3. Copy the public key file to the datanode servers
[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1
hadoop@datanode1's password:
Now try logging into the machine, with "ssh 'hadoop@datanode1'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode2
hadoop@datanode2's password:
Now try logging into the machine, with "ssh 'hadoop@datanode2'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode3
hadoop@datanode3's password:
Now try logging into the machine, with "ssh 'hadoop@datanode3'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost
hadoop@localhost's password:
Now try logging into the machine, with "ssh 'hadoop@localhost'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
4. Verify
[hadoop@localhost hadoop-1.1.2]$ ssh datanode1
Last login: Sun Jun 9 00:17:09 2013 from namenode
[hadoop@datanode1 ~]$ exit
logout
Connection to datanode1 closed.
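Instead of logging in to each machine one at a time, a small loop can confirm that every node (including localhost) now accepts key-based login; the hostnames assumed here are the ones from /etc/hosts above:
for h in localhost datanode1 datanode2 datanode3; do
    ssh hadoop@$h hostname
done
Each iteration should print the remote hostname without asking for a password.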
III. Java environment configuration
Download the JDK package for Linux and extract it to the target directory:
# tar -zxvf jdk-7u7-linux-i586.tar.gz
Configure the Java environment variables:
# vi /etc/profile
Append the following at the end of the file:
export JAVA_HOME=/usr/java/jdk1.7.0_07
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
Reload the modified profile:
# source /etc/profile
# java -version
If the version information is printed, the Java environment has been installed successfully.
Copy the environment profile and the JDK/Hadoop packages to the datanodes:
# scp /etc/profile root@datanode1:/etc/
# scp /etc/profile root@datanode2:/etc/
# scp /etc/profile root@datanode3:/etc/
[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode1:/home/hadoop/
[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode2:/home/hadoop/
[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode3:/home/hadoop/
Following the same steps, extract the JDK tarball on each datanode to /usr/java/jdk1.7.0_07, then reload the profile and verify:
# source /etc/profile
# java -version
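To check the Java setup on every datanode from the namenode in one go, a loop such as the following can be used (it assumes the profile and JDK were copied and unpacked as described above):
for h in datanode1 datanode2 datanode3; do
    ssh hadoop@$h 'source /etc/profile; java -version'
done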
IV. Hadoop configuration
1. Configuration directory
[hadoop@hadoop ~]$ pwd
/home/hadoop
2. Configure hadoop-env.sh to point at the Java installation
vi hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_07
3. Configure core-site.xml // points the file system at the namenode
[hadoop@hadoop1 ~]$ cat hadoop/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:9000</value>
</property>
</configuration>
4. Configure mapred-site.xml // points at the master node where the jobtracker runs
[hadoop@hadoop1 ~]$ cat hadoop/conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>namenode:9001</value>
</property>
</configuration>
5. Configure hdfs-site.xml // sets the HDFS replication factor
[hadoop@hadoop1 ~]$ vi hadoop/conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
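The replication factor of 3 matches the three datanodes in this cluster; setting it higher than the number of datanodes would leave blocks permanently under-replicated. Should a different factor be needed later for an existing file, it can be changed on the fly, for example (the path is only a placeholder):
bin/hadoop fs -setrep -w 2 /user/hadoop/somefile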
6. Configure the masters and slaves files
[hadoop@hadoop ~]$ vi hadoop/conf/masters
namenode
[hadoop@hadoop ~]$ vi hadoop/conf/slaves
datanode1
datanode2
datanode3
7. Copy the Hadoop directory to all nodes (datanodes)
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode1:
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode2:
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode3:
8. Format HDFS
$ bin/hadoop namenode -format
9. Start the Hadoop daemons
[hadoop@hadoop hadoop]$ bin/start-all.sh
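A quick sanity check after start-all.sh is jps, which ships with the JDK. On a Hadoop 1.x layout like this one, the namenode would normally list NameNode, SecondaryNameNode and JobTracker, and each datanode would list DataNode and TaskTracker; the exact set depends on the local configuration:
[hadoop@hadoop hadoop]$ jps
[hadoop@datanode1 ~]$ jps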
10. Verify by computing the value of Pi
[hadoop@hadoop hadoop]$ bin/hadoop jar hadoop-examples-1.1.2.jar pi 4 2
If the error "could only be replicated to 0 nodes, instead of 1" appears, it can have several causes; the following four common fixes are listed for reference:
1. Make sure the firewall is turned off on the master (namenode) and the slaves (datanodes). This was the cause in my case; it can be turned off with $ service iptables stop.
2. Check the DFS disk space usage (see the sketch after this list).
3. Hadoop's default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and on some Linux systems the file system type of /tmp is not supported by Hadoop (see the sketch after this list).
4. Start the namenode and then the datanode by hand:
[hadoop@hadoop hadoop]$ bin/hadoop-daemon.sh start namenode
[hadoop@hadoop hadoop]$ bin/hadoop-daemon.sh start datanode
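For point 2, the overall DFS capacity and the state of each datanode can be inspected from the namenode; for point 3, hadoop.tmp.dir can be moved off /tmp by adding a property to core-site.xml. Both are sketches, and /home/hadoop/tmp is only an example directory owned by the hadoop user:
bin/hadoop dfsadmin -report
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>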
Delete the leftover temporary file:
[hadoop@hadoop hadoop]$ bin/hadoop fs -rmr hdfs://localhost:8020/user/Vito/PiEstimator_TMP_3_141592654
Leave safe mode:
[hadoop@hadoop hadoop]$ bin/hadoop dfsadmin -safemode leave
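Whether the namenode has actually left safe mode can be confirmed before retrying the job:
[hadoop@hadoop hadoop]$ bin/hadoop dfsadmin -safemode get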
The following output shows that the environment has been set up successfully:
[hadoop@hadoop hadoop]$ bin/hadoop fs -rmr hdfs://namenode:9000/user/hadoop/PiEstimator_TMP_3_141592654
Deleted hdfs://namenode:9000/user/hadoop/PiEstimator_TMP_3_141592654
[hadoop@hadoop hadoop]$ bin/hadoop jar hadoop-examples-1.1.2.jar pi 4 2
Number of Maps = 4
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
13/06/09 07:02:57 INFO mapred.FileInputFormat: Total input paths to process : 4
13/06/09 07:02:57 INFO mapred.JobClient: Running job: job_201306090651_0002
13/06/09 07:02:58 INFO mapred.JobClient: map 0% reduce 0%
13/06/09 07:03:04 INFO mapred.JobClient: map 50% reduce 0%
13/06/09 07:03:12 INFO mapred.JobClient: map 50% reduce 16%
13/06/09 07:03:52 INFO mapred.JobClient: map 75% reduce 16%
13/06/09 07:03:57 INFO mapred.JobClient: map 100% reduce 16%
13/06/09 07:03:59 INFO mapred.JobClient: map 100% reduce 100%
13/06/09 07:04:00 INFO mapred.JobClient: Job complete: job_201306090651_0002
13/06/09 07:04:00 INFO mapred.JobClient: Counters: 31
13/06/09 07:04:00 INFO mapred.JobClient: Job Counters
13/06/09 07:04:00 INFO mapred.JobClient: Launched reduce tasks=1
13/06/09 07:04:00 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=115754
13/06/09 07:04:00 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/06/09 07:04:00 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/06/09 07:04:00 INFO mapred.JobClient: Rack-local map tasks=2
13/06/09 07:04:00 INFO mapred.JobClient: Launched map tasks=4
13/06/09 07:04:00 INFO mapred.JobClient: Data-local map tasks=2
13/06/09 07:04:00 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=55300
13/06/09 07:04:00 INFO mapred.JobClient: File Input Format Counters
13/06/09 07:04:00 INFO mapred.JobClient: Bytes Read=472
13/06/09 07:04:00 INFO mapred.JobClient: File Output Format Counters
13/06/09 07:04:00 INFO mapred.JobClient: Bytes Written=97
13/06/09 07:04:00 INFO mapred.JobClient: FileSystemCounters
13/06/09 07:04:00 INFO mapred.JobClient: FILE_BYTES_READ=94
13/06/09 07:04:00 INFO mapred.JobClient: HDFS_BYTES_READ=960
13/06/09 07:04:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=259773
13/06/09 07:04:00 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
13/06/09 07:04:00 INFO mapred.JobClient: Map-Reduce Framework
13/06/09 07:04:00 INFO mapred.JobClient: Map output materialized bytes=112
13/06/09 07:04:00 INFO mapred.JobClient: Map input records=4
13/06/09 07:04:00 INFO mapred.JobClient: Reduce shuffle bytes=112
13/06/09 07:04:00 INFO mapred.JobClient: Spilled Records=16
13/06/09 07:04:00 INFO mapred.JobClient: Map output bytes=72
13/06/09 07:04:00 INFO mapred.JobClient: Total committed heap usage (bytes)=820191232
13/06/09 07:04:00 INFO mapred.JobClient: CPU time spent (ms)=3470
13/06/09 07:04:00 INFO mapred.JobClient: Map input bytes=96
13/06/09 07:04:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=488
13/06/09 07:04:00 INFO mapred.JobClient: Combine input records=0
13/06/09 07:04:00 INFO mapred.JobClient: Reduce input records=8
13/06/09 07:04:00 INFO mapred.JobClient: Reduce input groups=8
13/06/09 07:04:00 INFO mapred.JobClient: Combine output records=0
13/06/09 07:04:00 INFO mapred.JobClient: Physical memory (bytes) snapshot=592089088
13/06/09 07:04:00 INFO mapred.JobClient: Reduce output records=0
13/06/09 07:04:00 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1736314880
13/06/09 07:04:00 INFO mapred.JobClient: Map output records=8
Job Finished in 63.642 seconds
Estimated value of Pi is 3.50000000000000000000