1) Confirm that the JDK is installed; if it is not, install it as follows:
[root@primary ~]# cd /home
[root@primary home]# cp jdk-6u31-linux-i586-rpm.bin /usr/local/
[root@primary home]# cd /usr/local/
[root@primary local]# chmod +x jdk-6u31-linux-i586-rpm.bin
[root@primary local]# ./jdk-6u31-linux-i586-rpm.bin
Press Enter to continue.....
[root@primary local]#
[root@primary local]# chmod +x jdk-6u31-linux-i586.rpm
[root@primary local]# rpm -ivh jdk-6u31-linux-i586.rpm
[root@primary ~]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_31
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
[root@primary ~]# java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) Client VM (build 20.6-b01, mixed mode, sharing)
The user I used for the installation is hadoop.
2) Set up passwordless SSH login
First delete all existing authentication files on the machine:
[hadoop@primary ~]$ rm ~/.ssh/*
[hadoop@primary ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
8b:db:12:21:57:1c:25:32:d9:4d:4d:16:98:6b:66:88 hadoop@primary
[hadoop@primary ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
SSH is strict about the permissions on these files:
[hadoop@primary ~]$ chmod 644 ~/.ssh/authorized_keys
[hadoop@primary ~]$
[hadoop@primary ~]$ ssh localhost
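If ssh localhost still prompts for a password, the usual culprit is directory permissions rather than the key itself: with the default sshd settings, the home directory and ~/.ssh must not be group- or world-writable. A minimal check, under that assumption:
[hadoop@primary ~]$ chmod 700 ~/.ssh
[hadoop@primary ~]$ chmod g-w,o-w ~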
3) rsync is installed by default:
[hadoop@primary ~]$ rsync --version
rsync version 2.6.3 protocol version 28
Copyright (C) 1996-2004 by Andrew Tridgell and others
Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
inplace, IPv6, 64-bit system inums, 64-bit internal inums
rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details
4) Install Hadoop
[root@primary home]# chmod 777 hadoop-1.0.4-bin.tar.gz
[hadoop@primary home]$ tar xzvf hadoop-1.0.4-bin.tar.gz
Change into the directory where Hadoop was extracted:
[hadoop@primary home]$ cd hadoop
Check JAVA_HOME:
[hadoop@primary ~]$ env |grep JAVA
JAVA_HOME=/usr/java/jdk1.6.0_31
I created a /hadoop directory under the root filesystem and changed its ownership as follows:
chown -R hadoop:root /hadoop
-bash-3.00$ id
uid=503(hadoop) gid=505(hadoop) groups=0(root),505(hadoop)
-bash-3.00$ tar xzvf hadoop-1.0.4-bin.tar.gz
-bash-3.00$ pwd
/hadoop/hadoop-1.0.4
-bash-3.00$ vi conf/hadoop-env.sh
Set the Java environment in conf/hadoop-env.sh.
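The only change normally needed here is to set JAVA_HOME (the file ships with this line commented out); a minimal sketch matching the JDK installed above:
export JAVA_HOME=/usr/java/jdk1.6.0_31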
Configure the HDFS address and port in conf/core-site.xml:
[hadoop@station1 hadoop-1.0.3]$ cat conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://station1:9000</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/hadoop/web/data/</value>
    </property>
    <property>
        <name>dfs.tem.dir</name>
        <value>/hadoop/web/tem/</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/hadoop/web/name/</value>
    </property>
</configuration>
Configure the replication factor in conf/hdfs-site.xml. (The dfs.data.dir and dfs.name.dir directories are repeated here from core-site.xml; they are HDFS settings and properly belong in this file. Note that dfs.tem.dir is not a standard Hadoop property — the usual temporary-directory setting is hadoop.tmp.dir in core-site.xml.)
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/hadoop/web/data/</value>
    </property>
    <property>
        <name>dfs.tem.dir</name>
        <value>/hadoop/web/tem/</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/hadoop/web/name/</value>
    </property>
</configuration>
Configure conf/mapred-site.xml:
[hadoop@station1 hadoop-1.0.3]$ cat conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
The same file on the station1 host, using the hostname rather than localhost:
[cloud@station1 conf]$ vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>station1:9001</value>
    </property>
</configuration>
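For a single-node (pseudo-distributed) setup, conf/masters and conf/slaves can be left at their defaults; assuming the stock Hadoop 1.x tarball, each contains just one line:
localhost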
5) Run Hadoop: change into the bin directory.
(1) Format the filesystem:
hadoop namenode -format
-bash-3.00$ ./bin/hadoop - format
Unrecognized option: -
Could not create the Java virtual machine.
That failed; the command was wrong. The correct command is:
[hadoop@primary hadoop-1.0.4]$ ./bin/hadoop namenode -format
12/10/25 16:53:38 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = primary/192.168.26.83
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
12/10/25 16:53:39 INFO util.GSet: VM type = 32-bit
12/10/25 16:53:39 INFO util.GSet: 2% max memory = 19.33375 MB
12/10/25 16:53:39 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/10/25 16:53:39 INFO util.GSet: recommended=4194304, actual=4194304
12/10/25 16:53:39 INFO namenode.FSNamesystem: fsOwner=hadoop
12/10/25 16:53:39 INFO namenode.FSNamesystem: supergroup=supergroup
12/10/25 16:53:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/10/25 16:53:39 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/10/25 16:53:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/10/25 16:53:39 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/10/25 16:53:39 INFO common.Storage: Image file of size 112 saved in 0 seconds.
12/10/25 16:53:39 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
12/10/25 16:53:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at primary/192.168.26.83
************************************************************/
(2) Start Hadoop:
start-all.sh
[hadoop@primary hadoop-1.0.4]$ cd bin
[hadoop@primary bin]$ pwd
/hadoop/hadoop-1.0.4/bin
[hadoop@primary bin]$ ./start-all.sh
namenode running as process 7387. Stop it first.
localhost: starting datanode, logging to /hadoop/hadoop-1.0.4/libexec/../logs/hadoop-hadoop-datanode-primary.out
localhost: starting secondarynamenode, logging to /hadoop/hadoop-1.0.4/libexec/../logs/hadoop-hadoop-secondarynamenode-primary.out
starting jobtracker, logging to /hadoop/hadoop-1.0.4/libexec/../logs/hadoop-hadoop-jobtracker-primary.out
localhost: starting tasktracker, logging to /hadoop/hadoop-1.0.4/libexec/../logs/hadoop-hadoop-tasktracker-primary.out
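When the daemons need to be shut down later, the matching stop script in the same bin directory does the reverse (this assumes the standard Hadoop 1.0.4 layout):
[hadoop@primary bin]$ ./stop-all.sh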
(3) Check the processes:
jps
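On a healthy single-node setup, jps should list the five Hadoop daemons plus Jps itself; the output below is illustrative only (the PIDs are hypothetical):
[hadoop@primary bin]$ jps
7387 NameNode
7501 DataNode
7623 SecondaryNameNode
7718 JobTracker
7842 TaskTracker
7950 Jps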
(4) Check the cluster status:
hadoop dfsadmin -report
View it via the web UIs: http://localhost:50070 (NameNode/HDFS) and http://localhost:50030 (JobTracker).
What I did not expect was that, after following the steps found online, the DataNode refused to start. Analyzing the startup log showed that the directory set in dfs.data.dir must have permission 755; otherwise the DataNode fails the permission check and shuts itself down. The messages were:
- WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /data/hadoop/disk, expected: rwxr-xr-x, while actual: rwxrwxrwx
- ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
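Per the message, the fix is simply to tighten the data directory to 755; for example, for the dfs.data.dir configured earlier in this setup (substitute your own path):
chmod 755 /hadoop/web/data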
Source document: <http://gghandsome.blog.51cto.com/398804/822983>
6) Test adding a file to HDFS (the DataNode)
[hadoop@primary bin]$ pwd
/hadoop/hadoop-1.0.4/bin
[hadoop@primary bin]$ hadoop fs -put slaves.sh hdfs://localhost:9000/
Warning: $HADOOP_HOME is deprecated.
[hadoop@primary bin]$
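To confirm the upload succeeded, list the HDFS root; the file just uploaded should appear there:
[hadoop@primary bin]$ hadoop fs -ls /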
7) Test MapReduce
Create a directory:
[hadoop@primary bin]$ hadoop fs -mkdir hdfs://localhost:9000/input/
Put the files in:
[hadoop@primary bin]$ hadoop fs -put *.sh /input/
Count the words in these .sh files (note: run this from the Hadoop installation directory, not from bin):
[hadoop@station1 hadoop-1.0.3]$ bin/hadoop jar hadoop-examples-1.0.3.jar wordcount /input /out
Warning: $HADOOP_HOME is deprecated.
12/10/29 15:00:55 INFO input.FileInputFormat: Total input paths to process : 14
12/10/29 15:00:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/10/29 15:00:55 WARN snappy.LoadSnappy: Snappy native library not loaded
12/10/29 15:00:56 INFO mapred.JobClient: Running job: job_201210291430_0002
12/10/29 15:00:57 INFO mapred.JobClient: map 0% reduce 0%
12/10/29 15:01:10 INFO mapred.JobClient: map 14% reduce 0%
12/10/29 15:01:19 INFO mapred.JobClient: map 28% reduce 0%
12/10/29 15:01:28 INFO mapred.JobClient: map 42% reduce 9%
12/10/29 15:01:34 INFO mapred.JobClient: map 57% reduce 9%
12/10/29 15:01:37 INFO mapred.JobClient: map 57% reduce 14%
12/10/29 15:01:40 INFO mapred.JobClient: map 71% reduce 14%
12/10/29 15:01:43 INFO mapred.JobClient: map 71% reduce 19%
12/10/29 15:01:47 INFO mapred.JobClient: map 85% reduce 19%
12/10/29 15:01:53 INFO mapred.JobClient: map 100% reduce 23%
12/10/29 15:02:00 INFO mapred.JobClient: map 100% reduce 28%
12/10/29 15:02:06 INFO mapred.JobClient: map 100% reduce 100%
12/10/29 15:02:11 INFO mapred.JobClient: Job complete: job_201210291430_0002
12/10/29 15:02:11 INFO mapred.JobClient: Counters: 29
12/10/29 15:02:11 INFO mapred.JobClient: Job Counters
12/10/29 15:02:11 INFO mapred.JobClient: Launched reduce tasks=1
12/10/29 15:02:11 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=87869
12/10/29 15:02:11 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/10/29 15:02:11 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/10/29 15:02:11 INFO mapred.JobClient: Launched map tasks=14
12/10/29 15:02:11 INFO mapred.JobClient: Data-local map tasks=14
12/10/29 15:02:11 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=55240
12/10/29 15:02:11 INFO mapred.JobClient: File Output Format Counters
12/10/29 15:02:11 INFO mapred.JobClient: Bytes Written=6173
12/10/29 15:02:11 INFO mapred.JobClient: FileSystemCounters
12/10/29 15:02:11 INFO mapred.JobClient: FILE_BYTES_READ=28724
12/10/29 15:02:11 INFO mapred.JobClient: HDFS_BYTES_READ=23858
12/10/29 15:02:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=381199
12/10/29 15:02:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=6173
12/10/29 15:02:11 INFO mapred.JobClient: File Input Format Counters
12/10/29 15:02:11 INFO mapred.JobClient: Bytes Read=22341
12/10/29 15:02:11 INFO mapred.JobClient: Map-Reduce Framework
12/10/29 15:02:11 INFO mapred.JobClient: Map output materialized bytes=28802
12/10/29 15:02:11 INFO mapred.JobClient: Map input records=691
12/10/29 15:02:11 INFO mapred.JobClient: Reduce shuffle bytes=28802
12/10/29 15:02:11 INFO mapred.JobClient: Spilled Records=4018
12/10/29 15:02:11 INFO mapred.JobClient: Map output bytes=34161
12/10/29 15:02:11 INFO mapred.JobClient: Total committed heap usage (bytes)=2264064000
12/10/29 15:02:11 INFO mapred.JobClient: CPU time spent (ms)=9340
12/10/29 15:02:11 INFO mapred.JobClient: Combine input records=3137
12/10/29 15:02:11 INFO mapred.JobClient: SPLIT_RAW_BYTES=1517
12/10/29 15:02:11 INFO mapred.JobClient: Reduce input records=2009
12/10/29 15:02:11 INFO mapred.JobClient: Reduce input groups=497
12/10/29 15:02:11 INFO mapred.JobClient: Combine output records=2009
12/10/29 15:02:11 INFO mapred.JobClient: Physical memory (bytes) snapshot=1989103616
12/10/29 15:02:11 INFO mapred.JobClient: Reduce output records=497
12/10/29 15:02:11 INFO mapred.JobClient: Virtual memory (bytes) snapshot=5140561920
12/10/29 15:02:11 INFO mapred.JobClient: Map output records=3137
For comparison, here is a local count over the uploaded files:
[hadoop@station1 bin]$ grep apache *.sh|wc
14 28 915
The Hadoop word-count result is as follows. (Screenshots of the output path, the result, and the running-job page in the web UI are omitted here; the pages are too large to show in full.)
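The result can also be read back from the command line; a minimal sketch, assuming the standard single-reducer output file name produced by the wordcount example:
[hadoop@station1 hadoop-1.0.3]$ bin/hadoop fs -cat /out/part-r-00000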
For the cluster installation, see: http://pftzzg.iteye.com/blog/1910171
For hadoop-0.20, see: http://pftzzg.iteye.com/admin/blogs/1911023
For 1.0.3, see: http://pftzzg.iteye.com/admin/blogs/1910153