
hadoop cluster install


vvvvvvvvv config vvvvvvvv

set domain aliases on all nodes (marked optional, but effectively required):
/etc/hosts

#let the master access all the slaves without passwords:
#method 1:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@slave
#method 2:
copy $HOME/.ssh/id_rsa.pub to each slave,
then log in to the slave: cat id_rsa.pub >> $HOME/.ssh/authorized_keys
either way, the first login also saves each slave's host key fingerprint to hadoop@master's "known_hosts" file
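
a minimal end-to-end sketch of the key setup, assuming the user is "hadoop" and the slaves are named slave1 and slave2 (placeholders, adjust to your /etc/hosts entries):

#on the master, as the hadoop user:
ssh-keygen -t rsa -P "" -f $HOME/.ssh/id_rsa          #generate a passwordless key pair
for host in slave1 slave2; do
  ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@$host   #appends the key to the slave's authorized_keys
done
ssh hadoop@slave1 hostname                            #verify: first login also records the host key in known_hosts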

#set up the masters & slaves files, *do this on the master machine only* (see the sketch below)
#Note that the machine on which bin/start-dfs.sh is run becomes the primary namenode.

#So you MUST start hadoop on the namenode ONLY! **but you can submit an application from any node**
masters:
 master
slaves:
 slave1
 slave2
 ...
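
for example, the two files can be created on the master like this (a sketch; the slave names are placeholders):

#on the master, in $HADOOP_HOME:
echo master > conf/masters
printf "slave1\nslave2\n" > conf/slaves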

## on all nodes (including the master); a copy-loop sketch follows the three files
#core-site.xml (properties go inside the <configuration> element in conf/core-site.xml)
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

#hdfs-site.xml (optional)
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified at create time.
  ==It defines how many machines a single file should be replicated to before it becomes available.
  If you set this to a value higher than the number of slave nodes (more precisely, the number of datanodes)
  that you have available, you will start seeing a lot of "(Zero targets found, forbidden1.size=1)" type errors in the log files.==
  </description>
</property>

#mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
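
the same XML files must end up on every node; one way is a simple copy loop from the master (a sketch, assuming an identical $HADOOP_HOME path on every node and the slave names used above):

for host in slave1 slave2; do
  scp conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml hadoop@$host:$HADOOP_HOME/conf/
done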


setting /etc/hosts
 because Ubuntu's default maps the hostname c1.domain to 127.0.1.1, start-dfs.sh fails with errors.
 resolution: append c1.domain after the existing "desktop" alias on the real-IP line, keeping the same ip; handle the other nodes the same way.
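
a sketch of the corrected file (the 192.168.x.x addresses and the "desktop" alias are illustrative, substitute your own):

#/etc/hosts
127.0.0.1    localhost
#the default "127.0.1.1  c1.domain" mapping is gone; c1.domain now sits on the real-IP line
192.168.0.1  master  desktop  c1.domain
192.168.0.2  slave1
192.168.0.3  slave2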


format the HDFS namenode on the master:
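
a sketch of the commands, run from the master in $HADOOP_HOME (0.20.x layout):

bin/hadoop namenode -format    #one-time format; destroys any existing HDFS data
bin/start-dfs.sh               #starts the namenode here and a datanode on each host in conf/slaves
bin/start-mapred.sh            #starts the jobtracker here and a tasktracker on each slave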

done

FAQ: java.io.IOException: Incompatible namespaceIDs
resolution:
 a.(delete the datanode's data outright, then re-format the namenode; deprecated)
   1. stop the cluster
   2. delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
   3. reformat the namenode (NOTE: all HDFS data is lost during this process!)
   4. restart the cluster
 b.(copy the namenode's namespaceID into the datanode's VERSION file)
   1. stop the datanode
   2. edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode
   3. restart the datanode
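
a sketch of resolution b as shell commands, assuming the tutorial's data layout under /usr/local/hadoop-datastore (substitute the dfs.name.dir/dfs.data.dir paths from your own hdfs-site.xml):

#on the namenode: read the current namespaceID
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
#on the problematic datanode, with the datanode stopped:
#set namespaceID in its VERSION file to the value printed above, then restart
vi /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
bin/hadoop-daemon.sh start datanode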

summary:
the local hadoop.tmp.dir and the directory listed on the cluster correspond in the leading part of the path, but not in the trailing part.

^^^^ config ^^^^^

 

output:


cluster run (took 54s):
input: 3 files
hadoop@leibnitz-laptop:/cc/hadoop/hadoop-0.20.2$ hadoop jar hadoop-0.20.2-examples.jar wordcount input/ output
11/02/26 02:51:33 INFO input.FileInputFormat: Total input paths to process : 3
11/02/26 02:51:34 INFO mapred.JobClient: Running job: job_201102260237_0002
11/02/26 02:51:35 INFO mapred.JobClient:  map 0% reduce 0%
11/02/26 02:51:57 INFO mapred.JobClient:  map 33% reduce 0%
11/02/26 02:52:05 INFO mapred.JobClient:  map 92% reduce 0%
11/02/26 02:52:08 INFO mapred.JobClient:  map 100% reduce 0%
11/02/26 02:52:18 INFO mapred.JobClient:  map 100% reduce 22%
11/02/26 02:52:25 INFO mapred.JobClient:  map 100% reduce 100%
11/02/26 02:52:27 INFO mapred.JobClient: Job complete: job_201102260237_0002
11/02/26 02:52:27 INFO mapred.JobClient: Counters: 17
11/02/26 02:52:27 INFO mapred.JobClient:   Job Counters
11/02/26 02:52:27 INFO mapred.JobClient:     Launched reduce tasks=1
11/02/26 02:52:27 INFO mapred.JobClient:     Launched map tasks=3
11/02/26 02:52:27 INFO mapred.JobClient:     Data-local map tasks=3
11/02/26 02:52:27 INFO mapred.JobClient:   FileSystemCounters
11/02/26 02:52:27 INFO mapred.JobClient:     FILE_BYTES_READ=2214725
11/02/26 02:52:27 INFO mapred.JobClient:     HDFS_BYTES_READ=3671479
11/02/26 02:52:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=3689100
11/02/26 02:52:27 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=880802
11/02/26 02:52:27 INFO mapred.JobClient:   Map-Reduce Framework
11/02/26 02:52:27 INFO mapred.JobClient:     Reduce input groups=82331
11/02/26 02:52:27 INFO mapred.JobClient:     Combine output records=102317
11/02/26 02:52:27 INFO mapred.JobClient:     Map input records=77931
11/02/26 02:52:27 INFO mapred.JobClient:     Reduce shuffle bytes=1474279
11/02/26 02:52:27 INFO mapred.JobClient:     Reduce output records=82331
11/02/26 02:52:27 INFO mapred.JobClient:     Spilled Records=255947
11/02/26 02:52:27 INFO mapred.JobClient:     Map output bytes=6076039
11/02/26 02:52:27 INFO mapred.JobClient:     Combine input records=629167
11/02/26 02:52:27 INFO mapred.JobClient:     Map output records=629167
11/02/26 02:52:27 INFO mapred.JobClient:     Reduce input records=102317
