vvvvvvvvv config vvvvvvvv
set the domain aliases on all nodes (must) in /etc/hosts, for example:
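a minimal sketch of the alias entries (the ip addresses and hostnames here are hypothetical, not taken from this setup):
192.168.1.10    master
192.168.1.11    slave1
192.168.1.12    slave2
#put the same entries into /etc/hosts on every node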
#let the master access all the slaves without passwords:
#method 1:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@slave
#method 2:
copy $HOME/.ssh/id_rsa.pub to each slave,
then log in to the slave and run: cat id_rsa.pub >> $HOME/.ssh/authorized_keys
connecting will also save the slave's host key fingerprint into hadoop@master's "known_hosts" file
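to verify that the passwordless login works, a quick check from the master (slave hostnames as assumed in the /etc/hosts sketch above):
#each command should print the slave's hostname without prompting for a password
ssh hadoop@slave1 hostname
ssh hadoop@slave2 hostname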
#set the masters & slaves files, *on the master machine only* (a sketch of creating them follows the list below)
#Note that the machine on which bin/start-dfs.sh is run becomes the primary namenode,
#so you MUST start hadoop from the namenode ONLY. **you can, however, submit an application from any node**
master:
master
slaves:
slave1
slave2
...
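a sketch of creating the two files on the master, assuming the stock conf/masters and conf/slaves file names of hadoop-0.20.x and that $HADOOP_HOME points at the install directory:
cd $HADOOP_HOME/conf
echo "master" > masters
printf "slave1\nslave2\n" > slaves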
## on all nodes (including the master)
#set conf/core-site.xml (a full-file sketch follows the property below)
<property>
<name>fs.default.name</name>
<value>hdfs://master:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
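a sketch of a complete conf/core-site.xml built around the property above; the hadoop.tmp.dir entry is an assumption (it matches the tutorial-style /usr/local/hadoop-datastore path mentioned in the FAQ below), adjust it to your own layout:
cat > $HADOOP_HOME/conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- assumed base directory for HDFS/MapReduce local data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
  </property>
  <!-- the namenode URI, as configured above -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
EOF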
#hdfs-site.xml (optional)
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
==It defines how many machines a single file should be replicated to before it becomes available.
If you set this to a value higher than the number of slave nodes (more precisely, the number of datanodes)
that you have available, you will start seeing a lot of (Zero targets found, forbidden1.size=1) type errors in the log files.
==
</description>
</property>
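once some files are stored in HDFS, a quick way to check the actual replication of their blocks (a sketch; /user/hadoop/input is just a hypothetical HDFS directory):
bin/hadoop fsck /user/hadoop/input -files -blocks -locations
#the report prints, per block, how many replicas exist and on which datanodes they live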
#mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
setting /etc/hosts
because, by default, the host's own domain (e.g. c1.domain) is mapped to 127.0.1.1, start-dfs.sh fails with errors.
resolution: append it behind the existing "...-desktop" hostname as an additional alias on the same ip line; the other nodes are handled the same way. One reading of this is sketched below:
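the sketch uses hypothetical names and addresses; the point is that the cluster domain ends up as an extra alias on the line carrying the machine's real LAN ip, next to the "...-desktop" hostname, rather than on 127.0.1.1:
#before (Ubuntu default; the start-dfs.sh errors come from the 127.0.1.1 mapping):
127.0.1.1       c1-desktop
#after (same real ip as the cluster entry added earlier, "desktop" hostname kept):
192.168.1.10    c1-desktop    c1.domain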
format the HDFS filesystem on the master (namenode), then start the cluster. done.
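a sketch of the corresponding commands, run from the hadoop-0.20.2 directory on the master (the same layout used in the wordcount run below):
bin/hadoop namenode -format     #formats HDFS; only do this on a fresh/empty cluster, it wipes existing metadata
bin/start-dfs.sh                #starts the namenode here and datanodes on every host listed in conf/slaves
bin/start-mapred.sh             #starts the jobtracker here and tasktrackers on the slaves
jps                             #on the master this should list NameNode, SecondaryNameNode and JobTracker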
FAQ:java.io.IOException: Incompatible namespaceIDs
resolution:
a. (delete the datanode's data directly, then re-format the namenode. deprecated)
1. stop the cluster
2. delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3. reformat the namenode (NOTE: all HDFS data is lost during this process!)
4. restart the cluster
b. (copy the namenode's namespaceID into the datanode's VERSION file)
1. stop the datanode
2. edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode
3. restart the datanode
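a sketch of option b, assuming the default 0.20.x directory layout under hadoop.tmp.dir (adjust the paths if dfs.name.dir/dfs.data.dir were changed):
#on the namenode: read the current namespaceID
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
#on the problematic datanode: stop it, copy that value into its VERSION file, restart it
bin/hadoop-daemon.sh stop datanode
vi /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION   #edit the namespaceID= line
bin/hadoop-daemon.sh start datanode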
summary:
the local hadoop.tmp.dir path and the corresponding directory seen on the cluster (via ls) match in their leading part, but their trailing parts differ.
^^^^ config ^^^^^
output:
cluster run (took 54s):
input: 3 files
hadoop@leibnitz-laptop:/cc/hadoop/hadoop-0.20.2$ hadoop jar hadoop-0.20.2-examples.jar wordcount input/ output
11/02/26 02:51:33 INFO input.FileInputFormat: Total input paths to process : 3
11/02/26 02:51:34 INFO mapred.JobClient: Running job: job_201102260237_0002
11/02/26 02:51:35 INFO mapred.JobClient: map 0% reduce 0%
11/02/26 02:51:57 INFO mapred.JobClient: map 33% reduce 0%
11/02/26 02:52:05 INFO mapred.JobClient: map 92% reduce 0%
11/02/26 02:52:08 INFO mapred.JobClient: map 100% reduce 0%
11/02/26 02:52:18 INFO mapred.JobClient: map 100% reduce 22%
11/02/26 02:52:25 INFO mapred.JobClient: map 100% reduce 100%
11/02/26 02:52:27 INFO mapred.JobClient: Job complete: job_201102260237_0002
11/02/26 02:52:27 INFO mapred.JobClient: Counters: 17
11/02/26 02:52:27 INFO mapred.JobClient: Job Counters
11/02/26 02:52:27 INFO mapred.JobClient: Launched reduce tasks=1
11/02/26 02:52:27 INFO mapred.JobClient: Launched map tasks=3
11/02/26 02:52:27 INFO mapred.JobClient: Data-local map tasks=3
11/02/26 02:52:27 INFO mapred.JobClient: FileSystemCounters
11/02/26 02:52:27 INFO mapred.JobClient: FILE_BYTES_READ=2214725
11/02/26 02:52:27 INFO mapred.JobClient: HDFS_BYTES_READ=3671479
11/02/26 02:52:27 INFO mapred.JobClient: FILE_BYTES_WRITTEN=3689100
11/02/26 02:52:27 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=880802
11/02/26 02:52:27 INFO mapred.JobClient: Map-Reduce Framework
11/02/26 02:52:27 INFO mapred.JobClient: Reduce input groups=82331
11/02/26 02:52:27 INFO mapred.JobClient: Combine output records=102317
11/02/26 02:52:27 INFO mapred.JobClient: Map input records=77931
11/02/26 02:52:27 INFO mapred.JobClient: Reduce shuffle bytes=1474279
11/02/26 02:52:27 INFO mapred.JobClient: Reduce output records=82331
11/02/26 02:52:27 INFO mapred.JobClient: Spilled Records=255947
11/02/26 02:52:27 INFO mapred.JobClient: Map output bytes=6076039
11/02/26 02:52:27 INFO mapred.JobClient: Combine input records=629167
11/02/26 02:52:27 INFO mapred.JobClient: Map output records=629167
11/02/26 02:52:27 INFO mapred.JobClient: Reduce input records=102317