Chinese installation and usage instructions - the best guide to installing and using Nutch in China
国内首套免费的《Nutch相关框架视频教程》
土豆在线观看地址:http://www.tudou.com/home/item_u106249539s0p1.html
超清原版下载地址: http://pan.baidu.com/share/home?uk=3157595467
下载 Nutch相关框架安装使用最佳指南.docx
一、nutch1.2
二、nutch1.5.1
三、nutch2.0
四、配置SSH
五、安装Hadoop Cluster(伪分布式运行模式)并运行Nutch
六、安装Hadoop Cluster(分布式运行模式)并运行Nutch
七、配置Ganglia监控Hadoop集群和HBase集群
八、Hadoop配置Snappy压缩
九、Hadoop配置Lzo压缩
十、配置zookeeper集群以运行hbase
十一、配置Hbase集群以运行nutch-2.1(Region Servers会因为内存的问题宕机)
十二、配置Accumulo集群以运行nutch-2.1(gora存在BUG)
十三、配置Cassandra 集群以运行nutch-2.1(Cassandra 采用去中心化结构)
十四、配置MySQL 单机服务器以运行nutch-2.1
十五、nutch2.1 使用DataFileAvroStore作为数据源
十六、nutch2.1 使用AvroStore作为数据源
十七、配置SOLR
十八、Nagios监控
十九、配置Splunk
二十、配置Pig
二十一、配置Hive
二十二、配置Hadoop2.x集群
一、nutch1.2
步骤和二大同小异,在步骤 5、配置构建路径 中需要多两个操作:
1)、在左部Package Explorer的nutch1.2文件夹上单击右键 > Build Path > Configure Build Path... > 选中Source选项 > Default output folder:将nutch1.2/bin修改为nutch1.2/_bin
2)、在左部Package Explorer的nutch1.2文件夹下的bin文件夹上单击右键 > Team > 还原
与二相比,差异主要有三类:版本号不同的地方、1.2版本没有的步骤、以及操作内容不一样的地方(原文档中分别以黄色、红色、绿色背景标注),如下:
1、Add JARs... > nutch1.2 > lib ,选中所有的.jar文件 > OK
2、crawl-urlfilter.txt
3、将crawl-urlfilter.txt.template改名为crawl-urlfilter.txt
4、修改crawl-urlfilter.txt,将
# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/
# skip everything else
-.
5、cd /home/ysc/workspace/nutch1.2
nutch1.2是一个完整的搜索引擎,nutch1.5.1只是一个爬虫。nutch1.2可以把索引提交给SOLR,也可以直接生成LUCENE索引,nutch1.5.1则只能把索引提交给SOLR:
1、cd /home/ysc
2、wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-7/v7.0.29/bin/apache-tomcat-7.0.29.tar.gz
3、tar -xvf apache-tomcat-7.0.29.tar.gz
4、在左部Package Explorer的 nutch1.2文件夹下的build.xml文件上单击右键 > Run As > Ant Build... > 选中war target > Run
5、cd /home/ysc/workspace/nutch1.2/build
6、unzip nutch-1.2.war -d nutch-1.2
7、cp -r nutch-1.2 /home/ysc/apache-tomcat-7.0.29/webapps
8、vi /home/ysc/apache-tomcat-7.0.29/webapps/nutch-1.2/WEB-INF/classes/nutch-site.xml
加入以下配置:
<property>
<name>searcher.dir</name>
<value>/home/ysc/workspace/nutch1.2/data</value>
<description>
Path to root of crawl. This directory is searched (in
order) for either the file search-servers.txt, containing a list of
distributed search servers, or the directory "index" containing
merged indexes, or the directory "segments" containing segment
indexes.
</description>
</property>
9、vi /home/ysc/apache-tomcat-7.0.29/conf/server.xml
将
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"/>
改为
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" URIEncoding="utf-8"/>
10、cd /home/ysc/apache-tomcat-7.0.29/bin
11、./startup.sh
12、访问:http://localhost:8080/nutch-1.2/
关于nutch1.2更多的BUG修复及资料,请参看我在CSDN发布的资源:http://download.csdn.net/user/yangshangchuan
二、nutch1.5.1
1、下载并解压eclipse(集成开发环境)
下载地址:http://www.eclipse.org/downloads/,下载Eclipse IDE for Java EE Developers
2、安装Subclipse插件(SVN客户端)
插件地址:http://subclipse.tigris.org/update_1.8.x,
3、安装IvyDE插件(下载依赖Jar)
插件地址:http://www.apache.org/dist/ant/ivyde/updatesite/
4、签出代码
File > New > Project > SVN > 从SVN 检出项目
创建新的资源库位置 > URL:https://svn.apache.org/repos/asf/nutch/tags/release-1.5.1/ > 选中URL > Finish
弹出New Project向导,选择Java Project > Next,输入Project name:nutch1.5.1 > Finish
5、配置构建路径
在左部Package Explorer的 nutch1.5.1文件夹上单击右键 > Build Path > Configure Build Path...
> 选中Source选项 > 选择src > Remove > Add Folder... > 选择src/bin, src/java, src/test 和 src/testresources(对于插件,需要选中src/plugin目录下的每一个插件目录下的src/java , src/test文件夹) > OK
切换到Libraries选项 >
Add Class Folder... > 选中nutch1.5.1/conf > OK
Add JARs... > 需要选中src/plugin目录下的每一个插件目录下的lib目录下的jar文件 > OK
Add Library... > IvyDE Managed Dependencies > Next > Main > Ivy File > Browse > ivy/ivy.xml > Finish
切换到Order and Export选项>
选中conf > Top
6、执行ANT
在左部Package Explorer的 nutch1.5.1文件夹下的build.xml文件上单击右键 > Run As > Ant Build
在左部Package Explorer的 nutch1.5.1文件夹上单击右键 > Refresh
在左部Package Explorer的 nutch1.5.1文件夹上单击右键 > Build Path > Configure Build Path... > 选中Libraries选项 > Add Class Folder... > 选中build > OK
7、修改配置文件nutch-site.xml 和regex-urlfilter.txt
将nutch-site.xml.template改名为nutch-site.xml
将regex-urlfilter.txt.template改名为regex-urlfilter.txt
在左部Package Explorer的 nutch1.5.1文件夹上单击右键 > Refresh
将如下配置项加入文件nutch-site.xml:
<property>
<name>http.agent.name</name>
<value>nutch</value>
</property>
<property>
<name>http.content.limit</name>
<value>-1</value>
</property>
修改regex-urlfilter.txt,将
# accept anything else
+.
替换为:
+^http://([a-z0-9]*\.)*news.163.com/
-.
8、开发调试
在左部Package Explorer的 nutch1.5.1文件夹上单击右键 > New > Folder > Folder name: urls
在刚新建的urls目录下新建一个文本文件url,文本内容为:http://news.163.com
打开src/java下的org.apache.nutch.crawl.Crawl.java类,单击右键Run As > Run Configurations > Arguments > 在Program arguments输入框中输入: urls -dir data -depth 3 > Run
在需要调试的地方打上断点,然后单击右键Debug As > Java Application
9、查看结果
查看segments目录:
打开src/java下的org.apache.nutch.segment.SegmentReader.java类
单击右键Run As > Java Application,控制台会输出该命令的使用方法
单击右键Run As > Run Configurations > Arguments > 在Program arguments输入框中输入: -dump data/segments/* data/segments/dump
用文本编辑器打开文件data/segments/dump/dump查看segments中存储的信息
查看crawldb目录:
打开src/java下的org.apache.nutch.crawl.CrawlDbReader.java类
单击右键Run As > Java Application,控制台会输出该命令的使用方法
单击右键Run As > Run Configurations > Arguments > 在Program arguments输入框中输入: data/crawldb -stats
控制台会输出 crawldb统计信息
查看linkdb目录:
打开src/java下的org.apache.nutch.crawl.LinkDbReader.java类
单击右键Run As > Java Application,控制台会输出该命令的使用方法
单击右键Run As > Run Configurations > Arguments > 在Program arguments输入框中输入: data/linkdb -dump data/linkdb_dump
用文本编辑器打开文件data/linkdb_dump/part-00000查看linkdb中存储的信息
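The same inspection can also be done from the command line instead of Eclipse. A minimal sketch, assuming the project has already been built with ant so that runtime/local exists, and that the data directory produced by the crawl above sits under the project root:
cd /home/ysc/workspace/nutch1.5.1/runtime/local
# dump a segment (readseg is the CLI equivalent of SegmentReader)
bin/nutch readseg -dump ../../data/segments/* ../../data/segments/dump
# crawldb statistics (CLI equivalent of CrawlDbReader)
bin/nutch readdb ../../data/crawldb -stats
# dump the linkdb (CLI equivalent of LinkDbReader)
bin/nutch readlinkdb ../../data/linkdb -dump ../../data/linkdb_dump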
10、全网分步骤抓取
在左部Package Explorer的 nutch1.5.1文件夹下的build.xml文件上单击右键 > Run As > Ant Build
cd /home/ysc/workspace/nutch1.5.1/runtime/local
#准备URL列表
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/url
#注入URL
bin/nutch inject crawl/crawldb dmoz
#生成抓取列表
bin/nutch generate crawl/crawldb crawl/segments
#第一次抓取
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
#抓取网页
bin/nutch fetch $s1
#解析网页
bin/nutch parse $s1
#更新URL状态
bin/nutch updatedb crawl/crawldb $s1
#第二次抓取
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
#第三次抓取
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
#生成反向链接库
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
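The generate/fetch/parse/updatedb cycle above repeats verbatim for every round, so it can be scripted. A minimal sketch using the same commands and the same -topN 1000 as above (adjust the number of rounds as needed):
for i in 1 2 3
do
  bin/nutch generate crawl/crawldb crawl/segments -topN 1000
  segment=`ls -d crawl/segments/2* | tail -1`
  bin/nutch fetch $segment
  bin/nutch parse $segment
  bin/nutch updatedb crawl/crawldb $segment
done
bin/nutch invertlinks crawl/linkdb -dir crawl/segments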
11、索引和搜索
cd /home/ysc/
wget http://mirror.bjtu.edu.cn/apache/lucene/solr/3.6.1/apache-solr-3.6.1.tgz
tar -xvf apache-solr-3.6.1.tgz
cd apache-solr-3.6.1/example
NUTCH_RUNTIME_HOME=/home/ysc/workspace/nutch1.5.1/runtime/local
APACHE_SOLR_HOME=/home/ysc/apache-solr-3.6.1
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/
如果需要把网页内容存储到索引中,则修改 schema.xml文件中的
<field name="content" type="text" stored="false" indexed="true"/>
为
<field name="content" type="text" stored="true" indexed="true"/>
修改${APACHE_SOLR_HOME}/example/solr/conf/solrconfig.xml,将里面的<str name="df">text</str>都替换为<str name="df">content</str>
把${APACHE_SOLR_HOME}/example/solr/conf/schema.xml中的 <schema name="nutch" version="1.5.1">修改为<schema name="nutch" version="1.5">
#启动SOLR服务器
java -jar start.jar
http://127.0.0.1:8983/solr/admin/
http://127.0.0.1:8983/solr/admin/stats.jsp
cd /home/ysc/workspace/nutch1.5.1/runtime/local
#提交索引
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
执行完整crawl:
bin/nutch crawl urls -dir data -depth 2 -topN 100 -solr http://127.0.0.1:8983/solr/
使用以下命令分页查看所有索引的文档:
http://127.0.0.1:8983/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on
标题包含“网易”的文档:
http://127.0.0.1:8983/solr/select/?q=title%3A%E7%BD%91%E6%98%93&version=2.2&start=0&rows=10&indent=on
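These queries can also be issued from the shell with curl, which is convenient for scripting; the URL below is just the URL-encoded form of q=title:网易 (a sketch, assuming curl is installed):
curl 'http://127.0.0.1:8983/solr/select/?q=title%3A%E7%BD%91%E6%98%93&start=0&rows=10&indent=on&wt=json'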
12、查看索引信息
cd /home/ysc/
wget http://luke.googlecode.com/files/lukeall-3.5.0.jar
java -jar lukeall-3.5.0.jar
Path: /home/ysc/apache-solr-3.6.1/example/solr/data
13、配置SOLR的中文分词
cd /home/ysc/
wget http://mmseg4j.googlecode.com/files/mmseg4j-1.8.5.zip
unzip mmseg4j-1.8.5.zip -d mmseg4j-1.8.5
APACHE_SOLR_HOME=/home/ysc/apache-solr-3.6.1
mkdir $APACHE_SOLR_HOME/example/solr/lib
mkdir $APACHE_SOLR_HOME/example/solr/dic
cp mmseg4j-1.8.5/mmseg4j-all-1.8.5.jar $APACHE_SOLR_HOME/example/solr/lib
cp mmseg4j-1.8.5/data/*.dic $APACHE_SOLR_HOME/example/solr/dic
将${APACHE_SOLR_HOME}/example/solr/conf/schema.xml文件中的
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
和
<tokenizer class="solr.StandardTokenizerFactory"/>
替换为
<tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="complex" dicPath="/home/ysc/apache-solr-3.6.1/example/solr/dic"/>
#重新启动SOLR服务器
java -jar start.jar
#重建索引,演示在开发环境中如何操作
打开src/java下的org.apache.nutch.indexer.solr.SolrIndexer.java类
单击右键Run As > Java Application,控制台会输出该命令的使用方法
单击右键Run As > Run Configurations > Arguments > 在Program arguments输入框中输入: http://127.0.0.1:8983/solr/ data/crawldb -linkdb data/linkdb data/segments/*
使用luke重新打开索引就会发现分词起作用了
三、nutch2.0
nutch2.0和二中的nutch1.5.1的步骤相同,但在8、开发调试之前需要做以下配置:
在左部Package Explorer的 nutch2.0文件夹上单击右键 > New > Folder > Folder name: data并指定数据存储方式,选如下之一:
1、使用mysql作为数据存储
1)、在nutch2.0/conf/nutch-site.xml中加入如下配置:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.sql.store.SqlStore</value>
</property>
2)、将nutch2.0/conf/gora.properties文件中的
gora.sqlstore.jdbc.driver=org.hsqldb.jdbc.JDBCDriver
gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost/nutchtest
gora.sqlstore.jdbc.user=sa
gora.sqlstore.jdbc.password=
修改为
gora.sqlstore.jdbc.driver=com.mysql.jdbc.Driver
gora.sqlstore.jdbc.url=jdbc:mysql://127.0.0.1:3306/nutch2
gora.sqlstore.jdbc.user=root
gora.sqlstore.jdbc.password=ROOT
3)、打开nutch2.0/ivy/ivy.xml中的mysql-connector-java依赖
4)、sudo apt-get install mysql-server
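The JDBC URL above points at a database named nutch2, which should exist before the first crawl (gora-sql creates the webpage table but not the database itself). A minimal sketch, assuming the root password is ROOT as configured in gora.properties above:
mysql -uroot -pROOT -e "CREATE DATABASE IF NOT EXISTS nutch2 DEFAULT CHARACTER SET utf8;"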
2、使用hbase作为数据存储
1)、在nutch2.0/conf/nutch-site.xml中加入如下配置:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.hbase.store.HBaseStore</value>
</property>
2)、打开nutch2.0/ivy/ivy.xml中的gora-hbase依赖
3)、cd /home/ysc
4)、wget http://mirror.bit.edu.cn/apache/hbase/hbase-0.90.5/hbase-0.90.5.tar.gz
5)、tar -xvf hbase-0.90.5.tar.gz
6)、vi hbase-0.90.5/conf/hbase-site.xml
加入以下配置:
<property>
<name>hbase.rootdir</name>
<value>file:///home/ysc/hbase-0.90.5-database</value>
</property>
7)、hbase-0.90.5/bin/start-hbase.sh
8)、将/home/ysc/hbase-0.90.5/hbase-0.90.5.jar加入开发环境eclipse的build path
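Before starting a crawl it is worth checking that the standalone HBase instance is reachable. A quick sketch using the HBase shell (status and list are built-in shell commands; after a successful Nutch 2.0 crawl a table named webpage is expected to appear in list):
hbase-0.90.5/bin/hbase shell
status
list
exit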
四、配置SSH
三台机器 devcluster01, devcluster02, devcluster03,分别在每一台机器上面执行如下操作:
1、sudo vi /etc/hosts
加入以下配置:
192.168.1.1 devcluster01
192.168.1.2 devcluster02
192.168.1.3 devcluster03
2、安装SSH服务:
sudo apt-get install openssh-server
3、(有提示的时候回车键确认)
ssh-keygen -t rsa
该命令会在用户主目录下创建 .ssh 目录,并在其中生成两个文件:id_rsa 是基于 RSA 算法创建的私钥文件,要妥善保管,不要泄漏;id_rsa.pub 是与之配对的公钥文件,可以公开。
4、cp .ssh/id_rsa.pub .ssh/authorized_keys
把 三台机器 devcluster01, devcluster02, devcluster03 的文件/home/ysc/.ssh/authorized_keys的内容复制出来合并成一个文件并替换每一台机器上的/home/ysc/.ssh/authorized_keys文件
在devcluster01上面执行时,以下两条命令的主机为02和03
在devcluster02上面执行时,以下两条命令的主机为01和03
在devcluster03上面执行时,以下两条命令的主机为01和02
5、ssh-copy-id -i .ssh/id_rsa.pub ysc@devcluster02
6、ssh-copy-id -i .ssh/id_rsa.pub ysc@devcluster03
以上两条命令实际上是将本机的 .ssh/id_rsa.pub 公钥文件追加到远程主机上 ysc 用户主目录下的 .ssh/authorized_keys 文件中。
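A quick way to confirm that passwordless login works before installing Hadoop (run from devcluster01; check the other hosts the same way):
ssh devcluster02 hostname
ssh devcluster03 hostname
# each command should print the remote hostname without prompting for a password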
五、安装Hadoop Cluster(伪分布式运行模式)并运行Nutch
步骤和六大同小异,只需要1台机器 devcluster01,所以所有涉及主机名的地方全部设置为devcluster01,不需要第11步(复制HADOOP文件)
六、安装Hadoop Cluster(分布式运行模式)并运行Nutch
三台机器 devcluster01, devcluster02, devcluster03(vi /etc/hostname)
使用用户ysc登陆 devcluster01:
1、cd /home/ysc
2、wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-1.1.1/hadoop-1.1.1-bin.tar.gz
3、tar -xvf hadoop-1.1.1-bin.tar.gz
4、cd hadoop-1.1.1
5、vi conf/masters
替换内容为 :
devcluster01
6、vi conf/slaves
替换内容为 :
devcluster02
devcluster03
7、vi conf/core-site.xml
加入配置:
<property>
<name>fs.default.name</name>
<value>hdfs://devcluster01:9000</value>
<description>
Where to find the Hadoop Filesystem through the network.
Note 9000 is not the default port.
(This is slightly changed from previous versions which didnt have "hdfs")
</description>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
编辑conf/hadoop-policy.xml
8、vi conf/hdfs-site.xml
加入配置:
<property>
<name>dfs.name.dir</name>
<value>/home/ysc/dfs/filesystem/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/ysc/dfs/filesystem/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.block.size</name>
<value>671088640</value>
<description>The default block size for new files.</description>
</property>
9、vi conf/mapred-site.xml
加入配置:
<property>
<name>mapred.job.tracker</name>
<value>devcluster01:9001</value>
<description>
The host and port that the MapReduce job tracker runs at. If
"local", then jobs are run in-process as a single map and
reduce task.
Note 9001 is not the default port.
</description>
</property>
<property>
<name>mapred.reduce.tasks.speculative.execution</name>
<value>false</value>
<description>If true, then multiple instances of some reduce tasks
may be executed in parallel.</description>
</property>
<property>
<name>mapred.map.tasks.speculative.execution</name>
<value>false</value>
<description>If true, then multiple instances of some map tasks
may be executed in parallel.</description>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx2000m</value>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>4</value>
<description>
the core number of host
</description>
</property>
<property>
<name>mapred.map.tasks</name>
<value>4</value>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>4</value>
<description>
define mapred.map tasks to be number of slave hosts.the best number is the number of slave hosts plus the core numbers of per host
</description>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>4</value>
<description>
define mapred.reduce tasks to be number of slave hosts.the best number is the number of slave hosts plus the core numbers of per host
</description>
</property>
<property>
<name>mapred.output.compression.type</name>
<value>BLOCK</value>
<description>If the job outputs are to compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK.
</description>
</property>
<property>
<name>mapred.output.compress</name>
<value>true</value>
<description>Should the job outputs be compressed?
</description>
</property>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
<description>Should the outputs of the maps be compressed before being sent across the network. Uses SequenceFile compression.
</description>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/ysc/mapreduce/system</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/ysc/mapreduce/local</value>
</property>
10、vi conf/hadoop-env.sh
追加:
export JAVA_HOME=/home/ysc/jdk1.7.0_05
export HADOOP_HEAPSIZE=2000
#替换掉默认的垃圾回收器,因为默认的垃圾回收器在多线程环境下会有更多的wait等待
export HADOOP_OPTS="-server -Xmn256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
11、复制HADOOP文件
scp -r /home/ysc/hadoop-1.1.1 ysc@devcluster02:/home/ysc/hadoop-1.1.1
scp -r /home/ysc/hadoop-1.1.1 ysc@devcluster03:/home/ysc/hadoop-1.1.1
12、sudo vi /etc/profile
追加并重启系统:
export PATH=/home/ysc/hadoop-1.1.1/bin:$PATH
13、格式化名称节点并启动集群
hadoop namenode -format
start-all.sh
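A quick sanity check that the daemons came up: run jps (shipped with the JDK) on each machine. As a rough expectation (not guaranteed output), the master should show NameNode, SecondaryNameNode and JobTracker, and each slave should show DataNode and TaskTracker:
/home/ysc/jdk1.7.0_05/bin/jps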
14、cd /home/ysc/workspace/nutch1.5.1/runtime/deploy
mkdir urls
echo http://news.163.com > urls/url
hadoop dfs -put urls urls
bin/nutch crawl urls -dir data -depth 2 -topN 100
15、访问 http://localhost:50030 可以查看 JobTracker 的运行状态。访问 http://localhost:50060 可以查看 TaskTracker 的运行状态。访问 http://localhost:50070 可以查看 NameNode 以及整个分布式文件系统的状态,浏览分布式文件系统中的文件以及 log 等
16、通过stop-all.sh停止集群
17、如果NameNode和SecondaryNameNode不在同一台机器上,则在SecondaryNameNode的conf/hdfs-site.xml文件中加入配置:
<property>
<name>dfs.http.address</name>
<value>namenode:50070</value>
</property>
七、配置Ganglia监控Hadoop集群和HBase集群
1、服务器端(安装到master devcluster01上)
1)、ssh devcluster01
2)、addgroup ganglia
adduser --ingroup ganglia ganglia
3)、sudo apt-get install ganglia-monitor ganglia-webfront gmetad
//补充:在Ubuntu10.04上,ganglia-webfront这个package名字叫ganglia-webfrontend
//如果install出错,则运行sudo apt-get update,如果update出错,则删除出错路径
4)、vi /etc/ganglia/gmond.conf
先找到setuid = yes,改成setuid = no;
再找到cluster块中的name,改成name = "hadoop-cluster";
5)、sudo apt-get install rrdtool
6)、vi /etc/ganglia/gmetad.conf
在这个配置文件中增加一些datasource,即其他2个被监控的节点,增加以下内容:
data_source "hadoop-cluster" devcluster01:8649 devcluster02:8649 devcluster03:8649
gridname "Hadoop"
2、数据源端(安装到所有slaves上)
1)、ssh devcluster02
addgroup ganglia
adduser --ingroup ganglia ganglia
sudo apt-get install ganglia-monitor
2)、ssh devcluster03
addgroup ganglia
adduser --ingroup ganglia ganglia
sudo apt-get install ganglia-monitor
3)、ssh devcluster01
scp /etc/ganglia/gmond.conf devcluster02:/etc/ganglia/gmond.conf
scp /etc/ganglia/gmond.conf devcluster03:/etc/ganglia/gmond.conf
3、配置WEB
1)、ssh devcluster01
2)、sudo ln -s /usr/share/ganglia-webfrontend /var/www/ganglia
3)、vi /etc/apache2/apache2.conf
添加:
ServerName devcluster01
4、重启服务
1)、ssh devcluster02
sudo /etc/init.d/ganglia-monitor restart
ssh devcluster03
sudo /etc/init.d/ganglia-monitor restart
2)、ssh devcluster01
sudo /etc/init.d/ganglia-monitor restart
sudo /etc/init.d/gmetad restart
sudo /etc/init.d/apache2 restart
5、访问页面
http://devcluster01/ganglia
6、集成hadoop
1)、ssh devcluster01
2)、cd /home/ysc/hadoop-1.1.1
3)、vi conf/hadoop-metrics2.properties
# 大于0.20以后的版本用ganglia31
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# default for supportsparse is false
*.sink.ganglia.supportsparse=true
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
#组播IP地址,这是缺省的,统一设该值(只能用组播地址239.2.11.71)
namenode.sink.ganglia.servers=239.2.11.71:8649
datanode.sink.ganglia.servers=239.2.11.71:8649
jobtracker.sink.ganglia.servers=239.2.11.71:8649
tasktracker.sink.ganglia.servers=239.2.11.71:8649
maptask.sink.ganglia.servers=239.2.11.71:8649
reducetask.sink.ganglia.servers=239.2.11.71:8649
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=239.2.11.71:8649
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=239.2.11.71:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=239.2.11.71:8649
4)、scp conf/hadoop-metrics2.properties root@devcluster02:/home/ysc/hadoop-1.1.1/conf/hadoop-metrics2.properties
5)、scp conf/hadoop-metrics2.properties root@devcluster03:/home/ysc/hadoop-1.1.1/conf/hadoop-metrics2.properties
6)、stop-all.sh
7)、start-all.sh
7、集成hbase
1)、ssh devcluster01
2)、cd /home/ysc/hbase-0.92.2
3)、vi conf/hadoop-metrics.properties(只能用组播地址239.2.11.71)
hbase.extendedperiod = 3600
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=239.2.11.71:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=239.2.11.71:8649
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=239.2.11.71:8649
4)、scp conf/hadoop-metrics.properties root@devcluster02:/home/ysc/hbase-0.92.2/conf/hadoop-metrics.properties
5)、scp conf/hadoop-metrics.properties root@devcluster03:/home/ysc/hbase-0.92.2/conf/hadoop-metrics.properties
6)、stop-hbase.sh
7)、start-hbase.sh
八、Hadoop配置Snappy压缩
1、wget http://snappy.googlecode.com/files/snappy-1.0.5.tar.gz
2、tar -xzvf snappy-1.0.5.tar.gz
3、cd snappy-1.0.5
4、./configure
5、make
6、make install
7、scp /usr/local/lib/libsnappy* devcluster01:/home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64/
scp /usr/local/lib/libsnappy* devcluster02:/home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64/
scp /usr/local/lib/libsnappy* devcluster03:/home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64/
8、vi /etc/profile
追加:
export LD_LIBRARY_PATH=/home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64
9、修改mapred-site.xml
<property>
<name>mapred.output.compression.type</name>
<value>BLOCK</value>
<description>If the job outputs are to compressed as SequenceFiles, how should
they be compressed? Should be one of NONE, RECORD or BLOCK.
</description>
</property>
<property>
<name>mapred.output.compress</name>
<value>true</value>
<description>Should the job outputs be compressed?
</description>
</property>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
<description>Should the outputs of the maps be compressed before being
sent across the network. Uses SequenceFile compression.
</description>
</property>
<property>
<name>mapred.map.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
<description>If the map outputs are compressed, how should they be
compressed?
</description>
</property>
<property>
<name>mapred.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
<description>If the job outputs are compressed, how should they be compressed?
</description>
</property>
九、Hadoop配置Lzo压缩
1、wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
2、tar -zxvf lzo-2.06.tar.gz
3、cd lzo-2.06
4、./configure --enable-shared
5、make
6、make install
7、scp /usr/local/lib/liblzo2.* devcluster01:/lib/x86_64-linux-gnu
scp /usr/local/lib/liblzo2.* devcluster02:/lib/x86_64-linux-gnu
scp /usr/local/lib/liblzo2.* devcluster03:/lib/x86_64-linux-gnu
8、wget http://hadoop-gpl-compression.apache-extras.org.codespot.com/files/hadoop-gpl-compression-0.1.0-rc0.tar.gz
9、tar -xzvf hadoop-gpl-compression-0.1.0-rc0.tar.gz
10、cd hadoop-gpl-compression-0.1.0
11、cp lib/native/Linux-amd64-64/* /home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64/
12、cp hadoop-gpl-compression-0.1.0.jar /home/ysc/hadoop-1.1.1/lib/(这里hadoop集群的版本要和compression使用的版本一致)
13、scp -r /home/ysc/hadoop-1.1.1/lib devcluster02:/home/ysc/hadoop-1.1.1/
scp -r /home/ysc/hadoop-1.1.1/lib devcluster03:/home/ysc/hadoop-1.1.1/
14、vi /etc/profile
追加:
export LD_LIBRARY_PATH=/home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64
15、修改core-site.xml
<property>
<name>io.compression.codecs</name>
<value>com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
<description>A list of the compression codec classes that can be used
for compression/decompression.</description>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
<description>Number of minutes between trash checkpoints.
If zero, the trash feature is disabled.
</description>
</property>
16、修改mapred-site.xml
<property>
<name>mapred.output.compression.type</name>
<value>BLOCK</value>
<description>If the job outputs are to compressed as SequenceFiles, how should
they be compressed? Should be one of NONE, RECORD or BLOCK.
</description>
</property>
<property>
<name>mapred.output.compress</name>
<value>true</value>
<description>Should the job outputs be compressed?
</description>
</property>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
<description>Should the outputs of the maps be compressed before being
sent across the network. Uses SequenceFile compression.
</description>
</property>
<property>
<name>mapred.map.output.compression.codec</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
<description>If the map outputs are compressed, how should they be
compressed?
</description>
</property>
<property>
<name>mapred.output.compression.codec</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
<description>If the job outputs are compressed, how should they be compressed?
</description>
</property>
十、配置zookeeper集群以运行hbase
1、ssh devcluster01
2、cd /home/ysc
3、wget http://mirror.bjtu.edu.cn/apache/zookeeper/stable/zookeeper-3.4.5.tar.gz
4、tar -zxvf zookeeper-3.4.5.tar.gz
5、cd zookeeper-3.4.5
6、cp conf/zoo_sample.cfg conf/zoo.cfg
7、vi conf/zoo.cfg
修改:dataDir=/home/ysc/zookeeper
添加:
server.1=devcluster01:2888:3888
server.2=devcluster02:2888:3888
server.3=devcluster03:2888:3888
maxClientCnxns=100
8、scp -r zookeeper-3.4.5 devcluster01:/home/ysc
scp -r zookeeper-3.4.5 devcluster02:/home/ysc
scp -r zookeeper-3.4.5 devcluster03:/home/ysc
9、分别在三台机器上面执行:
ssh devcluster01
mkdir /home/ysc/zookeeper(注:dataDir是zookeeper的数据目录,需要手动创建)
echo 1 > /home/ysc/zookeeper/myid
ssh devcluster02
mkdir /home/ysc/zookeeper
echo 2 > /home/ysc/zookeeper/myid
ssh devcluster03
mkdir /home/ysc/zookeeper
echo 3 > /home/ysc/zookeeper/myid
10、分别在三台机器上面执行:
cd /home/ysc/zookeeper-3.4.5
bin/zkServer.sh start
bin/zkCli.sh -server devcluster01:2181
bin/zkServer.sh status
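A lightweight health check without opening the CLI: ZooKeeper answers the four-letter command ruok with imok on its client port (a sketch, assuming nc/netcat is installed):
echo ruok | nc devcluster01 2181
echo ruok | nc devcluster02 2181
echo ruok | nc devcluster03 2181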
十一、配置Hbase集群以运行nutch-2.1(Region Servers会因为内存的问题宕机)
1、nutch-2.1使用gora-0.2.1,gora-0.2.1使用hbase-0.90.4,hbase-0.90.4和hadoop-1.1.1不兼容,hbase-0.94.4和gora-0.2.1不兼容,hbase-0.92.2没问题。hbase要求集群各节点的系统时间保持同步,误差要在30s以内。
sudo apt-get install ntp
sudo ntpdate -u 210.72.145.44
2、HBase是数据库,会在同一时间使用很多的文件句柄。大多数linux系统使用的默认值1024是不能满足的。还需要修改 hbase 用户的 nproc,在压力下,如果过低会造成 OutOfMemoryError异常。
vi /etc/security/limits.conf
添加:
ysc soft nproc 32000
ysc hard nproc 32000
ysc soft nofile 32768
ysc hard nofile 32768
vi /etc/pam.d/common-session
添加:
session required pam_limits.so
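These limits only take effect for new sessions. After logging in again as ysc, a quick check that they match the values configured above:
ulimit -n    # expected 32768 (max open files)
ulimit -u    # expected 32000 (max user processes)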
3、登陆master,下载并解压hbase
ssh devcluster01
cd /home/ysc
wget http://apache.etoak.com/hbase/hbase-0.92.2/hbase-0.92.2.tar.gz
tar -zxvf hbase-0.92.2.tar.gz
cd hbase-0.92.2
4、修改配置文件hbase-env.sh
vi conf/hbase-env.sh
追加:
export JAVA_HOME=/home/ysc/jdk1.7.0_05
export HBASE_MANAGES_ZK=false
export HBASE_HEAPSIZE=10000
#替换掉默认的垃圾回收器,因为默认的垃圾回收器在多线程环境下会有更多的wait等待
export HBASE_OPTS="-server -Xmn256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
5、修改配置文件hbase-site.xml
vi conf/hbase-site.xml
<property>
<name>hbase.rootdir</name>
<value>hdfs://devcluster01:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>devcluster01,devcluster02,devcluster03</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.25</value>
<description>
Percentage of maximum heap (-Xmx setting) to allocate to block cache
used by HFile/StoreFile. Default of 0.25 means allocate 25%.
Set to 0 to disable but it's not recommended.
</description>
</property>
<property>
<name>hbase.regionserver.global.memstore.upperLimit</name>
<value>0.4</value>
<description>Maximum size of all memstores in a region server before new
updates are blocked and flushes are forced. Defaults to 40% of heap
</description>
</property>
<property>
<name>hbase.regionserver.global.memstore.lowerLimit</name>
<value>0.35</value>
<description>When memstores are being forced to flush to make room in
memory, keep flushing until we hit this mark. Defaults to 35% of heap.
This value equal to hbase.regionserver.global.memstore.upperLimit causes
the minimum possible flushing to occur when updates are blocked due to
memstore limiting.
</description>
</property>
<property>
<name>hbase.hregion.majorcompaction</name>
<value>0</value>
<description>The time (in miliseconds) between 'major' compactions of all
HStoreFiles in a region. Default: 1 day.
Set to 0 to disable automated major compactions.
</description>
</property>
6、修改配置文件regionservers
vi conf/regionservers
devcluster01
devcluster02
devcluster03
7、因为HBase建立在Hadoop之上,Hadoop使用的hadoop*.jar和HBase使用的 必须 一致。所以要将 HBase lib 目录下的hadoop*.jar替换成Hadoop里面的那个,防止版本冲突。
cp /home/ysc/hadoop-1.1.1/hadoop-core-1.1.1.jar /home/ysc/hbase-0.92.2/lib
rm /home/ysc/hbase-0.92.2/lib/hadoop-core-1.0.3.jar
8、复制文件到regionservers
scp -r /home/ysc/hbase-0.92.2 devcluster01:/home/ysc
scp -r /home/ysc/hbase-0.92.2 devcluster02:/home/ysc
scp -r /home/ysc/hbase-0.92.2 devcluster03:/home/ysc
9、启动hadoop并创建目录
hadoop fs -mkdir /hbase
10、管理HBase集群:
启动初始 HBase 集群:
bin/start-hbase.sh
停止HBase 集群:
bin/stop-hbase.sh
启动额外备份主服务器,可以启动到 9 个备份服务器 (总数10 个):
bin/local-master-backup.sh start 1
bin/local-master-backup.sh start 2 3
启动更多 regionservers, 支持到 99 个额外regionservers (总100个):
bin/local-regionservers.sh start 1
bin/local-regionservers.sh start 2 3 4 5
停止备份主服务器:
cat /tmp/hbase-ysc-1-master.pid |xargs kill -9
停止单独 regionserver:
bin/local-regionservers.sh stop 1
使用HBase命令行模式:
bin/hbase shell
11、web界面
http://devcluster01:60010
http://devcluster01:60030
12、如运行nutch2.1则方法一:
cp conf/hbase-site.xml /home/ysc/nutch-2.1/conf
cd /home/ysc/nutch-2.1
ant
cd runtime/deploy
unzip -d apache-nutch-2.1 apache-nutch-2.1.job
rm apache-nutch-2.1.job
cd apache-nutch-2.1
rm lib/hbase-0.90.4.jar
cp /home/ysc/hbase-0.92.2/hbase-0.92.2.jar lib
zip -r ../apache-nutch-2.1.job ./*
cd ..
rm -r apache-nutch-2.1
13、如运行nutch2.1则方法二:
cp conf/hbase-site.xml /home/ysc/nutch-2.1/conf
cd /home/ysc/nutch-2.1
cp /home/ysc/hbase-0.92.2/hbase-0.92.2.jar lib
ant
cd runtime/deploy
zip -d apache-nutch-2.1.job lib/hbase-0.90.4.jar
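After rebuilding the job file by either method, a short way to confirm that Nutch 2.1 really writes into HBase. This is only a sketch: it assumes a urls directory is already in HDFS as in section 六, that the default Gora schema name webpage is used, and it passes -all to the batch-oriented 2.x sub-commands for brevity:
cd /home/ysc/nutch-2.1/runtime/deploy
bin/nutch inject urls
bin/nutch generate -topN 100
bin/nutch fetch -all
bin/nutch parse -all
bin/nutch updatedb
# then check in the HBase shell:
/home/ysc/hbase-0.92.2/bin/hbase shell
list
count 'webpage'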
启用snappy压缩:
1、vi conf/gora-hbase-mapping.xml
在family上面添加属性:compression="SNAPPY"
2、mkdir /home/ysc/hbase-0.92.2/lib/native/Linux-amd64-64
3、cp /home/ysc/hadoop-1.1.1/lib/native/Linux-amd64-64/* /home/ysc/hbase-0.92.2/lib/native/Linux-amd64-64
4、vi /home/ysc/hbase-0.92.2/conf/hbase-site.xml
增加:
<property>
<name>hbase.regionserver.codecs</name>
<value>snappy</value>
</property>
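Before restarting the cluster it is worth verifying that the region servers can actually load the Snappy native library; HBase ships a small utility for exactly this (a sketch, the HDFS test path is arbitrary):
cd /home/ysc/hbase-0.92.2
bin/hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://devcluster01:9000/tmp/snappy-test snappy
# the test should report SUCCESS if the codec can be used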
十二、配置Accumulo集群以运行nutch-2.1(gora存在BUG)
1、wget http://apache.etoak.com/accumulo/1.4.2/accumulo-1.4.2-dist.tar.gz
2、tar -xzvf accumulo-1.4.2-dist.tar.gz
3、cd accumulo-1.4.2
4、cp conf/examples/3GB/standalone/* conf
5、vi conf/accumulo-env.sh
export HADOOP_HOME=/home/ysc/cluster3
export ZOOKEEPER_HOME=/home/ysc/zookeeper-3.4.5
export JAVA_HOME=/home/jdk1.7.0_01
export ACCUMULO_HOME=/home/ysc/accumulo-1.4.2
6、vi conf/slaves
devcluster01
devcluster02
devcluster03
7、vi conf/masters
devcluster01
8、vi conf/accumulo-site.xml
<property>
<name>instance.zookeeper.host</name>
<value>host6:2181,host8:2181</value>
<description>comma separated list of zookeeper servers</description>
</property>
<property>
<name>logger.dir.walog</name>
<value>walogs</value>
<description>The directory used to store write-ahead logs on the local filesystem. It is possible to specify a comma-separated list of directories.</description>
</property>
<property>
<name>instance.secret</name>
<value>ysc</value>
<description>A secret unique to a given instance that all servers must know in order to communicate with one another.
Change it before initialization. To change it later use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret [oldpasswd] [newpasswd],
and then update this file.
</description>
</property>
<property>
<name>tserver.memory.maps.max</name>
<value>3G</value>
</property>
<property>
<name>tserver.cache.data.size</name>
<value>50M</value>
</property>
<property>
<name>tserver.cache.index.size</name>
<value>512M</value>
</property>
<property>
<name>trace.password</name>
<!--
change this to the root user's password, and/or change the user below
-->
<value>ysc</value>
</property>
<property>
<name>trace.user</name>
<value>root</value>
</property>
9、bin/accumulo init
10、bin/start-all.sh
11、bin/stop-all.sh
12、web访问:http://devcluster01:50095/
修改nutch2.1:
1、cd /home/ysc/nutch-2.1
2、vi conf/gora.properties
增加:
gora.datastore.default=org.apache.gora.accumulo.store.AccumuloStore
gora.datastore.accumulo.mock=false
gora.datastore.accumulo.instance=accumulo
gora.datastore.accumulo.zookeepers=host6,host8
gora.datastore.accumulo.user=root
gora.datastore.accumulo.password=ysc
3、vi conf/nutch-site.xml
增加:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.accumulo.store.AccumuloStore</value>
</property>
4、vi ivy/ivy.xml
增加:
<dependency org="org.apache.gora" name="gora-accumulo" rev="0.2.1" conf="*->default" />
5、升级accumulo
cp /home/ysc/accumulo-1.4.2/lib/accumulo-core-1.4.2.jar /home/ysc/nutch-2.1/lib
cp /home/ysc/accumulo-1.4.2/lib/accumulo-start-1.4.2.jar /home/ysc/nutch-2.1/lib
cp /home/ysc/accumulo-1.4.2/lib/cloudtrace-1.4.2.jar /home/ysc/nutch-2.1/lib
6、ant
7、cd runtime/deploy
8、删除旧jar
zip -d apache-nutch-2.1.job lib/accumulo-core-1.4.0.jar
zip -d apache-nutch-2.1.job lib/accumulo-start-1.4.0.jar
zip -d apache-nutch-2.1.job lib/cloudtrace-1.4.2.jar
十三、配置Cassandra 集群以运行nutch-2.1(Cassandra 采用去中心化结构)
1、vi /etc/hosts(注意:需要登录到每一台机器上面,将localhost解析到实际地址)
192.168.1.1 localhost
2、wget http://labs.mop.com/apache-mirror/cassandra/1.2.0/apache-cassandra-1.2.0-bin.tar.gz
3、tar -xzvf apache-cassandra-1.2.0-bin.tar.gz
4、cd apache-cassandra-1.2.0
5、vi conf/cassandra-env.sh
增加:
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
6、vi conf/log4j-server.properties
修改:
log4j.appender.R.File=/home/ysc/cassandra/system.log
7、vi conf/cassandra.yaml
修改:
cluster_name: 'Cassandra Cluster'
data_file_directories:
- /home/ysc/cassandra/data
commitlog_directory: /home/ysc/cassandra/commitlog
saved_caches_directory: /home/ysc/cassandra/saved_caches
- seeds: "192.168.1.1"
listen_address: 192.168.1.1
rpc_address: 192.168.1.1
thrift_framed_transport_size_in_mb: 1023
thrift_max_message_length_in_mb: 1024
8、vi bin/stop-server
增加:
user=`whoami`
pgrep -u $user -f cassandra | xargs kill -9
9、复制cassandra到其他节点:
cd ..
scp -r apache-cassandra-1.2.0 devcluster02:/home/ysc
scp -r apache-cassandra-1.2.0 devcluster03:/home/ysc
分别在devcluster02和devcluster03上面修改:
vi conf/cassandra.yaml
listen_address: 192.168.1.2
rpc_address: 192.168.1.2
vi conf/cassandra.yaml
listen_address: 192.168.1.3
rpc_address: 192.168.1.3
10、分别在3个节点上面运行
bin/cassandra
bin/cassandra -f
参数 -f 的作用是让 Cassandra 以前台程序方式运行,这样有利于调试和观察日志信息,而在实际生产环境中这个参数是不需要的(即 Cassandra 会以 daemon 方式运行)
11、bin/nodetool -host devcluster01 ring
bin/nodetool -host devcluster01 info
12、bin/stop-server
13、bin/cassandra-cli
修改nutch2.1:
1、cd /home/ysc/nutch-2.1
2、vi conf/gora.properties
增加:
gora.cassandrastore.servers=host2:9160,host6:9160,host8:9160
3、vi conf/nutch-site.xml
增加:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.cassandra.store.CassandraStore</value>
</property>
4、vi ivy/ivy.xml
增加:
<dependency org="org.apache.gora" name="gora-cassandra" rev="0.2.1" conf="*->default" />
5、升级cassandra
cp /home/ysc/apache-cassandra-1.2.0/lib/apache-cassandra-1.2.0.jar /home/ysc/nutch-2.1/lib
cp /home/ysc/apache-cassandra-1.2.0/lib/apache-cassandra-thrift-1.2.0.jar /home/ysc/nutch-2.1/lib
cp /home/ysc/apache-cassandra-1.2.0/lib/jline-1.0.jar /home/ysc/nutch-2.1/lib
6、ant
7、cd runtime/deploy
8、删除旧jar
zip -d apache-nutch-2.1.job lib/cassandra-thrift-1.1.2.jar
zip -d apache-nutch-2.1.job lib/jline-0.9.1.jar
十四、配置MySQL 单机服务器以运行nutch-2.1
1、apt-get install mysql-server mysql-client
2、vi /etc/mysql/my.cnf
修改:
bind-address = 221.194.43.2
在[client]下增加:
default-character-set=utf8
在[mysqld]下增加:
default-character-set=utf8
3、mysql -uroot -pysc
SHOW VARIABLES LIKE '%character%';
4、service mysql restart
5、mysql -uroot -pysc
GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY "ysc";
6、vi conf/gora-sql-mapping.xml
修改字段的长度
<primarykey column="id" length="333"/>
<field name="content" column="content" />
<field name="text" column="text" length="19892"/>
7、启动nutch之后登陆mysql
ALTER TABLE webpage MODIFY COLUMN content MEDIUMBLOB;
ALTER TABLE webpage MODIFY COLUMN text MEDIUMTEXT;
ALTER TABLE webpage MODIFY COLUMN title MEDIUMTEXT;
ALTER TABLE webpage MODIFY COLUMN reprUrl MEDIUMTEXT;
ALTER TABLE webpage MODIFY COLUMN baseUrl MEDIUMTEXT;
ALTER TABLE webpage MODIFY COLUMN typ MEDIUMTEXT;
ALTER TABLE webpage MODIFY COLUMN inlinks MEDIUMBLOB;
ALTER TABLE webpage MODIFY COLUMN outlinks MEDIUMBLOB;
修改nutch2.1:
1、cd /home/ysc/nutch-2.1
2、vi conf/gora.properties
增加:
gora.sqlstore.jdbc.driver=com.mysql.jdbc.Driver
gora.sqlstore.jdbc.url=jdbc:mysql://host2:3306/nutch?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=utf8
gora.sqlstore.jdbc.user=root
gora.sqlstore.jdbc.password=ysc
3、vi conf/nutch-site.xml
增加:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.sql.store.SqlStore</value>
</property>
<property>
<name>encodingdetector.charset.min.confidence</name>
<value>1</value>
<description>A integer between 0-100 indicating minimum confidence value
for charset auto-detection. Any negative value disables auto-detection.
</description>
</property>
4、vi ivy/ivy.xml
增加:
<dependency org="mysql" name="mysql-connector-java" rev="5.1.18" conf="*->default"/>
十五、nutch2.1 使用DataFileAvroStore作为数据源
1、cd /home/ysc/nutch-2.1
2、vi conf/gora.properties
增加:
gora.datafileavrostore.output.path=datafileavrostore
gora.datafileavrostore.input.path=datafileavrostore
3、vi conf/nutch-site.xml
增加:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.avro.store.DataFileAvroStore</value>
</property>
<property>
<name>encodingdetector.charset.min.confidence</name>
<value>1</value>
<description>A integer between 0-100 indicating minimum confidence value
for charset auto-detection. Any negative value disables auto-detection.
</description>
</property>
十六、nutch2.1 使用AvroStore作为数据源
1、cd /home/ysc/nutch-2.1
2、vi conf/gora.properties
增加:
gora.avrostore.codec.type=BINARY
gora.avrostore.input.path=avrostore
gora.avrostore.output.path=avrostore
3、vi conf/nutch-site.xml
增加:
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.avro.store.AvroStore</value>
</property>
<property>
<name>encodingdetector.charset.min.confidence</name>
<value>1</value>
<description>A integer between 0-100 indicating minimum confidence value
for charset auto-detection. Any negative value disables auto-detection.
</description>
</property>
十七、配置SOLR
配置tomcat:
1、wget http://www.fayea.com/apache-mirror/tomcat/tomcat-7/v7.0.35/bin/apache-tomcat-7.0.35.tar.gz
2、tar -xzvf apache-tomcat-7.0.35.tar.gz
3、cd apache-tomcat-7.0.35
4、vi conf/server.xml
增加URIEncoding="UTF-8":
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" URIEncoding="UTF-8"/>
5、mkdir conf/Catalina
6、mkdir conf/Catalina/localhost
7、vi conf/Catalina/localhost/solr.xml
增加:
<Context path="/solr">
<Environment name="solr/home" type="java.lang.String" value="/home/ysc/solr/configuration/" override="false"/>
</Context>
8、cd ..
下载SOLR:
1、wget http://mirrors.tuna.tsinghua.edu.cn/apache/lucene/solr/4.1.0/solr-4.1.0.tgz
2、tar -xzvf solr-4.1.0.tgz
复制资源:
1、mkdir /home/ysc/solr
2、cp -r solr-4.1.0/example/solr /home/ysc/solr/configuration
3、unzip solr-4.1.0/example/webapps/solr.war -d /home/ysc/apache-tomcat-7.0.35/webapps/solr
配置nutch:
1、复制schema:
cp /home/ysc/nutch-1.6/conf/schema-solr4.xml /home/ysc/solr/configuration/collection1/conf/schema.xml
2、vi /home/ysc/solr/configuration/collection1/conf/schema.xml
在<fields>下增加:
<field name="_version_" type="long" indexed="true" stored="true"/>
配置中文分词:
1、wget http://mmseg4j.googlecode.com/files/mmseg4j-1.9.1.v20130120-SNAPSHOT.zip
2、unzip mmseg4j-1.9.1.v20130120-SNAPSHOT.zip
3、cp mmseg4j-1.9.1-SNAPSHOT/dist/* /home/ysc/apache-tomcat-7.0.35/webapps/solr/WEB-INF/lib
4、unzip mmseg4j-1.9.1-SNAPSHOT/dist/mmseg4j-core-1.9.1-SNAPSHOT.jar -d mmseg4j-1.9.1-SNAPSHOT/dist/mmseg4j-core-1.9.1-SNAPSHOT
5、mkdir /home/ysc/dic
6、cp mmseg4j-1.9.1-SNAPSHOT/dist/mmseg4j-core-1.9.1-SNAPSHOT/data/* /home/ysc/dic
7、vi /home/ysc/solr/configuration/collection1/conf/schema.xml
将文件中的
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
和
<tokenizer class="solr.StandardTokenizerFactory"/>
替换为
<tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="complex" dicPath="/home/ysc/dic"/>
配置tomcat本地库:
1、wget http://apache.spd.co.il/apr/apr-1.4.6.tar.gz
2、tar -xzvf apr-1.4.6.tar.gz
3、cd apr-1.4.6
4、./configure
5、make
6、make install
1、wget http://mirror.bjtu.edu.cn/apache/apr/apr-util-1.5.1.tar.gz
2、tar -xzvf apr-util-1.5.1.tar.gz
3、cd apr-util-1.5.1
4、./configure --with-apr=/usr/local/apr
5、make
6、make install
1、wget http://mirror.bjtu.edu.cn/apache//tomcat/tomcat-connectors/native/1.1.24/source/tomcat-native-1.1.24-src.tar.gz
2、tar -zxvf tomcat-native-1.1.24-src.tar.gz
3、cd tomcat-native-1.1.24-src/jni/native
4、./configure --with-apr=/usr/local/apr \
--with-java-home=/home/ysc/jdk1.7.0_01 \
--with-ssl=no \
--prefix=/home/ysc/apache-tomcat-7.0.35
5、make
6、make install
7、vi /etc/profile
增加:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/ysc/apache-tomcat-7.0.35/lib:/usr/local/apr/lib
8、source /etc/profile
启动tomcat:
cd apache-tomcat-7.0.35
bin/catalina.sh start
http://devcluster01:8080/solr/
十八、Nagios监控
服务端:
1、apt-get install apache2 nagios3 nagios-nrpe-plugin
输入密码:nagiosadmin
2、apt-get install nagios3-doc
3、vi /etc/nagios3/conf.d/hostgroups_nagios2.cfg
define hostgroup {
hostgroup_name nagios-servers
alias nagios servers
members devcluster01,devcluster02,devcluster03
}
4、cp /etc/nagios3/conf.d/localhost_nagios2.cfg /etc/nagios3/conf.d/devcluster01_nagios2.cfg
vi /etc/nagios3/conf.d/devcluster01_nagios2.cfg
替换:
g/localhost/s//devcluster01/g
g/127.0.0.1/s//192.168.1.1/g
5、cp /etc/nagios3/conf.d/localhost_nagios2.cfg /etc/nagios3/conf.d/devcluster02_nagios2.cfg
vi /etc/nagios3/conf.d/devcluster02_nagios2.cfg
替换:
g/localhost/s//devcluster02/g
g/127.0.0.1/s//192.168.1.2/g
6、cp /etc/nagios3/conf.d/localhost_nagios2.cfg /etc/nagios3/conf.d/devcluster03_nagios2.cfg
vi /etc/nagios3/conf.d/devcluster03_nagios2.cfg
替换:
g/localhost/s//devcluster03/g
g/127.0.0.1/s//192.168.1.3/g
7、vi /etc/nagios3/conf.d/services_nagios2.cfg
将hostgroup_name改为nagios-servers
增加:
# check that web services are running
define service {
hostgroup_name nagios-servers
service_description HTTP
check_command check_http
use generic-service
notification_interval 0 ; set > 0 if you want to be renotified
}
# check that ssh services are running
define service {
hostgroup_name nagios-servers
service_description SSH
check_command check_ssh
use generic-service
notification_interval 0 ; set > 0 if you want to be renotified
}
8、vi /etc/nagios3/conf.d/extinfo_nagios2.cfg
将hostgroup_name改为nagios-servers
增加:
define hostextinfo{
hostgroup_name nagios-servers
notes nagios-servers
# notes_url http://webserver.localhost.localdomain/hostinfo.pl?host=netware1
icon_image base/debian.png
icon_image_alt Debian GNU/Linux
vrml_image debian.png
statusmap_image base/debian.gd2
}
9、sudo /etc/init.d/nagios3 restart
10、访问http://devcluster01/nagios3/
用户名:nagiosadmin密码:nagiosadmin
监控端:
1、apt-get install nagios-nrpe-server
2、vi /etc/nagios/nrpe.cfg
替换:
g/127.0.0.1/s//192.168.1.1/g
3、sudo /etc/init.d/nagios-nrpe-server restart
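A quick test from the Nagios server that the NRPE agent on a monitored host is reachable (a sketch; the plugin path below is the usual Ubuntu location and may differ on other systems):
/usr/lib/nagios/plugins/check_nrpe -H 192.168.1.2
# a healthy agent answers with its NRPE version string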
十九、配置Splunk
1、wget http://download.splunk.com/releases/5.0.2/splunk/linux/splunk-5.0.2-149561-Linux-x86_64.tgz
2、tar -zxvf splunk-5.0.2-149561-Linux-x86_64.tgz
3、cd splunk
4、bin/splunk start --answer-yes --no-prompt --accept-license
5、访问http://devcluster01:8000
用户名:admin 密码:changeme
6、添加数据 -> 从 UDP 端口 -> UDP 端口 *: 1688 -> 来源类型 从列表 log4j -> 保存
7、配置hadoop
vi /home/ysc/hadoop-1.1.1/conf/log4j.properties
修改:
log4j.rootLogger=${hadoop.root.logger}, EventCounter, SYSLOG
增加:
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%p %c{2}: %m%n
log4j.appender.SYSLOG.SyslogHost=host6:1688
log4j.appender.SYSLOG.threshold=INFO
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.FacilityPrinting=true
8、配置hbase
vi /home/ysc/hbase-0.92.2/conf/log4j.properties
修改:
log4j.rootLogger=${hbase.root.logger},SYSLOG
增加:
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%p %c{2}: %m%n
log4j.appender.SYSLOG.SyslogHost=host6:1688
log4j.appender.SYSLOG.threshold=INFO
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.FacilityPrinting=true
9、配置nutch
vi /home/lanke/ysc/nutch-2.1-hbase/conf/log4j.properties
修改:
log4j.rootLogger=INFO,DRFA,SYSLOG
增加:
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%p %c{2}: %m%n
log4j.appender.SYSLOG.SyslogHost=host6:1688
log4j.appender.SYSLOG.threshold=INFO
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.FacilityPrinting=true
10、启动hadoop和hbase
start-all.sh
start-hbase.sh
二十、配置Pig
1、wget http://labs.mop.com/apache-mirror/pig/pig-0.11.0/pig-0.11.0.tar.gz
2、tar -xzvf pig-0.11.0.tar.gz
3、cd pig-0.11.0
4、vi /etc/profile
增加:
export PIG_HOME=/home/ysc/pig-0.11.0
export PATH=$PIG_HOME/bin:$PATH
5、source /etc/profile
6、cp conf/log4j.properties.template conf/log4j.properties
7、vi conf/log4j.properties
8、pig
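A minimal smoke test once the grunt shell starts; it counts the lines of the URL seed file that was put into HDFS earlier (the path is an assumption, any small text file in HDFS will do):
grunt> a = LOAD '/user/ysc/urls/url' AS (line:chararray);
grunt> b = GROUP a ALL;
grunt> c = FOREACH b GENERATE COUNT(a);
grunt> DUMP c;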
二十一、配置Hive
1、wget http://mirrors.cnnic.cn/apache/hive/hive-0.10.0/hive-0.10.0.tar.gz
2、tar -xzvf hive-0.10.0.tar.gz
3、cd hive-0.10.0
4、vi /etc/profile
增加:
export HIVE_HOME=/home/ysc/hive-0.10.0
export PATH=$HIVE_HOME/bin:$PATH
5、source /etc/profile
6、cp conf/hive-log4j.properties.template conf/hive-log4j.properties
7、vi conf/hive-log4j.properties
替换:
log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
为:
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
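A minimal check that Hive can talk to HDFS and MapReduce (a sketch; the table name t is arbitrary, and the COUNT query launches one MapReduce job):
hive -e "CREATE TABLE IF NOT EXISTS t (x INT); SHOW TABLES; SELECT COUNT(*) FROM t;"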
二十二、配置Hadoop2.x集群
1、wget http://labs.mop.com/apache-mirror/hadoop/common/hadoop-2.0.2-alpha/hadoop-2.0.2-alpha.tar.gz
2、tar -xzvf hadoop-2.0.2-alpha.tar.gz
3、cd hadoop-2.0.2-alpha
4、vi etc/hadoop/hadoop-env.sh
追加:
export JAVA_HOME=/home/ysc/jdk1.7.0_05
export HADOOP_HEAPSIZE=2000
5、vi etc/hadoop/core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://devcluster01:9000</value>
<description>
Where to find the Hadoop Filesystem through the network.
Note 9000 is not the default port.
(This is slightly changed from previous versions which didnt have "hdfs")
</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>The size of buffer for use in sequence files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.</description>
</property>
6、vi etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.job.reduce.input.buffer.percent</name>
<value>1</value>
<description>The percentage of memory- relative to the maximum heap size- to
retain map outputs during the reduce. When the shuffle is concluded, any
remaining map outputs in memory must consume less than this threshold before
the reduce can begin.
</description>
</property>
<property>
<name>mapred.job.shuffle.input.buffer.percent</name>
<value>1</value>
<description>The percentage of memory to be allocated from the maximum heap
size to storing map outputs during the shuffle.
</description>
</property>
<property>
<name>mapred.inmem.merge.threshold</name>
<value>0</value>
<description>The threshold, in terms of the number of files
for the in-memory merge process. When we accumulate threshold number of files
we initiate the in-memory merge and spill to disk. A value of 0 or less than
0 indicates we want to DON'T have any threshold and instead depend only on
the ramfs's memory consumption to trigger the merge.
</description>
</property>
<property>
<name>io.sort.factor</name>
<value>100</value>
<description>The number of streams to merge at once while sorting
files. This determines the number of open file handles.</description>
</property>
<property>
<name>io.sort.mb</name>
<value>240</value>
<description>The total amount of buffer memory to use while sorting
files, in megabytes. By default, gives each merge stream 1MB, which
should minimize seeks.</description>
</property>
<property>
<name>mapred.map.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
<description>If the map outputs are compressed, how should they be
compressed?
</description>
</property>
<property>
<name>mapred.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
<description>If the job outputs are compressed, how should they be compressed?
</description>
</property>
<property>
<name>mapred.output.compression.type</name>
<value>BLOCK</value>
<description>If the job outputs are to compressed as SequenceFiles, how should
they be compressed? Should be one of NONE, RECORD or BLOCK.
</description>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx2000m</value>
</property>
<property>
<name>mapred.output.compress</name>
<value>true</value>
<description>Should the job outputs be compressed?
</description>
</property>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
<description>Should the outputs of the maps be compressed before being
sent across the network. Uses SequenceFile compression.
</description>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>5</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>15</value>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>5</value>
<description>
define mapred.map tasks to be number of slave hosts.the best number is the number of slave hosts plus the core numbers of per host
</description>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>15</value>
<description>
define mapred.reduce tasks to be number of slave hosts.the best number is the number of slave hosts plus the core numbers of per host
</description>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/ysc/mapreduce/system</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/ysc/mapreduce/local</value>
</property>
<property>
<name>mapreduce.job.counters.max</name>
<value>12000</value>
<description>Limit on the number of counters allowed per job.
</description>
</property>
7、vi etc/hadoop/yarn-site.xml
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>devcluster01:8031</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>devcluster01:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>devcluster01:8030</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>devcluster01:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>devcluster01:8088</value>
</property>
<property>
<description>Classpath for typical applications.</description>
<name>yarn.application.classpath</name>
<value>
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
$YARN_HOME/*,$YARN_HOME/lib/*
</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name> <value>/home/ysc/h2/data/1/yarn/local,/home/ysc/h2/data/2/yarn/local,/home/ysc/h2/data/3/yarn/local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name> <value>/home/ysc/h2/data/1/yarn/logs,/home/ysc/h2/data/2/yarn/logs,/home/ysc/h2/data/3/yarn/logs</value>
</property>
<property>
<description>Where to aggregate logs</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/home/ysc/h2/var/log/hadoop-yarn/apps</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>devcluster01:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>devcluster01:19888</value>
</property>
8、vi etc/hadoop/hdfs-site.xml
<property>
<name>dfs.permissions.superusergroup</name>
<value>root</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/ysc/dfs/filesystem/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/ysc/dfs/filesystem/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.block.size</name>
<value>6710886400</value>
<description>The default block size for new files.</description>
</property>
9、启动hadoop
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh
10、访问管理页面
http://devcluster01:8088
http://devcluster01:50070
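A simple way to confirm that YARN can actually run MapReduce jobs (a sketch; the examples jar name follows the 2.0.2-alpha binary layout and may differ slightly):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.2-alpha.jar pi 5 100
# the application should appear on http://devcluster01:8088 while it runs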