`
Article List
SOLRMASTER: Master; SOLRSLAVE1: Slave; SOLRSLAVE2: Slave. 1. Add the replication handler below to solrconfig.xml in the $SOLR_HOME dir on all 3 servers above. <requestHandler name="/replication" class="solr.ReplicationHandler" > <lst name="master"> <str name ...
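A minimal sketch of what the full handler definitions typically look like for Solr replication of this era; the host, port, and confFiles list are assumptions for illustration, not taken from the post:

On the master's solrconfig.xml:
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- publish a new index to slaves after every commit and on startup -->
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <!-- config files shipped to the slaves along with the index -->
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

On each slave's solrconfig.xml:
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- assumed master URL; adjust host/port/core to your setup -->
    <str name="masterUrl">http://SOLRMASTER:8983/solr/replication</str>
    <!-- how often the slave polls the master for a newer index version -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>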
In the production env we need to create a sponge account to do the steps below; for now, just use solr as an example. Server is SOLRMASTER. 1. Install JDK or JRE (1.6). 2. Download the Solr and Tomcat binaries. You can get them from here. [solr@SOLRMASTER ~]$ pwd /home/solr [solr@SOLRMASTER ~]$ ls -lrt total 913 ...
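A rough sketch of the remaining install steps under these assumptions; the archive names, versions, paths, and Tomcat port are illustrative, not from the post:

[solr@SOLRMASTER ~]$ tar xzf apache-tomcat-6.0.32.tar.gz        # unpack Tomcat
[solr@SOLRMASTER ~]$ tar xzf apache-solr-3.3.0.tgz              # unpack Solr
[solr@SOLRMASTER ~]$ cp apache-solr-3.3.0/dist/apache-solr-3.3.0.war apache-tomcat-6.0.32/webapps/solr.war
[solr@SOLRMASTER ~]$ cp -r apache-solr-3.3.0/example/solr /home/solr/solr-home    # seed a solr home from the example
[solr@SOLRMASTER ~]$ export JAVA_OPTS="-Dsolr.solr.home=/home/solr/solr-home"     # tell the webapp where its solr home is
[solr@SOLRMASTER ~]$ apache-tomcat-6.0.32/bin/startup.sh
# then check http://SOLRMASTER:8080/solr/admin/ in a browser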
http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/
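The linked article boils down to three commands; a sketch, assuming the remote account is solr@SOLRSLAVE1 (an assumption for illustration):

[solr@SOLRMASTER ~]$ ssh-keygen -t rsa             # accept the defaults, empty passphrase
[solr@SOLRMASTER ~]$ ssh-copy-id solr@SOLRSLAVE1   # appends the public key to the remote authorized_keys
[solr@SOLRMASTER ~]$ ssh solr@SOLRSLAVE1           # should now log in without a password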

z7z8

http://www.tianya.cn/publicforum/content/no04/1/1994149.shtml
http://news.mydrivers.com/1/225/225210.htm
http://developer.51cto.com/art/200910/157060.htm
http://www.javaworld.com/javaworld/javatips/jw-javatip68.html?page=3
http://www.blogjava.net/nianzai/
http://www.pin5i.com/showtopic-lucene- ...
create 'test', {NAME => 'col', VERSIONS => 100}
describe 'test'
disable 'test'
drop 'test'
put 'test', '9993', 'col:EXCEPTION', '9993:1111:xxxxx'
put 'test', '9993', 'col:EXCEPTION', '9993:1112:xxxxx'
put 'test', '9993', 'col:EXCEPTION', '9993:1112:xxxxx'
get 'test', '9993', {COLUMN =>'col ...
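For reference, a couple of read-side shell commands that pair with the puts above; the VERSIONS and LIMIT values are just examples:

hbase(main):001:0> get 'test', '9993', {COLUMN => 'col:EXCEPTION', VERSIONS => 3}   # read back several versions of the cell
hbase(main):002:0> scan 'test', {COLUMNS => 'col', LIMIT => 10}                     # scan the first rows of the column family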

Java Regex Tutorial

    Blog category:
  • java
http://www.vogella.de/articles/JavaRegularExpressions/article.html
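A minimal, self-contained example of the Pattern/Matcher API that the linked tutorial covers; the key=value input string is just an illustration:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // compile once and reuse the Pattern; \\w+ matches word characters, \\d+ matches digits
        Pattern p = Pattern.compile("(\\w+)=(\\d+)");
        Matcher m = p.matcher("retries=3 timeout=500");
        while (m.find()) {
            // group(1) is the key, group(2) is the numeric value
            System.out.println(m.group(1) + " -> " + m.group(2));
        }
        // quick one-off check against a whole string
        System.out.println("12345".matches("\\d+"));   // prints true
    }
}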
hive> create table dumprecord (line string);
OK
Time taken: 3.813 seconds
hive> load data local inpath '/home/userkkk/dump20gfile/DumpFileDemo.out'
    > overwrite into table dumprecord;
Copying data from file:/home/userkkk/dump20gfile/DumpFileDemo.out
Copying file: file:/home/userkkk/dump2 ...
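Once loaded, the table can be queried like any other Hive table; a small follow-up sketch (the LIKE pattern is only an example of filtering the dump lines):

hive> select count(1) from dumprecord;
hive> select line from dumprecord where line like '%Exception%' limit 10;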

Hive + HBase

There are two configuration options in hive-site.xml; once they are set, Hive queries can run against the HBase cluster. <property>     <name>hive.aux.jars.path</name> <value>file:///app/java/hive/lib/hive-hbase-handler-0.7.1.jar,file:///app/java/hive/lib/hbase-0.90.3.jar,file:///app/java/hive/lib/zookeeper-3.3.1.jar</value>   </property> ...
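The other option commonly points Hive's HBase handler at the ZooKeeper quorum; a sketch with assumed quorum hostnames and an assumed column mapping, using the standard HBaseStorageHandler table definition:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1,zk2,zk3</value>   <!-- assumed ZooKeeper hosts of the HBase cluster -->
</property>

hive> CREATE TABLE hbase_table(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val")
    > TBLPROPERTIES ("hbase.table.name" = "hbase_table");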
Node Decommissioning
1. $ ./bin/hbase-daemon.sh stop regionserver
Disabling the Load Balancer Before Decommissioning a Node:
hbase(main):001:0> balance_switch false
hbase(main):002:0> balance_switch true
2. $ ./bin/graceful_stop.sh HOSTNAME
where HOSTNAME is the host carrying the region server ...
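graceful_stop.sh also takes flags for a rolling restart of a region server; a hedged usage sketch based on the flags documented for HBase of this era:

$ ./bin/graceful_stop.sh --restart --reload --debug HOSTNAME   # move regions off, restart the server, then move the regions back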
There is a good blog on this. http://www.cloudera.com/blog/2009/03/hadoop-metrics/ The HDFS and MapReduce daemons collect information about events and measurements that are collectively known as metrics. For example, datanodes collect the following metrics (and many more): the number of bytes writt ...
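Metrics collection is configured in conf/hadoop-metrics.properties; a minimal sketch that dumps DFS metrics to a local file (the file path is an example, other contexts stay on the default NullContext):

# conf/hadoop-metrics.properties
dfs.class=org.apache.hadoop.metrics.file.FileContext
dfs.period=10                          # sampling period in seconds
dfs.fileName=/tmp/dfsmetrics.log       # where the datanode/namenode metrics are appended
mapred.class=org.apache.hadoop.metrics.spi.NullContext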
1. Stop all the HBase or Hadoop daemons: $ stop-all.sh
2. Check the physical file system locations in hdfs-site.xml:
<property>   <name>dfs.name.dir</name>   <value>${hadoop.tmp.dir}/dfs/name</value> </property> <property>   <name>dfs.data.dir</name>   < ...
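Rather than leaving them under hadoop.tmp.dir (usually under /tmp, which can be wiped), these are normally pointed at persistent disks; the paths below are illustrative only:

<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn</value>   <!-- comma-separated redundant copies of the namenode metadata -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn</value>   <!-- one directory per datanode disk -->
</property>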
Setting log levels
There are two ways to set the log level.
http://jobtracker-host:50030/logLevel
The same thing can be achieved from the command line as follows:
% hadoop daemonlog -setlevel jobtracker-host:50030 \
  org.apache.hadoop.mapred.JobTracker DEBUG
Getting stack traces
http://jobtrac ...
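daemonlog can also read the current level back, which is handy for confirming the change; the host and class follow the same pattern as above:

% hadoop daemonlog -getlevel jobtracker-host:50030 org.apache.hadoop.mapred.JobTracker
% hadoop daemonlog -setlevel jobtracker-host:50030 org.apache.hadoop.mapred.JobTracker INFO   # set it back when done debugging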
# start-balancer.sh
starting balancer, logging to /usr/local/hadoop_20_1/bin/../logs/hadoop-loginuserid-balancer-hostname.out
# The -threshold argument specifies the threshold percentage that defines what it means for the cluster to be balanced. The flag is optional, in which case the threshold is 1 ...
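For example, to balance to within 5% utilization instead of the default, and to stop a balancer that is still running:

# start-balancer.sh -threshold 5     # cluster counts as balanced when every datanode is within 5% of the average utilization
# stop-balancer.sh                   # stop the running balancer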
Every datanode runs a block scanner, which periodically verifies all the blocks stored on the datanode. This allows bad blocks to be detected and fixed before they are read by clients. The DataBlockScanner maintains a list of blocks to verify and scans them one by one for checksum errors. The scanner ...
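Each datanode exposes the scanner's report over HTTP on its web port (50075 by default in this era); "datanode-host" below is a placeholder:

http://datanode-host:50075/blockScannerReport              # summary of the scan progress and failures
http://datanode-host:50075/blockScannerReport?listblocks   # per-block verification status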
Hadoop provides an fsck utility for checking the health of files in HDFS.
% hadoop fsck /
.................
Status: HEALTHY
Total size:    2928057882 B
Total dirs:    271
Total files:   173 (Files currently being written: 1)
Total blocks (validated):      194 (avg. block size 15093081 B) (Total ...
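fsck takes extra flags to drill into individual files and blocks, for example:

% hadoop fsck / -files -blocks -locations   # list every file, its blocks, and the datanodes holding each block replica
% hadoop fsck / -move                       # move any corrupt files it finds to /lost+found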