Today I took another look at HBase operations. I had run operations against HBase before — creating tables directly, importing data, and driving HBase from MapReduce — but always on a single node and from inside Eclipse, where I never had to worry about jar dependencies. Today I planned to copy the code into hadoop's lib directory and run it from the command line. The problem I hit this afternoon was the following:
12/09/29 12:29:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/09/29 12:29:36 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
12/09/29 12:29:36 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/09/29 12:29:36 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 6479@fansyPC
12/09/29 12:29:36 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
12/09/29 12:29:36 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Seeing this message, my first thought was: did I misconfigure something? Why is it showing 127.0.0.1? It should be another node's hostname, or at least this machine's hostname — as it stands it looks like a standalone startup. My configuration is as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://fansyPC:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/fansy/zookeeper</value>
  </property>
</configuration>
It still did not look like a configuration error, so I went back to the official documentation, which notes that Ubuntu systems need a couple of extra settings, so I made these changes as well:
In /etc/security/limits.conf, add these two lines:
hadoop - nofile 32768
hadoop soft/hard nproc 32000
In /etc/pam.d/common-session, add this line:
session required pam_limits.so
But it still did not work. Then I wondered whether hbase.zookeeper.quorum must not point at the same machine as a slave node, so I changed its value to fansyPC. I also saw in the official documentation that the hadoop-core-1.0.2.jar under hbase/lib should replace the hadoop-core-1.0.2.jar under hadoop/, so I did that too, after which I started getting "class not found" errors — so I was getting close. Finally I copied all the jars under hbase/lib into hadoop/lib (skipping duplicates), and then everything was OK.
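As an aside, a quick way to see why the client was still connecting to localhost:2181 is to print the ZooKeeper settings that the client-side Configuration actually resolved. The sketch below is not from the original post: the class name is made up, and the commented-out conf.set(...) line is only an illustration of forcing the quorum from code when hbase-site.xml is not on the runtime classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumCheck {
    public static void main(String[] args) {
        // HBaseConfiguration.create() reads hbase-site.xml from the classpath; if the
        // file is not found, hbase.zookeeper.quorum falls back to "localhost", which
        // matches the connectString=localhost:2181 seen in the log above.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("clientPort = " + conf.get("hbase.zookeeper.property.clientPort"));
        // Illustrative fallback only (hostname here is an assumption, not a recommendation):
        // conf.set("hbase.zookeeper.quorum", "fansyPC");
    }
}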
When compiling the Java file I only needed the following three jars: hadoop-core-1.0.2.jar, hbase-0.94.0.jar and zookeeper-3.4.3.jar. The Java code is as follows:
package org.fansy.date905;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class TableOperation {
    /**
     * Covers basic table operations: create table, drop table and put;
     * scan and other operations can be added in the same style.
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        // createTable("tabledemo","name");
        // args[0] selects the operation: 1 = create table, 2 = drop table
        int status = Integer.parseInt(args[0]);
        if (status == 1) {
            createTable(args[1], args[2]);
        } else if (status == 2) {
            deleteTable(args[1]);
        }
    }

    /*
     * Create a table with a single column family.
     */
    public static void createTable(String tablename, String family) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor tableDesc = new HTableDescriptor(tablename);
        tableDesc.addFamily(new HColumnDescriptor(family));
        admin.createTable(tableDesc);
        System.out.println("create table:" + tablename + " done");
        admin.close();
    }

    /*
     * Add one cell to a table: row, family, qualifier and value.
     */
    public static void addData(String tablename, String family, String qualifier,
            String row, String value) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, tablename);
        Put putrow = new Put(row.getBytes());
        putrow.add(family.getBytes(), qualifier.getBytes(), value.getBytes());
        table.put(putrow);
        table.close();
    }

    /*
     * Drop the table with the given name (disable it first).
     */
    public static void deleteTable(String tablename) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.disableTable(tablename);
        admin.deleteTable(tablename);
        admin.close();
    }
}
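The addData method above is never called from main. As a small, hedged follow-up sketch (the class name, row key "row1" and qualifier "col1" below are made-up examples, not part of the original post), a row written with addData could be read back with a Get on the same HBase 0.94 client API to confirm the write:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadRowDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // "testdemo" is the table created above; "row1" and "col1" are hypothetical.
        HTable table = new HTable(conf, "testdemo");
        Get get = new Get(Bytes.toBytes("row1"));
        Result result = table.get(get);
        byte[] value = result.getValue(Bytes.toBytes("name"), Bytes.toBytes("col1"));
        System.out.println(value == null ? "(no value)" : Bytes.toString(value));
        table.close();
    }
}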
The command-line test of TableOperation went as follows:
fansy@fansyPC:~/hadoop-1.0.2$ bin/hadoop TableOperation 1 testdemo name
Warning: $HADOOP_HOME is deprecated.
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:host.name=fansyPC
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_07
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/fansy/jdk1.7.0_07/jre
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/fansy/hadoop-1.0.2/libexec/../lib/native/Linux-amd64-64
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-31-generic
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:user.name=fansy
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/fansy
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/fansy/hadoop-1.0.2
12/09/29 15:28:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/09/29 15:28:04 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
12/09/29 15:28:04 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/09/29 15:28:04 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 18076@bwa108
12/09/29 15:28:04 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
12/09/29 15:28:04 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13a10cf8247000e, negotiated timeout = 180000
12/09/29 15:28:05 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x13a10cf8247000e
12/09/29 15:28:05 INFO zookeeper.ZooKeeper: Session: 0x13a10cf8247000e closed
create table:testdemo done
With that, basic command-line operation of HBase is working.