HBase development: table operations and their Java API implementation

 

Development environment:

Hadoop: hadoop-1.1.2

HBase: hbase-0.94.11-security

Eclipse: Juno Service Release 2

Configuring Eclipse

Create a new Java project in Eclipse, right-click the project root, and choose "Properties > Java Build Path > Libraries > Add External JARs". Add hbase-0.94.11-security.jar and hbase-0.94.11-security-tests.jar from the root of the unpacked HBase installation, plus all the jars under its lib subdirectory, to the project's build path.

The examples below use a table named users. First, the basic shell operations:

Create the table

>create 'users','user_id','address','info'

Add records


put 'users','xiaoming','info:age','24';
put 'users','xiaoming','info:birthday','1987-06-17';
put 'users','xiaoming','info:company','alibaba';
put 'users','xiaoming','address:contry','china';
put 'users','xiaoming','address:province','zhejiang';
put 'users','xiaoming','address:city','hangzhou';
put 'users','zhangyifei','info:birthday','1987-4-17';
put 'users','zhangyifei','info:favorite','movie';
put 'users','zhangyifei','info:company','alibaba';
put 'users','zhangyifei','address:contry','china';
put 'users','zhangyifei','address:province','guangdong';
put 'users','zhangyifei','address:city','jieyang';
put 'users','zhangyifei','address:town','xianqiao'
Note: the last statement has no trailing semicolon; pressing Enter after it runs the whole batch and writes the data into the 'users' table.
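Conceptually, each put writes one cell addressed by (row key, "family:qualifier"), so a table can be pictured as a nested map from row key to column to value. A minimal pure-Java sketch of that model (illustrative only, no HBase dependency; the class and method names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the HBase data layout: row key -> ("family:qualifier" -> value).
public class PutModel {
    static Map<String, Map<String, String>> table = new HashMap<>();

    // Mirrors: put 'users', row, column, value
    static void put(String row, String column, String value) {
        table.computeIfAbsent(row, r -> new HashMap<>()).put(column, value);
    }

    public static void main(String[] args) {
        put("xiaoming", "info:age", "24");
        put("xiaoming", "address:city", "hangzhou");
        put("zhangyifei", "info:favorite", "movie");
        System.out.println(table.get("xiaoming").get("info:age")); // 24
        System.out.println(table.get("zhangyifei").size());        // 1
    }
}
```

A real put additionally records a timestamp per cell, which the versioning commands below rely on.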

Fetch a record

1. Get all data for a row key
>get 'users','xiaoming'
2. Get all data for one column family of a row
>get 'users','xiaoming','info'
3. Get the data of a single column within a column family
>get 'users','xiaoming','info:age'

Update a record

>put 'users','xiaoming','info:age' ,'29'
>get 'users','xiaoming','info:age'
>put 'users','xiaoming','info:age' ,'30'
>get 'users','xiaoming','info:age'

Fetch multiple versions of a cell

>get 'users','xiaoming',{COLUMN=>'info:age',VERSIONS=>1}
>get 'users','xiaoming',{COLUMN=>'info:age',VERSIONS=>2}
>get 'users','xiaoming',{COLUMN=>'info:age',VERSIONS=>3}
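HBase keeps multiple timestamped versions per cell (in 0.94 a column family keeps 3 by default), and `VERSIONS => n` returns up to the n newest. The behavior can be sketched in plain Java with a map of timestamp to value iterated newest-first (an illustrative model only, not the HBase API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative sketch of per-cell versioning: values keyed by timestamp, newest first.
public class VersionedCell {
    // Descending view so iteration order matches HBase's newest-first cell ordering.
    final NavigableMap<Long, String> versions =
            new TreeMap<Long, String>().descendingMap();

    void put(long ts, String value) { versions.put(ts, value); }

    // Return up to maxVersions newest values, like {VERSIONS => n} in the shell.
    List<String> get(int maxVersions) {
        List<String> out = new ArrayList<>();
        for (String v : versions.values()) {
            if (out.size() == maxVersions) break;
            out.add(v);
        }
        return out;
    }

    public static void main(String[] args) {
        VersionedCell age = new VersionedCell();
        age.put(1L, "24");
        age.put(2L, "29");
        age.put(3L, "30");
        System.out.println(age.get(1)); // [30]
        System.out.println(age.get(3)); // [30, 29, 24]
    }
}
```

This is why the earlier pair of put/get statements on 'info:age' shows older values reappearing once VERSIONS is raised: the updates did not overwrite the cell, they stacked new versions on it.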

Fetch a specific version of a cell

>get 'users','xiaoming',{COLUMN=>'info:age',TIMESTAMP=>1364874937056}

Delete the 'info:age' column of row xiaoming

>delete 'users','xiaoming','info:age'
>get 'users','xiaoming'

Delete an entire row

>deleteall 'users','xiaoming'

Count the rows in the table

>count 'users'

Truncate the table (internally this disables, drops, and recreates it)

>truncate 'users'

Full table scan

>scan 'users'

Output:

hbase(main):022:0> scan 'users'
ROW                                          COLUMN+CELL                                                                                                                    
 xiaoming                                    column=address:city, timestamp=1378733106132, value=hangzhou                                                                   
 xiaoming                                    column=address:contry, timestamp=1378733106058, value=china                                                                    
 xiaoming                                    column=address:province, timestamp=1378733106120, value=zhejiang                                                               
 xiaoming                                    column=info:age, timestamp=1378733105943, value=24                                                                             
 xiaoming                                    column=info:birthday, timestamp=1378733105961, value=1987-06-17                                                                
 xiaoming                                    column=info:company, timestamp=1378733106006, value=alibaba                                                                    
 zhangyifei                                  column=address:city, timestamp=1378733106184, value=jieyang                                                                    
 zhangyifei                                  column=address:contry, timestamp=1378733106176, value=china                                                                    
 zhangyifei                                  column=address:province, timestamp=1378733106180, value=guangdong                                                              
 zhangyifei                                  column=address:town, timestamp=1378733106189, value=xianqiao                                                                   
 zhangyifei                                  column=info:birthday, timestamp=1378733106161, value=1987-4-17                                                                 
 zhangyifei                                  column=info:company, timestamp=1378733106171, value=alibaba                                                                    
 zhangyifei                                  column=info:favorite, timestamp=1378733106167, value=movie                                                                     
2 row(s) in 0.1900 seconds
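Note that scan returns rows sorted by the lexicographic byte order of their row keys ('xiaoming' before 'zhangyifei'), not by insertion order. A pure-Java illustration of that comparison (no HBase dependency; the class name is made up):

```java
import java.util.Arrays;

public class RowKeyOrder {
    // HBase compares row keys as unsigned byte arrays; for plain ASCII keys
    // this matches String ordering, which is why 'xiaoming' scans first.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length; // shorter key sorts first on a tie
    }

    public static void main(String[] args) {
        String[] keys = { "zhangyifei", "xiaoming" };
        Arrays.sort(keys, (x, y) -> compare(x.getBytes(), y.getBytes()));
        System.out.println(Arrays.toString(keys)); // [xiaoming, zhangyifei]
    }
}
```

Row-key design therefore controls physical ordering and scan locality, which matters once tables grow beyond toy size.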

Example 1: print the column family names of the table users

Code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class ExampleClient {
	public static void main(String[] args) throws IOException {

		Configuration conf = HBaseConfiguration.create();
		// Required when running from Eclipse; without it the client cannot locate the cluster
		conf.set("hbase.zookeeper.quorum", "master");
		conf.set("hbase.zookeeper.property.clientPort", "2181");
		HBaseAdmin admin = new HBaseAdmin(conf); // administrative client
		HTableDescriptor tableDescriptor = admin.getTableDescriptor(Bytes.toBytes("users"));
		byte[] name = tableDescriptor.getName();
		System.out.println("result:");

		System.out.println("table name: " + new String(name));
		HColumnDescriptor[] columnFamilies = tableDescriptor.getColumnFamilies();
		for (HColumnDescriptor d : columnFamilies) {
			System.out.println("column Families: " + d.getNameAsString());
		}
		admin.close(); // release the connection
	}
}

Output:

2013-09-09 15:58:51,890 WARN  conf.Configuration (Configuration.java:<clinit>(195)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=w253245.ppp.asahi-net.or.jp
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.6.0_10-rc2
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Sun Microsystems Inc.
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\Java\jre6
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.class.path=...............// omitted here for brevity
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=D:\workspace\Eclipse-jee\hadoop-1.1.21\lib\native;D:\workspace\Eclipse-jee\hadoop-1.1.21\lib\native
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA>
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows XP
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=5.1
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=hadoop
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Documents and Settings\Administrator
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=D:\workspace\Eclipse-jee\Hadoop_APPs_U_tht
2013-09-09 15:58:55,031 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=hconnection
2013-09-09 15:58:56,171 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(104)) - The identifier of this process is 6032@tht
2013-09-09 15:58:56,234 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server master/121.1.253.251:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
2013-09-09 15:58:56,296 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to master/121.1.253.251:2181, initiating session
2013-09-09 15:58:56,484 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server master/121.1.253.251:2181, sessionid = 0x141011ad7db000e, negotiated timeout = 180000
result:
table name: users
column Families: address
column Families: info
column Families: user_id

Example 2: operate on table users2 with the HBase Java API:

Code:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class OperateTable {
	// Shared static configuration
	private static Configuration conf = null;
	static {
		conf = HBaseConfiguration.create();
		conf.set("hbase.zookeeper.quorum", "master");
		conf.set("hbase.zookeeper.property.clientPort", "2181");
	}

	// Create a table
	public static void createTable(String tableName, String[] columnFamilys)
			throws Exception {
		// Administrative client
		HBaseAdmin hAdmin = new HBaseAdmin(conf);

		if (hAdmin.tableExists(tableName)) {
			System.out.println("Table already exists");
			System.exit(0);
		} else {
			// Descriptor for the new table
			HTableDescriptor tableDesc = new HTableDescriptor(tableName);
			// Add the column families to the descriptor
			for (String columnFamily : columnFamilys) {
				tableDesc.addFamily(new HColumnDescriptor(columnFamily));
			}
			// Create the table from the descriptor
			hAdmin.createTable(tableDesc);
			System.out.println("Table created successfully");
		}
	}

	// Delete a table
	public static void deleteTable(String tableName) throws Exception {
		HBaseAdmin hAdmin = new HBaseAdmin(conf);

		if (hAdmin.tableExists(tableName)) {
			// A table must be disabled before it can be deleted
			hAdmin.disableTable(tableName);
			hAdmin.deleteTable(tableName);
			System.out.println("Table deleted successfully");

		} else {
			System.out.println("Table to delete does not exist");
			System.exit(0);
		}
	}

	// Insert a single cell
	public static void addRow(String tableName, String row,
			String columnFamily, String column, String value) throws Exception {
		HTable table = new HTable(conf, tableName);
		Put put = new Put(Bytes.toBytes(row));
		// Arguments: column family, qualifier, value
		put.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column),
				Bytes.toBytes(value));
		table.put(put);
	}

	// Delete a single row
	public static void delRow(String tableName, String row) throws Exception {
		HTable table = new HTable(conf, tableName);
		Delete del = new Delete(Bytes.toBytes(row));
		table.delete(del);
	}

	// Delete multiple rows in one batch
	public static void delMultiRows(String tableName, String[] rows)
			throws Exception {
		HTable table = new HTable(conf, tableName);
		List<Delete> list = new ArrayList<Delete>();

		for (String row : rows) {
			Delete del = new Delete(Bytes.toBytes(row));
			list.add(del);
		}

		table.delete(list);
	}

	// Fetch a single row
	public static void getRow(String tableName, String row) throws Exception {
		HTable table = new HTable(conf, tableName);
		Get get = new Get(Bytes.toBytes(row));
		Result result = table.get(get);
		// Print each cell; note the second "Row Name" label actually shows the column qualifier
		for (KeyValue rowKV : result.raw()) {
			System.out.print("Row Name: " + new String(rowKV.getRow()) + " ");
			System.out.print("Timestamp: " + rowKV.getTimestamp() + " ");
			System.out.print("column Family: " + new String(rowKV.getFamily()) + " ");
			System.out.print("Row Name:  " + new String(rowKV.getQualifier()) + " ");
			System.out.println("Value: " + new String(rowKV.getValue()) + " ");
		}
	}

	// Fetch all rows
	public static void getAllRows(String tableName) throws Exception {
		HTable table = new HTable(conf, tableName);
		Scan scan = new Scan();
		ResultScanner results = table.getScanner(scan);
		// Print each cell of each row
		for (Result result : results) {
			for (KeyValue rowKV : result.raw()) {
				System.out.print("Row Name: " + new String(rowKV.getRow()) + " ");
				System.out.print("Timestamp: " + rowKV.getTimestamp() + " ");
				System.out.print("column Family: " + new String(rowKV.getFamily()) + " ");
				System.out.print("Row Name:  " + new String(rowKV.getQualifier()) + " ");
				System.out.println("Value: " + new String(rowKV.getValue()) + " ");
			}
		}
	}

	// main
	public static void main(String[] args) {
		try {
			String tableName = "users2";

			// Step 1: create table "users2"
			String[] columnFamilys = { "info", "course" };
			OperateTable.createTable(tableName, columnFamilys);

			// Step 2: insert data
			// First row
			OperateTable.addRow(tableName, "tht", "info", "age", "20");
			OperateTable.addRow(tableName, "tht", "info", "sex", "boy");
			OperateTable.addRow(tableName, "tht", "course", "china", "97");
			OperateTable.addRow(tableName, "tht", "course", "math", "128");
			OperateTable.addRow(tableName, "tht", "course", "english", "85");
			// Second row
			OperateTable.addRow(tableName, "xiaoxue", "info", "age", "19");
			OperateTable.addRow(tableName, "xiaoxue", "info", "sex", "boy");
			OperateTable.addRow(tableName, "xiaoxue", "course", "china", "90");
			OperateTable.addRow(tableName, "xiaoxue", "course", "math", "120");
			OperateTable.addRow(tableName, "xiaoxue", "course", "english", "90");
			// Third row
			OperateTable.addRow(tableName, "qingqing", "info", "age", "18");
			OperateTable.addRow(tableName, "qingqing", "info", "sex", "girl");
			OperateTable.addRow(tableName, "qingqing", "course", "china", "100");
			OperateTable.addRow(tableName, "qingqing", "course", "math", "100");
			OperateTable.addRow(tableName, "qingqing", "course", "english", "99");
			// Step 3: fetch one row
			System.out.println("Fetching one row");
			OperateTable.getRow(tableName, "tht");
			// Step 4: fetch all rows
			System.out.println("Fetching all rows");
			OperateTable.getAllRows(tableName);
			// Step 5: delete one row
			System.out.println("Deleting one row");
			OperateTable.delRow(tableName, "tht");
			OperateTable.getAllRows(tableName);
			// Step 6: delete multiple rows
			System.out.println("Deleting multiple rows");
			String[] rows = { "xiaoxue", "qingqing" };
			OperateTable.delMultiRows(tableName, rows);
			OperateTable.getAllRows(tableName);
			// Step 7: delete the table
			System.out.println("Deleting the table");
			OperateTable.deleteTable(tableName);

		} catch (Exception err) {
			err.printStackTrace();
		}
	}
}

Output:

13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:host.name=tht
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.home=D:\Java\jre6
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.class.path=... (omitted)
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.library.path=... (omitted)
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows 7
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:os.version=6.1
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:user.home=C:\Users\Administrator
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Client environment:user.dir=D:\workspace\eclipse-workspace-jee-kepler\hadoop-Apps-tht
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=hconnection
13/09/09 22:01:18 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9608@tht
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Opening socket connection to server master/192.168.1.101:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Socket connection established to master/192.168.1.101:2181, initiating session
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Session establishment complete on server master/192.168.1.101:2181, sessionid = 0x14102d851f8000b, negotiated timeout = 180000
13/09/09 22:01:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@ba8602
13/09/09 22:01:18 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9608@tht
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Opening socket connection to server master/192.168.1.101:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Socket connection established to master/192.168.1.101:2181, initiating session
13/09/09 22:01:18 INFO zookeeper.ClientCnxn: Session establishment complete on server master/192.168.1.101:2181, sessionid = 0x14102d851f8000c, negotiated timeout = 180000
13/09/09 22:01:20 INFO zookeeper.ZooKeeper: Session: 0x14102d851f8000c closed
13/09/09 22:01:20 INFO zookeeper.ClientCnxn: EventThread shut down
13/09/09 22:01:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@ba8602
13/09/09 22:01:36 INFO zookeeper.ClientCnxn: Opening socket connection to server master/192.168.1.101:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
13/09/09 22:01:36 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9608@tht
13/09/09 22:01:36 INFO zookeeper.ClientCnxn: Socket connection established to master/192.168.1.101:2181, initiating session
13/09/09 22:01:38 INFO zookeeper.ClientCnxn: Session establishment complete on server master/192.168.1.101:2181, sessionid = 0x14102d851f8000d, negotiated timeout = 180000
13/09/09 22:01:38 INFO zookeeper.ZooKeeper: Session: 0x14102d851f8000d closed
13/09/09 22:01:38 INFO zookeeper.ClientCnxn: EventThread shut down
Table created successfully
Fetching one row
Row Name: tht Timestamp: 1378735285456 column Family: course Row Name:  china Value: 97 
Row Name: tht Timestamp: 1378735285918 column Family: course Row Name:  english Value: 85 
Row Name: tht Timestamp: 1378735285591 column Family: course Row Name:  math Value: 128 
Row Name: tht Timestamp: 1378735285056 column Family: info Row Name:  age Value: 20 
Row Name: tht Timestamp: 1378735285368 column Family: info Row Name:  sex Value: boy 
Fetching all rows
Row Name: qingqing Timestamp: 1378735286503 column Family: course Row Name:  china Value: 100 
Row Name: qingqing Timestamp: 1378735286547 column Family: course Row Name:  english Value: 99 
Row Name: qingqing Timestamp: 1378735286524 column Family: course Row Name:  math Value: 100 
Row Name: qingqing Timestamp: 1378735286463 column Family: info Row Name:  age Value: 18 
Row Name: qingqing Timestamp: 1378735286482 column Family: info Row Name:  sex Value: girl 
Row Name: tht Timestamp: 1378735285456 column Family: course Row Name:  china Value: 97 
Row Name: tht Timestamp: 1378735285918 column Family: course Row Name:  english Value: 85 
Row Name: tht Timestamp: 1378735285591 column Family: course Row Name:  math Value: 128 
Row Name: tht Timestamp: 1378735285056 column Family: info Row Name:  age Value: 20 
Row Name: tht Timestamp: 1378735285368 column Family: info Row Name:  sex Value: boy 
Row Name: xiaoxue Timestamp: 1378735286268 column Family: course Row Name:  china Value: 90 
Row Name: xiaoxue Timestamp: 1378735286403 column Family: course Row Name:  english Value: 90 
Row Name: xiaoxue Timestamp: 1378735286343 column Family: course Row Name:  math Value: 120 
Row Name: xiaoxue Timestamp: 1378735286114 column Family: info Row Name:  age Value: 19 
Row Name: xiaoxue Timestamp: 1378735286236 column Family: info Row Name:  sex Value: boy 
Deleting one row
Row Name: qingqing Timestamp: 1378735286503 column Family: course Row Name:  china Value: 100 
Row Name: qingqing Timestamp: 1378735286547 column Family: course Row Name:  english Value: 99 
Row Name: qingqing Timestamp: 1378735286524 column Family: course Row Name:  math Value: 100 
Row Name: qingqing Timestamp: 1378735286463 column Family: info Row Name:  age Value: 18 
Row Name: qingqing Timestamp: 1378735286482 column Family: info Row Name:  sex Value: girl 
Row Name: xiaoxue Timestamp: 1378735286268 column Family: course Row Name:  china Value: 90 
Row Name: xiaoxue Timestamp: 1378735286403 column Family: course Row Name:  english Value: 90 
Row Name: xiaoxue Timestamp: 1378735286343 column Family: course Row Name:  math Value: 120 
Row Name: xiaoxue Timestamp: 1378735286114 column Family: info Row Name:  age Value: 19 
Row Name: xiaoxue Timestamp: 1378735286236 column Family: info Row Name:  sex Value: boy 
Deleting multiple rows
Deleting the table
13/09/09 22:01:40 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@ba8602
13/09/09 22:01:40 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9608@tht
13/09/09 22:01:40 INFO zookeeper.ClientCnxn: Opening socket connection to server master/192.168.1.101:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
13/09/09 22:01:40 INFO zookeeper.ClientCnxn: Socket connection established to master/192.168.1.101:2181, initiating session
13/09/09 22:01:42 INFO zookeeper.ClientCnxn: Session establishment complete on server master/192.168.1.101:2181, sessionid = 0x14102d851f8000e, negotiated timeout = 180000
13/09/09 22:01:44 INFO zookeeper.ZooKeeper: Session: 0x14102d851f8000e closed
13/09/09 22:01:44 INFO zookeeper.ClientCnxn: EventThread shut down
13/09/09 22:01:48 INFO client.HBaseAdmin: Started disable of users2
13/09/09 22:01:50 INFO client.HBaseAdmin: Disabled users2
13/09/09 22:01:51 INFO client.HBaseAdmin: Deleted users2
Table deleted successfully




