I've been learning HBase recently and went through a lot of material. The main references:

http://blog.chinaunix.net/u3/102568/article_144792.html
http://blog.csdn.net/dajuezhao/category/724896.aspx
http://www.javabloger.com/article/apache-hbase-shell-and-install-key-value.html

Work through all three and you should have a solid grasp of the basic HBase concepts.

Now, my environment:

cygwin + hadoop-0.20.2 + zookeeper-3.3.2 + hbase-0.20.6 (+ eclipse 3.6)

I won't cover the configuration details here; there are plenty of guides online, and careful reading is all it takes. With everything configured, start the services. The startup order reportedly matters:

1. hadoop: ./start-all.sh
2. zookeeper: ./zkServer.sh start
3. hbase: ./start-hbase.sh

Stop them in the reverse order: hbase, then zookeeper, then hadoop.

Once everything is up, the HBase master status page is at:

http://localhost:60010/master.jsp (HBase admin information)

Next, the Java code to operate on HBase. I wrote simple create/read/update/delete operations:
package org.test;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class TestHBase {

    public final static String COLENDCHAR = String.valueOf(KeyValue.COLUMN_FAMILY_DELIMITER); // ":"
    final String key_colName = "colN";
    final String key_colCluster = "colClut";
    final String key_colDataType = "colDT";
    final String key_colVal = "colV";

    // HBase client configuration and admin handle
    HBaseConfiguration conf;
    HBaseAdmin admin = null;

    public static void main(String[] args) {
        TestHBase app = new TestHBase();
        //app.test();
        app.init();
        app.go();
        app.list();
    }

    void list() {
        try {
            String tableName = "htcjd0";
            Map rsMap = this.getHTData(tableName);
            System.out.println(rsMap.toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    void go() {
        try {
            // create the table
            String tableName = "htcjd0";
            String[] columns = new String[] { "col" };
            this.createHTable(tableName, columns);

            // insert data
            List list = new ArrayList();
            List rowList = null;
            Map rowMap = null;
            for (int i = 0; i < 10; i++) {
                rowList = new ArrayList();

                rowMap = new HashMap();
                rowMap.put(key_colName, "col");
                //rowMap.put(key_colCluster, "cl_name");
                rowMap.put(key_colVal, "陈杰堆nocluster" + i);
                rowList.add(rowMap);

                rowMap = new HashMap();
                rowMap.put(key_colName, "col");
                rowMap.put(key_colCluster, "cl_name");
                rowMap.put(key_colVal, "陈杰堆cl_" + i);
                rowList.add(rowMap);

                rowMap = new HashMap();
                rowMap.put(key_colName, "col");
                rowMap.put(key_colCluster, "cl_age");
                rowMap.put(key_colVal, "cl_" + i);
                rowList.add(rowMap);

                rowMap = new HashMap();
                rowMap.put(key_colName, "col");
                rowMap.put(key_colCluster, "cl_sex");
                rowMap.put(key_colVal, "列cl_" + i);
                rowList.add(rowMap);

                list.add(rowList);
            }
            HTable hTable = this.getHTable(tableName);
            this.insertRow(hTable, list);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    void go0() {
        try {
            // create a table with three families
            String tableName = "htcjd";
            String[] columns = new String[] { "name", "age", "col" };
            this.createHTable(tableName, columns);

            // insert data; note that the repeated put() calls on the same map
            // overwrite key_colName/key_colVal, so only the last pair survives
            List list = new ArrayList();
            List rowList = null;
            Map rowMap = null;
            for (int i = 0; i < 10; i++) {
                rowList = new ArrayList();
                rowMap = new HashMap();
                rowMap.put(key_colName, "name");
                rowMap.put(key_colVal, "测试hbase" + i);
                rowMap.put(key_colName, "age");
                rowMap.put(key_colVal, "" + i);
                rowMap.put(key_colName, "col");
                rowMap.put(key_colVal, "列" + i);
                rowList.add(rowMap);
                list.add(rowList);
            }
            HTable hTable = this.getHTable(tableName);
            this.insertRow(hTable, list);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    void init() {
        try {
            Configuration HBASE_CONFIG = new Configuration();
            HBASE_CONFIG.set("hbase.zookeeper.quorum", "127.0.0.1");
            HBASE_CONFIG.set("hbase.zookeeper.property.clientPort", "2181");
            this.conf = new HBaseConfiguration(HBASE_CONFIG);
            this.admin = new HBaseAdmin(conf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /** Create a table descriptor. */
    HTableDescriptor createHTDesc(final String tableName) throws Exception {
        return new HTableDescriptor(tableName);
    }

    /**
     * Normalize an HBase column name. A column is either family-only,
     * e.g. "course:", or family plus qualifier, e.g. "course:math" —
     * in both cases it must contain the family delimiter.
     * @param colName column (family) name
     * @param cluster qualifier, may be null or empty
     */
    String fixColName(String colName, String cluster) {
        if (cluster != null && cluster.trim().length() > 0 && colName.endsWith(cluster)) {
            return colName;
        }
        String tmp = colName;
        int index = colName.indexOf(COLENDCHAR);
        if (index == -1) {
            tmp += COLENDCHAR;
        }
        // append the qualifier directly after the delimiter
        if (cluster != null && cluster.trim().length() > 0) {
            tmp += cluster;
        }
        return tmp;
    }

    String fixColName(String colName) {
        return this.fixColName(colName, null);
    }

    /**
     * Create a column descriptor. The stored name ends with a colon,
     * so a qualifier can be appended later if the schema grows.
     */
    HColumnDescriptor createHCDesc(String colName) throws Exception {
        String tmp = this.fixColName(colName);
        byte[] colNameByte = Bytes.toBytes(tmp);
        return new HColumnDescriptor(colNameByte);
    }

    /** Add a column family (no qualifier) to a table descriptor. */
    void addFamily(HTableDescriptor htdesc, String colName, final boolean readonly) throws Exception {
        htdesc.addFamily(this.createHCDesc(colName));
        htdesc.setReadOnly(readonly);
    }

    /** Delete a column — family only. */
    void removeFamily(String tableName, String colName) throws Exception {
        String tmp = this.fixColName(colName);
        this.admin.deleteColumn(tableName, tmp);
    }

    /** Delete a column — family plus qualifier. */
    void removeFamily(String tableName, String colName, String cluster) throws Exception {
        String tmp = this.fixColName(colName, cluster);
        this.admin.deleteColumn(tableName, tmp);
    }

    /** Create a table with no column families. */
    void createHTable(String tableName) throws Exception {
        if (admin.tableExists(tableName)) return; // skip if the table already exists
        HTableDescriptor htdesc = this.createHTDesc(tableName);
        admin.createTable(htdesc);
    }

    /** Create a table with the given column families. */
    void createHTable(String tableName, String[] columns) throws Exception {
        if (admin.tableExists(tableName)) return; // skip if the table already exists
        HTableDescriptor htdesc = this.createHTDesc(tableName);
        for (int i = 0; i < columns.length; i++) {
            this.addFamily(htdesc, columns[i], false);
        }
        admin.createTable(htdesc);
    }

    /** Drop a table: disable it first, then delete it. */
    void removeHTable(String tableName) throws Exception {
        admin.disableTable(tableName);
        admin.deleteTable(tableName);
    }

    /** Open a handle to a table. */
    HTable getHTable(String tableName) throws Exception {
        return new HTable(conf, tableName);
    }

    void updateColumn(String tableName, String rowID, String colName, String cluster, String value) throws Exception {
        BatchUpdate batchUpdate = new BatchUpdate(rowID);
        String tmp = this.fixColName(colName, cluster);
        batchUpdate.put(tmp, Bytes.toBytes(value));
        HTable hTable = this.getHTable(tableName);
        hTable.commit(batchUpdate);
    }

    void updateColumn(String tableName, String rowID, String colName, String value) throws Exception {
        this.updateColumn(tableName, rowID, colName, null, value);
    }

    void deleteColumn(String tableName, String rowID, String colName, String cluster) throws Exception {
        BatchUpdate batchUpdate = new BatchUpdate(rowID);
        String tmp = this.fixColName(colName, cluster);
        batchUpdate.delete(tmp);
        HTable hTable = this.getHTable(tableName);
        hTable.commit(batchUpdate);
    }

    void deleteColumn(String tableName, String rowID, String colName) throws Exception {
        this.deleteColumn(tableName, rowID, colName, null);
    }

    /** Fetch one cell by row and column. */
    String getColumnValue(String tableName, String rowID, String colName, String cluster) throws Exception {
        String tmp = this.fixColName(colName, cluster);
        HTable hTable = this.getHTable(tableName);
        Cell cell = hTable.get(rowID, tmp);
        if (cell == null) return null;
        return new String(cell.getValue());
    }

    /**
     * Fetch one column for every row.
     * @param cluster if empty, values from all qualifiers of the family are returned
     */
    Map getColumnValue(String tableName, String colName, String cluster) throws Exception {
        Scanner scanner = null;
        try {
            String tmp = this.fixColName(colName, cluster);
            HTable hTable = this.getHTable(tableName);
            scanner = hTable.getScanner(new String[] { tmp }); // e.g. "myColumnFamily:columnQualifier1"
            RowResult rowResult = scanner.next();
            Map resultMap = new HashMap();
            String row;
            Cell cell = null;
            while (rowResult != null) {
                row = new String(rowResult.getRow());
                cell = rowResult.get(Bytes.toBytes(tmp));
                if (cell == null) {
                    resultMap.put(row, null);
                } else {
                    resultMap.put(row, cell.getValue());
                }
                rowResult = scanner.next();
            }
            return resultMap;
        } finally {
            if (scanner != null) {
                scanner.close(); // scanners must always be closed
            }
        }
    }

    /** Scan the whole table into a Map of column name -> value. */
    public Map getHTData(String tableName) throws Exception {
        ResultScanner rs = null;
        try {
            HTable table = new HTable(this.conf, tableName);
            Scan s = new Scan();
            rs = table.getScanner(s);
            Map resultMap = new HashMap();
            for (Result r : rs) {
                for (KeyValue kv : r.raw()) {
                    resultMap.put(new String(kv.getColumn()), new String(kv.getValue()));
                }
            }
            return resultMap;
        } finally {
            if (rs != null) rs.close();
        }
    }

    /** Insert rows. Note that the cell value is reused as the row key (new Put(value)). */
    void insertRow(HTable table, List dataList) throws Exception {
        Put put = null;
        String colName = null;
        String colCluster = null;
        String colDataType = null;
        byte[] value;
        List rowDataList = null;
        Map rowDataMap = null;
        for (Iterator iterator = dataList.iterator(); iterator.hasNext();) {
            rowDataList = (List) iterator.next();
            for (int i = 0; i < rowDataList.size(); i++) {
                rowDataMap = (Map) rowDataList.get(i);
                colName = (String) rowDataMap.get(key_colName);
                colCluster = (String) rowDataMap.get(key_colCluster);
                colDataType = (String) rowDataMap.get(key_colDataType);
                Object val = rowDataMap.get(key_colVal);
                // everything is stored as a string; a conversion switch on
                // colDataType (string/int/float/long/double/char/byte) was
                // sketched here but left disabled
                value = Bytes.toBytes(String.valueOf(val));
                put = new Put(value); // row key == cell value
                String tmp = this.fixColName(colName, colCluster);
                byte[] colNameByte = Bytes.toBytes(tmp);
                byte[][] famAndQf = KeyValue.parseColumn(colNameByte);
                put.add(famAndQf[0], famAndQf[1], value);
                table.put(put);
            }
        }
    }

    // (table-structure inspection omitted)
}
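The column-name normalization above is the easiest part to get wrong, so here is a minimal, dependency-free sketch of the same rule (the ":" delimiter is hard-coded in place of KeyValue.COLUMN_FAMILY_DELIMITER, and the class name FixColNameDemo is made up for illustration — no HBase jars needed to follow the logic):

```java
public class FixColNameDemo {
    static final String DELIM = ":"; // stands in for KeyValue.COLUMN_FAMILY_DELIMITER

    // Same rule as TestHBase.fixColName: ensure the name contains the family
    // delimiter, then append the qualifier if one was given.
    static String fixColName(String colName, String cluster) {
        if (cluster != null && cluster.trim().length() > 0 && colName.endsWith(cluster)) {
            return colName; // already normalized
        }
        String tmp = colName;
        if (colName.indexOf(DELIM) == -1) {
            tmp += DELIM; // family-only names get a trailing colon
        }
        if (cluster != null && cluster.trim().length() > 0) {
            tmp += cluster; // append the qualifier
        }
        return tmp;
    }

    public static void main(String[] args) {
        System.out.println(fixColName("col", null));              // col:
        System.out.println(fixColName("col", "cl_name"));         // col:cl_name
        System.out.println(fixColName("col:cl_name", "cl_name")); // col:cl_name (unchanged)
    }
}
```

So a bare family name always ends up with a trailing colon ("col:"), and a qualifier is appended right after it ("col:cl_name") — which is exactly the form KeyValue.parseColumn splits back apart in insertRow.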
Run it in Eclipse and the console shows (classpath and library-path values abridged):

[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:host.name=chenjiedui
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.version=1.6.0_05
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.vendor=Sun Microsystems Inc.
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.home=D:\jdk1.6.0_05\jre
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.class.path=D:\workspace\MyHadoopApp\bin;... (hadoop-0.20.2, hbase-0.20.6 and zookeeper-3.3.2 jars)
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.library.path=D:\jdk1.6.0_05\bin;...
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.io.tmpdir=C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:java.compiler=<NA>
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.name=Windows XP
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.arch=x86
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:os.version=5.1
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.name=Administrator
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.home=C:\Documents and Settings\Administrator
[hadoop] INFO [main] ZooKeeper.logEnv(97) | Client environment:user.dir=D:\workspace\MyHadoopApp
[hadoop] INFO [main] ZooKeeper.<init>(373) | Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@cd2c3c
[hadoop] INFO [main-SendThread()] ClientCnxn.startConnect(1041) | Opening socket connection to server /127.0.0.1:2181
[hadoop] INFO [main-SendThread(localhost:2181)] ClientCnxn.primeConnection(949) | Socket connection established to localhost/127.0.0.1:2181, initiating session
[hadoop] INFO [main-SendThread(localhost:2181)] ClientCnxn.readConnectResult(738) | Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x12c6c8d8f6d0010, negotiated timeout = 40000

Output:
{col:cl_name=陈杰堆cl_9, col:name=陈杰堆9, col:sex=列9, col:cl_age=cl_9, col:cl_sex=列cl_9, col:age=9, col:=陈杰堆nocluster9}

[hadoop] INFO [HCM.shutdownHook] ZooKeeper.close(538) | Session: 0x12c6c8d8f6d0010 closed
[hadoop] INFO [main-EventThread] ClientCnxn.run(520) | EventThread shut down