1.2. Quick Start
This guide describes the setup of a standalone HBase instance running against the local filesystem. Later sections take you through running HBase on HDFS, a distributed filesystem. This section leads you through creating a table, inserting rows via the HBase shell, and then cleaning up and shutting down your standalone, local-filesystem HBase instance. The exercise below should take no more than ten minutes (not including download time).
Local Filesystem and Durability
Using HBase with a LocalFileSystem does not currently guarantee durability. You need to run HBase on HDFS to ensure all writes are preserved. Running against the local filesystem, though, will get you off the ground quickly and get you familiar with how the general system works, so let's run with it for now. See https://issues.apache.org/jira/browse/HBASE-3696 and its associated issues for more details.
Loopback IP
The advice below is for hbase-0.94.0 (and older) versions; we believe this is fixed in hbase-0.96.0 and beyond (let us know if we have it wrong), so there should be no need to modify /etc/hosts.
HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, for example, will default to 127.0.1.1 and this will cause problems for you [1].
/etc/hosts should look something like this:
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
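To confirm how localhost actually resolves on your machine, run a quick check with standard tools (a sanity check only; both commands ship with most Linux distributions):
$ getent hosts localhost
$ ping -c 1 localhost
The address reported should be 127.0.0.1; if it is not, adjust /etc/hosts as shown above.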
Choose a download site from this list of Apache Download Mirrors. Click on the suggested top link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then download the file that ends in .tar.gz to your local filesystem; e.g. hbase-0.94.2.tar.gz.
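If you prefer to fetch the tarball from the command line, the download might look like the following; the mirror host below is only a placeholder, so copy the actual link your chosen mirror gives you:
$ wget http://MIRROR/hbase/stable/hbase-0.94.2.tar.gz
Either way, you end up with the tarball in your working directory.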
Decompress and untar your download and then change into the unpacked directory.
$ tar xfz hbase-0.97.0-SNAPSHOT.tar.gz
$ cd hbase-0.97.0-SNAPSHOT
At this point, you are ready to start HBase. But before starting it, edit conf/hbase-site.xml, the file you write your site-specific configurations into. Set hbase.rootdir, the directory HBase writes data to, and hbase.zookeeper.property.dataDir, the directory ZooKeeper writes its data to:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///DIRECTORY/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/DIRECTORY/zookeeper</value>
  </property>
</configuration>
Replace DIRECTORY in the above with the path to the directory you would have HBase and ZooKeeper write their data. By default, hbase.rootdir is set to /tmp/hbase-${user.name}, and similarly so for the default ZooKeeper data location, which means you'll lose all your data whenever your server reboots unless you change it (most operating systems clear /tmp on restart).
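For example, assuming a hypothetical home directory of /home/hbaseuser (the paths here are illustrative; substitute your own), you could create the two data directories up front and point the properties at them:
$ mkdir -p /home/hbaseuser/hbase-data /home/hbaseuser/zookeeper-data
with hbase.rootdir set to file:///home/hbaseuser/hbase-data and hbase.zookeeper.property.dataDir set to /home/hbaseuser/zookeeper-data.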
Now start HBase:
$ ./bin/start-hbase.sh
starting Master, logging to logs/hbase-user-master-example.org.out
You should now have a running standalone HBase instance. In standalone mode, HBase runs all daemons in the one JVM; i.e. both the HBase and ZooKeeper daemons. HBase logs can be found in the logs subdirectory. Check them out, especially if it seems HBase had trouble starting.
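To confirm the daemon is up, you can list the running Java processes with the JDK's jps tool (assuming it is on your path); in standalone mode you should see a single HMaster process. The process id below is illustrative:
$ jps
12345 HMaster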
Is java installed?
All of the above presumes a 1.6 version of Oracle java is installed on your machine and available on your path (see Section 2.1.1, “Java”); i.e. when you type java, you see output that describes the options the java program takes (HBase requires java 6). If this is not the case, HBase will not start. Install java, edit conf/hbase-env.sh, uncommenting the JAVA_HOME line and pointing it to your java install, then retry the steps above.
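To check the version, run java -version; it should report a 1.6.x release. The JAVA_HOME line in conf/hbase-env.sh then looks like the following, with the path shown here being illustrative only; use your own JDK location:
export JAVA_HOME=/usr/lib/jvm/java-6-oracle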
Connect to your running HBase via the shell.
$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010
hbase(main):001:0>
Type help and then <RETURN> to see a listing of shell commands and options. Browse at least the paragraphs at the end of the help output for the gist of how variables and command arguments are entered into the HBase shell; in particular note how table names, rows, columns, etc., must be quoted.
Create a table named test with a single column family named cf. Verify its creation by listing all tables and then insert some values.
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
Above we inserted 3 values, one at a time. The first insert is at row1, column cf:a, with a value of value1. Columns in HBase consist of a column family prefix -- cf in this example -- followed by a colon and then a column qualifier suffix (a in this case).
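As an aside, you can address a single cell by naming the full column in a get; for example (a sketch, output omitted), the following returns just the cf:a cell of row1:
get 'test', 'row1', 'cf:a'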
Verify the data insert by running a scan of the table as follows:
hbase(main):007:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1288380727188, value=value1
row2 column=cf:b, timestamp=1288380738440, value=value2
row3 column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds
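The scan command also takes options; for instance (one common option among several; see help 'scan' in the shell for the full list), you can cap the number of rows returned:
scan 'test', {LIMIT => 2}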
Get a single row:
hbase(main):008:0> get 'test', 'row1'
COLUMN CELL
cf:a timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds
Now, disable and drop your table. This will clean up everything done above.
hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds
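If you want to double-check the cleanup (an optional step; exists is a standard shell command), ask the shell whether the table is still there; it should report that test no longer exists:
exists 'test'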
Exit the shell by typing exit.
hbase(main):014:0> exit
Stop your HBase instance by running the stop script.
$ ./bin/stop-hbase.sh
stopping hbase...............
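Once the script finishes, the same jps check used after startup (assuming the JDK tool is available) should no longer show an HMaster process:
$ jps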