After the MapR framework has been installed, install and configure HBase and Hive.
The MapR framework is installed at /opt/mapr.
HBase is installed at /opt/mapr/hbase/hbase-0.90.4.
Hive is installed at /opt/mapr/hive/hive-0.7.1.
The steps for integrating Hive with HBase are as follows:
1. Copy /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar and /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar into the /opt/mapr/hive/hive-0.7.1/lib directory.
Note: if other versions of these two jars already exist under hive/lib (for example zookeeper-3.3.1.jar), it is recommended to delete them and use the versions that ship with HBase.
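A minimal sketch of this copy step, assuming the paths above (the name of any stale jar under hive/lib may differ on your cluster):
# remove a conflicting older copy shipped with Hive, if present
rm -f /opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.1.jar
# copy the jars that ship with the installed HBase
cp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar /opt/mapr/hive/hive-0.7.1/lib/
cp /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar /opt/mapr/hive/hive-0.7.1/lib/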
2. Edit hive-site.xml under hive/conf and append the following at the bottom:
<property>
  <name>hive.querylog.location</name>
  <value>/opt/mapr/hive/hive-0.7.1/logs</value>
</property>
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,file:///opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,file:///opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar</value>
</property>
Note: if hive-site.xml does not exist, create it yourself, or rename the hive-default.xml.template file and use that.
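For example, a sketch of creating the file from the template (template file name as mentioned above; adjust if your Hive build ships a different name):
cd /opt/mapr/hive/hive-0.7.1/conf
cp hive-default.xml.template hive-site.xml   # then append the two properties above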
3. Copy hbase-0.90.4.jar into hadoop/lib on every Hadoop node (including the master).
4. Copy hbase-site.xml from hbase/conf into hadoop/conf on every Hadoop node (including the master).
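A rough way to do steps 3 and 4 is to push the files to each node with scp, for example (the node names master, node1..node3 and the Hadoop directory /opt/mapr/hadoop/hadoop-0.20.2 are taken from this setup; substitute your own):
for node in master node1 node2 node3; do
  scp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar $node:/opt/mapr/hadoop/hadoop-0.20.2/lib/
  scp /opt/mapr/hbase/hbase-0.90.4/conf/hbase-site.xml $node:/opt/mapr/hadoop/hadoop-0.20.2/conf/
done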
Note: if steps 3 and 4 are skipped, running Hive is very likely to fail with an error such as:
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.
5. Start Hive
Single-node startup:
bin/hive -hiveconf hbase.master=master:60000
Cluster startup:
bin/hive -hiveconf hbase.zookeeper.quorum=node1,node2,node3 (list all of the ZooKeeper nodes)
If hive.aux.jars.path is not configured in hive-site.xml, Hive can instead be started as follows:
hive --auxpath /opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,/opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,/opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar -hiveconf hbase.master=localhost:60000
In our tests, after adding the following to Hive's configuration file hive-site.xml:
<property>
  <name>hive.zookeeper.quorum</name>
  <value>node1,node2,node3</value>
  <description>The list of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
Hive can work with HBase without any extra startup parameters.
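With that property in place, a plain startup is enough (sketch, same layout as above):
cd /opt/mapr/hive/hive-0.7.1
bin/hive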
6. Test after startup
(1) Create a table that HBase recognizes
CREATE TABLE hbase_table_1(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
hbase.table.name specifies the name of the table in HBase.
hbase.columns.mapping specifies the mapping to HBase column families; the :key entry is fixed, and the Hive column it maps to (the foo field of the pokes table used below) must hold unique values. For multiple columns use data:1,data:2; for multiple column families use data1:1,data2:1.
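Purely as an illustration of the multi-column and multi-family forms (the table and column names here are made up), such a mapping would look like:
CREATE TABLE hbase_table_multi(key int, c1 string, c2 string, d1 string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,data:1,data:2,data1:1")
TBLPROPERTIES ("hbase.table.name" = "xyz_multi");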
Create a partitioned table:
CREATE TABLE hbase_table_1(key int, value string) partitioned by (day string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
(2) Import data with SQL
Create a new Hive table:
create table pokes(foo int,bar string)row format delimited fields terminated by ',';
Bulk-load the data:
load data local inpath '/home/1.txt' overwrite into table pokes;
The contents of 1.txt are:
1,hello
2,pear
3,world
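To reproduce the test, the sample file can be created like this (path /home/1.txt as used above):
cat > /home/1.txt <<EOF
1,hello
2,pear
3,world
EOF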
Import into hbase_table_1 with SQL:
insert overwrite table hbase_table_1 select * from pokes;
Import into the partitioned table:
insert overwrite table hbase_table_1 partition (day='2012-01-01') select * from pokes;
(3) View the data
hive> select * from hbase_table_1;
OK
1 hello
2 pear
3 world
(Note: partitioned tables integrated with HBase have a quirk: select * from table returns no data, while select key,value from table does.)
(4) Log in to HBase to view the data
hbase shell
hbase(main):002:0> describe 'xyz'
DESCRIPTION                                                          ENABLED
 {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE', true
 REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3',
 TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}]}
1 row(s) in 0.0830 seconds
hbase(main):003:0> scan 'xyz'
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1331002501432, value=hello
2 column=cf1:val, timestamp=1331002501432, value=pear
3 column=cf1:val, timestamp=1331002501432, value=world
At this point the rows just inserted from Hive are visible in HBase.
7. For tables that already exist in HBase, use CREATE EXTERNAL TABLE in Hive.
For example, if the HBase table is named test1 with column families a:, b:, c:, the Hive DDL is:
create external table hive_test (key int,gid map<string,string>,sid map<string,string>,uid map<string,string>) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" ="a:,b:,c:") TBLPROPERTIES ("hbase.table.name" = "test1");
After the table has been created in Hive, query the contents of the HBase table test1:
select * from hive_test;
OK
1 {"":"qqq"} {"":"aaa"} {"":"bbb"}
2 {"":"qqq"} {} {"":"bbb"}
To query the values stored in the gid field of the external table created above:
select gid[''] from hive_test;
The query result is:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201203052222_0017, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201203052222_0017
Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=maprfs:/// -kill job_201203052222_0017
2012-03-06 14:38:29,141 Stage-1 map = 0%, reduce = 0%
2012-03-06 14:38:33,171 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201203052222_0017
OK
qqq
qqq
If the HBase table test1 instead has columns user:gid, user:sid, info:uid, info:level, the Hive DDL is:
create external table hive_test(key int,user map<string,string>,info map<string,string>) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" ="user:,info:") TBLPROPERTIES ("hbase.table.name" = "test1");
Query the HBase table with:
select user['gid'] from hive_test;
Creating an associated table
Since the table we want to query already exists in HBase, we use CREATE EXTERNAL TABLE to create the Hive side, as follows:
CREATE EXTERNAL TABLE hbase_table_2(key string, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "data:1")
TBLPROPERTIES("hbase.table.name" = "test");
hbase.columns.mapping points at the corresponding column family/columns; for multiple columns use data:1,data:2; for multiple column families use data1:1,data2:1.
hbase.table.name points at the corresponding HBase table.
hbase_table_2(key string, value string) is the associated Hive table.
Note: when the values are of type int, the CREATE statement is different:
CREATE EXTERNAL TABLE HisDiagnose(key string, doctorId int, patientId int, description String, rtime int) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,diagnoseFamily:doctorId,diagnoseFamily:patientId,diagnoseFamily:description,diagnoseFamily:rtime","hbase.table.default.storage.type"="binary") TBLPROPERTIES("hbase.table.name" = "HisDiagnose");
Functional test:
Using Hive:
hive> create table pokes (foo int, bar string);
OK
Time taken: 0.251 seconds
hive> create table invites (foo INT, bar STRING) partitioned by (ds string);
OK
Time taken: 0.106 seconds
hive> show tables;
OK
invites
pokes
Time taken: 0.107 seconds
hive> describe invites;
OK
foo int
bar string
ds string
Time taken: 0.151 seconds
hive> alter table pokes add columns (new_col int);
OK
Time taken: 0.117 seconds
hive> alter table invites add columns (new_col2 int);
OK
Time taken: 0.152 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv1.txt
Loading data to table pokes
OK
Time taken: 0.288 seconds
hive> load data local inpath './examples/files/kv2.txt' overwrite into table invites partition (ds='2008-08-15');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv2.txt
Loading data to table invites partition {ds=2008-08-15}
OK
Time taken: 0.524 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv3.txt
Loading data to table invites partition {ds=2008-08-08}
OK
Time taken: 0.406 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0002, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0002
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0002
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
Ended Job = job_200902261245_0002
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.551 seconds
hive> select count(1) from pokes;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change numer of reducers use: set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0003, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0003
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0003
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0003
Starting Job = job_200902261245_0004, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0004
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0004
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0004
OK
500
Time taken: 57.285 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0005, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0005
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0005
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
Ended Job = job_200902261245_0005
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.349 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT COUNT(1) FROM invites a;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change numer of reducers use: set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0006, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0006
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0006
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0006
Starting Job = job_200902261245_0007, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0007
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0007
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0007
Moving data to: /tmp/reg_5
OK
Time taken: 70.956 seconds