Connecting to Hive remotely with Eclipse --- based on CDH5

 

I have already installed CDH 5.4.10 with Cloudera Manager; here is the Hive connection setup.
Since HiveServer2 and the Hive service are both already running on master01, all I need to do is test the connection.

Base environment:
CDH 5.4.10
hadoop 2.6.0
hive  1.1.0
hbase 1.0.0
zookeeper  3.4.5
sqoop 1.4.5
jdk 1.7.0_67
os  centos6.5


[root@master01 ~]# cd /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/bin/
[root@master01 bin]# ./beeline
Beeline version 1.1.0-cdh5.4.10 by Apache Hive
beeline>

beeline> !connect jdbc:hive2://192.168.1.207:10000
Connecting to jdbc:hive2://192.168.1.207:10000
Enter username for jdbc:hive2://192.168.1.207:10000: root
Enter password for jdbc:hive2://192.168.1.207:10000: ****   -- root/root connects right away (presumably HiveServer2 authentication is set to NONE, so any username/password is accepted)
Connected to: Apache Hive (version 1.1.0-cdh5.4.10)
Driver: Hive JDBC (version 1.1.0-cdh5.4.10)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.1.207:10000>


First, let's create a table and load some data so we have something to query later:
[root@master01 bin]# hive
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/hive-common-1.1.0-cdh5.4.10.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>
hive> create table runningrecord_old(id int,systemno string,longitude string,latitude string,speed string,direction smallint,elevation string,acc string,islocation string,mileage string,oil string,currenttime timestamp,signalname string,currentvalue string) row format delimited fields terminated by ',';
OK
Time taken: 2.607 seconds
hive> load data local inpath '/tmp/rtest.txt' into table runningrecord_old;
Loading data to table default.runningrecord_old
Table default.runningrecord_old stats: [numFiles=1, totalSize=5480004]
OK
Time taken: 1.575 seconds
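
The contents of /tmp/rtest.txt aren't shown here; each line just has to be one comma-delimited record matching the 14 columns declared above. A made-up example row (values are purely illustrative):

1,SYS0001,113.2644,23.1291,60.5,90,21.0,1,1,1023.4,55.2,2016-04-01 08:30:00,speed,60.5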

Eclipse is installed on my Win7 machine.
Next, copy the required jars into the Eclipse project:
Create a Java project: hiveconnect
Create a class: hiveconnecttest
Create a folder: lib/ at the same level as src

[root@master01 jars]# cd /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars
[root@master01 jars]# sz hive*.jar   -- sz (lrzsz) sends the files to the Windows side over ZMODEM; save them into the lib directory created above: d:/workspace/hiveconnect/lib/
[root@master01 jars]# sz hadoop*.jar
[root@master01 jars]# ll hive*.jar hadoop*.jar
hive-accumulo-handler-1.1.0-cdh5.4.10.jar
hive-ant-1.1.0-cdh5.4.10.jar
hive-beeline-1.1.0-cdh5.4.10.jar
hive-cli-1.1.0-cdh5.4.10.jar
hive-common-1.1.0-cdh5.4.10.jar
hive-contrib-1.1.0-cdh5.4.10.jar
hive-exec-1.1.0-cdh5.4.10.jar
hive-hbase-handler-1.1.0-cdh5.4.10.jar
hive-hcatalog-core-1.1.0-cdh5.4.10.jar
hive-hcatalog-pig-adapter-1.1.0-cdh5.4.10.jar
hive-hcatalog-server-extensions-1.1.0-cdh5.4.10.jar
hive-hcatalog-streaming-1.1.0-cdh5.4.10.jar
hive-hwi-1.1.0-cdh5.4.10.jar
hive-jdbc-1.1.0-cdh5.4.10.jar
hive-jdbc-1.1.0-cdh5.4.10-standalone.jar
hive-metastore-1.1.0-cdh5.4.10.jar
hive-serde-1.1.0-cdh5.4.10.jar
hive-service-1.1.0-cdh5.4.10.jar
hive-shims-0.23-1.1.0-cdh5.4.10.jar
hive-shims-1.1.0-cdh5.4.10.jar
hive-shims-common-1.1.0-cdh5.4.10.jar
hive-shims-scheduler-1.1.0-cdh5.4.10.jar
hive-testutils-1.1.0-cdh5.4.10.jar
hive-webhcat-1.1.0-cdh5.4.10.jar
hive-webhcat-java-client-1.1.0-cdh5.4.10.jar
hadoop-annotations-2.6.0-cdh5.4.10.jar
hadoop-ant-2.6.0-cdh5.4.10.jar
hadoop-ant-2.6.0-mr1-cdh5.4.10.jar
hadoop-archives-2.6.0-cdh5.4.10.jar
hadoop-auth-2.6.0-cdh5.4.10.jar
hadoop-aws-2.6.0-cdh5.4.10.jar
hadoop-azure-2.6.0-cdh5.4.10.jar
hadoop-capacity-scheduler-2.6.0-mr1-cdh5.4.10.jar
hadoop-common-2.6.0-cdh5.4.10.jar
hadoop-common-2.6.0-cdh5.4.10-tests.jar
hadoop-core-2.6.0-mr1-cdh5.4.10.jar
hadoop-datajoin-2.6.0-cdh5.4.10.jar
hadoop-distcp-2.6.0-cdh5.4.10.jar
hadoop-examples-2.6.0-mr1-cdh5.4.10.jar
hadoop-examples.jar
hadoop-extras-2.6.0-cdh5.4.10.jar
hadoop-fairscheduler-2.6.0-mr1-cdh5.4.10.jar
hadoop-gridmix-2.6.0-cdh5.4.10.jar
hadoop-gridmix-2.6.0-mr1-cdh5.4.10.jar
hadoop-hdfs-2.6.0-cdh5.4.10.jar
hadoop-hdfs-2.6.0-cdh5.4.10-tests.jar
hadoop-hdfs-nfs-2.6.0-cdh5.4.10.jar
hadoop-kms-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-app-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-common-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-core-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-hs-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.10-tests.jar
hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-examples-2.6.0-cdh5.4.10.jar
hadoop-nfs-2.6.0-cdh5.4.10.jar
hadoop-rumen-2.6.0-cdh5.4.10.jar
hadoop-sls-2.6.0-cdh5.4.10.jar
hadoop-streaming-2.6.0-cdh5.4.10.jar
hadoop-streaming-2.6.0-mr1-cdh5.4.10.jar
hadoop-test-2.6.0-mr1-cdh5.4.10.jar
hadoop-tools-2.6.0-mr1-cdh5.4.10.jar
hadoop-yarn-api-2.6.0-cdh5.4.10.jar
hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.10.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.10.jar
hadoop-yarn-client-2.6.0-cdh5.4.10.jar
hadoop-yarn-common-2.6.0-cdh5.4.10.jar
hadoop-yarn-registry-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-common-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-tests-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.10.jar

Add all of them to the project's build path. (In practice, hive-jdbc-*-standalone.jar plus the hadoop-common jar is probably enough; copying everything is simply the safe option.)
I initially forgot to add the Hadoop core jar, and the program failed on the first run with the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:402)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:193)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at hiveconnect.hiveconnecttest.main(hiveconnecttest.java:16)
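
One way to catch this kind of problem earlier is to probe for the missing class before attempting the connection. A hypothetical helper sketch (ClasspathCheck and its message are mine, not part of the original project; the class name being probed comes straight from the stack trace above):

// Hypothetical helper: fail fast with a readable message if hadoop-common
// is missing. hive-jdbc needs org.apache.hadoop.conf.Configuration
// internally even though our own code never references it.
public class ClasspathCheck {
    public static void requireHadoopCommon() {
        try {
            Class.forName("org.apache.hadoop.conf.Configuration");
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(
                    "hadoop-common-*.jar is missing from the build path; "
                    + "copy it into lib/ before connecting to HiveServer2", e);
        }
    }
}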

With the jars in place the test runs fine; the output is as follows:
Connection: org.apache.hive.jdbc.HiveConnection@43a25848
Has data: true
Running: show tables 'tinatest'
Result of "show tables":
tinatest
Running: describe tinatest
Result of "describe table":
key int
value string
Running: load data local inpath '/tmp/test2.txt' into table tinatest
Running: select * from tinatest
Result of "select *":
1 a
2 b
3 tina


The full contents of hiveconnecttest.java --- an example found online and lightly adapted; it just exercises the basics:
package hiveconnect;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class hiveconnecttest {
    private static String sql = "";
    private static ResultSet res;

    public static void main(String[] args) throws Exception {
        // hive-jdbc needs hadoop-common on the classpath (see the
        // NoClassDefFoundError above), even though this code never
        // references org.apache.hadoop.conf.Configuration directly.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://192.168.1.207:10000", "root", "root");
        System.out.println("Connection: " + conn);
        Statement stmt = conn.createStatement();

        // Sanity check: read one row from the table loaded earlier.
        String querySql = "select systemno from runningrecord_old limit 1";
        ResultSet rs = stmt.executeQuery(querySql);
        System.out.println("Has data: " + rs.next());

        // Name of the table to create.
        String tableName = "tinatest";

        // Step 1: drop the table if it already exists.
        sql = "drop table if exists " + tableName;
        stmt.execute(sql);

        // Step 2: create it afresh.
        sql = "create table " + tableName
                + " (key int, value string) row format delimited fields terminated by ','";
        stmt.execute(sql);

        // Run "show tables".
        sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"show tables\":");
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // Run "describe table".
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"describe table\":");
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // Run "load data into table".
        String filepath = "/tmp/test2.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.executeUpdate(sql);

        // Run "select *".
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"select *\":");
        while (res.next()) {
            System.out.println(res.getInt(1) + "\t" + res.getString(2));
        }

        conn.close();
    }
}
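
For anything beyond a quick test, it is also worth making sure the JDBC resources get closed even when a query throws. A minimal sketch of the same connection boilerplate using try-with-resources (hiveconnecttest2 is a hypothetical class name; driver class, URL and credentials are unchanged from above):

package hiveconnect;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class hiveconnecttest2 {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // try-with-resources closes the ResultSet, Statement and Connection
        // in reverse order, even if the query throws.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://192.168.1.207:10000", "root", "root");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select systemno from runningrecord_old limit 1")) {
            System.out.println("Has data: " + rs.next());
        }
    }
}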

I'm still learning, so corrections are welcome.

QQ:906179271 