The Hive installed here is the current stable release, built on Hadoop 2.2.0. I have previously written about installing Hive 0.8 on Hadoop 1.x; this time the version is Hive 0.13, which can be downloaded from the Hive website as a binary package, so there is no need to build from source. Hive depends on an underlying Hadoop environment, so before installing it make sure your Hadoop cluster is already up and working correctly.
Download URL for the stable Hive 0.13 release:
http://apache.fayea.com/apache-mirror/hive/stable/
Setting up a Hadoop 2.2.0 distributed cluster:
http://qindongliang1922.iteye.com/blog/2078423
Installing MySQL:
http://qindongliang1922.iteye.com/blog/1987199
After downloading, here is an overview of the installation steps:
No. | Step | Notes
1 | Set up the Hadoop 2.2.0 cluster | Underlying dependency
2 | Download the Hive 0.13 binary package and extract it | The Hive package
3 | Configure the HIVE_HOME environment variable | Needed for the environment variables
4 | Configure hive-env.sh | Points to the Hadoop directory and Hive's conf directory
5 | Configure hive-site.xml | Hive properties and MySQL metastore integration
6 | Copy the MySQL JDBC driver jar into Hive's lib directory | Metadata is stored in MySQL
7 | Start the bin/hive service | Test that Hive starts
8 | Create a database and a table, and test Hive | Verify that Hive works correctly
9 | Exit the Hive client | Run the exit command
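For step 2 in the table, downloading and unpacking the Hive binary package might look roughly like the following; the exact tarball name on the mirror and the /home/search install path are assumptions based on this article's setup, so adjust them for your environment:

wget http://apache.fayea.com/apache-mirror/hive/stable/apache-hive-0.13.1-bin.tar.gz   # download the binary package from an Apache mirror
tar -zxvf apache-hive-0.13.1-bin.tar.gz                                                # unpack it
mv apache-hive-0.13.1-bin /home/search/hive                                            # move it to the install directory used below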
First, in Hive's conf directory, run the following four commands to turn the template files shipped with Hive into the files Hive actually reads:
cp hive-default.xml.template hive-site.xml
cp hive-env.sh.template hive-env.sh
cp hive-exec-log4j.properties.template hive-exec-log4j.properties
cp hive-log4j.properties.template hive-log4j.properties
Hive environment variable settings (added to the shell profile):
- export PATH=.:$PATH
- # JDK environment
- export JAVA_HOME="/usr/local/jdk"
- export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
- export PATH=$PATH:$JAVA_HOME/bin
- # Hadoop environment
- export HADOOP_HOME=/home/search/hadoop
- export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop/
- export CLASSPATH=.:$CLASSPATH:$HADOOP_HOME/lib
- export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
- # Ant environment
- export ANT_HOME=/usr/local/ant
- export CLASSPATH=$CLASSPATH:$ANT_HOME/lib
- export PATH=$PATH:$ANT_HOME/bin
- # Maven environment
- export MAVEN_HOME="/usr/local/maven"
- export CLASSPATH=$CLASSPATH:$MAVEN_HOME/lib
- export PATH=$PATH:$MAVEN_HOME/bin
- # Hive environment
- export HIVE_HOME=/home/search/hive
- export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib
- export PATH=$PATH:$HIVE_HOME/bin:$HIVE_HOME/conf
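After adding these exports to your shell profile (for example /etc/profile or ~/.bashrc; the exact file is an assumption, use whichever you normally edit), reload it and run a quick sanity check:

source /etc/profile        # reload the profile so HIVE_HOME and the new PATH entries take effect
echo $HIVE_HOME            # should print /home/search/hive
hive --version             # should report the Hive 0.13 release if PATH is set correctly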
The contents of hive-env.sh are as follows:
- # Licensed to the Apache Software Foundation (ASF) under one
- # or more contributor license agreements. See the NOTICE file
- # distributed with this work for additional information
- # regarding copyright ownership. The ASF licenses this file
- # to you under the Apache License, Version 2.0 (the
- # "License"); you may not use this file except in compliance
- # with the License. You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- # Set Hive and Hadoop environment variables here. These variables can be used
- # to control the execution of Hive. It should be used by admins to configure
- # the Hive installation (so that users do not have to set environment variables
- # or set command line parameters to get correct behavior).
- #
- # The hive service being invoked (CLI/HWI etc.) is available via the environment
- # variable SERVICE
- # Hive Client memory usage can be an issue if a large number of clients
- # are running at the same time. The flags below have been useful in
- # reducing memory usage:
- #
- # if [ "$SERVICE" = "cli" ]; then
- # if [ -z "$DEBUG" ]; then
- # export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
- # else
- # export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
- # fi
- # fi
- # The heap size of the jvm started by hive shell script can be controlled via:
- #
- # export HADOOP_HEAPSIZE=1024
- #
- # Larger heap size may be required when running queries over large number of files or partitions.
- # By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be
- # appropriate for hive server (hwi etc).
- # Set HADOOP_HOME to point to a specific hadoop install directory
- HADOOP_HOME=/home/search/hadoop
- # Hive Configuration Directory can be controlled by:
- export HIVE_CONF_DIR=/home/search/hive/conf
- # Folder containing extra libraries required for hive compilation/execution can be controlled by:
- # export HIVE_AUX_JARS_PATH=
The configuration in hive-site.xml is as follows:
- <configuration>
- <property>
- <!-- MySQL connection URL -->
- <name>javax.jdo.option.ConnectionURL</name>
- <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
- </property>
- <!-- Database user name -->
- <property>
- <name>javax.jdo.option.ConnectionUserName</name>
- <value>root</value>
- </property>
- <!-- The JDBC driver class must be set here; use the driver that matches your database -->
- <property>
- <name>javax.jdo.option.ConnectionDriverName</name>
- <value>com.mysql.jdbc.Driver</value>
- <description>Driver class name for a JDBC metastore</description>
- </property>
- <!-- Database password -->
- <property>
- <name>javax.jdo.option.ConnectionPassword</name>
- <value>qin</value>
- </property>
- <!-- HDFS path where Hive tables are stored -->
- <property>
- <name>hive.metastore.warehouse.dir</name>
- <value>hdfs://h1:9000/user/hive/warehouse</value>
- </property>
- <!-- HDFS path used to store the execution plans and intermediate output of the map/reduce stages -->
- <property>
- <name>hive.exec.scratchdir</name>
- <value>/tmp</value>
- </property>
- <property>
- <name>mapred.child.java.opts</name>
- <value>-Xmx4096m</value>
- </property>
- <!-- Location for query logs -->
- <property>
- <name>hive.querylog.location</name>
- <value>/home/search/hive/logs</value>
- </property>
- <property>
- <name>hive.metastore.local</name>
- <value>true</value>
- </property>
- </configuration>
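Because the configuration above stores the metastore in MySQL, two more preparations are needed before starting Hive: copy the MySQL JDBC driver into Hive's lib directory (step 6 in the table at the top) and make sure the account referenced in hive-site.xml can connect. A rough sketch, where the driver jar name is an assumption that depends on the connector version you downloaded:

cp mysql-connector-java-5.1.31-bin.jar /home/search/hive/lib/    # the JDBC driver Hive uses to reach the metastore
mysql -u root -pqin -e "show databases;"                         # verify the credentials from hive-site.xml work; createDatabaseIfNotExist=true creates the hive database on first start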
On HDFS, create the directory where Hive stores its table data, as well as the temporary directory for intermediate files generated during MapReduce execution, then grant group write permission. Run the following commands:
- hadoop fs -mkdir -p /tmp
- hadoop fs -mkdir -p /user/hive/warehouse
- hadoop fs -chmod g+w /tmp
- hadoop fs -chmod g+w /user/hive/warehouse
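Before moving on, you can verify that both directories exist and are group writable, for example:

hadoop fs -ls /           # /tmp should appear and be group writable
hadoop fs -ls /user/hive  # the warehouse directory should be listed here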
Start Hive:
Run the command bin/hive; the startup output is as follows:
- [search@h1 hive]$ bin/hive
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
- 14/07/30 04:18:08 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
- 14/07/30 04:18:09 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
- Logging initialized using configuration in file:/home/search/hive/conf/hive-log4j.properties
- hive>
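The deprecation messages above are only warnings from Hadoop 2.x and can be ignored; the hive.metastore.local warning simply means that property is no longer needed in hive-site.xml. You can also confirm that Hive created its metastore schema in MySQL on this first start, for example:

mysql -u root -pqin -e "use hive; show tables;"    # should list metastore tables such as DBS, TBLS and SDS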
Now create a table and load data into it.
Create the table:
create table info (name string, count int) row format delimited fields terminated by '#' stored as textfile;
Load the data:
LOAD DATA LOCAL INPATH '/home/search/abc1.txt' OVERWRITE INTO TABLE info;
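Since the table's fields are terminated by '#', /home/search/abc1.txt is expected to hold one name#count pair per line. A tiny hypothetical sample of what the file might look like (the file used in this article is much larger):

英的国#999999
中的国#999997
美的国#999996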
Run a query and return the results in descending order (note in the transcript below that LIMIT must come after ORDER BY):
- hive> select * from info limit 5 order by count desc;
- FAILED: ParseException line 1:27 missing EOF at 'order' near '5'
- hive> select * from info order by count desc limit 5 ;
- Total jobs = 1
- Launching Job 1 out of 1
- Number of reduce tasks determined at compile time: 1
- In order to change the average load for a reducer (in bytes):
- set hive.exec.reducers.bytes.per.reducer=<number>
- In order to limit the maximum number of reducers:
- set hive.exec.reducers.max=<number>
- In order to set a constant number of reducers:
- set mapreduce.job.reduces=<number>
- Starting Job = job_1406660797211_0003, Tracking URL = http://h1:8088/proxy/application_1406660797211_0003/
- Kill Command = /home/search/hadoop/bin/hadoop job -kill job_1406660797211_0003
- Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
- 2014-07-30 04:26:13,538 Stage-1 map = 0%, reduce = 0%
- 2014-07-30 04:26:26,398 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 5.41 sec
- 2014-07-30 04:26:27,461 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 6.64 sec
- 2014-07-30 04:26:39,177 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 10.02 sec
- MapReduce Total cumulative CPU time: 10 seconds 20 msec
- Ended Job = job_1406660797211_0003
- MapReduce Jobs Launched:
- Job 0: Map: 1 Reduce: 1 Cumulative CPU: 10.02 sec HDFS Read: 143906707 HDFS Write: 85 SUCCESS
- Total MapReduce CPU Time Spent: 10 seconds 20 msec
- OK
- 英的国 999999
- 中的国 999997
- 美的国 999996
- 中的国 999993
- 英的国 999992
- Time taken: 37.892 seconds, Fetched: 5 row(s)
- hive>
Usage of some interactive commands in the Hive shell:
- quit, exit: leave the interactive shell
- reset: reset the configuration to its default values
- set <key>=<value>: set the value of a particular configuration variable (if the variable name is misspelled, no error is reported)
- set: print the configuration variables overridden by the user
- set -v: print all Hadoop and Hive configuration variables
- add FILE[S] *, add JAR[S] *, add ARCHIVE[S] *: add one or more files, jars, or archives to the distributed cache
- list FILE[S], list JAR[S], list ARCHIVE[S]: list the resources already added to the distributed cache
- list FILE[S] *, list JAR[S] *, list ARCHIVE[S] *: check whether the given resources have been added to the distributed cache
- delete FILE[S] *, delete JAR[S] *, delete ARCHIVE[S] *: remove the given resources from the distributed cache
- ! <command>: execute a shell command from the Hive shell
- dfs <dfs command>: execute a dfs command from the Hive shell
- <query string>: execute a Hive query and print the results to standard output
- source FILE <filepath>: execute a Hive script file inside the CLI
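A few of these commands in action: the lines below could be saved in a script such as /home/search/demo.hql (a hypothetical path) and run with source /home/search/demo.hql; from the hive> prompt, or typed one at a time:

-- print the current value of a configuration variable, then override it for this session
set hive.cli.print.header;
set hive.cli.print.header=true;
-- run an HDFS command and an ordinary shell command without leaving Hive
dfs -ls /user/hive/warehouse;
!date;
-- run a normal query; the result goes to standard output
select * from info limit 3;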
Start Hive in debug mode: hive -hiveconf hive.root.logger=DEBUG,console
With that, Hive has been installed successfully and runs normally.