Dump the sentry database on the source MySQL server:

mysqldump -hhost -uroot -ppasswd sentry > /tmp/sentry.sql

On the target server, recreate the database and its grants, then load the dump:

create database sentry default character set utf8;
grant all on sentry.* to 'sentry'@'%' identified by 'sentry';
flush privileges;
source /tmp/sentry.sql
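The dump-and-restore steps can be sketched as a single script. This is a minimal sketch only: the host names (`old-db-host`, `new-db-host`), the password, and the `DRY_RUN` guard are illustrative assumptions, not commands from the original post.

```shell
#!/bin/sh
# Dry-run sketch of the migration steps above.
# DRY_RUN=1 (the default) only prints what would be executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "${DRY_RUN}" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        sh -c "$*"
    fi
}

# 1. Dump the sentry database from the source MySQL server.
run "mysqldump -hold-db-host -uroot -ppasswd sentry > /tmp/sentry.sql"

# 2. Recreate the database and grants on the target server.
run "mysql -hnew-db-host -uroot -ppasswd -e 'create database sentry default character set utf8;'"
run "mysql -hnew-db-host -uroot -ppasswd -e \"grant all on sentry.* to 'sentry'@'%' identified by 'sentry'; flush privileges;\""

# 3. Load the dump into the new database.
run "mysql -hnew-db-host -uroot -ppasswd sentry < /tmp/sentry.sql"
```

Set `DRY_RUN=0` only after verifying the printed commands against your own host names and credentials.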
MySQL parameters (my.cnf):

[mysqld]
#transaction-isolation=READ-COMMITTED
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0
socket=/var/lib/mysql/mysql.sock
key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 16M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1
# Important: see Configuring the Databases and Setting max_connections
max_connections = 2000
default-storage-engine=InnoDB
# log-bin should be on a disk with enough free space
#log-bin=/var/log/mysql/binary/mysql_binary_log
# For MySQL version 5.1.8 or later. Comment out binlog_format for older versions.
#binlog_format = mixed
#read_buffer_size = 2M
#read_rnd_buffer_size = 16M
#sort_buffer_size = 8M
#join_buffer_size = 8M
# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
lower_case_table_names = 1

[mysqld_safe]
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Check the case-sensitivity setting with:

show variables like 'lower%';

Note: during migration, check the case-sensitivity parameters on both servers. If lower_case_table_names differs between the source and the target, an error like the following may be reported:
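The pre-import check can be sketched in shell: read lower_case_table_names from both servers and refuse to import on a mismatch. The host names and the helper function names here are illustrative assumptions, not part of the original post.

```shell
#!/bin/sh
# Query lower_case_table_names from a given MySQL host.
get_lc() {
    mysql -h"$1" -uroot -ppasswd -N -B \
        -e "show variables like 'lower_case_table_names';" | awk '{print $2}'
}

# Compare the two values; fail loudly if they differ.
check_match() {
    if [ "$1" = "$2" ]; then
        echo "lower_case_table_names matches ($1); safe to import"
    else
        echo "MISMATCH: source=$1 target=$2; align my.cnf before importing" >&2
        return 1
    fi
}

# Live comparison (requires access to both servers; hosts are assumed names):
# check_match "$(get_lc old-db-host)" "$(get_lc new-db-host)"
```

Aligning the value in my.cnf (and restarting mysqld) before running `source /tmp/sentry.sql` avoids the "Required table missing" failure shown below.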
Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
    at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3380)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
    at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
    at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
    at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
    at org.datanucleus.store.query.Query.execute(Query.java:1654)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6936)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6920)
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6878)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6862)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
    at com.sun.proxy.$Proxy0.verifySchema(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:560)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:608)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:450)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5622)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5617)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5850)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5775)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Nov 18, 4:16:37.046 PM ERROR org.apache.hadoop.hive.metastore.HiveMetaStore
MetaException(message:Version information not found in metastore. )
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6881)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6862)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
    at com.sun.proxy.$Proxy0.verifySchema(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:560)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:608)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:450)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5622)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5617)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5850)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5775)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Nov 18, 4:16:37.046 PM ERROR org.apache.hadoop.hive.metastore.HiveMetaStore
Metastore Thrift Server threw an exception...
MetaException(message:Version information not found in metastore. )