Impala shell output:
[nobida145:21000] > select * from a;
Query: select * from a
ERROR: AnalysisException: Failed to load metadata for table: default.a
CAUSED BY: TableLoadingException: Failed to load metadata for table: a
CAUSED BY: TTransportException: null
[nobida145:21000] >
Hive metastore log:
2014-11-03 11:04:32,306 ERROR [pool-4-thread-77]: server.TThreadPoolServer (TThreadPoolServer.java:run(251)) - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Cannot write a TUnion with no set value!
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:240)
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
at org.apache.thrift.TUnion.write(TUnion.java:152)
at org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj$ColumnStatisticsObjStandardScheme.write(ColumnStatisticsObj.java:550)
at org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj$ColumnStatisticsObjStandardScheme.write(ColumnStatisticsObj.java:488)
at org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj.write(ColumnStatisticsObj.java:414)
at org.apache.hadoop.hive.metastore.api.TableStatsResult$TableStatsResultStandardScheme.write(TableStatsResult.java:388)
at org.apache.hadoop.hive.metastore.api.TableStatsResult$TableStatsResultStandardScheme.write(TableStatsResult.java:338)
at org.apache.hadoop.hive.metastore.api.TableStatsResult.write(TableStatsResult.java:288)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_statistics_req_result$get_table_statistics_req_resultStandardScheme.write(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_statistics_req_result$get_table_statistics_req_resultStandardScheme.write(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_statistics_req_result.write(ThriftHiveMetastore.java)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
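The "Cannot write a TUnion with no set value!" check is a generic Thrift invariant: a union must have exactly one member set before it can be serialized. Here the metastore is writing a `ColumnStatisticsObj` whose statistics union is empty. A minimal stand-in (plain Java, no Thrift dependency; the class and field names below are illustrative, not Hive's) shows the same guard:

```java
// Illustrative stand-in for Thrift's TUnion write-time check.
// Not Hive code: a union serializer refuses to write when no member is set,
// which is exactly the condition TUnion.write() rejects in the trace above.
public class UnionGuardDemo {
    // Mimics a two-member union such as ColumnStatisticsData
    // (numeric stats vs. string stats).
    static final class StatsUnion {
        Long longStats;      // set for numeric columns
        String stringStats;  // set for string columns

        String write() {
            if (longStats == null && stringStats == null) {
                // The Thrift runtime raises the analogous error here.
                throw new IllegalStateException("Cannot write a TUnion with no set value!");
            }
            return longStats != null ? "long:" + longStats : "string:" + stringStats;
        }
    }

    public static void main(String[] args) {
        StatsUnion ok = new StatsUnion();
        ok.longStats = 42L;
        System.out.println(ok.write()); // prints long:42

        StatsUnion empty = new StatsUnion(); // no member set, as in the failing stats row
        try {
            empty.write();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```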
The root cause is unclear. Could it be related to creating a table whose column name is identical to the table name?
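Since the failure occurs while the metastore serializes column statistics for `default.a` (the `get_table_statistics_req` frame in the trace above), one possible, untested workaround is to have Hive recompute the column statistics so the metastore rewrites them, then refresh Impala's view of the table:

```sql
-- In the Hive CLI: recompute column statistics for the affected table
-- (table name default.a taken from the error above).
ANALYZE TABLE default.a COMPUTE STATISTICS FOR COLUMNS;

-- In impala-shell: discard Impala's cached metadata for the table.
INVALIDATE METADATA default.a;
```

This is a sketch, not a confirmed fix; it assumes the bad statistics entry can be overwritten by a fresh `ANALYZE` run.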