usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]
Common arguments:
--connect <jdbc-uri> Specify JDBC connect string
--connection-manager <class-name> Specify connection manager class name
--connection-param-file <properties-file> Specify connection parameters file
--driver <class-name> Manually specify JDBC driver class to use
--hadoop-home <hdir> Override $HADOOP_MAPRED_HOME
--hadoop-mapred-home <dir> Override $HADOOP_MAPRED_HOME
--help Print usage instructions
-P Read password from console
--password <password> Set authentication password
--password-file <password-file> Set authentication password file path
--username <username> Set authentication username
--verbose Print more information while working
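For example, a minimal import wired up with only the common arguments might look like the following sketch (the JDBC URL, database, account, and password-file path are all hypothetical):

# Basic import; credentials come from a protected file on HDFS
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser \
  --password-file /user/dbuser/.mysql.password \
  --table employees

Prefer --password-file or -P over --password: the latter leaves the password visible in the process list and in shell history.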
Import control arguments:
--append Imports data in append mode
--as-avrodatafile Imports data to Avro data files
--as-sequencefile Imports data to SequenceFiles
--as-textfile Imports data as plain text (default)
--boundary-query <statement> Set boundary query for retrieving max and min value of the primary key
--columns <col,col,col...> Columns to import from table
--compression-codec <codec> Compression codec to use for import
--delete-target-dir Delete the target directory if it already exists before importing
--direct Use direct import fast path
--direct-split-size <n> Split the input stream every 'n' bytes when importing in direct mode
-e,--query <statement> Import results of SQL 'statement'
--fetch-size <n> Set number 'n' of rows to fetch from the database when more rows are needed
--inline-lob-limit <n> Set the maximum size for an inline LOB
-m,--num-mappers <n> Use 'n' map tasks to import in parallel
--mapreduce-job-name <name> Set name for generated mapreduce job
--split-by <column-name> Column of the table used to split work units
--table <table-name> Table to read
--target-dir <dir> HDFS plain table destination
--validate Validate the copy using the configured validator
--validation-failurehandler <validation-failurehandler> Fully qualified class name for ValidationFailureHandler
--validation-threshold <validation-threshold> Fully qualified class name for ValidationThreshold
--validator <validator> Fully qualified class name for the Validator
--warehouse-dir <dir> HDFS parent for table destination
--where <where clause> WHERE clause to use during import
-z,--compress Enable compression
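The import control arguments combine as in the two sketches below: a filtered table import and a free-form query import. All connection strings, table names, columns, and paths are placeholders:

# Filtered, parallel import of selected columns, compressed on write
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table orders \
  --columns "id,customer_id,amount" \
  --where "amount > 100" \
  --split-by id \
  -m 4 \
  --target-dir /data/corp/orders \
  -z

# Free-form query import; the literal $CONDITIONS token is mandatory,
# since Sqoop replaces it with each mapper's range predicate
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --query 'SELECT id, amount FROM orders WHERE $CONDITIONS' \
  --split-by id \
  --target-dir /data/corp/orders_query

--split-by names the column whose min/max range is divided among the map tasks; if it is omitted on a --table import, Sqoop falls back to the table's primary key. With --query, --target-dir is required, and --split-by is required whenever more than one mapper is used.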
Incremental import arguments:
--check-column <column> Source column to check for incremental change
--incremental <import-type> Define an incremental import of type 'append' or 'lastmodified'
--last-value <value> Last imported value in the incremental check column
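A sketch of an incremental append run (the check column and last value are illustrative):

# Import only rows whose id exceeds the value from the previous run
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table orders \
  --target-dir /data/corp/orders \
  --incremental append \
  --check-column id \
  --last-value 40000

At the end of the run Sqoop prints the new --last-value to feed into the next incremental import; a saved job (sqoop job --create) can track it automatically.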
Output line formatting arguments:
--enclosed-by <char> Sets a required field enclosing character
--escaped-by <char> Sets the escape character
--fields-terminated-by <char> Sets the field separator character
--lines-terminated-by <char> Sets the end-of-line character
--mysql-delimiters Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
--optionally-enclosed-by <char> Sets a field enclosing character
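For instance, to write tab-separated output with explicit escaping and optional quoting (table and paths again hypothetical):

# Tab-delimited text output with backslash escaping and optional quotes
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --target-dir /data/corp/employees \
  --fields-terminated-by '\t' \
  --escaped-by '\\' \
  --optionally-enclosed-by '\"'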
Input parsing arguments:
--input-enclosed-by <char> Sets a required field encloser
--input-escaped-by <char> Sets the input escape character
--input-fields-terminated-by <char> Sets the input field separator
--input-lines-terminated-by <char> Sets the input end-of-line char
--input-optionally-enclosed-by <char> Sets a field enclosing character
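These mirror the output line formatting arguments and tell Sqoop how to parse delimited files it reads back, which matters mainly during an export. A sketch under the same hypothetical names, assuming the data was written tab-separated:

# Export previously imported tab-delimited files back to the database
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --export-dir /data/corp/employees \
  --input-fields-terminated-by '\t'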
Hive arguments:
--create-hive-table Fail if the target hive table exists
--hive-database <database-name> Sets the database name to use when importing to hive
--hive-delims-replacement <arg> Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
--hive-drop-import-delims Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
--hive-home <dir> Override $HIVE_HOME
--hive-import Import tables into Hive (Uses Hive's default delimiters if none are set.)
--hive-overwrite Overwrite existing data in the Hive table
--hive-partition-key <partition-key> Sets the partition key to use when importing to hive
--hive-partition-value <partition-value> Sets the partition value to use when importing to hive
--hive-table <table-name> Sets the table name to use when importing to hive
--map-column-hive <arg> Override mapping for specific column to hive types.
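Putting the Hive arguments together, a partitioned Hive import might look like this sketch (database, table, and partition values are illustrative):

# Import straight into a Hive table, one partition per run
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --hive-import \
  --hive-database corp \
  --hive-table employees \
  --hive-partition-key dt \
  --hive-partition-value '2014-04-22' \
  --hive-drop-import-delims

--hive-drop-import-delims (or --hive-delims-replacement) matters whenever string columns may contain \n, \r, or \01, which would otherwise corrupt Hive's row and field boundaries.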
HBase arguments:
--column-family <family> Sets the target column family for the import
--hbase-create-table If specified, create missing HBase tables
--hbase-row-key <col> Specifies which input column to use as the row key
--hbase-table <table> Import to <table> in HBase
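A minimal HBase-bound sketch (table, column family, and key column are hypothetical):

# Land each row as an HBase put keyed on the id column
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --hbase-table employees \
  --column-family info \
  --hbase-row-key id \
  --hbase-create-table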
HCatalog arguments:
--hcatalog-database <arg> HCatalog database name
--hcatalog-home <hdir> Override $HCAT_HOME
--hcatalog-table <arg> HCatalog table name
--hive-home <dir> Override $HIVE_HOME
--hive-partition-key <partition-key> Sets the partition key to use when importing to hive
--hive-partition-value <partition-value> Sets the partition value to use when importing to hive
--map-column-hive <arg> Override mapping for specific column to hive types.
HCatalog import specific options:
--create-hcatalog-table Create the HCatalog table before import
--hcatalog-storage-stanza <arg> HCatalog storage stanza for table creation
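A sketch of an HCatalog-backed import; the storage stanza shown (ORC) is one choice among several, and all names are placeholders:

# Create an ORC-backed HCatalog table and import into it
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --hcatalog-database corp \
  --hcatalog-table employees \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "stored as orcfile"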
Code generation arguments:
--bindir <dir> Output directory for compiled objects
--class-name <name> Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
--input-null-non-string <null-str> Input null non-string representation
--input-null-string <null-str> Input null string representation
--jar-file <file> Disable code generation; use specified jar
--map-column-java <arg> Override mapping for specific columns to java types
--null-non-string <null-str> Null non-string representation
--null-string <null-str> Null string representation
--outdir <dir> Output directory for generated code
--package-name <name> Put auto-generated classes in this package
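The null-handling and type-mapping arguments are easy to get wrong; a common pattern when the files will be read by Hive is sketched below (the column name is hypothetical):

# Write Hive's default NULL token and force one column to a Java Double
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser -P \
  --table employees \
  --target-dir /data/corp/employees \
  --null-string '\\N' \
  --null-non-string '\\N' \
  --map-column-java salary=Double

The doubled backslash survives shell quoting so Sqoop emits the literal \N that Hive interprets as NULL; --input-null-string and --input-null-non-string play the same role in the other direction.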