Sqoop1 1.4.4: sqoop import options, captured from the tool's help output in the log

 

usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect string
   --connection-manager <class-name>            Specify connection manager class name
   --connection-param-file <properties-file>    Specify connection parameters file
   --driver <class-name>                        Manually specify JDBC driver class to use
   --hadoop-home <hdir>                         Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication password
   --password-file <password-file>              Set authentication password file path
   --username <username>                        Set authentication username
   --verbose                                    Print more information while working
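
For reference, a minimal invocation that exercises only the common arguments might look like the sketch below; the JDBC URL, username and table name are placeholder values, not taken from the log.

# Placeholder connection details; -P prompts for the password on the console.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders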

Import control arguments:
   --append                                                   Imports data in append mode
   --as-avrodatafile                                          Imports data to Avro data files
   --as-sequencefile                                          Imports data to SequenceFiles
   --as-textfile                                              Imports data as plain text (default)
   --boundary-query <statement>                               Set boundary query for retrieving max and min value of the primary key
   --columns <col,col,col...>                                 Columns to import from table
   --compression-codec <codec>                                Compression codec to use for import
   --delete-target-dir                                        Imports data in delete mode
   --direct                                                   Use direct import fast path
   --direct-split-size <n>                                    Split the input stream every 'n' bytes when importing direct mode
-e,--query <statement>                                        Import results of SQL 'statement'
   --fetch-size <n>                                           Set number 'n' of rows to fetch from the database when more rows are needed
   --inline-lob-limit <n>                                     Set the maximum size for an inline LOB
-m,--num-mappers <n>                                          Use 'n' map tasks to import in parallel
   --mapreduce-job-name <name>                                Set name for generated mapreduce job
   --split-by <column-name>                                   Column of the table used to split work units
   --table <table-name>                                       Table to read
   --target-dir <dir>                                         HDFS plain table destination
   --validate                                                 Validate the copy using the configured validator
   --validation-failurehandler <validation-failurehandler>    Fully qualified class name for ValidationFailureHandler
   --validation-threshold <validation-threshold>              Fully qualified class name for ValidationThreshold
   --validator <validator>                                    Fully qualified class name for the Validator
   --warehouse-dir <dir>                                      HDFS parent for table destination
   --where <where clause>                                     WHERE clause to use during import
-z,--compress                                                 Enable compression
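
A hedged sketch of a parallel import driven by these options; the database, table, column and path names below are assumptions chosen for illustration.

# Split the table on its numeric key, run 4 mappers in parallel, compress the output.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  --password-file /user/sqoop/.db_password \
  --table orders \
  --columns "order_id,customer_id,amount" \
  --where "amount > 0" \
  --split-by order_id \
  -m 4 \
  --target-dir /user/sqoop/orders \
  --delete-target-dir \
  -z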

Incremental import arguments:
   --check-column <column>        Source column to check for incremental change
   --incremental <import-type>    Define an incremental import of type 'append' or 'lastmodified'
   --last-value <value>           Last imported value in the incremental check column
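
As a sketch (the check column and last value are assumptions), an append-style incremental import that only pulls rows whose key is greater than the previously imported maximum could look like:

# On the next run, --last-value would be raised to the new maximum order_id.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  --password-file /user/sqoop/.db_password \
  --table orders \
  --incremental append \
  --check-column order_id \
  --last-value 100000 \
  --target-dir /user/sqoop/orders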

Output line formatting arguments:
   --enclosed-by <char>               Sets a required field enclosing character
   --escaped-by <char>                Sets the escape character
   --fields-terminated-by <char>      Sets the field separator character
   --lines-terminated-by <char>       Sets the end-of-line character
   --mysql-delimiters                 Uses MySQL's default delimiter set: fields: ,  lines: \n  escaped-by: \ optionally-enclosed-by: '
   --optionally-enclosed-by <char>    Sets a field enclosing character
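
For example, a sketch that writes tab-separated, optionally quote-enclosed text (the delimiter choices are illustrative only):

# \t, \n, \\ and " are interpreted by Sqoop, not by the shell, because of the single quotes.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --fields-terminated-by '\t' \
  --lines-terminated-by '\n' \
  --escaped-by '\\' \
  --optionally-enclosed-by '"'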

Input parsing arguments:
   --input-enclosed-by <char>               Sets a required field encloser
   --input-escaped-by <char>                Sets the input escape character
   --input-fields-terminated-by <char>      Sets the input field separator
   --input-lines-terminated-by <char>       Sets the input end-of-line char
   --input-optionally-enclosed-by <char>    Sets a field enclosing character
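
These mirror the output-formatting options and describe how existing delimited HDFS data should be parsed, which matters mostly when the generated record class re-reads that data, for example in a later sqoop export. A sketch under those assumptions (the export flags and names are illustrative, not from this log):

# Parse tab-separated HDFS files and write them back to the placeholder table.
sqoop export \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --export-dir /user/sqoop/orders \
  --input-fields-terminated-by '\t' \
  --input-lines-terminated-by '\n'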

Hive arguments:
   --create-hive-table                         Fail if the target hive table exists
   --hive-database <database-name>             Sets the database name to use when importing to hive
   --hive-delims-replacement <arg>             Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields
                                               with user-defined string
   --hive-drop-import-delims                   Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
   --hive-home <dir>                           Override $HIVE_HOME
   --hive-import                               Import tables into Hive (Uses Hive's default delimiters if none are set.)
   --hive-overwrite                            Overwrite existing data in the Hive table
   --hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>    Sets the partition value to use when importing to hive
   --hive-table <table-name>                   Sets the table name to use when importing to hive
   --map-column-hive <arg>                     Override mapping for specific column to hive types.
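
A minimal Hive import sketch, assuming a target Hive database named sales (placeholder names throughout):

# --hive-drop-import-delims strips \0x01, \n and \r from string columns so Hive rows stay intact.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --hive-import \
  --hive-database sales \
  --hive-table orders \
  --hive-overwrite \
  --hive-drop-import-delims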

HBase arguments:
   --column-family <family>    Sets the target column family for the import
   --hbase-create-table        If specified, create missing HBase tables
   --hbase-row-key <col>       Specifies which input column to use as the row key
   --hbase-table <table>       Import to <table> in HBase
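
A sketch of an import straight into HBase; the table, column family and row-key column are made-up examples:

# Creates the HBase table if it is missing and keys each row by order_id.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --hbase-table orders \
  --hbase-create-table \
  --column-family cf \
  --hbase-row-key order_id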

HCatalog arguments:
   --hcatalog-database <arg>                   HCatalog database name
   --hcatalog-home <hdir>                      Override $HCAT_HOME
   --hcatalog-table <arg>                      HCatalog table name
   --hive-home <dir>                           Override $HIVE_HOME
   --hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>    Sets the partition value to use when importing to hive
   --map-column-hive <arg>                     Override mapping for specific column to hive types.

HCatalog import specific options:
   --create-hcatalog-table            Create HCatalog before import
   --hcatalog-storage-stanza <arg>    HCatalog storage stanza for table creation
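
A sketch of an HCatalog-based import; the database, table and storage stanza are assumptions for illustration:

# The storage stanza is used in the table-creation DDL when --create-hcatalog-table is given.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --hcatalog-database sales \
  --hcatalog-table orders \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "stored as orcfile"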

Code generation arguments:
   --bindir <dir>                        Output directory for compiled objects
   --class-name <name>                   Sets the generated class name. This overrides --package-name.
                                         When combined with --jar-file, sets the input class.
   --input-null-non-string <null-str>    Input null non-string representation
   --input-null-string <null-str>        Input null string representation
   --jar-file <file>                     Disable code generation; use specified jar
   --map-column-java <arg>               Override mapping for specific columns to java types
   --null-non-string <null-str>          Null non-string representation
   --null-string <null-str>              Null string representation
   --outdir <dir>                        Output directory for generated code
   --package-name <name>                 Put auto-generated classes in this package
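
Finally, a sketch combining code-generation and null-handling options (the class name and directories are placeholders); \N is the representation Hive expects for NULL:

# Generated source goes to --outdir, compiled classes to --bindir.
sqoop import \
  --connect jdbc:mysql://dbhost/shopdb \
  --username sqoop_user \
  -P \
  --table orders \
  --null-string '\\N' \
  --null-non-string '\\N' \
  --class-name com.example.Orders \
  --outdir /tmp/sqoop-src \
  --bindir /tmp/sqoop-classes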
