1. sqoop-export
Relevant arguments:
usage: sqoop export [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                          Specify JDBC connect string
   --connection-manager <class-name>             Specify connection manager class name
   --connection-param-file <properties-file>     Specify connection parameters file
   --driver <class-name>                         Manually specify JDBC driver class to use
   --hadoop-home <dir>                           Override $HADOOP_HOME
   --help                                        Print usage instructions
   -P                                            Read password from console
   --password <password>                         Set authentication password
   --username <username>                         Set authentication username
   --verbose                                     Print more information while working

Export control arguments:
   --batch                          Indicates underlying statements to be executed in batch mode
   --clear-staging-table            Indicates that any data in staging table can be deleted
   --direct                         Use direct export fast path
   --export-dir <dir>               HDFS source path for the export
   -m,--num-mappers <n>             Use 'n' map tasks to export in parallel
   --staging-table <table-name>     Intermediate staging table
   --table <table-name>             Table to populate
   --update-key <key>               Update records by specified key column
   --update-mode <mode>             Specifies how updates are performed when new rows are found with non-matching keys in database

Input parsing arguments:
   --input-enclosed-by <char>                Sets a required field encloser
   --input-escaped-by <char>                 Sets the input escape character
   --input-fields-terminated-by <char>       Sets the input field separator
   --input-lines-terminated-by <char>        Sets the input end-of-line char
   --input-optionally-enclosed-by <char>     Sets a field enclosing character

Output line formatting arguments:
   --enclosed-by <char>                Sets a required field enclosing character
   --escaped-by <char>                 Sets the escape character
   --fields-terminated-by <char>       Sets the field separator character
   --lines-terminated-by <char>        Sets the end-of-line character
   --mysql-delimiters                  Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
   --optionally-enclosed-by <char>     Sets a field enclosing character

Code generation arguments:
   --bindir <dir>                         Output directory for compiled objects
   --class-name <name>                    Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
   --input-null-non-string <null-str>     Input null non-string representation
   --input-null-string <null-str>         Input null string representation
   --jar-file <file>                      Disable code generation; use specified jar
   --map-column-java <arg>                Override mapping for specific columns to java types
   --null-non-string <null-str>           Null non-string representation
   --null-string <null-str>               Null string representation
   --outdir <dir>                         Output directory for generated code
   --package-name <name>                  Put auto-generated classes in this package

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
   -conf <configuration file>                       specify an application configuration file
   -D <property=value>                              use value for given property
   -fs <local|namenode:port>                        specify a namenode
   -jt <local|jobtracker:port>                      specify a job tracker
   -files <comma separated list of files>           specify comma separated files to be copied to the map reduce cluster
   -libjars <comma separated list of jars>          specify comma separated jar files to include in the classpath.
   -archives <comma separated list of archives>     specify comma separated archives to be unarchived on the compute machines.
As you can see from the listing above, most of these arguments are used exactly the same way as for import; only a handful are export-specific (notably --export-dir, --table, --update-key, --update-mode and --staging-table).
Since import and export follow the same principles, I won't restate the official documentation in my own words here. Let's go straight to an example (if anything below is unclear, please read the data-import section in full first):
$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
    --export-dir /results/bar_data
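
To illustrate the export-specific update options listed above, here is a sketch of an update/upsert export. The table and key column (bar, id) are hypothetical placeholders: --update-key names the column used to match existing rows, and --update-mode allowinsert additionally inserts rows whose key is not yet present (the default, updateonly, only issues UPDATE statements and skips unmatched rows).

$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
    --export-dir /results/bar_data \
    --update-key id \
    --update-mode allowinsert

Whether allowinsert is supported, and how it is implemented, depends on the connector for your database, so check the connector documentation before relying on it.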
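The input parsing arguments tell the export job how the files under --export-dir are formatted. Below is another sketch, again with hypothetical paths and names, that exports tab-separated data and routes it through a staging table so the target table is only populated once the whole export has succeeded. The staging table (bar_stage here) is assumed to already exist with the same structure as the target table; --clear-staging-table empties it before the export starts.

$ sqoop export --connect jdbc:mysql://db.example.com/foo --table bar \
    --export-dir /results/bar_data \
    --input-fields-terminated-by '\t' \
    --input-lines-terminated-by '\n' \
    --staging-table bar_stage \
    --clear-staging-table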