Sqoop 1.99.3 Installation
1. Prerequisites:
A working Hadoop 2.2.0 environment, plus the Sqoop binary package (note: the hadoop200 build):
http://www.us.apache.org/dist/sqoop/1.99.3/sqoop-1.99.3-bin-hadoop200.tar.gz

2. Extract the archive into the working directory:
hadoop@hadoopMaster:$ sudo tar -xvf /opt/hn/hadoop_family/sqoop-1.99.3-bin-hadoop200.tar.gz
hadoop@hadoopMaster:$ mv /opt/hn/hadoop_family/sqoop-1.99.3-bin-hadoop200 /usr/local/sqoop

3. Set the environment variables:
hadoop@hadoopMaster:~$ vim /etc/profile
Append the following:
#sqoop
export SQOOP_HOME=/usr/local/sqoop
export PATH=$SQOOP_HOME/bin:$PATH
export CATALINA_HOME=$SQOOP_HOME/server
export LOGDIR=$SQOOP_HOME/logs
Save, exit, and apply immediately:
source /etc/profile

4. Adjust the Sqoop configuration:
hadoop@hadoopMaster:~$ vim /usr/local/sqoop/server/conf/sqoop.properties
# point this at my Hadoop installation directory
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/usr/local/hadoop/
# pull in all the jars under the Hadoop directory
hadoop@hadoopMaster:~$ vim /usr/local/sqoop/server/conf/catalina.properties
common.loader=/usr/local/hadoop/share/hadoop/common/*.jar,/usr/local/hadoop/share/hadoop/common/lib/*.jar,/usr/local/hadoop/share/hadoop/hdfs/*.jar,/usr/local/hadoop/share/hadoop/hdfs/lib/*.jar,/usr/local/hadoop/share/hadoop/mapreduce/*.jar,/usr/local/hadoop/share/hadoop/mapreduce/lib/*.jar,/usr/local/hadoop/share/hadoop/tools/*.jar,/usr/local/hadoop/share/hadoop/tools/lib/*.jar,/usr/local/hadoop/share/hadoop/yarn/*.jar,/usr/local/hadoop/share/hadoop/yarn/lib/*.jar,/usr/local/hadoop/share/hadoop/httpfs/tomcat/lib/*.jar

5. Download the MySQL JDBC driver:
mysql-connector-java-5.1.16-bin.jar

6. Start/stop the Sqoop 2 server:
hadoop@hadoopMaster:/usr/local/sqoop/bin$ ./sqoop.sh server start/stop
Check the startup log:
hadoop@hadoopMaster:/usr/local/sqoop/server/logs$ vim catalina.out

7. Enter the interactive client:
hadoop@hadoopMaster:/usr/local/sqoop/bin$ ./sqoop.sh client
Sqoop home directory: /usr/local/sqoop
Sqoop Shell: Type 'help' or '\h' for help.
sqoop:000>

Point the client at the server:
sqoop:000> set server --host hadoopMaster --port 12000 --webapp sqoop
Server is set successfully

Check version information:
sqoop:000> show version --all
client version:
  Sqoop 1.99.3 revision 2404393160301df16a94716a3034e31b03e27b0b
  Compiled by mengweid on Fri Oct 18 14:15:53 EDT 2013
server version:
  Sqoop 1.99.3 revision 2404393160301df16a94716a3034e31b03e27b0b
  Compiled by mengweid on Fri Oct 18 14:15:53 EDT 2013
Protocol version:
  [1]

Show the connectors:
sqoop:000> show connector --all
1 connector(s) to show:
Connector with id 1:
  Name: generic-jdbc-connector
  Class: org.apache.sqoop.connector.jdbc.GenericJdbcConnector
  Version: 1.99.3
  Supported job types: [IMPORT, EXPORT]
  Connection form 1:
    Name: connection
    Label: Connection configuration
    Help: You must supply the information requested in order to create a connection object.
    Input 1:
    ...
(output truncated; only this much is copied here)

Create a database connection:
sqoop:000> create connection --cid 1
Creating connection for connector with id 1
Please fill following values to create new connection object
Name: My first

Connection configuration

JDBC Driver Class: com.mysql.jdbc.Driver
JDBC Connection String: jdbc:mysql://localhost:3306/sqoop_stu
Username: root
Password: **********
JDBC Connection Properties:
There are currently 0 values in the map:
entry#

Security related configuration options

Max connections: 100
New connection was successfully created with validation status FINE and persistent id 1

Create an import job:
sqoop:001> create job --xid 1 --type import
Creating job for connection with id 1
Please fill following values to create new job object
Name: First job

Database configuration

Schema name: traceweb
Table name: trace_web_application
Table SQL statement:
Table column names:
Partition column name:
Nulls in partition column:
Boundary query:

Output configuration

Storage type:
  0 : HDFS
Choose: 0
Output format:
  0 : TEXT_FILE
  1 : SEQUENCE_FILE
Choose: 1
Compression format:
  0 : NONE
  1 : DEFAULT
  2 : DEFLATE
  3 : GZIP
  4 : BZIP2
  5 : LZO
  6 : LZ4
  7 : SNAPPY
Choose: 0
Output directory: /opt/sqoop_output

Throttling resources

Extractors:
Loaders:
New job was successfully created with validation status FINE and persistent id 1

Start the job:
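Before starting a job, it can be worth verifying the sqoop.properties setting from step 4: a commenter below reports that pointing mapreduce.configuration.directory at the Hadoop install root, rather than at its etc/hadoop conf directory, makes import/export submission fail. A minimal sketch of that check, using throwaway mock paths rather than a real install:

```shell
#!/bin/sh
# Sketch: confirm the configured directory holds Hadoop's *-site.xml files.
# All paths here are mock fixtures for illustration, not the real layout.
TMP=$(mktemp -d)
mkdir -p "$TMP/hadoop/etc/hadoop"
touch "$TMP/hadoop/etc/hadoop/core-site.xml"
cat > "$TMP/sqoop.properties" <<EOF
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=$TMP/hadoop/etc/hadoop
EOF

# Read the configured directory back out of the properties file.
KEY=org.apache.sqoop.submission.engine.mapreduce.configuration.directory
DIR=$(grep "^$KEY=" "$TMP/sqoop.properties" | cut -d= -f2-)

# A Hadoop conf directory should contain at least one *-site.xml file.
if ls "$DIR"/*-site.xml >/dev/null 2>&1; then
  RESULT=ok
else
  RESULT=bad
fi
echo "conf dir check: $RESULT"
```

Run against the real /usr/local/sqoop/server/conf/sqoop.properties, a "bad" result would suggest switching the property to the etc/hadoop path.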
sqoop:000> start job --jid 1

Check the import status:
sqoop:000> status job --jid 1
Submission details
Job ID: 1
Server URL: http://hadoopMaster:12000/sqoop/
Created by: hadoop
Creation date: 2014-05-23 18:51:05 CST
Lastly updated by: hadoop
External ID: job_local1566994033_0001
  http://localhost:8080/
2014-05-23 18:51:35 CST: UNKNOWN

Check the output directory:
hadoop@hadoopMaster:~$ ls -al /opt/sqoop_output/
total 92
drwxrwxr-x 2 hadoop hadoop 4096 May 23 18:52 .
drwxr-xr-x 8 hadoop hadoop 4096 May 23 18:51 ..
-rw-r--r-- 1 hadoop hadoop  209 May 23 18:51 part-m-00000.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00000.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00001.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00001.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00002.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00002.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00003.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00003.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00004.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00004.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00005.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00005.seq.crc
-rw-r--r-- 1 hadoop hadoop  207 May 23 18:51 part-m-00006.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00006.seq.crc
-rw-r--r-- 1 hadoop hadoop   86 May 23 18:51 part-m-00007.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00007.seq.crc
-rw-r--r-- 1 hadoop hadoop  206 May 23 18:51 part-m-00008.seq
-rw-rw-r-- 1 hadoop hadoop   12 May 23 18:51 .part-m-00008.seq.crc
-rw-r--r-- 1 hadoop hadoop  682 May 23 18:51 part-m-00009.seq
-rw-rw-r-- 1 hadoop hadoop   16 May 23 18:51 .part-m-00009.seq.crc
-rw-r--r-- 1 hadoop hadoop    0 May 23 18:51 _SUCCESS
-rw-rw-r-- 1 hadoop hadoop    8 May 23 18:51 ._SUCCESS.crc

List and delete jobs:
sqoop:000> show job
+----+------------+--------+-----------+---------+
| Id | Name       | Type   | Connector | Enabled |
+----+------------+--------+-----------+---------+
| 1  | First job  | IMPORT | 1         | true    |
| 2  | importHDFS | IMPORT | 1         | true    |
+----+------------+--------+-----------+---------+
sqoop:000> delete job --jid 1
sqoop:000> show job
+----+------------+--------+-----------+---------+
| Id | Name       | Type   | Connector | Enabled |
+----+------------+--------+-----------+---------+
| 2  | importHDFS | IMPORT | 1         | true    |
+----+------------+--------+-----------+---------+
sqoop:000> delete job --jid 2
sqoop:000> show job
+----+------+------+-----------+---------+
| Id | Name | Type | Connector | Enabled |
+----+------+------+-----------+---------+
+----+------+------+-----------+---------+
sqoop:000> show connection

Batch mode: sqoop.sh client /opt/sqoop/script.sqoop
hadoop@hadoopMaster:$ vim /opt/sqoop/script.sqoop
# point the client at the server
set server --host hadoopMaster --port 12000 --webapp sqoop
# run the job
start job --jid 1

hadoop@hadoopMaster:/usr/local/sqoop/bin$ ./sqoop.sh client /opt/hadoop/mysql/batchModel.sqoop
Sqoop home directory: /usr/local/sqoop
sqoop:000> set server --host hadoopMaster --port 12000 --webapp sqoop
Server is set successfully
sqoop:000> start job --jid 1
Submission details
Job ID: 1
Server URL: http://hadoopMaster:12000/sqoop/
Created by: hadoop
Creation date: 2014-05-30 10:55:10 CST
Lastly updated by: hadoop
External ID: job_local945860799_0003
  http://localhost:8080/
2014-05-30 10:55:10 CST: BOOTING - Progress is not available

Reference: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop2+Quickstart#Sqoop2Quickstart-Fullimportdemo
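A batch script like the one above can also be generated from the shell, which is handy when the server host or job id varies per environment. A sketch that writes it to a scratch directory (the path is illustrative, not the /opt/sqoop path used in this walkthrough):

```shell
#!/bin/sh
# Sketch: write the batch-mode script shown above to a scratch location.
TMP=$(mktemp -d)
cat > "$TMP/script.sqoop" <<'EOF'
# point the client at the server
set server --host hadoopMaster --port 12000 --webapp sqoop
# run the job
start job --jid 1
EOF
# The client would then be invoked as:
#   ./sqoop.sh client "$TMP/script.sqoop"
grep -c '^' "$TMP/script.sqoop"   # line count of the generated script
```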
================================ MySQL =======================================
hadoop@hadoopMaster:~$ mysql -uroot -pjava
mysql> create database sqoop_stu;
Query OK, 1 row affected (0.03 sec)
mysql> use sqoop_stu;
Database changed
mysql> create table student(id int(3) auto_increment not null primary key, name char(10) not null, address varchar(50));
Query OK, 0 rows affected (0.41 sec)
mysql> insert into student values(1, 'Tom','beijing'),(2, 'Joan','shanghai'),(3, 'Wang', 'shenzheng');
Query OK, 3 rows affected (0.07 sec)
Records: 3  Duplicates: 0  Warnings: 0

CREATE TABLE `demo_blog` (`id` int(11) NOT NULL AUTO_INCREMENT, `blog` varchar(100) NOT NULL, PRIMARY KEY (`id`)) ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE TABLE `demo_log` (`operator` varchar(16) NOT NULL, `log` varchar(100) NOT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8;

See also:
https://hbase.apache.org/book/configuration.html#hadoop
http://www.tuicool.com/articles/NVfEVnn
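The interactive MySQL session above can also be captured as a SQL file and replayed non-interactively, which is easier to repeat when rebuilding the test database. A sketch, with the file path being an assumption:

```shell
#!/bin/sh
# Sketch: the demo schema above as a replayable SQL file.
# Replay (assumed credentials from the walkthrough):
#   mysql -uroot -pjava < "$TMP/init.sql"
TMP=$(mktemp -d)
cat > "$TMP/init.sql" <<'EOF'
CREATE DATABASE IF NOT EXISTS sqoop_stu;
USE sqoop_stu;
CREATE TABLE student (
  id int(3) AUTO_INCREMENT NOT NULL PRIMARY KEY,
  name char(10) NOT NULL,
  address varchar(50)
);
INSERT INTO student VALUES
  (1, 'Tom',  'beijing'),
  (2, 'Joan', 'shanghai'),
  (3, 'Wang', 'shenzheng');
EOF
wc -l < "$TMP/init.sql"
```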
Comments
Floor 10
chowqh
2016-09-30
# point this at my Hadoop installation directory
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/usr/local/hadoop/
This should instead be the path to the Hadoop configuration files; in the author's setup that would be /usr/local/hadoop/etc/hadoop. Otherwise import/export job submission fails.
Floor 9
chowqh
2016-09-30
# point this at my Hadoop installation directory
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/usr/local/hadoop/
Floor 8
wuzhongfei
2015-04-16
Has the sqoop command been removed entirely from Sqoop 1.99.3 onwards? For example:
sqoop create-hive-table --connect jdbc:oracle:thin:@172.17.1.188:1521:lhorcl --username sq --password sq --table buyy --hive-table buyy
sqoop import --connect jdbc:oracle:thin:@172.17.1.188:1521:lhorcl --username sq --password sq --table buyy --hive-import
After starting Sqoop, running the sqoop command gives:
sqoop command not found
Floor 7
cyj0421129
2015-04-15
sqoop:000> show version --all
client version:
  Sqoop 1.99.3 revision 2404393160301df16a94716a3034e31b03e27b0b
  Compiled by mengweid on Fri Oct 18 14:15:53 EDT 2013
Exception has occurred during processing command
Exception: com.sun.jersey.api.client.UniformInterfaceException Message: GET http://10.0.252.10:12000/sqoop/version returned a response status of 404 Not Found
My Hadoop cluster's master IP is 10.0.252.10 — where is the problem? In catalina.out I see a log4j error.
Any solution would be appreciated.
Floor 6
mypeterhero
2014-11-03
My server side is up as well:
sqoop.sh server start
The client also connects successfully:
sqoop.sh client
set server --host cloudmaster --port 12000 --webapp sqoop
I then created a MySQL connection and a job, and tried importing a table into HDFS:
start job --jid 2
Output:
sqoop:000> start job --jid 2
Submission details
Job ID: 2
Server URL: http://cloudmaster:12000/sqoop/
Created by: hadoop
Creation date: 2014-11-03 10:42:51 CST
Lastly updated by: hadoop
External ID: job_1414828812005_0005
  http://cloudmaster:8088/proxy/application_1414828812005_0005/
2014-11-03 10:42:51 CST: BOOTING - Progress is not available
Following the http://cloudmaster:8088... link shows this error:
Application application_1414828812005_0005 failed 2 times due to AM Container for appattempt_1414828812005_0005_000002 exited with exitCode: -1000 due to: No space available in any of the local directories.
.Failing this attempt.. Failing the application.
I have one master node and four worker nodes. 侯上校, what could be the cause?
Floor 5
haha_hyq
2014-09-23
Floor 4
q547904614
2014-08-30
When I run show version --all, I get "GET http://localhost:12000/sqoop/version returned a response status of 404 Not Found". How do I fix this?
Floor 3
侯上校
2014-07-18
jie8895010 wrote:
Hello,
(1) When I run show version --all, I get "GET http://localhost:12000/sqoop/version returned a response status of 404 Not Found". How do I fix this?
(2) In set server --host hadoopMaster --port 12000 --webapp sqoop, is hadoopMaster a host you added yourself, or does the system provide it? Can it be replaced with localhost?
It is the hostname of the Hadoop cluster's master.
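As the reply above notes, hadoopMaster is simply the cluster master's hostname; the client machine must be able to resolve it via /etc/hosts or DNS, and localhost only works when the client runs on the same host as the server. A small sketch for checking resolution (assumes getent is available, i.e. a glibc-based Linux):

```shell
#!/bin/sh
# check_host: report whether a hostname resolves on this machine.
check_host() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo resolvable
  else
    echo missing
  fi
}
check_host localhost     # normally resolvable
check_host hadoopMaster  # "missing" unless mapped in /etc/hosts or DNS
```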
Floor 2
jie8895010
2014-07-15
Hello,
(1) When I run show version --all, I get "GET http://localhost:12000/sqoop/version returned a response status of 404 Not Found". How do I fix this?
(2) In set server --host hadoopMaster --port 12000 --webapp sqoop, is hadoopMaster a host you added yourself, or does the system provide it? Can it be replaced with localhost?
Floor 1
jie8895010
2014-07-15
Hello, when I run show version --all, I get:
Exception: com.sun.jersey.api.client.ClientHandlerException Message: java.net.ConnectException: Connection timed out. What is the problem?