groupadd hadoop
useradd hadoop -g hadoop
vim /etc/sudoers
root ALL=(ALL) ALL
then append:
hadoop ALL=(ALL) ALL
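Side note, hedged: editing /etc/sudoers with vim directly risks locking yourself out on a syntax error; visudo checks the syntax before saving, so the safer equivalent is:
visudo
# append the same line as above:
# hadoop ALL=(ALL) ALL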
[root@localhost sqoop]# mkdir /usr/local/hadoop
[root@localhost sqoop]# chown -R hadoop /usr/local/hadoop
su hadoop    # note: do all the remaining steps as the hadoop user ★★★★★★★
ssh-keygen
cd ~/.ssh && cat id_rsa.pub >> authorized_keys
ssh localhost
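If ssh localhost still prompts for a password, file permissions are the usual culprit; a minimal fix, assuming the default key paths:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost   # should now log in without a prompt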
vim conf/hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk1.6.0_45/
vim conf/core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
</configuration>
vim conf/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
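Before formatting, a quick sanity check that both config files parse as XML; this assumes xmllint (from libxml2) is installed:
xmllint --noout conf/core-site.xml conf/mapred-site.xml   # no output means both files parse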
./hadoop namenode -format
[hadoop@localhost bin]$ ./hadoop namenode -format
14/03/10 00:57:17 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2-CDH3B4
STARTUP_MSG: build = git://ubuntu-slave02/ on branch -r 3aa7c91592ea1c53f3a913a581dbfcdfebe98bfe; compiled by 'hudson' on Mon Feb 21 11:52:19 PST 2011
************************************************************/
14/03/10 00:57:18 INFO util.GSet: VM type = 64-bit
14/03/10 00:57:18 INFO util.GSet: 2% max memory = 19.33375 MB
14/03/10 00:57:18 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/03/10 00:57:18 INFO util.GSet: recommended=2097152, actual=2097152
14/03/10 00:57:18 INFO namenode.FSNamesystem: fsOwner=hadoop
14/03/10 00:57:18 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/10 00:57:18 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/10 00:57:18 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
14/03/10 00:57:18 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/10 00:57:20 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/03/10 00:57:20 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
14/03/10 00:57:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
Inspect what the format created ★★★★
[hadoop@localhost tmp]$ pwd
/home/hadoop/tmp
[hadoop@localhost tmp]$ tree
.
└── dfs
└── name
├── current
│ ├── edits
│ ├── fsimage
│ ├── fstime
│ └── VERSION
└── image
└── fsimage
4 directories, 5 files
[hadoop@localhost tmp]$
./start-all.sh
[hadoop@localhost bin]$ jps
51166 NameNode
51561 TaskTracker
52208 Jps
51378 SecondaryNameNode
51266 DataNode
51453 JobTracker
[hadoop@localhost bin]$
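Besides jps, the daemon web UIs are a quick health check; 50070 and 50030 are the stock defaults for the NameNode and JobTracker in this Hadoop line, so adjust if you changed them:
curl -s http://localhost:50070/ | head   # NameNode web UI
curl -s http://localhost:50030/ | head   # JobTracker web UI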
Reference:
http://freewxy.iteye.com/blog/1027569
Run the wordcount example
./hadoop dfs -ls /
Create a directory:
./hadoop dfs -mkdir /haotest
[hadoop@localhost bin]$ vim test.txt
hello haoning,eiya haoning this is my first hadoop test ,god bless me
./hadoop dfs -copyFromLocal test.txt /haotest
[hadoop@localhost hadoop]$ bin/hadoop jar hadoop-examples-0.20.2-CDH3B4.jar wordcount /haotest /output
14/03/10 01:15:47 INFO input.FileInputFormat: Total input paths to process : 1
14/03/10 01:15:48 INFO mapred.JobClient: Running job: job_201403100100_0002
14/03/10 01:15:49 INFO mapred.JobClient: map 0% reduce 0%
14/03/10 01:15:58 INFO mapred.JobClient: map 100% reduce 0%
14/03/10 01:16:08 INFO mapred.JobClient: map 100% reduce 100%
14/03/10 01:16:09 INFO mapred.JobClient: Job complete: job_201403100100_0002
14/03/10 01:16:09 INFO mapred.JobClient: Counters: 22
14/03/10 01:16:09 INFO mapred.JobClient: Job Counters
14/03/10 01:16:09 INFO mapred.JobClient: Launched reduce tasks=1
14/03/10 01:16:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=8844
14/03/10 01:16:09 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/10 01:16:09 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/03/10 01:16:09 INFO mapred.JobClient: Launched map tasks=1
14/03/10 01:16:09 INFO mapred.JobClient: Data-local map tasks=1
14/03/10 01:16:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10370
14/03/10 01:16:09 INFO mapred.JobClient: FileSystemCounters
14/03/10 01:16:09 INFO mapred.JobClient: FILE_BYTES_READ=123
14/03/10 01:16:09 INFO mapred.JobClient: HDFS_BYTES_READ=161
14/03/10 01:16:09 INFO mapred.JobClient: FILE_BYTES_WRITTEN=93307
14/03/10 01:16:09 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=77
14/03/10 01:16:09 INFO mapred.JobClient: Map-Reduce Framework
14/03/10 01:16:09 INFO mapred.JobClient: Reduce input groups=10
14/03/10 01:16:09 INFO mapred.JobClient: Combine output records=10
14/03/10 01:16:09 INFO mapred.JobClient: Map input records=2
14/03/10 01:16:09 INFO mapred.JobClient: Reduce shuffle bytes=123
14/03/10 01:16:09 INFO mapred.JobClient: Reduce output records=10
14/03/10 01:16:09 INFO mapred.JobClient: Spilled Records=20
14/03/10 01:16:09 INFO mapred.JobClient: Map output bytes=97
14/03/10 01:16:09 INFO mapred.JobClient: Combine input records=10
14/03/10 01:16:09 INFO mapred.JobClient: Map output records=10
14/03/10 01:16:09 INFO mapred.JobClient: SPLIT_RAW_BYTES=103
14/03/10 01:16:09 INFO mapred.JobClient: Reduce input records=10
[hadoop@localhost hadoop]$
[hadoop@localhost hadoop]$ bin/hadoop dfs -cat /output/part-r-00000
,god 1
bless 1
first 1
hadoop 1
haoning,this 1
hello 1
is 1
me 1
my 1
test 1
[hadoop@localhost hadoop]$
[hadoop@localhost hadoop]$ bin/hadoop dfs -rm /haotest
rm: Cannot remove directory "hdfs://localhost:9000/haotest", use -rmr instead
[hadoop@localhost hadoop]$ bin/hadoop dfs -rmr /haotest
Deleted hdfs://localhost:9000/haotest
[hadoop@localhost hadoop]$
bin/hadoop dfs -copyFromLocal bin/test.txt /haotest
[hadoop@localhost hadoop]$ bin/hadoop dfs -rmr /output
Deleted hdfs://localhost:9000/output
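Putting the cleanup together: MapReduce refuses to write into an existing output directory, so every rerun deletes /output first. A sketch of the full rerun cycle with the same paths as above:
bin/hadoop dfs -rmr /output
bin/hadoop jar hadoop-examples-0.20.2-CDH3B4.jar wordcount /haotest /output
bin/hadoop dfs -cat /output/part-r-00000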
yum install mysql mysql-server mysql-devel
As root:
service mysqld start
chkconfig --list | grep mysqld
mysqladmin -u root password haoning
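A quick probe that the password took effect and the server answers; show databases is just an arbitrary test query:
mysql -u root -phaoning -e 'show databases;'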
Other data-integration tools noted in passing: OpenICF, Kettle
Put the MySQL JDBC driver into Sqoop's lib directory: /data/hadoop/sqoop/sqoop-1.2.0-CDH3B4/lib
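Sqoop loads JDBC drivers from that lib directory; a hedged sketch of the copy, where the source path is only an assumption about where the jar was downloaded:
cp ~/mysql-connector-java-5.1.18.jar /data/hadoop/sqoop/sqoop-1.2.0-CDH3B4/lib/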
./sqoop list-tables --connect jdbc:mysql://localhost/mysql --username root --password haoning
sqoop import --connect jdbc:mysql://localhost/mysql --username root --password haoning --table active_uuid --hive-import
★★★★★★★★★
Hive
Reference: http://www.juziku.com/wiki/6028.htm
export JAVA_HOME=/usr/local/java/jdk1.6.0_45
export HBASE_HOME=/data/hbase/hbase-install/hbase-0.94.13
export HAO=/data/haoning/mygit/mynginxmodule
export hao=/data/haoning/mygit/mynginxmodule/nginx_release/nginx-1.5.6
export mm=/data/haoning/mygit/mynginxmodule/
export nn=/usr/local/nginx_upstream/sbin
export ne=/usr/local/nginx_echo/
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/data/hadoop/hive/hive-0.8.1
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export HIVE_CONF_DIR=$HIVE_HOME/hive-conf
export PATH=$HADOOP_HOME/bin:$HIVE_HOME/bin:/usr/local/java/jdk1.6.0_45/bin:$HBASE_HOME/bin:$PATH
export CLASSPATH=/usr/local/java/jdk1.6.0_45/jre/lib/rt.jar:$HADOOP_HOME:.
Once these variables are set, Hive works straight out of the unpacked tarball:
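A one-line smoke test using hive's -e flag, which runs a single statement non-interactively:
hive -e 'show tables;'   # a fresh install should print OK and an empty list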
[hadoop@localhost bin]$ hive
Logging initialized using configuration in jar:file:/data/hadoop/hive/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_201403100348_620639513.txt
hive> show tables;
OK
Time taken: 8.822 seconds
hive>
>
>
> create table abc(id int,name string);
OK
Time taken: 0.476 seconds
hive> select * from abc;
OK
Time taken: 0.297 seconds
hive>
###./sqoop list-tables --connect jdbc:mysql://localhost/mysql --username root --password haoning
###./sqoop import --connect jdbc:mysql://10.230.13.100/mysql --username root --password haoning --table user --hive-import
./sqoop list-tables --connect jdbc:mysql://10.230.13.100/test --username root --password haoning
Watch the MySQL privileges:
the hadoop user must be allowed to connect from the remote IP. In MySQL:
grant all privileges on *.* to root@'%' identified by "haoning";
use test;
create table haohao(id int(4) not null primary key auto_increment, name char(20) not null);
insert into haohao values(1,'hao');
./sqoop import --connect jdbc:mysql://10.230.13.100/test --username root --password haoning --table haohao --hive-import
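After the import, the data should show up both in Hive and under the warehouse directory in HDFS; /user/hive/warehouse is the stock default location, an assumption if hive.metastore.warehouse.dir was customized:
hive -e 'select * from haohao;'
hadoop dfs -lsr /user/hive/warehouse/haohao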
Test result:
[hadoop@localhost bin]$ hive
Logging initialized using configuration in jar:file:/data/hadoop/hive/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_201403102018_222421651.txt
hive> show tables;
OK
haoge
Time taken: 6.36 seconds
hive> select * from haoge
> ;
OK
1 hao
Time taken: 1.027 seconds
hive>
Gotchas:
Permissions: run hadoop, hive, and sqoop all as the hadoop user, otherwise you will hit permission errors.
The MySQL table must have a primary key (a workaround for keyless tables is sketched below).
Versions have to match; I went through four versions of Hive.
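For the primary-key gotcha, a hedged workaround: sqoop can import a keyless table if you force a single mapper with -m 1 (or name a split column with --split-by):
./sqoop import --connect jdbc:mysql://10.230.13.100/test --username root --password haoning --table haohao -m 1 --hive-import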
Summary of the investigation:
Goal: use sqoop to import MySQL tables into Hive.
Versions used:
hadoop: hadoop-0.20.2-CDH3B4.tar.gz
sqoop: sqoop-1.2.0-CDH3B4.tar.gz
mysql jdbc: mysql-connector-java-5.1.18.jar
hive: hive-0.8.1.tar.gz (the only one of the four Hive versions tried that worked)
Result:
the MySQL table is imported into Hive via sqoop and stored in HDFS:
hadoop dfs -lsr /user/hadoop/
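When the experiment is done, the whole single-node cluster stops with the companion script to start-all.sh:
bin/stop-all.sh   # stops NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker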