This blog has moved to: http://www.micmiu.com
When reposting, please credit the source: Michael's blog @ http://sjsky.iteye.com
Hadoop is a distributed computing framework, a project under the Apache Software Foundation. It lets users develop distributed programs without understanding the low-level details of distribution, harnessing the power of a cluster for high-speed computation and storage. Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant and designed to run on low-cost hardware. It provides high-throughput access to application data and suits applications with very large data sets. HDFS relaxes certain POSIX requirements to allow streaming access to file system data.
Hadoop official site: http://hadoop.apache.org/
This article walks through setting up a Hadoop test environment, covering standalone mode and pseudo-distributed mode in turn, and describes problems you may hit on different operating systems (CentOS, Ubuntu) along with their solutions. I tested every step successfully on both CentOS and Ubuntu. Outline:
- Test environment
- Preparation
- Standalone demo
- Pseudo-distributed demo
一、Test Environment
- Windows Vista (host OS)
- VirtualBox + Ubuntu 10.10 (OpenSSH installed and running)
- JDK 1.6.0_20, installed at /opt/jdk1.6 (Hadoop requires JDK 1.6.x)
- hadoop-0.20.203.0rc1.tar.gz (the latest stable release at the time of writing)
The examples below use the Ubuntu user michael:
二、Preparation
First, extract hadoop-0.20.203.0rc1.tar.gz into /home/michael/:
$ tar -zxvf hadoop-0.20.203.0rc1.tar.gz -C /home/michael/
$ cd /home/michael
$ mv hadoop-0.20.203.0 hadoop
Edit the JAVA_HOME setting in conf/hadoop-env.sh. Find the following lines:
# The java implementation to use. Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
Change them to:
# The java implementation to use. Required.
# Path to the JDK on this system
export JAVA_HOME=/opt/jdk1.6
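If you prefer to script this step, the same change can be made non-interactively with sed. The sketch below pipes in the stock comment line just to demonstrate the substitution; to apply it for real you would run `sed -i` on conf/hadoop-env.sh instead:

```shell
# Demo of the substitution against the stock comment line; to apply it
# in place, run: sed -i 's|...|...|' conf/hadoop-env.sh
printf '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun\n' |
  sed 's|^# export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk1.6|'
```

This prints the uncommented, updated line: `export JAVA_HOME=/opt/jdk1.6`.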
三、Standalone Operation
The commands involved are:
$ cd /home/michael/hadoop
$ mkdir input
$ cp conf/*.xml input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/*
The full transcript:
michael@michael-VirtualBox:~/hadoop$ mkdir input
michael@michael-VirtualBox:~/hadoop$ cp conf/*.xml input
michael@michael-VirtualBox:~/hadoop$ bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
11/07/16 10:06:48 INFO mapred.FileInputFormat: Total input paths to process : 6
11/07/16 10:06:48 INFO mapred.JobClient: Running job: job_local_0001
11/07/16 10:06:48 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:48 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:49 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:49 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:49 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:49 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 10:06:49 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/07/16 10:06:51 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/capacity-scheduler.xml:0+7457
11/07/16 10:06:51 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/07/16 10:06:51 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:51 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:51 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:51 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:51 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:52 INFO mapred.MapTask: Finished spill 0
11/07/16 10:06:52 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/07/16 10:06:52 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 10:06:54 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hadoop-policy.xml:0+4644
11/07/16 10:06:54 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hadoop-policy.xml:0+4644
11/07/16 10:06:54 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/07/16 10:06:54 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:54 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:55 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:55 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:55 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:55 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/07/16 10:06:57 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-queue-acls.xml:0+2033
11/07/16 10:06:57 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-queue-acls.xml:0+2033
11/07/16 10:06:57 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/07/16 10:06:57 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:57 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:58 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:58 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:58 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:58 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/07/16 10:07:00 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-site.xml:0+178
11/07/16 10:07:00 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-site.xml:0+178
11/07/16 10:07:00 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/07/16 10:07:00 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:00 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:01 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:01 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:01 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:01 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/07/16 10:07:04 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/core-site.xml:0+178
11/07/16 10:07:04 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/core-site.xml:0+178
11/07/16 10:07:04 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/07/16 10:07:04 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:04 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:04 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:04 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:04 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:04 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/07/16 10:07:07 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hdfs-site.xml:0+178
11/07/16 10:07:07 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Merger: Merging 6 sorted segments
11/07/16 10:07:07 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/07/16 10:07:07 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/michael/hadoop/grep-temp-1267281521
11/07/16 10:07:10 INFO mapred.LocalJobRunner: reduce > reduce
11/07/16 10:07:10 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/07/16 10:07:10 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 10:07:10 INFO mapred.JobClient: Job complete: job_local_0001
11/07/16 10:07:10 INFO mapred.JobClient: Counters: 17
11/07/16 10:07:10 INFO mapred.JobClient: File Input Format Counters
11/07/16 10:07:10 INFO mapred.JobClient: Bytes Read=14668
11/07/16 10:07:10 INFO mapred.JobClient: File Output Format Counters
11/07/16 10:07:10 INFO mapred.JobClient: Bytes Written=123
11/07/16 10:07:10 INFO mapred.JobClient: FileSystemCounters
11/07/16 10:07:10 INFO mapred.JobClient: FILE_BYTES_READ=1106074
11/07/16 10:07:10 INFO mapred.JobClient: FILE_BYTES_WRITTEN=1231779
11/07/16 10:07:10 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 10:07:10 INFO mapred.JobClient: Map output materialized bytes=55
11/07/16 10:07:10 INFO mapred.JobClient: Map input records=357
11/07/16 10:07:10 INFO mapred.JobClient: Reduce shuffle bytes=0
11/07/16 10:07:10 INFO mapred.JobClient: Spilled Records=2
11/07/16 10:07:10 INFO mapred.JobClient: Map output bytes=17
11/07/16 10:07:10 INFO mapred.JobClient: Map input bytes=14668
11/07/16 10:07:10 INFO mapred.JobClient: SPLIT_RAW_BYTES=611
11/07/16 10:07:10 INFO mapred.JobClient: Combine input records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce input records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce input groups=1
11/07/16 10:07:10 INFO mapred.JobClient: Combine output records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce output records=1
11/07/16 10:07:10 INFO mapred.JobClient: Map output records=1
11/07/16 10:07:10 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/07/16 10:07:10 INFO mapred.FileInputFormat: Total input paths to process : 1
11/07/16 10:07:10 INFO mapred.JobClient: Running job: job_local_0002
11/07/16 10:07:10 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:10 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:11 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:11 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:11 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:11 INFO mapred.MapTask: Finished spill 0
11/07/16 10:07:11 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/07/16 10:07:11 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 10:07:13 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/grep-temp-1267281521/part-00000:0+111
11/07/16 10:07:13 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/grep-temp-1267281521/part-00000:0+111
11/07/16 10:07:13 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Merger: Merging 1 sorted segments
11/07/16 10:07:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/07/16 10:07:13 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/michael/hadoop/output
11/07/16 10:07:14 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 10:07:16 INFO mapred.LocalJobRunner: reduce > reduce
11/07/16 10:07:16 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/07/16 10:07:17 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 10:07:17 INFO mapred.JobClient: Job complete: job_local_0002
11/07/16 10:07:17 INFO mapred.JobClient: Counters: 17
11/07/16 10:07:17 INFO mapred.JobClient: File Input Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Read=123
11/07/16 10:07:17 INFO mapred.JobClient: File Output Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Written=23
11/07/16 10:07:17 INFO mapred.JobClient: FileSystemCounters
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_READ=606737
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_WRITTEN=700981
11/07/16 10:07:17 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 10:07:17 INFO mapred.JobClient: Map output materialized bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: Map input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce shuffle bytes=0
11/07/16 10:07:17 INFO mapred.JobClient: Spilled Records=2
11/07/16 10:07:17 INFO mapred.JobClient: Map output bytes=17
11/07/16 10:07:17 INFO mapred.JobClient: Map input bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: SPLIT_RAW_BYTES=110
11/07/16 10:07:17 INFO mapred.JobClient: Combine input records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input groups=1
11/07/16 10:07:17 INFO mapred.JobClient: Combine output records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce output records=1
11/07/16 10:07:17 INFO mapred.JobClient: Map output records=1
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$ cat output/*
1 dfsadmin
michael@michael-VirtualBox:~/hadoop$
The standalone demo completes successfully.
四、Pseudo-Distributed Operation
1. Edit the configuration files:
conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
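The three edits above can also be written in one pass with heredocs. A sketch, using a scratch directory as a stand-in for the real Hadoop conf/ directory:

```shell
# Sketch: generate the three pseudo-distributed config files with heredocs.
# A scratch directory stands in here for the real Hadoop conf/ directory.
cd "$(mktemp -d)"
mkdir conf
cat > conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > conf/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
cat > conf/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF
# Quick sanity check: list the three configured values.
grep -h '<value>' conf/*-site.xml
```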
2. Set up passwordless SSH login
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
PS: on CentOS, see "Linux (CentOS): configuring passwordless OpenSSH login" for the detailed setup.
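Before starting the daemons, it saves time to confirm the key actually works. A small sketch (the function is only defined here; run it on the machine being set up — `BatchMode=yes` makes ssh fail fast instead of prompting for a password):

```shell
# Defined for reference, not run here: call check_ssh on the target box.
# BatchMode=yes makes ssh exit with an error instead of asking for a
# password, so the message tells you whether key login is in effect.
check_ssh() {
  if ssh -o BatchMode=yes localhost true 2>/dev/null; then
    echo "passwordless ssh: ok"
  else
    echo "passwordless ssh: NOT configured"
  fi
}
```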
3. Test:
The basic commands are:
# Format a new distributed filesystem:
$ bin/hadoop namenode -format
# Start the hadoop daemons:
$ bin/start-all.sh
# Copy the input files into the distributed filesystem:
$ bin/hadoop fs -put conf input
# Run some of the examples provided:
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
# Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/output/*
The steps above ran successfully on CentOS 5 but failed on Ubuntu 10.10: the command bin/hadoop fs -put conf input errored with a message like "could only be replicated to 0 nodes, instead of 1". This error has several possible causes (see http://sjsky.iteye.com/blog/1124545 for details); here it occurred because hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and on Ubuntu the filesystem mounted at /tmp is often of a type Hadoop does not support.
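You can check what /tmp is mounted on with df; the exact output depends on the machine, but on an affected Ubuntu install the type column will typically show something like tmpfs:

```shell
# Print the filesystem type that /tmp lives on (column 2 of `df -T`).
df -T /tmp | awk 'NR==2 {print $2}'
```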
The fix is to point hadoop.tmp.dir somewhere else by editing conf/core-site.xml as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/michael/hadooptmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
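After changing hadoop.tmp.dir the NameNode needs to be formatted again, since HDFS metadata lives under that directory (the transcript further down shows the new /home/michael/hadooptmp path being formatted). The sequence is wrapped in a function here for reference rather than executed; the paths are the ones assumed throughout this article:

```shell
# Reference only; run these on the Hadoop box. The rm clears data under
# the old default tmp dir, which is acceptable on a fresh test setup.
reapply_tmp_dir() {
  cd /home/michael/hadoop || return 1
  bin/stop-all.sh                 # stop any running daemons
  rm -rf /tmp/hadoop-"$USER"      # clear state under the old default
  bin/hadoop namenode -format     # format HDFS under the new tmp dir
  bin/start-all.sh                # restart the daemons
}
```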
Rerunning the test then succeeds.
The full transcript of the test:
michael@michael-VirtualBox:~/hadoop$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /home/michael/.ssh/id_dsa.
Your public key has been saved in /home/michael/.ssh/id_dsa.pub.
The key fingerprint is:
2a:47:e3:3a:c8:80:ab:97:d1:c6:68:54:9a:45:9f:59 michael@michael-VirtualBox
The key's randomart image is:
+--[ DSA 1024]----+
| .. E |
| o. + |
| = + |
| + |
|.. + o S |
|o + +o o |
| = =. + |
|. = .+ |
|o. .. |
+-----------------+
michael@michael-VirtualBox:~/hadoop$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
michael@michael-VirtualBox:~/hadoop$ ssh localhost
Linux michael-VirtualBox 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686 GNU/Linux
Ubuntu 10.10
Welcome to Ubuntu!
* Documentation: https://help.ubuntu.com/
71 packages can be updated.
71 updates are security updates.
New release 'natty' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Wed Jul 15 15:56:17 2011 from shnap.local
michael@michael-VirtualBox:~$ exit
logout
Connection to localhost closed.
michael@michael-VirtualBox:~/hadoop$ bin/hadoop namenode -format
11/07/16 12:43:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = michael-VirtualBox/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
11/07/16 12:43:46 INFO util.GSet: VM type = 32-bit
11/07/16 12:43:46 INFO util.GSet: 2% max memory = 19.33375 MB
11/07/16 12:43:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
11/07/16 12:43:46 INFO util.GSet: recommended=4194304, actual=4194304
11/07/16 12:43:46 INFO namenode.FSNamesystem: fsOwner=michael
11/07/16 12:43:46 INFO namenode.FSNamesystem: supergroup=supergroup
11/07/16 12:43:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/07/16 12:43:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/07/16 12:43:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/07/16 12:43:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/07/16 12:43:47 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/07/16 12:43:47 INFO common.Storage: Storage directory /home/michael/hadooptmp/hadoop-michael/dfs/name has been successfully formatted.
11/07/16 12:43:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at michael-VirtualBox/127.0.1.1
************************************************************/
michael@michael-VirtualBox:~/hadoop$ bin/start-all.sh
starting namenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-namenode-michael-VirtualBox.out
localhost: starting datanode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-datanode-michael-VirtualBox.out
localhost: starting secondarynamenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-secondarynamenode-michael-VirtualBox.out
Generating public/private dsa key pair.
Your identification has been saved in /home/michael/.ssh/id_dsa.
Your public key has been saved in /home/michael/.ssh/id_dsa.pub.
The key fingerprint is:
2a:47:e3:3a:c8:80:ab:97:d1:c6:68:54:9a:45:9f:59 michael@michael-VirtualBox
The key's randomart image is:
+--[ DSA 1024]----+
| .. E |
| o. + |
| = + |
| + |
|.. + o S |
|o + +o o |
| = =. + |
|. = .+ |
|o. .. |
+-----------------+
michael@michael-VirtualBox:~/hadoop$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
michael@michael-VirtualBox:~/hadoop$ ssh localhost
Linux michael-VirtualBox 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686 GNU/Linux
Ubuntu 10.10
Welcome to Ubuntu!
* Documentation: https://help.ubuntu.com/
71 packages can be updated.
71 updates are security updates.
New release 'natty' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Wed Jul 15 15:56:17 2011 from shnap.local
michael@michael-VirtualBox:~$ exit
logout
Connection to localhost closed.
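The `ssh-keygen` invocation that produced the output above is not shown in this excerpt; it corresponds to something like the following sketch. The `SSH_DIR` variable and the RSA key type are assumptions on my part — the original log shows a 1024-bit DSA key (`ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa`), but RSA is substituted here because recent OpenSSH releases no longer generate DSA keys:

```shell
# Sketch of the passwordless-SSH setup behind the log above.
# SSH_DIR is an illustrative variable, not part of the original commands.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Generate a key pair with an empty passphrase (skip if one already exists).
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"
# Authorize the public key for login to this same machine.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
# 'ssh localhost' should now log in without prompting for a password.
```

After this, running `ssh localhost` followed by `exit`, exactly as in the log, verifies that the login is password-free.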
michael@michael-VirtualBox:~/hadoop$ bin/hadoop namenode -format
11/07/16 12:43:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = michael-VirtualBox/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
11/07/16 12:43:46 INFO util.GSet: VM type = 32-bit
11/07/16 12:43:46 INFO util.GSet: 2% max memory = 19.33375 MB
11/07/16 12:43:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
11/07/16 12:43:46 INFO util.GSet: recommended=4194304, actual=4194304
11/07/16 12:43:46 INFO namenode.FSNamesystem: fsOwner=michael
11/07/16 12:43:46 INFO namenode.FSNamesystem: supergroup=supergroup
11/07/16 12:43:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/07/16 12:43:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/07/16 12:43:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/07/16 12:43:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/07/16 12:43:47 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/07/16 12:43:47 INFO common.Storage: Storage directory /home/michael/hadooptmp/hadoop-michael/dfs/name has been successfully formatted.
11/07/16 12:43:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at michael-VirtualBox/127.0.1.1
************************************************************/
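The storage directory `/home/michael/hadooptmp/hadoop-michael/dfs/name` in the format log is consistent with a `hadoop.tmp.dir` override along these lines in `conf/core-site.xml` (a hedged reconstruction — the exact values from the original config are not visible in this excerpt; `fs.default.name` is the standard pseudo-distributed setting for Hadoop 0.20.x):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <!-- Assumed override; the default /tmp/hadoop-${user.name} can fail on Ubuntu -->
    <name>hadoop.tmp.dir</name>
    <value>/home/michael/hadooptmp/hadoop-${user.name}</value>
  </property>
</configuration>
```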
michael@michael-VirtualBox:~/hadoop$ bin/start-all.sh
starting namenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-namenode-michael-VirtualBox.out
localhost: starting datanode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-datanode-michael-VirtualBox.out
localhost: starting secondarynamenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-secondarynamenode-michael-VirtualBox.out
starting jobtracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-jobtracker-michael-VirtualBox.out
localhost: starting tasktracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-tasktracker-michael-VirtualBox.out
michael@michael-VirtualBox:~/hadoop$ jps
7948 SecondaryNameNode
8033 JobTracker
8887 Jps
7627 NameNode
7781 DataNode
8190 TaskTracker
michael@michael-VirtualBox:~/hadoop$ bin/hadoop fs -put conf input
michael@michael-VirtualBox:~/hadoop$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
11/07/16 12:46:21 INFO mapred.FileInputFormat: Total input paths to process : 15
11/07/16 12:46:21 INFO mapred.JobClient: Running job: job_201107161244_0001
11/07/16 12:46:22 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:47:09 INFO mapred.JobClient: map 13% reduce 0%
11/07/16 12:47:33 INFO mapred.JobClient: map 26% reduce 0%
11/07/16 12:47:45 INFO mapred.JobClient: map 26% reduce 8%
11/07/16 12:47:54 INFO mapred.JobClient: map 40% reduce 8%
11/07/16 12:48:07 INFO mapred.JobClient: map 53% reduce 13%
11/07/16 12:48:16 INFO mapred.JobClient: map 53% reduce 17%
11/07/16 12:48:24 INFO mapred.JobClient: map 66% reduce 17%
11/07/16 12:48:36 INFO mapred.JobClient: map 80% reduce 22%
11/07/16 12:48:42 INFO mapred.JobClient: map 80% reduce 26%
11/07/16 12:48:45 INFO mapred.JobClient: map 93% reduce 26%
11/07/16 12:48:53 INFO mapred.JobClient: map 100% reduce 26%
11/07/16 12:48:58 INFO mapred.JobClient: map 100% reduce 33%
11/07/16 12:49:07 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:49:14 INFO mapred.JobClient: Job complete: job_201107161244_0001
11/07/16 12:49:15 INFO mapred.JobClient: Counters: 26
11/07/16 12:49:15 INFO mapred.JobClient: Job Counters
11/07/16 12:49:15 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=255488
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Launched map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: Data-local map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=115656
11/07/16 12:49:15 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Read=25623
11/07/16 12:49:15 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Written=180
11/07/16 12:49:15 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_READ=27281
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_WRITTEN=342206
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=180
11/07/16 12:49:15 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:49:15 INFO mapred.JobClient: Map output materialized bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Map input records=716
11/07/16 12:49:15 INFO mapred.JobClient: Reduce shuffle bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:49:15 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:49:15 INFO mapred.JobClient: Map input bytes=25623
11/07/16 12:49:15 INFO mapred.JobClient: Combine input records=3
11/07/16 12:49:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=1658
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input groups=3
11/07/16 12:49:15 INFO mapred.JobClient: Combine output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Map output records=3
11/07/16 12:49:16 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/07/16 12:49:17 INFO mapred.FileInputFormat: Total input paths to process : 1
11/07/16 12:49:18 INFO mapred.JobClient: Running job: job_201107161244_0002
11/07/16 12:49:19 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:49:40 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 12:49:55 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:50:00 INFO mapred.JobClient: Job complete: job_201107161244_0002
11/07/16 12:50:00 INFO mapred.JobClient: Counters: 26
11/07/16 12:50:00 INFO mapred.JobClient: Job Counters
11/07/16 12:50:00 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=16946
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Launched map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: Data-local map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14357
11/07/16 12:50:00 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Read=180
11/07/16 12:50:00 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Written=52
11/07/16 12:50:00 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_READ=298
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=41947
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=52
11/07/16 12:50:00 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:50:00 INFO mapred.JobClient: Map output materialized bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Map input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce shuffle bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:50:00 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:50:00 INFO mapred.JobClient: Map input bytes=94
11/07/16 12:50:00 INFO mapred.JobClient: Combine input records=0
11/07/16 12:50:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=118
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input groups=1
11/07/16 12:50:00 INFO mapred.JobClient: Combine output records=0
11/07/16 12:50:00 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:50:00 INFO mapred.JobClient: Map output records=3
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$ cat output/output/*
cat: output/output/_logs: Is a directory
1 dfs.replication
1 dfs.server.namenode.
1 dfsadmin
michael@michael-VirtualBox:~/hadoop$
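The three result lines above are what the example job's `'dfs[a-z.]+'` pattern extracted from the conf files. Purely for illustration, the same extract-and-count logic can be approximated locally with plain Unix tools — the sample input file and its path below are made up, and the real job of course runs as MapReduce over HDFS:

```shell
# Local, non-MapReduce approximation of the Hadoop grep example:
# extract every match of 'dfs[a-z.]+' and count occurrences of each.
printf 'dfs.replication=1\ndfsadmin help\n' > /tmp/grep_demo.txt
grep -ohE 'dfs[a-z.]+' /tmp/grep_demo.txt | sort | uniq -c
```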
This completes the pseudo-distributed demonstration.
Please credit the source when reposting: Michael's blog @ http://sjsky.iteye.com