cooliufang
Testing Hadoop with WordCount

Testing Hadoop with the WordCount example: counting how many times each word appears in the input.

1. First, create a new directory, testFiles, and put two small test data files in it:
[root@SC-026 hadoop-1.0.3]# mkdir testFiles
[root@SC-026 hadoop-1.0.3]# cd testFiles/
[root@SC-026 testFiles]# echo "hello world, bye bye, world." > file1.txt
[root@SC-026 testFiles]# echo "hello hadoop, how are you? hadoop." > file2.txt
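Before running the job, it helps to know what result to expect. The sketch below (plain POSIX shell, no Hadoop needed) tokenizes the two files on whitespace the same way the WordCount example does, so punctuation stays attached to words ("bye," and "bye" count as different words):

```shell
# Recreate the two test files (same contents as above).
printf 'hello world, bye bye, world.\n' > file1.txt
printf 'hello hadoop, how are you? hadoop.\n' > file2.txt

# One token per line, then count duplicates - a local stand-in for
# WordCount's map (tokenize) and reduce (sum) phases.
cat file1.txt file2.txt | tr -s ' ' '\n' | sort | uniq -c
```

Because the tokenizer splits only on whitespace, "world," and "world." remain distinct keys; this matches the Hadoop output shown in step 4 below.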


2. Copy the local ./testFiles directory into HDFS as a directory named input (with a relative path, -put writes under the current user's home directory, /user/root, as the error below shows). The first attempt fails:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/input. Name node is in safe mode.

The error means the NameNode is still in safe mode. A NameNode enters safe mode on startup and normally leaves it on its own once enough DataNode block reports have arrived; you can check the current state with bin/hadoop dfsadmin -safemode get, or force it off as shown below, after which the copy succeeds:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input


3. Run the WordCount job, writing results to output:
[root@SC-026 hadoop-1.0.3]# bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
12/08/31 09:21:34 INFO input.FileInputFormat: Total input paths to process : 2
12/08/31 09:21:34 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/31 09:21:34 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/31 09:21:35 INFO mapred.JobClient: Running job: job_201208310909_0001
12/08/31 09:21:36 INFO mapred.JobClient:  map 0% reduce 0%
12/08/31 09:21:57 INFO mapred.JobClient:  map 50% reduce 0%
12/08/31 09:22:00 INFO mapred.JobClient:  map 100% reduce 0%
12/08/31 09:22:12 INFO mapred.JobClient:  map 100% reduce 100%
12/08/31 09:22:16 INFO mapred.JobClient: Job complete: job_201208310909_0001
12/08/31 09:22:16 INFO mapred.JobClient: Counters: 29
12/08/31 09:22:16 INFO mapred.JobClient:   Job Counters 
12/08/31 09:22:16 INFO mapred.JobClient:     Launched reduce tasks=1
12/08/31 09:22:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=27675
12/08/31 09:22:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient:     Launched map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient:     Data-local map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14460
12/08/31 09:22:16 INFO mapred.JobClient:   File Output Format Counters 
12/08/31 09:22:16 INFO mapred.JobClient:     Bytes Written=78
12/08/31 09:22:16 INFO mapred.JobClient:   FileSystemCounters
12/08/31 09:22:16 INFO mapred.JobClient:     FILE_BYTES_READ=136
12/08/31 09:22:16 INFO mapred.JobClient:     HDFS_BYTES_READ=278
12/08/31 09:22:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=64909
12/08/31 09:22:16 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=78
12/08/31 09:22:16 INFO mapred.JobClient:   File Input Format Counters 
12/08/31 09:22:16 INFO mapred.JobClient:     Bytes Read=64
12/08/31 09:22:16 INFO mapred.JobClient:   Map-Reduce Framework
12/08/31 09:22:16 INFO mapred.JobClient:     Map output materialized bytes=142
12/08/31 09:22:16 INFO mapred.JobClient:     Map input records=2
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce shuffle bytes=142
12/08/31 09:22:16 INFO mapred.JobClient:     Spilled Records=22
12/08/31 09:22:16 INFO mapred.JobClient:     Map output bytes=108
12/08/31 09:22:16 INFO mapred.JobClient:     CPU time spent (ms)=3480
12/08/31 09:22:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=411828224
12/08/31 09:22:16 INFO mapred.JobClient:     Combine input records=11
12/08/31 09:22:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=214
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce input records=11
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce input groups=10
12/08/31 09:22:16 INFO mapred.JobClient:     Combine output records=11
12/08/31 09:22:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=447000576
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce output records=10
12/08/31 09:22:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1634324480
12/08/31 09:22:16 INFO mapred.JobClient:     Map output records=11
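Two of the framework counters above can be checked directly against the input: "Map output records=11" is the total number of whitespace-separated tokens across both files, and "Reduce output records=10" is the number of distinct tokens ("hello" appears in both files; every other token once). The combiner merged nothing (Combine input records=11, output records=11) because no token repeats within a single file's split. A quick local check, again in plain shell with no Hadoop required:

```shell
# Same two test lines as file1.txt and file2.txt above.
input='hello world, bye bye, world.
hello hadoop, how are you? hadoop.'

# Total tokens -> should match "Map output records=11".
printf '%s\n' "$input" | tr -s ' ' '\n' | wc -l

# Distinct tokens -> should match "Reduce output records=10".
printf '%s\n' "$input" | tr -s ' ' '\n' | sort -u | wc -l
```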


4. View the results:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -cat output/*
are     1
bye     1
bye,    1
hadoop, 1
hadoop. 1
hello   2
how     1
world,  1
world.  1
you?    1
cat: File does not exist: /user/root/output/_logs


The trailing cat error is harmless: the job also wrote a _logs subdirectory under output, which cat cannot print. Copy the results from HDFS to the local filesystem and view them there:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -get output output
[root@SC-026 hadoop-1.0.3]# cat output/*
cat: output/_logs: Is a directory
are     1
bye     1
bye,    1
hadoop, 1
hadoop. 1
hello   2 
how     1
world,  1
world.  1
you?    1



Note: bin/hadoop dfs -help describes the usage of the various HDFS commands.