Getting Started with Hadoop


Reposted. The original article is at http://www.infosci.cornell.edu/hadoop/mac.html

 

 

Cornell University

NOTICE: The Web Lab Hadoop cluster was closed at the end of September 2011

Quick Guide to Developing and Running Hadoop Jobs (Mac OS X 10.6)

This guide is written to help Cornell students using Mac OS X 10.6 set up a development environment for working with Hadoop and run Hadoop jobs on the Cornell Center for Advanced Computing (CAC) Hadoop cluster. This guide will walk you through compiling and running a simple example Hadoop job. More information is available at the official Hadoop Map-Reduce Tutorial.

The overall process of developing a Hadoop job is as follows:

  1. Install Hadoop on your development machine (personal or lab computer)
  2. Compile the Hadoop job, create a JAR file
  3. Run the Hadoop job JAR file on your development machine, for testing and debugging
  4. Run the Hadoop job JAR file on the CAC Hadoop cluster, for production

1. Installing Hadoop

This section shows you how to download Hadoop and prepare it for use on a Mac machine. Note: Hadoop versions after 0.19.2 require Java version 1.6. The following instructions take this into account.

  1. Obtain the latest stable Hadoop release. The file is named hadoop-version.tar.gz and can be obtained here. Unzip the downloaded file and place the resulting folder on your Desktop (or other location).
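    For example, from the Terminal (assuming the downloaded archive is hadoop-0.19.2.tar.gz in your Downloads folder; adjust the version to match your download):

    cd ~/Desktop
    tar -xzf ~/Downloads/hadoop-0.19.2.tar.gz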

  2. To make Hadoop run on a Mac, you will need to edit two files. Open the file conf/hadoop-env.sh (within the hadoop folder you just unzipped) in your favorite text editor. Find the following line in the file: 

    # export JAVA_HOME=/usr/lib/j2sdk1.6-sun 

    and change it to: 

    export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/ 

    Save the file. Second, open the file bin/hadoop within the hadoop folder in your favorite text editor. Search the file for the following line: 

    JAVA=$JAVA_HOME/bin/java 

    and change it to: 

    JAVA=$JAVA_HOME/Commands/java 

    Save the file and exit the editor. You have now set up Hadoop for development purposes on your computer.
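
    As a quick check that the setup works, you can run Hadoop's version command from within the hadoop folder:

    ./bin/hadoop version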

2. Compiling a Hadoop job into a JAR file

This section guides you through compiling the WordCount example available in the Hadoop Map-Reduce Tutorial. This section assumes you are using the Eclipse IDE. If this is not the case, you should be able to adapt these instructions for your IDE.

  1. Create a new Java Project.
    Launch Eclipse, and from the File Menu select New, then use the Wizard to create a new Java Project. Enter a project name, in this example WordCount. Make sure that the selected JRE is version 1.6.0. Click Finish.

  2. Add hadoop library to project
    In Eclipse, right-click (control-click) on your project, go to Build Path, then Add External Archives. Browse to the hadoop folder on your desktop, select the file hadoop-version-core.jar, and click Open.

  3. Add source code file
    From the File Menu, select New, then File. Select the parent folder WordCount/src (make sure this is right, or you will encounter trouble when exporting the JAR file below) and name the new file WordCount.java, then click Finish. Copy the WordCount source code (the original page linked to it directly; a sketch is given after this list) and paste it into the new file, then save it. Eclipse will compile the file as soon as you save it.
  4. Export JAR file
    From the File Menu, select Export. Under Java, select JAR file and click Next. Select all resources to be exported - in this case, the entire WordCount project. Make sure the export classes checkbox is checked. Select an export destination for your JAR file - you can use your Desktop or some other directory. For simplicity, name the file WordCount.jar and export it to your Desktop.
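
The original page linked directly to the WordCount source code, which is not reproduced here. The following is a sketch of the classic WordCount example from the Hadoop Map-Reduce Tutorial of this era, written against the old org.apache.hadoop.mapred API (JobConf) that this guide assumes; check it against the tutorial for your exact Hadoop version. It is kept in the default package so the job can be invoked simply as WordCount, as shown in the next section.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCount {

        // Mapper: emits (word, 1) for every whitespace-separated token in each input line
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                StringTokenizer tokenizer = new StringTokenizer(value.toString());
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, one);
                }
            }
        }

        // Reducer: sums the counts emitted for each word
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);
            conf.setCombinerClass(Reduce.class);  // combine map output locally to cut shuffle traffic
            conf.setReducerClass(Reduce.class);

            // args[0] is the input directory, args[1] the output directory
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }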

3. Running a Hadoop job on your development machine

This section shows you how to run your job on your own machine, for testing purposes. Hadoop will run in "standalone mode", which means that it will run within a single process, not taking advantage of any parallel processing. This will be much slower than running on the cluster, so you may want to reduce the size of your data set for testing.

  1. Create or obtain test data
    For this example, the input data will be this web page. Copy this entire web page and, using your favorite text editor, save it as a plain text file named testing.txt. Place this file within a folder called input on your Desktop.

  2. Run the job
    First, go to the command line. (To access the command line, go to the Finder, then to "Applications", then "Utilities", and launch "Terminal".) If you are not familiar with the UNIX command line, here is a basic guide. Change into your hadoop directory, ~/Desktop/hadoop-0.19.2 or similar, and execute the following command

    ./bin/hadoop jar ~/Desktop/WordCount.jar WordCount ~/Desktop/input ~/Desktop/output

    You may need to alter the paths if any of the files were saved to different places.

  3. Retrieve the results
    The results have been written to a new folder called output on your Desktop. There should be one file, named part-00000, which lists all the words on this web page along with their occurrence counts. Note that before running Hadoop again you will need to delete the entire output folder, since Hadoop will not do this for you.
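
    To take a quick look at the results from the Terminal, you can run:

    head ~/Desktop/output/part-00000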

4. Running a Hadoop job on the CAC cluster

This section shows you how to take the JAR file you created above along with the test data, and run the job on the CAC cluster.

  1. Obtain a CAC account
    If you are taking a course which requires the use of the cluster, the instructor should organize the CAC account for you. If you are using the cluster for research, the Principal Investigator will add you to their CAC project. In either case, you will receive an email to your Cornell email address with your username and password for the CAC.

  2. Use SSH to connect to the job tracker node
    To connect to the cluster and run Hadoop jobs, use SSH in the Macintosh Terminal window, which provides a Bash shell. Run the following from the Bash command line. First, connect to the CAC: 

    ssh netid@wl01.cac.cornell.edu 

    where netid is replaced by your CAC username. Note: the address starts with doubleu-el-zero-one NOT doubleu-zero-one-zero. Enter your CAC password when prompted. The first time that you log in you will be required to change your password to something secure and easy to remember. Once you are logged in you will be placed in your CAC home directory. 

  3. Copy JAR and input files to CAC
    Copy your WordCount.jar file and input folder from your Desktop into your CAC home directory. You can use scp from the Terminal window to copy files (an example is given below), or you can mount the CAC directory on your Macintosh. To do this from the Macintosh Finder, select the Connect to Server option from the Go menu. Enter the following path:

    smb://cacfs01.cac.cornell.edu/netid 

    and replace netid with your CAC user name. Enter your CAC username and password when prompted. You should then see a new Finder window showing the contents of your CAC home directory. Please note, this directory is only accessible from within the Cornell firewall. If you wish to access it from off-campus, you will first need to VPN into Cornell.
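
    If you prefer scp, commands along these lines copy the JAR and the input folder (replace netid with your CAC username; the trailing colon denotes your CAC home directory):

    scp ~/Desktop/WordCount.jar netid@wl01.cac.cornell.edu:
    scp -r ~/Desktop/input netid@wl01.cac.cornell.edu: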

  4. Copy input files into HDFS
    Make a directory in the Hadoop Distributed File System (dfs) for your input files. You can see the list of commands available for working on the dfs by executing the following: 

    hadoop dfs 

    More information about the commands is available here. Note that to execute any dfs command, you must type hadoop dfs -command, where command is the dfs command to run.

    To copy input data files into dfs from your home directory, do the following: 

    hadoop dfs -copyFromLocal input .
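
    You can then verify that the files arrived by listing the directory in the dfs:

    hadoop dfs -ls input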

  5. Run your job
    Perform the following: 

    hadoop jar WordCount.jar WordCount input output 

    This will place the result files in a directory called "output" in the dfs. You can then copy these files back to your CAC home directory by executing the following: 

    hadoop dfs -copyToLocal output output

    Now you can retrieve the output files in the same fashion that you copied the input files to your home directory. Note that one output file is produced for each reduce task. The WordCount example uses the system-configured default number of reduce tasks, so do not be surprised to see 10-20 output files (the exact number depends on the number of cluster nodes running and their configuration). You can control the number of reduce tasks programmatically via the setNumReduceTasks() method of the JobConf class in the hadoop API, as sketched below. Refer to the Map-Reduce Tutorial for more details on running map reduce jobs. 
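
    For example, to force a single output file, the job setup in WordCount's main method could set the number of reduce tasks to one before submitting the job (a sketch against the old JobConf API shown earlier):

    conf.setNumReduceTasks(1);  // one reduce task yields a single part-00000 file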

    When you are finished with the output files, you should delete the output directory. Hadoop will not do this automatically, and it will throw an error if you run another job while an old output directory exists. To delete it, execute: 

    hadoop dfs -rmr output

Last revised: March 15, 2010 
bjk/wya

 
