In the previous post I covered the single-node pseudo-distributed deployment of Hadoop. In this post I will show how to debug Hadoop 2.2.0 from Eclipse. If you are still on a Hadoop 1.x release, that is fine too — I have written about debugging 1.x Hadoop programs from Eclipse in earlier posts. The biggest difference between the two setups is the Eclipse plugin: the Hadoop 2.x and Hadoop 1.x APIs are not quite compatible, so the plugins differ as well, and you simply need to use the one that matches your version.
Now, down to business. The environment used here is as follows:
No. | Item | Description
1 | Eclipse | Juno Service Release (4.2)
2 | Operating system | Windows 7
3 | Hadoop Eclipse plugin | hadoop-eclipse-plugin-2.2.0.jar
4 | Hadoop cluster environment | Single-node pseudo-distributed setup on CentOS 6.5 in a Linux VM
5 | Debug program | Hello World (WordCount)
The problems I ran into are listed below. The first exception:
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
The fix: hard-code the local Hadoop path into the return value of the checkHadoopHome() method of the org.apache.hadoop.util.Shell class. My change looks like this:
private static String checkHadoopHome() {
    // first check the Dflag hadoop.home.dir with JVM scope
    //System.setProperty("hadoop.home.dir", "...");
    String home = System.getProperty("hadoop.home.dir");
    // fall back to the system/user-global env variable
    if (home == null) {
        home = System.getenv("HADOOP_HOME");
    }
    try {
        // couldn't find either setting for hadoop's home directory
        if (home == null) {
            throw new IOException("HADOOP_HOME or hadoop.home.dir are not set.");
        }
        if (home.startsWith("\"") && home.endsWith("\"")) {
            home = home.substring(1, home.length() - 1);
        }
        // check that the home setting is actually a directory that exists
        File homedir = new File(home);
        if (!homedir.isAbsolute() || !homedir.exists() || !homedir.isDirectory()) {
            throw new IOException("Hadoop home directory " + homedir
                    + " does not exist, is not a directory, or is not an absolute path.");
        }
        home = homedir.getCanonicalPath();
    } catch (IOException ioe) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Failed to detect a valid hadoop home directory", ioe);
        }
        home = null;
    }
    // hard-code the local hadoop path (this is the actual change)
    home = "D:\\hadoop-2.2.0";
    return home;
}
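A lighter-weight alternative, if you would rather not patch Hadoop's own source: as the method above shows, checkHadoopHome() reads the hadoop.home.dir system property before it falls back to the HADOOP_HOME environment variable, so you can set that property at the very top of your driver's main(), before any Hadoop class is touched. A minimal sketch under that assumption (the path is just my local example):

// Sketch: set hadoop.home.dir programmatically instead of editing Shell.java.
// Shell.checkHadoopHome() reads this property first, so no source change is needed.
public class SetHadoopHomeFirst {
    public static void main(String[] args) {
        // assumed local path -- point it at your own unpacked hadoop-2.2.0 directory
        System.setProperty("hadoop.home.dir", "D:\\hadoop-2.2.0");
        // ... build the JobConf/Job and submit exactly as in the WordCount example below ...
    }
}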
The second exception: Could not locate executable D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe in the Hadoop binaries. The Windows helper executables cannot be found. Download the bin package from https://github.com/srccodes/hadoop-common-2.2.0-bin and overwrite the bin directory under your local Hadoop root directory with it.
The third exception:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://192.168.130.54:19000/user/hmail/output/part-00000, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
    at com.netease.hadoop.HDFSCatWithAPI.main(HDFSCatWithAPI.java:23)
This exception usually means the HDFS path is written incorrectly. The fix: copy core-site.xml and hdfs-site.xml from the cluster and place them in the src root directory of the Eclipse project.
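If you prefer not to copy the XML files into src, you can also tell the client which filesystem to use directly in code; the "Wrong FS ... expected: file:///" message simply means the Configuration still points at the local file:/// filesystem. A small sketch, reusing the NameNode address from the stack trace above (adjust host, port and path to your own cluster):

import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsCatSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // point the client at HDFS instead of the default local file:///
        conf.set("fs.defaultFS", "hdfs://192.168.130.54:19000");
        FileSystem fs = FileSystem.get(conf);
        // read a result file back, just to verify the connection works
        InputStream in = fs.open(new Path("/user/hmail/output/part-00000"));
        IOUtils.copyBytes(in, System.out, 4096, true); // true = close the stream when done
    }
}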
The fourth exception:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
This exception is usually caused by a misconfigured HADOOP_HOME environment variable. To debug Hadoop 2.2 successfully from Eclipse on Windows, you need to add the following environment variables on the local machine:
(1) Under system variables, create a new HADOOP_HOME variable with the value D:\hadoop-2.2.0, i.e. the local Hadoop directory.
(2) Append %HADOOP_HOME%/bin to the system Path variable.
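To confirm that the JVM launched from Eclipse actually sees the variable (Eclipse usually has to be restarted after system environment variables change), a tiny check like the following helps before you submit anything. It is only a sketch: it prints what the JVM sees and whether winutils.exe is where Hadoop expects it.

import java.io.File;

public class CheckHadoopEnv {
    public static void main(String[] args) {
        String home = System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + home);
        if (home != null) {
            // winutils.exe must sit under %HADOOP_HOME%\bin, otherwise the
            // winutils / UnsatisfiedLinkError problems above come back
            File winutils = new File(home, "bin" + File.separator + "winutils.exe");
            System.out.println("winutils.exe present: " + winutils.exists());
        }
    }
}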
Those are the problems I hit while testing. After treating each one with the right remedy, Eclipse was finally able to debug MR programs successfully. My Hello World source is below:
package com.qin.wordcount;

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/***
 * Hadoop 2.2.0 test
 * WordCount example
 *
 * @author qindongliang
 *
 * Hadoop discussion QQ group: 376932160
 */
public class MyWordCount {

    /**
     * Mapper
     **/
    private static class WMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private IntWritable count = new IntWritable(1);
        private Text text = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // each input line looks like "word#count"
            String[] values = value.toString().split("#");
            //System.out.println(values[0]+"========"+values[1]);
            count.set(Integer.parseInt(values[1]));
            text.set(values[0]);
            context.write(text, count);
        }
    }

    /**
     * Reducer
     **/
    private static class WReducer extends Reducer<Text, IntWritable, Text, Text> {

        private Text t = new Text();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> value, Context context)
                throws IOException, InterruptedException {
            // sum the counts for each word
            int count = 0;
            for (IntWritable i : value) {
                count += i.get();
            }
            t.set(count + "");
            context.write(key, t);
        }
    }

    /**
     * Changes made:
     * (1) hard-coded the checkHadoopHome path in the Shell source
     * (2) line 974, inside FileUtils
     **/
    public static void main(String[] args) throws Exception {

        // String path1=System.getenv("HADOOP_HOME");
        // System.out.println(path1);
        // System.exit(0);

        JobConf conf = new JobConf(MyWordCount.class);
        //Configuration conf=new Configuration();
        //conf.set("mapred.job.tracker","192.168.75.130:9001"); // read the data fields from person
        // conf.setJar("tt.jar"); // note: this line must come first, for initialization, otherwise an error is reported

        /** The Job **/
        Job job = new Job(conf, "testwordcount");
        job.setJarByClass(MyWordCount.class);
        // prints the run mode ("模式" = mode); "local" means the LocalJobRunner is used
        System.out.println("模式:  " + conf.get("mapred.job.tracker"));
        // job.setCombinerClass(PCombine.class);
        // job.setNumReduceTasks(3); // use 3 reducers

        job.setMapperClass(WMapper.class);
        job.setReducerClass(WReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        String path = "hdfs://192.168.46.28:9000/qin/output";
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path(path);
        if (fs.exists(p)) {
            fs.delete(p, true);
            System.out.println("输出路径存在,已删除!"); // "output path exists, deleted"
        }
        FileInputFormat.setInputPaths(job, "hdfs://192.168.46.28:9000/qin/input");
        FileOutputFormat.setOutputPath(job, p);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
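One remark on the commented-out conf.setJar("tt.jar") line: running from Eclipse like this uses the LocalJobRunner (the log further down prints 模式: local and job IDs of the form job_local...), so the "No job jar file set" warning is harmless because the mapper and reducer classes are already on the local classpath. If you later submit the same job to a real cluster, the classes have to travel inside a jar; a hedged sketch of that change is below — the jar path is hypothetical, and you would export the project as a jar first.

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;

public class JobJarSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MyWordCount.class);
        Job job = new Job(conf, "testwordcount");
        // Hypothetical path: export the project as a jar first, then point the
        // job at it so remote tasks can load WMapper/WReducer. In local mode this
        // is not required -- it only silences the "No job jar file set" warning.
        job.setJar("D:\\workspace\\wordcount\\wordcount.jar");
        // ... the remaining setup is identical to MyWordCount.main() above ...
    }
}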
The console log output is as follows:
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
模式: local
输出路径存在,已删除!
INFO - Configuration.warnOnceIfDeprecated(840) | session.id is deprecated. Instead, use dfs.metrics.session-id
INFO - JvmMetrics.init(76) | Initializing JVM Metrics with processName=JobTracker, sessionId=
WARN - JobSubmitter.copyAndConfigureFiles(149) | Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
WARN - JobSubmitter.copyAndConfigureFiles(258) | No job jar file set. User classes may not be found. See Job or Job#setJar(String).
INFO - FileInputFormat.listStatus(287) | Total input paths to process : 1
INFO - JobSubmitter.submitJobInternal(394) | number of splits:1
INFO - Configuration.warnOnceIfDeprecated(840) | user.name is deprecated. Instead, use mapreduce.job.user.name
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.job.name is deprecated. Instead, use mapreduce.job.name
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
INFO - Configuration.warnOnceIfDeprecated(840) | mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
INFO - JobSubmitter.printTokens(477) | Submitting tokens for job: job_local1181216011_0001
WARN - Configuration.loadProperty(2172) | file:/root/hadoop/tmp/mapred/staging/qindongliang1181216011/.staging/job_local1181216011_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
WARN - Configuration.loadProperty(2172) | file:/root/hadoop/tmp/mapred/staging/qindongliang1181216011/.staging/job_local1181216011_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
WARN - Configuration.loadProperty(2172) | file:/root/hadoop/tmp/mapred/local/localRunner/qindongliang/job_local1181216011_0001/job_local1181216011_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
WARN - Configuration.loadProperty(2172) | file:/root/hadoop/tmp/mapred/local/localRunner/qindongliang/job_local1181216011_0001/job_local1181216011_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
INFO - Job.submit(1272) | The url to track the job: http://localhost:8080/
INFO - Job.monitorAndPrintJob(1317) | Running job: job_local1181216011_0001
INFO - LocalJobRunner$Job.createOutputCommitter(323) | OutputCommitter set in config null
INFO - LocalJobRunner$Job.createOutputCommitter(341) | OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
INFO - LocalJobRunner$Job.run(389) | Waiting for map tasks
INFO - LocalJobRunner$Job$MapTaskRunnable.run(216) | Starting task: attempt_local1181216011_0001_m_000000_0
INFO - ProcfsBasedProcessTree.isAvailable(129) | ProcfsBasedProcessTree currently is supported only on Linux.
INFO - Task.initialize(581) | Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@39550640
INFO - MapTask.runNewMapper(732) | Processing split: hdfs://192.168.46.28:9000/qin/input/test.txt:0+38
INFO - MapTask.createSortingCollector(387) | Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
INFO - MapTask$MapOutputBuffer.setEquator(1183) | (EQUATOR) 0 kvi 26214396(104857584)
INFO - MapTask$MapOutputBuffer.init(975) | mapreduce.task.io.sort.mb: 100
INFO - MapTask$MapOutputBuffer.init(976) | soft limit at 83886080
INFO - MapTask$MapOutputBuffer.init(977) | bufstart = 0; bufvoid = 104857600
INFO - MapTask$MapOutputBuffer.init(978) | kvstart = 26214396; length = 6553600
INFO - LocalJobRunner$Job.statusUpdate(513) |
INFO - MapTask$MapOutputBuffer.flush(1440) | Starting flush of map output
INFO - MapTask$MapOutputBuffer.flush(1459) | Spilling map output
INFO - MapTask$MapOutputBuffer.flush(1460) | bufstart = 0; bufend = 44; bufvoid = 104857600
INFO - MapTask$MapOutputBuffer.flush(1462) | kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
INFO - MapTask$MapOutputBuffer.sortAndSpill(1648) | Finished spill 0
INFO - Task.done(995) | Task:attempt_local1181216011_0001_m_000000_0 is done. And is in the process of committing
INFO - LocalJobRunner$Job.statusUpdate(513) | map
INFO - Task.sendDone(1115) | Task 'attempt_local1181216011_0001_m_000000_0' done.
INFO - LocalJobRunner$Job$MapTaskRunnable.run(241) | Finishing task: attempt_local1181216011_0001_m_000000_0
INFO - LocalJobRunner$Job.run(397) | Map task executor complete.
INFO - ProcfsBasedProcessTree.isAvailable(129) | ProcfsBasedProcessTree currently is supported only on Linux.
INFO - Task.initialize(581) | Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@68843e7b
INFO - Merger$MergeQueue.merge(568) | Merging 1 sorted segments
INFO - Merger$MergeQueue.merge(667) | Down to the last merge-pass, with 1 segments left of total size: 45 bytes
INFO - LocalJobRunner$Job.statusUpdate(513) |
INFO - Configuration.warnOnceIfDeprecated(840) | mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
INFO - Task.done(995) | Task:attempt_local1181216011_0001_r_000000_0 is done. And is in the process of committing
INFO - LocalJobRunner$Job.statusUpdate(513) |
INFO - Task.commit(1156) | Task attempt_local1181216011_0001_r_000000_0 is allowed to commit now
INFO - FileOutputCommitter.commitTask(439) | Saved output of task 'attempt_local1181216011_0001_r_000000_0' to hdfs://192.168.46.28:9000/qin/output/_temporary/0/task_local1181216011_0001_r_000000
INFO - LocalJobRunner$Job.statusUpdate(513) | reduce > reduce
INFO - Task.sendDone(1115) | Task 'attempt_local1181216011_0001_r_000000_0' done.
INFO - Job.monitorAndPrintJob(1338) | Job job_local1181216011_0001 running in uber mode : false
INFO - Job.monitorAndPrintJob(1345) | map 100% reduce 100%
INFO - Job.monitorAndPrintJob(1356) | Job job_local1181216011_0001 completed successfully
INFO - Job.monitorAndPrintJob(1363) | Counters: 32
    File System Counters
        FILE: Number of bytes read=372
        FILE: Number of bytes written=382174
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=76
        HDFS: Number of bytes written=27
        HDFS: Number of read operations=17
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=6
    Map-Reduce Framework
        Map input records=4
        Map output records=4
        Map output bytes=44
        Map output materialized bytes=58
        Input split bytes=109
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=0
        Reduce input records=4
        Reduce output records=3
        Spilled Records=8
        Shuffled Maps =0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=532938752
    File Input Format Counters
        Bytes Read=38
    File Output Format Counters
        Bytes Written=27
The test input data (one "word#count" pair per line):
中国#1
美国#2
英国#3
中国#2
And the output:
中国 3
美国 2
英国 3
At this point we have successfully debugged Hadoop remotely from inside Eclipse. While debugging, keep the problems described above in mind; if you run into one of them, simply apply the corresponding fix.
- Attachment: hadoop-common-2.2.0-bin-master.zip (272.6 KB)