After wading through a pile of not-so-relevant material...
it turns out you only need to unpack the tarball and run it, much as with hadoop-0.20.xx; the only catch is where the examples jar now lives:
hadoop jar share/hadoop/hadoop-mapreduce/hadoop-example-xx.jar wordcount input output
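For a fuller picture, here is a minimal end-to-end sketch of the same standalone run. It assumes a freshly downloaded 2.0.0-alpha tarball; the directory layout and the exact examples jar name vary by release, so treat the paths below as illustrative rather than authoritative.

# unpack the release and step inside it
tar xzf hadoop-2.0.0-alpha.tar.gz
cd hadoop-2.0.0-alpha

# standalone mode needs no daemons and no HDFS: input and output live on the local filesystem
mkdir input
cp etc/hadoop/*.xml input

# run the bundled wordcount example (same command as above; the jar name/version differs per release)
bin/hadoop jar share/hadoop/hadoop-mapreduce/hadoop-example-xx.jar wordcount input output

# the reducer writes its result as part-r-00000 under the output directory
cat output/part-r-00000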
Next comes a cluster deployment, to get a deeper grasp of how the new architecture's workflow actually operates.
12/06/17 11:59:01 WARN util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
12/06/17 11:59:01 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
12/06/17 11:59:01 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/06/17 11:59:01 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/06/17 11:59:01 INFO input.FileInputFormat: Total input paths to process : 1
12/06/17 11:59:01 WARN snappy.LoadSnappy: Snappy native library not loaded
12/06/17 11:59:02 INFO mapreduce.JobSubmitter: number of splits:1
12/06/17 11:59:02 WARN conf.Configuration: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
12/06/17 11:59:02 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
12/06/17 11:59:02 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
12/06/17 11:59:02 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
12/06/17 11:59:02 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
12/06/17 11:59:02 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
12/06/17 11:59:02 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
12/06/17 11:59:02 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
12/06/17 11:59:02 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
12/06/17 11:59:02 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
12/06/17 11:59:02 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
12/06/17 11:59:02 WARN conf.Configuration: file:/tmp/hadoop-hadoop/mapred/staging/hadoop2008898472/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
12/06/17 11:59:02 WARN conf.Configuration: file:/tmp/hadoop-hadoop/mapred/staging/hadoop2008898472/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
12/06/17 11:59:02 WARN conf.Configuration: file:/tmp/hadoop-hadoop/mapred/local/localRunner/job_local_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
12/06/17 11:59:02 WARN conf.Configuration: file:/tmp/hadoop-hadoop/mapred/local/localRunner/job_local_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
12/06/17 11:59:02 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
12/06/17 11:59:02 INFO mapreduce.Job: Running job: job_local_0001
12/06/17 11:59:02 INFO mapred.LocalJobRunner: OutputCommitter set in config null
12/06/17 11:59:02 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
12/06/17 11:59:02 INFO mapred.LocalJobRunner: Waiting for map tasks
12/06/17 11:59:02 INFO mapred.LocalJobRunner: Starting task: attempt_local_0001_m_000000_0
12/06/17 11:59:02 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@2bc3f5
12/06/17 11:59:02 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
12/06/17 11:59:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
12/06/17 11:59:02 INFO mapred.MapTask: soft limit at 83886080
12/06/17 11:59:02 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
12/06/17 11:59:02 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
12/06/17 11:59:02 INFO mapred.LocalJobRunner:
12/06/17 11:59:02 INFO mapred.MapTask: Starting flush of map output
12/06/17 11:59:02 INFO mapred.MapTask: Spilling map output
12/06/17 11:59:02 INFO mapred.MapTask: bufstart = 0; bufend = 2055; bufvoid = 104857600
12/06/17 11:59:02 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600
12/06/17 11:59:02 INFO mapred.MapTask: Finished spill 0
12/06/17 11:59:02 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of committing
12/06/17 11:59:02 INFO mapred.LocalJobRunner: map
12/06/17 11:59:02 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/06/17 11:59:02 INFO mapred.LocalJobRunner: Finishing task: attempt_local_0001_m_000000_0
12/06/17 11:59:02 INFO mapred.LocalJobRunner: Map task executor complete.
12/06/17 11:59:03 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@da3a1e
12/06/17 11:59:03 INFO mapred.Merger: Merging 1 sorted segments
12/06/17 11:59:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
12/06/17 11:59:03 INFO mapred.LocalJobRunner:
12/06/17 11:59:03 WARN conf.Configuration: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
12/06/17 11:59:03 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of committing
12/06/17 11:59:03 INFO mapred.LocalJobRunner:
12/06/17 11:59:03 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/06/17 11:59:03 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/cc/hadoop/standalone/hadoop-2.0.0-alpha/out/wc/_temporary/0/task_local_0001_r_000000
12/06/17 11:59:03 INFO mapred.LocalJobRunner: reduce > reduce
12/06/17 11:59:03 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
12/06/17 11:59:03 INFO mapreduce.Job: Job job_local_0001 running in uber mode : false
12/06/17 11:59:03 INFO mapreduce.Job: map 100% reduce 100%
12/06/17 11:59:03 INFO mapreduce.Job: Job job_local_0001 completed successfully
12/06/17 11:59:03 INFO mapreduce.Job: Counters: 27
File System Counters
    FILE: Number of bytes read=544748
    FILE: Number of bytes written=798104
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
Map-Reduce Framework
    Map input records=31
    Map output records=179
    Map output bytes=2055
    Map output materialized bytes=1836
    Input split bytes=121
    Combine input records=179
    Combine output records=131
    Reduce input groups=131
    Reduce shuffle bytes=0
    Reduce input records=131
    Reduce output records=131
    Spilled Records=262
    Shuffled Maps =0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=111
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=242360320
File Input Format Counters
    Bytes Read=1366
File Output Format Counters
    Bytes Written=1326
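A quick, optional sanity check (not part of the original output): on commit, the reducer's result is promoted to part-r-00000 under the job's output directory — out/wc in the run logged above, relative to the directory the job was launched from — and its line count should equal the Reduce output records counter, i.e. 131 distinct words here.

wc -l out/wc/part-r-00000   # expect 131, one line per distinct word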
ref:
Hadoop 0.23.x/NameNode federation: principles, compilation, installation, and hands-on experience