```
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# mkdir input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# cp conf/*.xml input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
Exception in thread "main" java.io.IOException: Error opening job jar: hadoop-*-examples.jar
        at org.apache.hadoop.util.RunJar.main(RunJar.java:90)
Caused by: java.util.zip.ZipException: error in opening zip file
        at java.util.zip.ZipFile.open(Native Method)
        at java.util.zip.ZipFile.<init>(ZipFile.java:131)
        at java.util.jar.JarFile.<init>(JarFile.java:150)
        at java.util.jar.JarFile.<init>(JarFile.java:87)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:88)
```
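A likely cause (my reading of the error, not something the docs state): in 0.20.203.0 the examples jar is named `hadoop-examples-0.20.203.0.jar`, so the pattern `hadoop-*-examples.jar` from the official instructions matches no file. The shell passes an unmatched glob through unchanged, and RunJar then tries to open a literal file called `hadoop-*-examples.jar` as a zip. The glob behavior can be demonstrated standalone:

```shell
# Demo in a throwaway directory: an unmatched glob is passed through
# literally by the shell, so RunJar receives the pattern itself as a
# filename and fails to open it.
demo=$(mktemp -d) && cd "$demo"
touch hadoop-examples-0.20.203.0.jar   # the name the jar actually ships under
echo hadoop-*-examples.jar             # matches nothing -> printed literally
echo hadoop-examples-*.jar             # matches -> expanded to the real name
```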
An `ls` shows the problem: the command given in the official docs has no version number in the jar name, but my local jar does. Adding it makes the job run:
```
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
11/05/22 11:26:37 INFO mapred.FileInputFormat: Total input paths to process : 6
11/05/22 11:26:38 INFO mapred.JobClient: Running job: job_local_0001
11/05/22 11:26:38 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:38 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:38 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:38 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:38 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:38 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/05/22 11:26:39 INFO mapred.JobClient:  map 0% reduce 0%
11/05/22 11:26:41 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/capacity-scheduler.xml:0+7457
11/05/22 11:26:41 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/05/22 11:26:41 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:41 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:41 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:41 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:41 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:41 INFO mapred.MapTask: Finished spill 0
11/05/22 11:26:41 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/05/22 11:26:42 INFO mapred.JobClient:  map 100% reduce 0%
11/05/22 11:26:44 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hadoop-policy.xml:0+4644
11/05/22 11:26:44 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/05/22 11:26:44 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:44 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:44 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:44 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:44 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:44 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/05/22 11:26:47 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-queue-acls.xml:0+2033
11/05/22 11:26:47 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/05/22 11:26:47 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:47 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:47 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:47 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:47 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:47 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/05/22 11:26:50 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hdfs-site.xml:0+178
11/05/22 11:26:50 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/05/22 11:26:50 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:50 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:50 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:50 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:50 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:50 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/05/22 11:26:53 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/core-site.xml:0+178
11/05/22 11:26:53 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/05/22 11:26:53 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:53 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:53 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:53 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:53 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:53 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-site.xml:0+178
11/05/22 11:26:56 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Merger: Merging 6 sorted segments
11/05/22 11:26:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/05/22 11:26:56 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449
11/05/22 11:26:59 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:26:59 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/05/22 11:27:00 INFO mapred.JobClient:  map 100% reduce 100%
11/05/22 11:27:00 INFO mapred.JobClient: Job complete: job_local_0001
11/05/22 11:27:00 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:00 INFO mapred.JobClient:   File Input Format Counters
11/05/22 11:27:00 INFO mapred.JobClient:     Bytes Read=14668
11/05/22 11:27:00 INFO mapred.JobClient:   File Output Format Counters
11/05/22 11:27:00 INFO mapred.JobClient:     Bytes Written=123
11/05/22 11:27:00 INFO mapred.JobClient:   FileSystemCounters
11/05/22 11:27:00 INFO mapred.JobClient:     FILE_BYTES_READ=1108835
11/05/22 11:27:00 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1232836
11/05/22 11:27:00 INFO mapred.JobClient:   Map-Reduce Framework
11/05/22 11:27:00 INFO mapred.JobClient:     Map output materialized bytes=55
11/05/22 11:27:00 INFO mapred.JobClient:     Map input records=357
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/05/22 11:27:00 INFO mapred.JobClient:     Spilled Records=2
11/05/22 11:27:00 INFO mapred.JobClient:     Map output bytes=17
11/05/22 11:27:00 INFO mapred.JobClient:     Map input bytes=14668
11/05/22 11:27:00 INFO mapred.JobClient:     SPLIT_RAW_BYTES=713
11/05/22 11:27:00 INFO mapred.JobClient:     Combine input records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce input records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce input groups=1
11/05/22 11:27:00 INFO mapred.JobClient:     Combine output records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce output records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Map output records=1
11/05/22 11:27:00 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/05/22 11:27:00 INFO mapred.FileInputFormat: Total input paths to process : 1
11/05/22 11:27:00 INFO mapred.JobClient: Running job: job_local_0002
11/05/22 11:27:00 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:27:00 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:27:00 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:27:00 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:27:00 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:27:00 INFO mapred.MapTask: Finished spill 0
11/05/22 11:27:00 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/05/22 11:27:01 INFO mapred.JobClient:  map 0% reduce 0%
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Merger: Merging 1 sorted segments
11/05/22 11:27:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/05/22 11:27:03 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/output
11/05/22 11:27:04 INFO mapred.JobClient:  map 100% reduce 0%
11/05/22 11:27:06 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:27:06 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/05/22 11:27:07 INFO mapred.JobClient:  map 100% reduce 100%
11/05/22 11:27:07 INFO mapred.JobClient: Job complete: job_local_0002
11/05/22 11:27:07 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:07 INFO mapred.JobClient:   File Input Format Counters
11/05/22 11:27:07 INFO mapred.JobClient:     Bytes Read=123
11/05/22 11:27:07 INFO mapred.JobClient:   File Output Format Counters
11/05/22 11:27:07 INFO mapred.JobClient:     Bytes Written=23
11/05/22 11:27:07 INFO mapred.JobClient:   FileSystemCounters
11/05/22 11:27:07 INFO mapred.JobClient:     FILE_BYTES_READ=607997
11/05/22 11:27:07 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=701437
11/05/22 11:27:07 INFO mapred.JobClient:   Map-Reduce Framework
11/05/22 11:27:07 INFO mapred.JobClient:     Map output materialized bytes=25
11/05/22 11:27:07 INFO mapred.JobClient:     Map input records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/05/22 11:27:07 INFO mapred.JobClient:     Spilled Records=2
11/05/22 11:27:07 INFO mapred.JobClient:     Map output bytes=17
11/05/22 11:27:07 INFO mapred.JobClient:     Map input bytes=25
11/05/22 11:27:07 INFO mapred.JobClient:     SPLIT_RAW_BYTES=127
11/05/22 11:27:07 INFO mapred.JobClient:     Combine input records=0
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce input records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce input groups=1
11/05/22 11:27:07 INFO mapred.JobClient:     Combine output records=0
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce output records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Map output records=1
```
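As a side note on what the example actually computes: the first job scans every line of the input files for matches of the regex `dfs[a-z.]+` and counts them, and the second job sorts the counts. The matching step can be illustrated with plain `grep -E` on a made-up stand-in file (an approximation for illustration, not Hadoop code):

```shell
# Apply the example's regex with ordinary grep; the file content below is
# a stand-in for the dfs.* property names found in conf/*.xml.
dir=$(mktemp -d)
printf '<name>dfs.replication</name>\n<name>mapred.map.tasks</name>\n' > "$dir/hdfs-site.xml"
grep -hEo 'dfs[a-z.]+' "$dir"/*.xml   # → dfs.replication
```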
Ubuntu 11 does not come with an SSH server (sshd) installed by default.
Install it with: sudo apt-get install openssh-server
Then confirm whether the SSH server has started: `ps -e | grep ssh`. If only ssh-agent shows up, the SSH server has not started yet and you need to run /etc/init.d/ssh start; if you see sshd, the SSH server is already running.
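That check can be made a bit more precise, since `grep ssh` also matches ssh-agent itself. A small helper (the function name and shape are my own sketch) that looks specifically for sshd in a process listing:

```shell
# Sketch: report whether sshd appears in a `ps -e`-style listing.
# grep -w matches "sshd" as a whole word, so "ssh-agent" does not count.
is_sshd_running() {
  grep -qw 'sshd'
}

if ps -e | is_sshd_running; then
  echo "sshd is running"
else
  echo "sshd not started - try /etc/init.d/ssh start"
fi
```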
The SSH server configuration file is /etc/ssh/sshd_config. This is where the SSH service port is defined; the default port is 22, and you can change it to another port number such as 222.
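The port change can also be scripted. The sketch below works on a throwaway copy rather than the real /etc/ssh/sshd_config; to apply it for real, run the same `sed` as root on the actual file and then restart the ssh service:

```shell
# Sketch: switch the Port directive in a copy of sshd_config.
cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$cfg"   # stand-in for /etc/ssh/sshd_config
sed -i 's/^Port 22$/Port 222/' "$cfg"
grep '^Port' "$cfg"   # → Port 222
```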
Then restart the SSH service: sudo /etc/init.d/ssh restart
```
root@tiger:/etc# apt-get install openssh-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  ssh-import-id
Suggested packages:
  rssh molly-guard openssh-blacklist openssh-blacklist-extra
The following NEW packages will be installed:
  openssh-server ssh-import-id
0 upgraded, 2 newly installed, 0 to remove and 109 not upgraded.
Need to get 317 kB of archives.
After this operation, 913 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://cn.archive.ubuntu.com/ubuntu/ natty/main openssh-server i386 1:5.8p1-1ubuntu3 [311 kB]
Get:2 http://cn.archive.ubuntu.com/ubuntu/ natty/main ssh-import-id all 2.4-0ubuntu1 [5,934 B]
Fetched 317 kB in 2s (144 kB/s)
Preconfiguring packages ...
Selecting previously deselected package openssh-server.
(Reading database ... 134010 files and directories currently installed.)
Unpacking openssh-server (from .../openssh-server_1%3a5.8p1-1ubuntu3_i386.deb) ...
Selecting previously deselected package ssh-import-id.
Unpacking ssh-import-id (from .../ssh-import-id_2.4-0ubuntu1_all.deb) ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw ...
Processing triggers for man-db ...
Setting up openssh-server (1:5.8p1-1ubuntu3) ...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
ssh start/running, process 2396
Setting up ssh-import-id (2.4-0ubuntu1) ...
root@tiger:/etc# sshd
sshd re-exec requires execution with an absolute path
```
```
root@tiger:/etc/init.d# ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 72:0f:15:ff:d4:14:63:ab:6c:6e:5f:57:4b:5c:cf:dd.
Are you sure you want to continue connecting (yes/no)?
```
This solved the "ssh: connect to host 127.0.0.1 port 22: Connection refused" problem.
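For future reference when this error comes back: "Connection refused" on port 22 usually means nothing is listening there at all. A quick probe can be done with bash's /dev/tcp pseudo-device (bash-specific, and the helper name is my own sketch):

```shell
# Sketch: test whether a TCP port accepts connections, via bash's /dev/tcp.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 22; then
  echo "something is listening on port 22"
else
  echo "nothing listening on port 22 - start sshd first"
fi
```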