The default HDFS configuration file (hdfs-default.xml) is listed here for easy reference.
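As the header comment in the listing notes, hdfs-default.xml should not be edited directly; instead, the entries you want to change are copied into hdfs-site.xml. A minimal sketch of such an override file (the two values shown are illustrative, not recommendations):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml: site-specific overrides of hdfs-default.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value> <!-- illustrative: lower the default of 3 -->
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value> <!-- illustrative: 128 MB instead of the 64 MB default -->
  </property>
</configuration>

Any key that is not overridden keeps the default shown in the listing below.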
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Do not modify this file directly. Instead, copy entries that you -->
<!-- wish to modify from this file into hdfs-site.xml and change them -->
<!-- there. If hdfs-site.xml does not already exist, create it. -->

<configuration>

<property>
  <name>dfs.namenode.logging.level</name>
  <value>info</value>
  <description>The logging level for dfs namenode. Other values are "dir" (trace
  namespace mutations), "block" (trace block under/over replications and block
  creations/deletions), or "all".</description>
</property>

<property>
  <name>dfs.secondary.http.address</name>
  <value>0.0.0.0:50090</value>
  <description>
    The secondary namenode http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
  <description>
    The address that the datanode server will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
  <description>
    The datanode http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50020</value>
  <description>
    The datanode ipc server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.handler.count</name>
  <value>3</value>
  <description>The number of server threads for the datanode.</description>
</property>

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.https.enable</name>
  <value>false</value>
  <description>Decides whether HTTPS (SSL) is supported on HDFS.</description>
</property>

<property>
  <name>dfs.https.need.client.auth</name>
  <value>false</value>
  <description>Whether SSL client certificate authentication is required.</description>
</property>

<property>
  <name>dfs.https.server.keystore.resource</name>
  <value>ssl-server.xml</value>
  <description>Resource file from which ssl server keystore
  information will be extracted.</description>
</property>

<property>
  <name>dfs.https.client.keystore.resource</name>
  <value>ssl-client.xml</value>
  <description>Resource file from which ssl client keystore
  information will be extracted.</description>
</property>

<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
</property>

<property>
  <name>dfs.https.address</name>
  <value>0.0.0.0:50470</value>
</property>

<property>
  <name>dfs.datanode.dns.interface</name>
  <value>default</value>
  <description>The name of the network interface from which a datanode should
  report its IP address.</description>
</property>

<property>
  <name>dfs.datanode.dns.nameserver</name>
  <value>default</value>
  <description>The host name or IP address of the name server (DNS)
  which a DataNode should use to determine the host name used by the
  NameNode for communication and display purposes.</description>
</property>

<property>
  <name>dfs.replication.considerLoad</name>
  <value>true</value>
  <description>Decides whether chooseTarget considers the target's load.</description>
</property>

<property>
  <name>dfs.default.chunk.view.size</name>
  <value>32768</value>
  <description>The number of bytes of a file to view in the browser.</description>
</property>

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much space free for non-dfs use.</description>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table (fsimage). If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>
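<!-- For illustration only (paths are hypothetical): a comma-delimited value such
     as /data/1/dfs/name,/data/2/dfs/name keeps a full copy of the fsimage on
     each of the two disks, so the namenode metadata survives the loss of either. -->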
<property>
  <name>dfs.name.edits.dir</name>
  <value>${dfs.name.dir}</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the transaction (edits) file. If this is a comma-delimited list
  of directories then the transaction file is replicated in all of the
  directories, for redundancy. The default value is the same as dfs.name.dir.</description>
</property>

<property>
  <name>dfs.web.ugi</name>
  <value>webuser,webgroup</value>
  <description>The user account used by the web interface.
  Syntax: USERNAME,GROUP1,GROUP2,...</description>
</property>

<property>
  <name>dfs.permissions</name>
  <value>true</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>

<property>
  <name>dfs.permissions.supergroup</name>
  <value>supergroup</value>
  <description>The name of the group of super-users.</description>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks. If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.</description>
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified at create time.</description>
</property>
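<!-- Note (see the Java sketch after this listing): dfs.replication is only a
     default; replication can be set per file at create time and changed later
     with FileSystem.setReplication(). -->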
<property>
  <name>dfs.replication.max</name>
  <value>512</value>
  <description>Maximal block replication.</description>
</property>

<property>
  <name>dfs.replication.min</name>
  <value>1</value>
  <description>Minimal block replication.</description>
</property>

<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
  <description>The default block size for new files.</description>
</property>
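<!-- 67108864 bytes = 64 * 1024 * 1024, i.e. the default block size is 64 MB. -->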
<property>
  <name>dfs.df.interval</name>
  <value>60000</value>
  <description>Disk usage statistics refresh interval in msec.</description>
</property>

<property>
  <name>dfs.client.block.write.retries</name>
  <value>3</value>
  <description>The number of retries for writing blocks to the data nodes
  before we signal failure to the application.</description>
</property>

<property>
  <name>dfs.blockreport.intervalMsec</name>
  <value>3600000</value>
  <description>Determines the block reporting interval in milliseconds.</description>
</property>

<property>
  <name>dfs.blockreport.initialDelay</name>
  <value>0</value>
  <description>Delay for the first block report in seconds.</description>
</property>

<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>
  <description>Determines the datanode heartbeat interval in seconds.</description>
</property>

<property>
  <name>dfs.namenode.handler.count</name>
  <value>10</value>
  <description>The number of server threads for the namenode.</description>
</property>

<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999f</value>
  <description>
    Specifies the percentage of blocks that should satisfy
    the minimal replication requirement defined by dfs.replication.min.
    Values less than or equal to 0 mean not to start in safe mode.
    Values greater than 1 will make safe mode permanent.
  </description>
</property>
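<!-- Worked example: with dfs.replication.min = 1 and the default 0.999f, the
     namenode stays in safe mode until datanodes have reported at least one
     replica for 99.9% of the blocks in the namespace, plus the
     dfs.safemode.extension grace period configured below. -->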
<property>
  <name>dfs.safemode.extension</name>
  <value>30000</value>
  <description>
    Determines the extension of safe mode in milliseconds
    after the threshold level is reached.
  </description>
</property>

<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>1048576</value>
  <description>
    Specifies the maximum amount of bandwidth that each datanode
    can utilize for balancing, in terms of
    the number of bytes per second.
  </description>
</property>

<property>
  <name>dfs.hosts</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  permitted to connect to the namenode. The full pathname of the file
  must be specified. If the value is empty, all hosts are
  permitted.</description>
</property>

<property>
  <name>dfs.hosts.exclude</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  not permitted to connect to the namenode. The full pathname of the
  file must be specified. If the value is empty, no hosts are
  excluded.</description>
</property>
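<!-- The include/exclude files are plain text with one hostname or IP address
     per line; see the decommissioning example after this listing. -->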
<property>
  <name>dfs.max.objects</name>
  <value>0</value>
  <description>The maximum number of files, directories and blocks
  dfs supports. A value of zero indicates no limit to the number
  of objects that dfs supports.</description>
</property>

<property>
  <name>dfs.namenode.decommission.interval</name>
  <value>30</value>
  <description>Namenode periodicity in seconds to check if decommission is
  complete.</description>
</property>

<property>
  <name>dfs.namenode.decommission.nodes.per.interval</name>
  <value>5</value>
  <description>The number of nodes the namenode checks for completed
  decommission in each dfs.namenode.decommission.interval.</description>
</property>

<property>
  <name>dfs.replication.interval</name>
  <value>3</value>
  <description>The periodicity in seconds with which the namenode computes
  replication work for datanodes.</description>
</property>

<property>
  <name>dfs.access.time.precision</name>
  <value>3600000</value>
  <description>The access time for an HDFS file is precise up to this value.
  The default value is 1 hour. Setting a value of 0 disables
  access times for HDFS.</description>
</property>

<property>
  <name>dfs.support.append</name>
  <value>false</value>
  <description>Does HDFS allow appends to files?
  This is currently set to false because there are bugs in the
  "append code" and it is not supported in any production cluster.</description>
</property>

</configuration>
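To check which values actually take effect once hdfs-site.xml overrides are applied, the settings can be read back through Hadoop's Configuration API. This is a minimal sketch, assuming a 0.20-era setup with the Hadoop jars and the conf directory on the classpath; the class name is arbitrary, and the fallback defaults passed to the getters mirror the listing above:

import org.apache.hadoop.conf.Configuration;

public class ShowHdfsConf {
    public static void main(String[] args) {
        // new Configuration() loads core-default.xml and core-site.xml on its own;
        // the HDFS resources are added explicitly here.
        Configuration conf = new Configuration();
        conf.addResource("hdfs-default.xml"); // normally added by the HDFS daemons
        conf.addResource("hdfs-site.xml");    // site overrides, if present on the classpath
        // Typed getters fall back to the supplied default when the key is absent.
        System.out.println("dfs.replication = " + conf.getInt("dfs.replication", 3));
        System.out.println("dfs.block.size  = " + conf.getLong("dfs.block.size", 67108864L));
        System.out.println("dfs.permissions = " + conf.getBoolean("dfs.permissions", true));
        System.out.println("dfs.name.dir    = " + conf.get("dfs.name.dir"));
    }
}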
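As the dfs.replication description notes, replication is only a default and can be set per file. A sketch of doing that through the FileSystem API (the path and values are illustrative, and fs.default.name is assumed to be configured in core-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerFileReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);   // resolves the namenode from fs.default.name
        Path p = new Path("/tmp/demo.txt");     // illustrative path
        // Overload: create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out = fs.create(p, true, 4096, (short) 2, 128L * 1024 * 1024);
        out.writeUTF("hello hdfs");
        out.close();
        // Replication of an existing file can also be changed after the fact:
        fs.setReplication(p, (short) 3);
        fs.close();
    }
}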
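Finally, the files named by dfs.hosts and dfs.hosts.exclude are plain text with one hostname or IP address per line. A hypothetical exclude file for decommissioning two datanodes (the hostnames are made up):

datanode03.example.com
datanode07.example.com

After editing the file, running hadoop dfsadmin -refreshNodes makes the namenode re-read it; the listed nodes are then decommissioned once their blocks have been re-replicated elsewhere, checked every dfs.namenode.decommission.interval seconds as configured above.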