For easy reference later, the properties and descriptions from Hadoop's core-default.xml are listed here. These are the stock defaults: as the header comment in the file notes, they are not edited in place — entries you want to change are copied into core-site.xml and modified there.
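As a quick orientation before the full listing, a minimal core-site.xml override for a single-node setup might look like the sketch below. The property names all appear in the default file; the hdfs://localhost:9000 address, the /data/hadoop/tmp directory, and the 60-minute trash interval are illustrative assumptions, not values taken from the defaults.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Minimal core-site.xml sketch: only the properties being overridden. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Assumed pseudo-distributed NameNode address; adjust to your cluster. -->
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- Assumed local path; several defaults below derive from this value. -->
    <value>/data/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <!-- Enable trash with 60-minute checkpoints (the default of 0 disables it). -->
    <value>60</value>
  </property>
</configuration>

Any property not overridden in core-site.xml keeps the default value shown in the listing that follows.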
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Do not modify this file directly. Instead, copy entries that you -->
<!-- wish to modify from this file into core-site.xml and change them -->
<!-- there. If core-site.xml does not already exist, create it. -->

<configuration>

<!-- global properties -->

<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>

<property>
  <name>hadoop.http.filter.initializers</name>
  <value></value>
  <description>A comma separated list of class names. Each class in the list
  must extend org.apache.hadoop.http.FilterInitializer. The corresponding
  Filter will be initialized. Then, the Filter will be applied to all user
  facing jsp and servlet web pages. The ordering of the list defines the
  ordering of the filters.</description>
</property>

<property>
  <name>hadoop.security.authorization</name>
  <value>false</value>
  <description>Is service-level authorization enabled?</description>
</property>

<!-- logging properties -->

<property>
  <name>hadoop.logfile.size</name>
  <value>10000000</value>
  <description>The max size of each log file</description>
</property>

<property>
  <name>hadoop.logfile.count</name>
  <value>10</value>
  <description>The max number of log files</description>
</property>

<!-- i/o properties -->

<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
  <description>The size of buffer for use in sequence files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum. Must not be larger than
  io.file.buffer.size.</description>
</property>

<property>
  <name>io.skip.checksum.errors</name>
  <value>false</value>
  <description>If true, when a checksum error is encountered while
  reading a sequence file, entries are skipped, instead of throwing an
  exception.</description>
</property>

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  <description>A list of the compression codec classes that can be used
  for compression/decompression.</description>
</property>

<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization</value>
  <description>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</description>
</property>

<!-- file system properties -->

<property>
  <name>fs.default.name</name>
  <value>file:///</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

<property>
  <name>fs.trash.interval</name>
  <value>0</value>
  <description>Number of minutes between trash checkpoints.
  If zero, the trash feature is disabled.
  </description>
</property>

<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.LocalFileSystem</value>
  <description>The FileSystem for file: uris.</description>
</property>

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>

<property>
  <name>fs.s3.impl</name>
  <value>org.apache.hadoop.fs.s3.S3FileSystem</value>
  <description>The FileSystem for s3: uris.</description>
</property>

<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  <description>The FileSystem for s3n: (Native S3) uris.</description>
</property>

<property>
  <name>fs.kfs.impl</name>
  <value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value>
  <description>The FileSystem for kfs: uris.</description>
</property>

<property>
  <name>fs.hftp.impl</name>
  <value>org.apache.hadoop.hdfs.HftpFileSystem</value>
</property>

<property>
  <name>fs.hsftp.impl</name>
  <value>org.apache.hadoop.hdfs.HsftpFileSystem</value>
</property>

<property>
  <name>fs.ftp.impl</name>
  <value>org.apache.hadoop.fs.ftp.FTPFileSystem</value>
  <description>The FileSystem for ftp: uris.</description>
</property>

<property>
  <name>fs.ramfs.impl</name>
  <value>org.apache.hadoop.fs.InMemoryFileSystem</value>
  <description>The FileSystem for ramfs: uris.</description>
</property>

<property>
  <name>fs.har.impl</name>
  <value>org.apache.hadoop.fs.HarFileSystem</value>
  <description>The filesystem for Hadoop archives.</description>
</property>

<property>
  <name>fs.har.impl.disable.cache</name>
  <value>true</value>
  <description>Don't cache 'har' filesystem instances.</description>
</property>

<property>
  <name>fs.checkpoint.dir</name>
  <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
  <description>Determines where on the local filesystem the DFS secondary
  name node should store the temporary images to merge.
  If this is a comma-delimited list of directories then the image is
  replicated in all of the directories for redundancy.
  </description>
</property>

<property>
  <name>fs.checkpoint.edits.dir</name>
  <value>${fs.checkpoint.dir}</value>
  <description>Determines where on the local filesystem the DFS secondary
  name node should store the temporary edits to merge.
  If this is a comma-delimited list of directories then the edits are
  replicated in all of the directories for redundancy.
  Default value is same as fs.checkpoint.dir
  </description>
</property>

<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value>
  <description>The number of seconds between two periodic checkpoints.
  </description>
</property>

<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>
  <description>The size of the current edit log (in bytes) that triggers
  a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
  </description>
</property>

<property>
  <name>fs.s3.block.size</name>
  <value>67108864</value>
  <description>Block size to use when writing files to S3.</description>
</property>

<property>
  <name>fs.s3.buffer.dir</name>
  <value>${hadoop.tmp.dir}/s3</value>
  <description>Determines where on the local filesystem the S3 filesystem
  should store files before sending them to S3
  (or after retrieving them from S3).
  </description>
</property>

<property>
  <name>fs.s3.maxRetries</name>
  <value>4</value>
  <description>The maximum number of retries for reading or writing files to S3,
  before we signal failure to the application.
  </description>
</property>

<property>
  <name>fs.s3.sleepTimeSeconds</name>
  <value>10</value>
  <description>The number of seconds to sleep between each S3 retry.
  </description>
</property>

<property>
  <name>local.cache.size</name>
  <value>10737418240</value>
  <description>The limit on the size of cache you want to keep, set by default
  to 10GB. This will act as a soft limit on the cache directory for out of band data.
  </description>
</property>

<property>
  <name>io.seqfile.compress.blocksize</name>
  <value>1000000</value>
  <description>The minimum block size for compression in block compressed
  SequenceFiles.
  </description>
</property>

<property>
  <name>io.seqfile.lazydecompress</name>
  <value>true</value>
  <description>Should values of block-compressed SequenceFiles be decompressed
  only when necessary.
  </description>
</property>

<property>
  <name>io.seqfile.sorter.recordlimit</name>
  <value>1000000</value>
  <description>The limit on number of records to be kept in memory in a spill
  in SequenceFiles.Sorter
  </description>
</property>

<property>
  <name>io.mapfile.bloom.size</name>
  <value>1048576</value>
  <description>The size of BloomFilter-s used in BloomMapFile. Each time this many
  keys is appended the next BloomFilter will be created (inside a DynamicBloomFilter).
  Larger values minimize the number of filters, which slightly increases the performance,
  but may waste too much space if the total number of keys is usually much smaller
  than this number.
  </description>
</property>

<property>
  <name>io.mapfile.bloom.error.rate</name>
  <value>0.005</value>
  <description>The rate of false positives in BloomFilter-s used in BloomMapFile.
  As this value decreases, the size of BloomFilter-s increases exponentially. This
  value is the probability of encountering false positives (default is 0.5%).
  </description>
</property>

<property>
  <name>hadoop.util.hash.type</name>
  <value>murmur</value>
  <description>The default implementation of Hash. Currently this can take one of the
  two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
  </description>
</property>

<!-- ipc properties -->

<property>
  <name>ipc.client.idlethreshold</name>
  <value>4000</value>
  <description>Defines the threshold number of connections after which
  connections will be inspected for idleness.
  </description>
</property>

<property>
  <name>ipc.client.kill.max</name>
  <value>10</value>
  <description>Defines the maximum number of clients to disconnect in one go.
  </description>
</property>

<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>10000</value>
  <description>The maximum time in msec after which a client will bring down the
  connection to the server.
  </description>
</property>

<property>
  <name>ipc.client.connect.max.retries</name>
  <value>10</value>
  <description>Indicates the number of retries a client will make to establish
  a server connection.
  </description>
</property>

<property>
  <name>ipc.server.listen.queue.size</name>
  <value>128</value>
  <description>Indicates the length of the listen queue for servers accepting
  client connections.
  </description>
</property>

<property>
  <name>ipc.server.tcpnodelay</name>
  <value>false</value>
  <description>Turn on/off Nagle's algorithm for the TCP socket connection on
  the server. Setting to true disables the algorithm and may decrease latency
  with a cost of more/smaller packets.
  </description>
</property>

<property>
  <name>ipc.client.tcpnodelay</name>
  <value>false</value>
  <description>Turn on/off Nagle's algorithm for the TCP socket connection on
  the client. Setting to true disables the algorithm and may decrease latency
  with a cost of more/smaller packets.
  </description>
</property>

<!-- Web Interface Configuration -->

<property>
  <name>webinterface.private.actions</name>
  <value>false</value>
  <description>If set to true, the web interfaces of JT and NN may contain
  actions, such as kill job, delete file, etc., that should
  not be exposed to public. Enable this option if the interfaces
  are only reachable by those who have the right authorization.
  </description>
</property>

<!-- Proxy Configuration -->

<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <description>Default SocketFactory to use. This parameter is expected to be
  formatted as "package.FactoryClassName".
  </description>
</property>

<property>
  <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
  <value></value>
  <description>SocketFactory to use to connect to a DFS. If null or empty, use
  hadoop.rpc.socket.class.default. This socket factory is also used by
  DFSClient to create sockets to DataNodes.
  </description>
</property>

<property>
  <name>hadoop.socks.server</name>
  <value></value>
  <description>Address (host:port) of the SOCKS server to be used by the
  SocksSocketFactory.
  </description>
</property>

<!-- Rack Configuration -->

<property>
  <name>topology.node.switch.mapping.impl</name>
  <value>org.apache.hadoop.net.ScriptBasedMapping</value>
  <description>The default implementation of the DNSToSwitchMapping. It
  invokes a script specified in topology.script.file.name to resolve
  node names. If the value for topology.script.file.name is not set, the
  default value of DEFAULT_RACK is returned for all node names.
  </description>
</property>

<property>
  <name>topology.script.file.name</name>
  <value></value>
  <description>The script name that should be invoked to resolve DNS names to
  NetworkTopology names. Example: the script would take host.foo.bar as an
  argument, and return /rack1 as the output.
  </description>
</property>

<property>
  <name>topology.script.number.args</name>
  <value>100</value>
  <description>The max number of args that the script configured with
  topology.script.file.name should be run with. Each arg is an
  IP address.
  </description>
</property>

</configuration>
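As one more worked example of overriding these defaults, the rack-awareness settings at the end of the listing are enabled by pointing topology.script.file.name at a script. Per the descriptions above, the default ScriptBasedMapping passes up to topology.script.number.args host names or IP addresses per invocation and expects one rack path such as /rack1 back for each. A hedged core-site.xml fragment follows; these <property> entries would sit inside the same <configuration> element as any other overrides, and the /etc/hadoop/topology.sh path is an assumed location, not something defined by Hadoop.

  <property>
    <name>topology.node.switch.mapping.impl</name>
    <!-- Same as the default value; shown only for completeness. -->
    <value>org.apache.hadoop.net.ScriptBasedMapping</value>
  </property>
  <property>
    <name>topology.script.file.name</name>
    <!-- Assumed path to a site-specific script mapping hosts/IPs to /rack names. -->
    <value>/etc/hadoop/topology.sh</value>
  </property>
  <property>
    <name>topology.script.number.args</name>
    <!-- Resolve one address per invocation; the default batches up to 100. -->
    <value>1</value>
  </property>

If topology.script.file.name is left empty, DEFAULT_RACK is returned for every node, which effectively places the whole cluster on a single rack.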
For more information, see java进阶网: http://www.javady.com