Hadoop mapred-default.xml configuration file

 

Each entry below gives the property name, its default value, and a description; any of these defaults can be overridden by setting the same property in mapred-site.xml.

name value description
hadoop.job.history.location   If the job tracker is static, the history files are stored in this single well known place. If no value is set here, by default it is in the local file system at ${hadoop.log.dir}/history.
hadoop.job.history.user.location   User can specify a location to store the history files of a particular job. If nothing is specified, the logs are stored in the output directory, in "_logs/history/". The user can stop logging by giving the value "none".
mapred.job.tracker.history.completed.location   The completed job history files are stored at this single well known location. If nothing is specified, the files are stored at ${hadoop.job.history.location}/done.
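
As an illustration, the history locations above can be overridden in mapred-site.xml; the paths below are purely illustrative, not recommended values:

  <configuration>
    <!-- illustrative: keep JobTracker history on the local file system -->
    <property>
      <name>hadoop.job.history.location</name>
      <value>file:///var/log/hadoop/history</value>
    </property>
    <!-- illustrative: completed-job history location -->
    <property>
      <name>mapred.job.tracker.history.completed.location</name>
      <value>/mapred/history/done</value>
    </property>
  </configuration>
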
io.sort.factor 10 The number of streams to merge at once while sorting files. This determines the number of open file handles.
io.sort.mb 100 The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1MB, which should minimize seeks.
io.sort.record.percent 0.05 The percentage of io.sort.mb dedicated to tracking record boundaries. Let this value be r, io.sort.mb be x. The maximum number of records collected before the collection thread must block is equal to (r * x) / 4
io.sort.spill.percent 0.80 The soft limit in either the buffer or record collection buffers. Once reached, a thread will begin to spill the contents to disk in the background. Note that this does not imply any chunking of data to the spill. A value less than 0.5 is not recommended.
io.map.index.skip 0 Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large map files using less memory.
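
To see the record-boundary formula in action: with the defaults r = 0.05 and x = 100 MB, the collector can track (0.05 × 100 MB) / 4 bytes = 1,310,720 records before the collection thread must block. A minimal mapred-site.xml sketch that enlarges the sort buffer for record-heavy jobs (values are illustrative, not recommendations):

  <configuration>
    <property>
      <name>io.sort.mb</name>
      <value>200</value> <!-- illustrative: 200 MB sort buffer -->
    </property>
    <property>
      <name>io.sort.record.percent</name>
      <value>0.10</value> <!-- illustrative: (0.10 * 200 MB) / 4 = 5,242,880 records -->
    </property>
  </configuration>
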
mapred.job.tracker local The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.
mapred.job.tracker.http.address 0.0.0.0:50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.
mapred.job.tracker.handler.count 10 The number of server threads for the JobTracker. This should be roughly 4% of the number of tasktracker nodes.
mapred.task.tracker.report.address 127.0.0.1:0 The interface and port that task tracker server listens on. Since it is only connected to by the tasks, it uses the local interface. EXPERT ONLY. Should only be changed if your host does not have the loopback interface.
mapred.local.dir ${hadoop.tmp.dir}/mapred/local The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored.
mapred.system.dir ${hadoop.tmp.dir}/mapred/system The directory where MapReduce stores control files.
mapreduce.jobtracker.staging.root.dir ${hadoop.tmp.dir}/mapred/staging The root of the staging area for users' job files. In practice, this should be the directory where users' home directories are located (usually /user).
mapred.temp.dir ${hadoop.tmp.dir}/mapred/temp A shared directory for temporary files.
mapred.local.dir.minspacestart 0 If the space in mapred.local.dir drops under this, do not ask for more tasks. Value in bytes.
mapred.local.dir.minspacekill 0 If the space in mapred.local.dir drops under this, do not ask for more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the tasks we have running, kill one of them, to clean up some space. Start with the reduce tasks, then go with the ones that have finished the least. Value in bytes.
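
A sketch of spreading intermediate data across several disks via mapred.local.dir, as the description above suggests (the mount points are illustrative assumptions):

  <configuration>
    <property>
      <name>mapred.local.dir</name>
      <!-- illustrative mount points; directories that do not exist are ignored -->
      <value>/data1/mapred/local,/data2/mapred/local,/data3/mapred/local</value>
    </property>
  </configuration>
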
mapred.tasktracker.expiry.interval 600000 Expert: The time-interval, in milliseconds, after which a tasktracker is declared 'lost' if it doesn't send heartbeats.
mapred.tasktracker.resourcecalculatorplugin   Name of the class whose instance will be used to query resource information on the tasktracker. The class must be an instance of org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the tasktracker attempts to use a class appropriate to the platform. Currently, the only platform supported is Linux.
mapred.tasktracker.taskmemorymanager.monitoring-interval 5000 The interval, in milliseconds, for which the tasktracker waits between two cycles of monitoring its tasks' memory usage. Used only if tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory.
mapred.tasktracker.tasks.sleeptime-before-sigkill 5000 The time, in milliseconds, the tasktracker waits before sending a SIGKILL to a process, after it has been sent a SIGTERM.
mapred.map.tasks 2 The default number of map tasks per job. Ignored when mapred.job.tracker is "local".
mapred.reduce.tasks 1 The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapred.job.tracker is "local".
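
Working through the reduce-count guidance above: a hypothetical cluster of 10 tasktrackers with 2 reduce slots each has 20 reduce slots, so roughly 0.99 × 20 ≈ 19 reduces lets the job finish in a single wave even if a node fails. A sketch with these illustrative numbers:

  <configuration>
    <property>
      <name>mapred.map.tasks</name>
      <value>40</value> <!-- illustrative default map count per job -->
    </property>
    <property>
      <name>mapred.reduce.tasks</name>
      <value>19</value> <!-- ~99% of the 20 reduce slots in this hypothetical cluster -->
    </property>
  </configuration>
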
mapreduce.tasktracker.outofband.heartbeat false Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task-completion for better latency.
mapred.jobtracker.restart.recover false "true" to enable (job) recovery upon restart, "false" to start afresh.
mapred.jobtracker.job.history.block.size 3145728 The block size of the job history file. Since the job recovery uses job history, it's important to dump job history to disk as soon as possible. Note that this is an expert level parameter. The default value is set to 3 MB.
mapreduce.job.split.metainfo.maxsize 10000000 The maximum permissible size of the split metainfo file. The JobTracker won't attempt to read split metainfo files bigger than the configured value. No limits if set to -1.
mapred.jobtracker.taskScheduler org.apache.hadoop.mapred.JobQueueTaskScheduler The class responsible for scheduling the tasks.
mapred.jobtracker.taskScheduler.maxRunningTasksPerJob   The maximum number of running tasks for a job before it gets preempted. No limits if undefined.
mapred.map.max.attempts 4 Expert: The maximum number of attempts per map task. In other words, the framework will try to execute a map task this many times before giving up on it.
mapred.reduce.max.attempts 4 Expert: The maximum number of attempts per reduce task. In other words, the framework will try to execute a reduce task this many times before giving up on it.
mapred.reduce.parallel.copies 5 The default number of parallel transfers run by reduce during the copy(shuffle) phase.
mapreduce.reduce.shuffle.maxfetchfailures 10 The maximum number of times a reducer tries to fetch a map output before it reports the failure.
mapreduce.reduce.shuffle.connect.timeout 180000 Expert: The maximum amount of time (in milliseconds) a reduce task spends in trying to connect to a tasktracker for getting map output.
mapreduce.reduce.shuffle.read.timeout 180000 Expert: The maximum amount of time (in milliseconds) a reduce task waits for map output data to be available for reading after obtaining connection.
mapred.task.timeout 600000 The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string.
mapred.tasktracker.map.tasks.maximum 2 The maximum number of map tasks that will be run simultaneously by a task tracker.
mapred.tasktracker.reduce.tasks.maximum 2 The maximum number of reduce tasks that will be run simultaneously by a task tracker.
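
A per-node slot sketch for a hypothetical 8-core tasktracker; the 6/2 split between map and reduce slots is an assumption for illustration, not a rule:

  <configuration>
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>6</value> <!-- illustrative: 6 concurrent map tasks per node -->
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value> <!-- illustrative: 2 concurrent reduce tasks per node -->
    </property>
  </configuration>
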
mapred.jobtracker.completeuserjobs.maximum 100 The maximum number of complete jobs per user to keep around before delegating them to the job history.
mapreduce.reduce.input.limit -1 The limit on the input size of the reduce. If the estimated input size of the reduce is greater than this value, the job is failed. A value of -1 means that there is no limit set.
mapred.job.tracker.retiredjobs.cache.size 1000 The number of retired job statuses to keep in the cache.
mapred.job.tracker.jobhistory.lru.cache.size 5 The number of job history files loaded in memory. The jobs are loaded when they are first accessed. The cache is cleared based on LRU.
mapred.child.java.opts -Xmx200m Java opts for the task tracker child processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc The configuration variable mapred.child.ulimit can be used to control the maximum virtual memory of the child processes.
mapred.child.env   User added environment variables for the task tracker child processes. Example: 1) A=foo will set the env variable A to foo. 2) B=$B:c will inherit the tasktracker's B env variable.
mapred.child.ulimit   The maximum virtual memory, in KB, of a process launched by the Map-Reduce framework. This can be used to control both the Mapper/Reducer tasks and applications using Hadoop Pipes, Hadoop Streaming etc. By default it is left unspecified to let cluster admins control it via limits.conf and other such relevant mechanisms. Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to JavaVM, else the VM might not start.
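
The gc-logging example given under mapred.child.java.opts, combined with a matching ulimit; as the note above requires, the ulimit (in KB) must be at least the -Xmx heap size (sizes are illustrative):

  <configuration>
    <property>
      <name>mapred.child.java.opts</name>
      <!-- @taskid@ is interpolated with the current TaskID -->
      <value>-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc</value>
    </property>
    <property>
      <name>mapred.child.ulimit</name>
      <value>2097152</value> <!-- illustrative: 2 GB in KB, comfortably above the 1 GB -Xmx -->
    </property>
  </configuration>
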
mapred.cluster.map.memory.mb -1 The size, in terms of virtual memory, of a single map slot in the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single map task via mapred.job.map.memory.mb, up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off.
mapred.cluster.reduce.memory.mb -1 The size, in terms of virtual memory, of a single reduce slot in the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single reduce task via mapred.job.reduce.memory.mb, up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off.
mapred.cluster.max.map.memory.mb -1 The maximum size, in terms of virtual memory, of a single map task launched by the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single map task via mapred.job.map.memory.mb, up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off.
mapred.cluster.max.reduce.memory.mb -1 The maximum size, in terms of virtual memory, of a single reduce task launched by the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single reduce task via mapred.job.reduce.memory.mb, up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off.
mapred.job.map.memory.mb -1 The size, in terms of virtual memory, of a single map task for the job. A job can ask for multiple slots for a single map task, rounded up to the next multiple of mapred.cluster.map.memory.mb and up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off iff mapred.cluster.map.memory.mb is also turned off (-1).
mapred.job.reduce.memory.mb -1 The size, in terms of virtual memory, of a single reduce task for the job. A job can ask for multiple slots for a single reduce task, rounded up to the next multiple of mapred.cluster.reduce.memory.mb and up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off iff mapred.cluster.reduce.memory.mb is also turned off (-1).
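
Putting the slot arithmetic above together: if a cluster map slot is 1024 MB and a job asks for 1536 MB per map task, the request is rounded up to 2048 MB, i.e. two slots, provided it stays under mapred.cluster.max.map.memory.mb. An illustrative sketch (all values are assumptions; the cluster-side properties belong in the cluster's mapred-site.xml, while the job value can be set per job):

  <configuration>
    <property>
      <name>mapred.cluster.map.memory.mb</name>
      <value>1024</value> <!-- illustrative: one map slot = 1 GB virtual memory -->
    </property>
    <property>
      <name>mapred.cluster.max.map.memory.mb</name>
      <value>4096</value> <!-- illustrative: at most 4 slots per map task -->
    </property>
    <property>
      <name>mapred.job.map.memory.mb</name>
      <value>1536</value> <!-- illustrative: rounded up to 2048 MB = 2 slots -->
    </property>
  </configuration>
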
mapred.child.tmp ./tmp To set the value of the tmp directory for map and reduce tasks. If the value is an absolute path, it is directly assigned. Otherwise, it is prepended with the task's working directory. The java tasks are executed with the option -Djava.io.tmpdir='the absolute path of the tmp dir'. Pipes and streaming are set with the environment variable TMPDIR='the absolute path of the tmp dir'.
mapred.inmem.merge.threshold 1000 The threshold, in terms of the number of files, for the in-memory merge process. When we accumulate the threshold number of files we initiate the in-memory merge and spill to disk. A value of 0 or less indicates that there is no threshold, and the merge is instead triggered solely by the ramfs's memory consumption.
mapred.job.shuffle.merge.percent 0.66 The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapred.job.shuffle.input.buffer.percent.
mapred.job.shuffle.input.buffer.percent 0.70 The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle.
mapred.job.reduce.input.buffer.percent 0.0 The percentage of memory- relative to the maximum heap size- to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin.
mapred.map.tasks.speculative.execution true If true, then multiple instances of some map tasks may be executed in parallel.
mapred.reduce.tasks.speculative.execution true If true, then multiple instances of some reduce tasks may be executed in parallel.
mapred.job.reuse.jvm.num.tasks 1 How many tasks to run per jvm. If set to -1, there is no limit.
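
A sketch that trades the defaults the other way: turning speculative execution off (e.g. for tasks with side effects) and reusing JVMs for jobs with many short tasks. The choices are illustrative, not recommendations:

  <configuration>
    <property>
      <name>mapred.map.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    <property>
      <name>mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    <property>
      <name>mapred.job.reuse.jvm.num.tasks</name>
      <value>-1</value> <!-- no limit: each JVM keeps running tasks of the job -->
    </property>
  </configuration>
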
mapred.min.split.size 0 The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting.
mapred.jobtracker.maxtasks.per.job -1 The maximum number of tasks for a single job. A value of -1 indicates that there is no maximum.
mapred.submit.replication 10 The replication level for submitted job files. This should be around the square root of the number of nodes.
mapred.tasktracker.dns.interface default The name of the Network Interface from which a task tracker should report its IP address.
mapred.tasktracker.dns.nameserver default The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes.
tasktracker.http.threads 40 The number of worker threads for the http server. This is used for map output fetching.
mapred.task.tracker.http.address 0.0.0.0:50060 The task tracker http server address and port. If the port is 0 then the server will start on a free port.
keep.failed.task.files false Should the files for failed tasks be kept? This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed.
mapred.output.compress false Should the job outputs be compressed?
mapred.output.compression.type RECORD If the job outputs are to be compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK.
mapred.output.compression.codec org.apache.hadoop.io.compress.DefaultCodec If the job outputs are compressed, how should they be compressed?
mapred.compress.map.output false Should the outputs of the maps be compressed before being sent across the network? Uses SequenceFile compression.
mapred.map.output.compression.codec org.apache.hadoop.io.compress.DefaultCodec If the map outputs are compressed, how should they be compressed?
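
A common combination of the compression switches above: compress intermediate map output to cut shuffle traffic, and emit block-compressed job output. The codec here is the documented default; choosing BLOCK over RECORD is an illustrative choice:

  <configuration>
    <property>
      <name>mapred.compress.map.output</name>
      <value>true</value>
    </property>
    <property>
      <name>mapred.map.output.compression.codec</name>
      <value>org.apache.hadoop.io.compress.DefaultCodec</value>
    </property>
    <property>
      <name>mapred.output.compress</name>
      <value>true</value>
    </property>
    <property>
      <name>mapred.output.compression.type</name>
      <value>BLOCK</value> <!-- one of NONE, RECORD or BLOCK -->
    </property>
  </configuration>
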
map.sort.class org.apache.hadoop.util.QuickSort The default sort class for sorting keys.
mapred.userlog.limit.kb 0 The maximum size of user-logs of each task in KB. 0 disables the cap.
mapred.userlog.retain.hours 24 The maximum time, in hours, for which the user-logs are to be retained after the job completion.
mapred.user.jobconf.limit 5242880 The maximum allowed size of the user jobconf. The default is set to 5 MB
mapred.hosts   Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted.
mapred.hosts.exclude   Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded.
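
A sketch wiring the include/exclude lists to files (the paths are illustrative assumptions; each file lists hostnames, conventionally one per line):

  <configuration>
    <property>
      <name>mapred.hosts</name>
      <value>/etc/hadoop/mapred.include</value> <!-- illustrative path -->
    </property>
    <property>
      <name>mapred.hosts.exclude</name>
      <value>/etc/hadoop/mapred.exclude</value> <!-- illustrative path -->
    </property>
  </configuration>
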
mapred.heartbeats.in.second 100 Expert: Approximate number of heartbeats that could arrive at the JobTracker in a second. Assuming each RPC can be processed in 10 msec, the default value is set to 100 RPCs in a second.
mapred.max.tracker.blacklists 4 The number of blacklistings of a tasktracker by various jobs after which the tasktracker will be marked as potentially faulty and become a candidate for graylisting across all jobs. (Unlike blacklisting, this is advisory; the tracker remains active. However, it is reported as graylisted in the web UI, with the expectation that chronically graylisted trackers will be manually decommissioned.) This value is tied to mapred.jobtracker.blacklist.fault-timeout-window; faults older than the window width are forgiven, so the tracker will recover from transient problems. It will also become healthy after a restart.
mapred.jobtracker.blacklist.fault-timeout-window 180 The timeout (in minutes) after which per-job tasktracker faults are forgiven. The window is logically a circular buffer of time-interval buckets whose width is defined by mapred.jobtracker.blacklist.fault-bucket-width; when the "now" pointer moves across a bucket boundary, the previous contents (faults) of the new bucket are cleared. In other words, the timeout's granularity is determined by the bucket width.
mapred.jobtracker.blacklist.fault-bucket-width 15 The width (in minutes) of each bucket in the tasktracker fault timeout window. Each bucket is reused in a circular manner after a full timeout-window interval (defined by mapred.jobtracker.blacklist.fault-timeout-window).
mapred.max.tracker.failures 4 The number of task-failures on a tasktracker of a given job after which new tasks of that job aren't assigned to it.
jobclient.output.filter FAILED The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and ALL.
mapred.job.tracker.persist.jobstatus.active false Indicates whether persistence of job status information is active or not.
mapred.job.tracker.persist.jobstatus.hours 0 The number of hours job status information is persisted in DFS. The job status information will be available after it drops off the memory queue and between jobtracker restarts. With a zero value the job status information is not persisted at all in DFS.
mapred.job.tracker.persist.jobstatus.dir /jobtracker/jobsInfo The directory where the job status information is persisted in a file system, to be available after it drops off the memory queue and between jobtracker restarts.
mapreduce.job.complete.cancel.delegation.tokens true If false, do not unregister/cancel delegation tokens from renewal, because the same tokens may be used by spawned jobs.
mapred.task.profile false Whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory. The value is "true" if task profiling is enabled.
mapred.task.profile.maps 0-2 To set the ranges of map tasks to profile. mapred.task.profile has to be set to true for this value to take effect.
mapred.task.profile.reduces 0-2 To set the ranges of reduce tasks to profile. mapred.task.profile has to be set to true for this value to take effect.
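
To profile the documented default ranges (the first three map and the first three reduce tasks), a job would set the three properties above together:

  <configuration>
    <property>
      <name>mapred.task.profile</name>
      <value>true</value>
    </property>
    <property>
      <name>mapred.task.profile.maps</name>
      <value>0-2</value> <!-- profile map tasks 0 through 2 -->
    </property>
    <property>
      <name>mapred.task.profile.reduces</name>
      <value>0-2</value> <!-- profile reduce tasks 0 through 2 -->
    </property>
  </configuration>
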
mapred.line.input.format.linespermap 1 Number of lines per split in NLineInputFormat.
mapred.skip.attempts.to.start.skipping 2 The number of task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the task reports to the TaskTracker the range of records which it will process next, so that on failures the TaskTracker knows which records are possibly bad. On further executions, those are skipped.
mapred.skip.map.auto.incr.proc.count true The flag which, if set to true, causes SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS to be incremented by MapRunner after invoking the map function. This value must be set to false for applications which process the records asynchronously or buffer the input records, for example streaming; in such cases applications should increment this counter on their own.
mapred.skip.reduce.auto.incr.proc.count true The flag which, if set to true, causes SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS to be incremented by the framework after invoking the reduce function. This value must be set to false for applications which process the records asynchronously or buffer the input records, for example streaming; in such cases applications should increment this counter on their own.
mapred.skip.out.dir   If no value is specified here, the skipped records are written to the output directory at _logs/skip. The user can stop writing skipped records by giving the value "none".
mapred.skip.map.max.skip.records 0 The number of acceptable skip records surrounding the bad record PER bad record in the mapper. The number includes the bad record as well. To turn the feature of detection/skipping of bad records off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down; whatever records (application-dependent) get skipped are acceptable.
mapred.skip.reduce.max.skip.groups 0 The number of acceptable skip groups surrounding the bad group PER bad group in the reducer. The number includes the bad group as well. To turn the feature of detection/skipping of bad groups off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down; whatever groups (application-dependent) get skipped are acceptable.
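
An illustrative skip-mode setup: start skipping after the documented two failed attempts, and accept at most the bad record itself being skipped in the mapper (per the description above, the count includes the bad record). Streaming-style jobs would additionally set the two auto-increment flags above to false:

  <configuration>
    <property>
      <name>mapred.skip.attempts.to.start.skipping</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.skip.map.max.skip.records</name>
      <value>1</value> <!-- illustrative: skip only the bad record itself -->
    </property>
  </configuration>
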
job.end.retry.attempts 0 Indicates how many times hadoop should attempt to contact the notification URL.
job.end.retry.interval 30000 Indicates the time in milliseconds between notification URL retry calls.
hadoop.rpc.socket.factory.class.JobSubmissionProtocol   SocketFactory to use to connect to a Map/Reduce master (JobTracker). If null or empty, then use hadoop.rpc.socket.factory.class.default.
mapred.task.cache.levels 2 This is the max level of the task cache. For example, if the level is 2, the tasks cached are at the host level and at the rack level.
mapred.queue.names default Comma separated list of queues configured for this jobtracker. Jobs are added to queues and schedulers can configure different scheduling properties for the various queues. To configure a property for a queue, the name of the queue must match the name specified in this value. Queue properties that are common to all schedulers are configured here with the naming convention, mapred.queue.$QUEUE-NAME.$PROPERTY-NAME, for e.g. mapred.queue.default.submit-job-acl. The number of queues configured in this parameter could depend on the type of scheduler being used, as specified in mapred.jobtracker.taskScheduler. For example, the JobQueueTaskScheduler supports only a single queue, which is the default configured here. Before adding more queues, ensure that the scheduler you've configured supports multiple queues.
mapred.acls.enabled false Specifies whether ACLs should be checked for authorization of users for doing various queue and job level operations. ACLs are disabled by default. If enabled, access control checks are made by JobTracker and TaskTracker when requests are made by users for queue operations like submit job to a queue and kill a job in the queue and job operations like viewing the job-details (See mapreduce.job.acl-view-job) or for modifying the job (See mapreduce.job.acl-modify-job) using Map/Reduce APIs, RPCs or via the console and web user interfaces.
mapred.queue.default.state RUNNING This value defines the state the default queue is in. The value can be either "STOPPED" or "RUNNING". This value can be changed at runtime.
mapred.job.queue.name default Queue to which a job is submitted. This must match one of the queues defined in mapred.queue.names for the system. Also, the ACL setup for the queue must allow the current user to submit a job to the queue. Before specifying a queue, ensure that the system is configured with the queue, and access is allowed for submitting jobs to the queue.
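
A sketch of a two-queue setup. The queue name "analytics" is an illustrative assumption, and per the warning above this requires a scheduler that supports multiple queues, unlike the default JobQueueTaskScheduler:

  <configuration>
    <property>
      <name>mapred.queue.names</name>
      <value>default,analytics</value> <!-- "analytics" is a hypothetical queue -->
    </property>
    <property>
      <name>mapred.job.queue.name</name>
      <value>analytics</value> <!-- set per job to submit to that queue -->
    </property>
  </configuration>
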
mapreduce.job.acl-modify-job   Job specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapred.acls.enabled to true. This specifies the list of users and/or groups who can do modification operations on the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to modify this job. If set to ' ' (i.e. space), it allows none. This configuration is used to guard all the modifications with respect to this job and covers the following operations: killing this job; killing or failing a task of this job; setting the priority of this job. Each of these operations is also protected by the per-queue level ACL "acl-administer-jobs" configured via mapred-queues.xml, so a caller should have the authorization to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, the job-owner, the user who started the cluster, cluster administrators configured via mapreduce.cluster.administrators, and queue administrators of the queue to which this job is submitted (configured via mapred.queue.queue-name.acl-administer-jobs in mapred-queue-acls.xml) can do all the modification operations on a job. By default, nobody else besides the job-owner, the user who started the cluster, cluster administrators and queue administrators can perform modification operations on a job.
mapreduce.job.acl-view-job   Job specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapred.acls.enabled to true. This specifies the list of users and/or groups who can view private details about the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to view this job. If set to ' ' (i.e. space), it allows none. This configuration is used to guard some of the job-views, and at present only protects APIs that can return possibly sensitive information of the job-owner, like: job-level counters; task-level counters; tasks' diagnostic information; task-logs displayed on the TaskTracker web-UI; and job.xml shown by the JobTracker's web-UI. Every other piece of information about jobs is still accessible by any other user, e.g. JobStatus, JobProfile, the list of jobs in the queue, etc. Irrespective of this ACL configuration, the job-owner, the user who started the cluster, cluster administrators configured via mapreduce.cluster.administrators, and queue administrators of the queue to which this job is submitted (configured via mapred.queue.queue-name.acl-administer-jobs in mapred-queue-acls.xml) can do all the view operations on a job. By default, nobody else besides the job-owner, the user who started the cluster, cluster administrators and queue administrators can perform view operations on a job.
mapred.tasktracker.indexcache.mb 10 The maximum memory that a task tracker allows for the index cache that is used when serving map outputs to reducers.
mapred.combine.recordsBeforeProgress 10000 The number of records to process during combine output collection before sending a progress notification to the TaskTracker.
mapred.merge.recordsBeforeProgress 10000 The number of records to process during merge before sending a progress notification to the TaskTracker.
mapred.reduce.slowstart.completed.maps 0.05 Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job.
mapred.task.tracker.task-controller org.apache.hadoop.mapred.DefaultTaskController TaskController which is used to launch and manage task execution
mapreduce.tasktracker.group   Expert: Group to which TaskTracker belongs. If LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller, the group owner of the task-controller binary should be same as this group.
mapred.healthChecker.script.path   Absolute path to the script which is periodically run by the node health monitoring service to determine if the node is healthy or not. If the value of this key is empty or the file does not exist in the location configured here, the node health monitoring service is not started.
mapred.healthChecker.interval 60000 Frequency at which the node health script is run, in milliseconds.
mapred.healthChecker.script.timeout 600000 Time after which the node health script will be killed if unresponsive, and the script considered to have failed.
mapred.healthChecker.script.args   Comma-separated list of arguments to be passed to the node health script when it is launched.
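
An illustrative node-health-check setup following the four properties above; the script path is a hypothetical placeholder, and the interval and timeout repeat the documented defaults:

  <configuration>
    <property>
      <name>mapred.healthChecker.script.path</name>
      <value>/usr/local/bin/node_health.sh</value> <!-- hypothetical script -->
    </property>
    <property>
      <name>mapred.healthChecker.interval</name>
      <value>60000</value> <!-- run every 60 seconds (the documented default) -->
    </property>
    <property>
      <name>mapred.healthChecker.script.timeout</name>
      <value>600000</value> <!-- kill and treat as failed after 10 minutes -->
    </property>
  </configuration>
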
mapreduce.job.counters.limit 120 Limit on the number of counters allowed per job.