Hadoop Library MapReduce Classes

 

Hadoop ships with a set of Mapper and Reducer classes for commonly used functions. This post walks through these built-in Mappers and Reducers as they appear in Hadoop v1.1.1.

 

  1. ChainMapper, ChainReducer   This pair runs a chain of mappers within a single map task, and a reducer followed by a chain of mappers within a single reduce task. Chaining everything into one job can substantially reduce the disk I/O incurred compared with running several separate MapReduce jobs. Since the new-API ChainMapper and ChainReducer are missing from Hadoop 1.1.1 and there is no proper patch for the current stable version, I give an example here in the old API, without compiling or running it on my local cluster. A worked trace of the chain follows the listing.
    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;
    import org.apache.hadoop.mapred.lib.ChainMapper;
    import org.apache.hadoop.mapred.lib.ChainReducer;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;
    
    /**
     * Sample program for ChainMapper/ChainReducer.
     */
    public class ChainWordCount extends Configured implements Tool {
    
        public static class Tokenizer extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
    
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();
    
            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                String line = value.toString();
                System.out.println("Line:"+line);
                StringTokenizer itr = new StringTokenizer(line);
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    output.collect(word, one);
                }
            }
        }
    
        public static class UpperCaser extends MapReduceBase
                implements Mapper<Text, IntWritable, Text, IntWritable> {
    
            public void map(Text key, IntWritable value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                String word = key.toString().toUpperCase();
                System.out.println("Upper Case:"+word);
                output.collect(new Text(word), value);
            }
        }
    
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
    
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output,
                               Reporter reporter) throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                System.out.println("Word:"+key.toString()+"\tCount:"+sum);
                output.collect(key, new IntWritable(sum));
            }
        }
    
        static int printUsage() {
            System.out.println("wordcount  ");
            ToolRunner.printGenericCommandUsage(System.out);
            return -1;
        }
    
        public int run(String[] args) throws Exception {
            JobConf conf = new JobConf(getConf(), ChainWordCount.class);
            conf.setJobName("wordcount");
    
            if (args.length != 2) {
                System.out.println("ERROR: Wrong number of parameters: " +
                        args.length + " instead of 2.");
                return printUsage();
            }
            FileInputFormat.setInputPaths(conf, args[0]);
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);
    
            JobConf mapAConf = new JobConf(false);
            ChainMapper.addMapper(conf, Tokenizer.class, LongWritable.class, Text.class, Text.class, IntWritable.class, true, mapAConf);
    
            JobConf mapBConf = new JobConf(false);
            ChainMapper.addMapper(conf, UpperCaser.class, Text.class, IntWritable.class, Text.class, IntWritable.class, true, mapBConf);
    
            JobConf reduceConf = new JobConf(false);
            ChainReducer.setReducer(conf, Reduce.class, Text.class, IntWritable.class, Text.class, IntWritable.class, true, reduceConf);
    
            JobClient.runJob(conf);
            return 0;
        }
    
        public static void main(String[] args) throws Exception {
            int res = ToolRunner.run(new Configuration(), new ChainWordCount(), args);
            System.exit(res);
        }
    }
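     To make the chain concrete, here is a hand-worked trace (the input line is made up for illustration) of how one record flows through the two chained mappers inside a single map task and then through the reducer:
    // Sample input line:          "the quick brown fox the"
    //
    // Tokenizer  (LongWritable, Text -> Text, IntWritable), inside the map task:
    //   (the,1) (quick,1) (brown,1) (fox,1) (the,1)
    //
    // UpperCaser (Text, IntWritable -> Text, IntWritable), fed directly from Tokenizer:
    //   (THE,1) (QUICK,1) (BROWN,1) (FOX,1) (THE,1)
    //
    // Only UpperCaser's output goes through the shuffle; Reduce then sums per key:
    //   (BROWN,1) (FOX,1) (QUICK,1) (THE,2)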
     
  2. FieldSelectionMapper and FieldSelectionReducer   This pair selects fields (much like the Unix cut command) from the input keys and values and emits them as output keys and values. A worked example of the field-selection specs follows the driver code below.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper;
    import org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionMapper;
    import org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionReducer;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;
    
    /**
     * Demonstrate how to use FieldSelectionMapper and FieldSelectionReducer.
     * It implements a job that can be used to perform field selections in a
     * manner similar to unix cut.
     * <p/>
     * User: George Sun
     * Date: 7/4/13
     * Time: 9:44 PM
     */
    public class FieldSelectionMRExample extends Configured implements Tool {
    
        @Override
        public int run(String[] args) throws Exception {
            if (args.length != 2) {
                JobBuilder.printUsage(this, "<input> <output>");
                return -1;
            }
    
            Configuration conf = getConf();
            conf.set(FieldSelectionHelper.DATA_FIELD_SEPERATOR, "-");
            conf.set(FieldSelectionHelper.MAP_OUTPUT_KEY_VALUE_SPEC, "6,5,1-3:0-");
            conf.set(FieldSelectionHelper.REDUCE_OUTPUT_KEY_VALUE_SPEC, ":4,3,2,1,0,0-");
    
            Job job = JobBuilder.parseInputAndOutput(this, getConf(), args);
            job.setMapperClass(FieldSelectionMapper.class);
            job.setReducerClass(FieldSelectionReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            // 1 reducer is ok for this job.
            job.setNumReduceTasks(1);
    
            return job.waitForCompletion(true) ? 0 : 1;
        }
    
        public static void main(String[] args) throws Exception {
            int exitCode = ToolRunner.run(new FieldSelectionMRExample(), args);
            System.exit(exitCode);
        }
    }
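     The key/value specs are easiest to read with a hand-worked example. As I read the spec syntax, field indices are zero-based, "1-3" means fields 1 through 3, a trailing "-" (as in "0-") means "from that field to the end", and records are split on the separator configured above ("-"). The sample record below is made up for illustration:
    // Input record (split on "-"):   a-b-c-d-e-f-g-h
    //   field indices:               0 1 2 3 4 5 6 7
    //
    // Map spec "6,5,1-3:0-"
    //   map output key   = fields 6,5,1,2,3  ->  "g-f-b-c-d"
    //   map output value = fields 0-         ->  "a-b-c-d-e-f-g-h"
    //
    // The reduce spec ":4,3,2,1,0,0-" is applied the same way on the reduce side,
    // re-ordering and duplicating fields in the reduce output.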
    
     The JobBuilder class:
    // == JobBuilder
    import java.io.IOException;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;
    import org.apache.hadoop.util.Tool;
    
    public class JobBuilder {
    
        private final Class<?> driverClass;
        private final Job job;
        private final int extraArgCount;
        private final String extrArgsUsage;
    
        private String[] extraArgs;
    
        public JobBuilder(Class<?> driverClass) throws IOException {
            this(driverClass, 0, "");
        }
    
        public JobBuilder(Class<?> driverClass, int extraArgCount, String extrArgsUsage) throws IOException {
            this.driverClass = driverClass;
            this.extraArgCount = extraArgCount;
            this.job = new Job();
            this.job.setJarByClass(driverClass);
            this.extrArgsUsage = extrArgsUsage;
        }
    
        // vv JobBuilder
        public static Job parseInputAndOutput(Tool tool, Configuration conf,
                                              String[] args) throws IOException {
    
            if (args.length != 2) {
                printUsage(tool, "<input> <output>");
                return null;
            }
            Job job = new Job(conf);
            job.setJarByClass(tool.getClass());
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            return job;
        }
    
        public static void printUsage(Tool tool, String extraArgsUsage) {
            System.err.printf("Usage: %s [genericOptions] %s\n\n",
                    tool.getClass().getSimpleName(), extraArgsUsage);
            GenericOptionsParser.printGenericCommandUsage(System.err);
        }
        // ^^ JobBuilder
    
        public JobBuilder withCommandLineArgs(String... args) throws IOException {
            Configuration conf = job.getConfiguration();
            GenericOptionsParser parser = new GenericOptionsParser(conf, args);
            String[] otherArgs = parser.getRemainingArgs();
        if (otherArgs.length < 2 || otherArgs.length > 3 + extraArgCount) {
                System.err.printf("Usage: %s [genericOptions] [-overwrite] <input path> <output path> %s\n\n",
                        driverClass.getSimpleName(), extrArgsUsage);
                GenericOptionsParser.printGenericCommandUsage(System.err);
                System.exit(-1);
            }
            int index = 0;
            boolean overwrite = false;
            if (otherArgs[index].equals("-overwrite")) {
                overwrite = true;
                index++;
            }
            Path input = new Path(otherArgs[index++]);
            Path output = new Path(otherArgs[index++]);
    
            if (index < otherArgs.length) {
                extraArgs = new String[otherArgs.length - index];
                System.arraycopy(otherArgs, index, extraArgs, 0, otherArgs.length - index);
            }
    
            if (overwrite) {
                output.getFileSystem(conf).delete(output, true);
            }
    
            FileInputFormat.addInputPath(job, input);
            FileOutputFormat.setOutputPath(job, output);
            return this;
        }
    
        public Job build() {
            return job;
        }
    
        public String[] getExtraArgs() {
            return extraArgs;
        }
    }
    
     
  3. IntSumReducer, LongSumReducer      Reducers that sum integer or long values to produce a total for every key. It's easy to see how they work; the IntSumReducer source is shown below, followed by a sketch of how to wire it into a job.
    public static class IntSumReducer 
           extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();
    
        public void reduce(Text key, Iterable<IntWritable> values, 
                           Context context
                           ) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }
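     A minimal way to use these reducers is to plug them straight into a job as both combiner and reducer (summing is associative, so using them as a combiner is safe). The snippet below is a sketch: it assumes a Job named job whose mapper emits (Text, IntWritable) and uses the new-API classes under org.apache.hadoop.mapreduce.lib.reduce.
    // Wiring the library reducer into a job whose mapper emits (Text, IntWritable).
    // For LongWritable counts, substitute LongSumReducer and LongWritable.
    job.setCombinerClass(org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer.class);
    job.setReducerClass(org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);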
     
  4. InverseMapper    A mapper that swaps keys and values. It's trivial, so the library source below speaks for itself; a typical use in a follow-up sort-by-value job is sketched after the listing.
    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    package org.apache.hadoop.mapreduce.lib.map;
    
    import java.io.IOException;
    
    import org.apache.hadoop.mapreduce.Mapper;
    
    /** A {@link Mapper} that swaps keys and values. */
    public class InverseMapper<K, V> extends Mapper<K,V,V,K> {
    
      /** The inverse function.  Input keys and values are swapped.*/
      @Override
      public void map(K key, V value, Context context
                      ) throws IOException, InterruptedException {
        context.write(value, key);
      }
      
    }
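     A typical use of InverseMapper is a small follow-up job that swaps (word, count) pairs into (count, word), so the framework's sort-by-key orders the final output by count. A sketch, assuming the counting job wrote SequenceFile output to a Path named countOutput (the variable names and driver class here are illustrative):
    // Second job: invert (Text, LongWritable) to (LongWritable, Text) so the
    // shuffle sorts by count; one reducer gives a single sorted output file.
    Job sortJob = new Job(conf, "sort-by-count");
    sortJob.setJarByClass(MyDriver.class);                       // hypothetical driver class
    FileInputFormat.addInputPath(sortJob, countOutput);          // output of the counting job
    sortJob.setInputFormatClass(SequenceFileInputFormat.class);  // org.apache.hadoop.mapreduce.lib.input
    sortJob.setMapperClass(InverseMapper.class);
    sortJob.setNumReduceTasks(1);                                // default identity reducer, single sorted file
    sortJob.setOutputKeyClass(LongWritable.class);
    sortJob.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(sortJob, new Path(args[1]));
    sortJob.waitForCompletion(true);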
     
  5. MultithreadedMapper     A mapper that runs the wrapped mapper concurrently in separate threads. MultithreadedMapper is useful for mappers that are not CPU-bound, for example when each record triggers a high-latency operation such as fetching a page from the web rather than reading from local storage. In that case you are not blocked on a single network I/O call and can keep processing as data becomes available. If, however, the data to be processed is large and already in HDFS, it is read locally, and if the computation is CPU-bound a multi-core, multi-process setup is more helpful. You also have to ensure that your mappers are thread-safe. A code snippet of an example is below, followed by a sketch of a thread-safe mapper.
    Configuration conf = new Configuration();
    Job job = new Job(conf);
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, WebGraphMapper.class);
    MultithreadedMapper.setNumberOfThreads(job, 8);
    //conf.set("mapred.map.multithreadedrunner.class", //WebGraphMapper.class.getCanonicalName());
    //conf.set("mapred.map.multithreadedrunner.threads", "8");
    job.setJarByClass(WebGraphMapper.class);
    // rest omitted
    job.waitForCompletion(true);
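     Because the wrapped mapper may be driven by several threads, it should avoid shared mutable state. A sketch of what a thread-safe WebGraphMapper might look like (the fetch logic is a placeholder; the real class is not shown in this post):
    public static class WebGraphMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String url = value.toString();
            // The high-latency, I/O-bound work (e.g. an HTTP fetch) goes here.
            String page = fetch(url);
            // Fresh output objects per call and no mutable instance fields,
            // so concurrent map() invocations do not interfere with each other.
            context.write(new Text(url), new Text(page));
        }

        private String fetch(String url) throws IOException {
            return "";  // placeholder for real network I/O
        }
    }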
     
  6. TokenCounterMapper    A mapper that tokenizes the input value into words (using Java's StringTokenizer) and emits each word along with a count of one. It's simple; the library source is below, and a driver that combines it with IntSumReducer into a complete word count follows the listing.
    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    package org.apache.hadoop.mapreduce.lib.map;
    
    import java.io.IOException;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    
    /**
     * Tokenize the input values and emit each word with a count of 1.
     */
    public class TokenCounterMapper extends Mapper<Object, Text, Text, IntWritable>{
        
      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();
      
      @Override
      public void map(Object key, Text value, Context context
                      ) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          context.write(word, one);
        }
      }
    }
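     Combined with IntSumReducer from item 3, TokenCounterMapper gives you a complete word count without writing any Mapper or Reducer of your own. A minimal driver sketch (the class name is illustrative):
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

    // Word count assembled entirely from library classes.
    public class LibraryWordCount {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "library-wordcount");
            job.setJarByClass(LibraryWordCount.class);
            job.setMapperClass(TokenCounterMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }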
     
  7. RegexMapper    A mapper that finds matches of a regular expression in the input value and emits each match along with a count of one. The new-API version of this class is also missing from Hadoop 1.1.1, but the old-API source below is enough to illustrate the idea; a sketch of a driver that uses it follows the listing.
    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    package org.apache.hadoop.mapred.lib;
    
    import java.io.IOException;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;
    
    
    /** A {@link Mapper} that extracts text matching a regular expression. */
    public class RegexMapper<K> extends MapReduceBase
        implements Mapper<K, Text, Text, LongWritable> {
    
      private Pattern pattern;
      private int group;
    
      public void configure(JobConf job) {
        pattern = Pattern.compile(job.get("mapred.mapper.regex"));
        group = job.getInt("mapred.mapper.regex.group", 0);
      }
    
      public void map(K key, Text value,
                      OutputCollector<Text, LongWritable> output,
                      Reporter reporter)
        throws IOException {
        String text = value.toString();
        Matcher matcher = pattern.matcher(text);
        while (matcher.find()) {
          output.collect(new Text(matcher.group(group)), new LongWritable(1));
        }
      }
    
    }
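     To use RegexMapper, set the pattern (and, optionally, the capture group) in the JobConf and pair it with the old-API LongSumReducer so the matches get counted, much as Hadoop's bundled Grep example does. A minimal old-API sketch (the pattern and paths are illustrative):
    // Count occurrences of a regular expression across the input (old API).
    JobConf grepJob = new JobConf(RegexMapper.class);
    grepJob.setJobName("regex-count");
    grepJob.set("mapred.mapper.regex", "ERROR\\w*");         // illustrative pattern
    // grepJob.setInt("mapred.mapper.regex.group", 1);       // optional: emit a capture group
    grepJob.setMapperClass(RegexMapper.class);
    grepJob.setCombinerClass(LongSumReducer.class);          // org.apache.hadoop.mapred.lib.LongSumReducer
    grepJob.setReducerClass(LongSumReducer.class);
    grepJob.setOutputKeyClass(Text.class);
    grepJob.setOutputValueClass(LongWritable.class);
    FileInputFormat.setInputPaths(grepJob, new Path(args[0]));
    FileOutputFormat.setOutputPath(grepJob, new Path(args[1]));
    JobClient.runJob(grepJob);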
     