Using Weka in your Java code

The most common components you might want to use are
  • Instances - your data
  • Filter - for preprocessing the data
  • Classifier/Clusterer - built on the processed data
  • Evaluating - how good is the classifier/clusterer?
  • Attribute selection - removing irrelevant attributes from your data

The following sections explain how to use them in your own code. The classifiers and filters always list their options in the Javadoc API specification (book, stable and developer versions).

You might also want to check out the Weka Examples collection, containing examples for the different versions of Weka. Another, more comprehensive, source of information is the chapter Using the API of the Weka manual for the stable-3.6 and developer version (snapshots and releases later than 09/08/2009).

Instances

ARFF File

Pre 3.5.5 and 3.4.x

Reading from an ARFF file is straightforward:
 import weka.core.Instances;
 import java.io.BufferedReader;
 import java.io.FileReader;
 ...
 BufferedReader reader = new BufferedReader(
                              new FileReader("/some/where/data.arff"));
 Instances data = new Instances(reader);
 reader.close();
 // setting class attribute
 data.setClassIndex(data.numAttributes() - 1);

The class index indicates the target attribute used for classification. By default, in an ARFF file, it is the last attribute, which explains why it's set to numAttributes() - 1.
You must set it if your instances are used as a parameter of a Weka function (e.g., weka.classifiers.Classifier.buildClassifier(data)).

3.5.5 and newer

The DataSource class is not limited to ARFF files. It can also read CSV files and other formats (basically all file formats that Weka can import via its converters).
 import weka.core.converters.ConverterUtils.DataSource;
 ...
 DataSource source = new DataSource("/some/where/data.arff");
 Instances data = source.getDataSet();
 // setting class attribute if the data format does not provide this information
 // For example, the XRFF format saves the class attribute information as well
 if (data.classIndex() == -1)
   data.setClassIndex(data.numAttributes() - 1);

Database

Reading from databases is slightly more complicated, but still very easy. First, you'll have to modify your DatabaseUtils.props file to reflect your database connection. Suppose you want to connect to a MySQL server running on the local machine on the default port 3306. The MySQL JDBC driver is called Connector/J (the driver class is org.gjt.mm.mysql.Driver). The database where your target data resides is called some_database. Since you're only reading, you can use the default user nobody without a password. Your props file must contain the following lines:
 jdbcDriver=org.gjt.mm.mysql.Driver
 jdbcURL=jdbc:mysql://localhost:3306/some_database
Secondly, your Java code needs to look like this to load the data from the database:
 import weka.core.Instances;
 import weka.experiment.InstanceQuery;
 ...
 InstanceQuery query = new InstanceQuery();
 query.setUsername("nobody");
 query.setPassword("");
 query.setQuery("select * from whatsoever");
 // You can declare that your data set is sparse
 // query.setSparseData(true);
 Instances data = query.retrieveInstances();

Notes:
  • Don't forget to add the JDBC driver to your CLASSPATH.
  • For MS Access, you must use the JDBC-ODBC-bridge that is part of a JDK. The Windows databases article explains how to do this.
  • InstanceQuery automatically converts VARCHAR database columns to NOMINAL attributes, and long TEXT database columns to STRING attributes. So if you use InstanceQuery to do text mining against text that appears in a VARCHAR column, Weka will regard such text as nominal values. Thus it will fail to tokenize and mine that text. Use the NominalToString or StringToNominal filter (package weka.filters.unsupervised.attribute) to convert the attributes into the correct type.
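
For example, a VARCHAR column that was imported as a NOMINAL attribute can be converted back to STRING with the NominalToString filter. A minimal sketch, assuming the retrieved data is in an Instances object called data and the text ended up in the first attribute (the -C option selects the attribute range to convert):
 import weka.core.Instances;
 import weka.filters.Filter;
 import weka.filters.unsupervised.attribute.NominalToString;
 ...
 NominalToString n2s = new NominalToString();
 n2s.setOptions(weka.core.Utils.splitOptions("-C 1"));  // attribute (range) to convert
 n2s.setInputFormat(data);                              // AFTER setting options
 Instances converted = Filter.useFilter(data, n2s);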

Option handling

Weka schemes that implement the weka.core.OptionHandler interface, such as classifiers, clusterers, and filters, offer the following methods for setting and retrieving options:
  • void setOptions(String[] options)
  • String[] getOptions()
There are several ways of setting the options:
  • Manually creating a String array:
 String[] options = new String[2];
 options[0] = "-R";
 options[1] = "1";
  • Using a single command-line string and using the splitOptions method of the weka.core.Utils class to turn it into an array:
 String[] options = weka.core.Utils.splitOptions("-R 1");
  • Using the OptionsToCode class to automatically turn a command line into code. This is especially handy if the command line contains nested classes that have their own options, such as kernels for SMO:
 java OptionsToCode weka.classifiers.functions.SMO
    This will generate output like this:
 // create new instance of scheme
 weka.classifiers.functions.SMO scheme = new weka.classifiers.functions.SMO();
 // set options
 scheme.setOptions(weka.core.Utils.splitOptions("-C 1.0 -L 0.0010 -P 1.0E-12 -N 0 -V -1 -W 1 -K \"weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0\""));
Also, the OptionTree tool allows you to view a nested options string, e.g., as used at the command line, as a tree. This can help you spot nesting errors.
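
The reverse direction works, too: getOptions() returns the currently set options, and Utils.joinOptions turns the array back into a single command-line string. A minimal sketch:
 import weka.core.Utils;
 import weka.filters.unsupervised.attribute.Remove;
 ...
 Remove remove = new Remove();
 remove.setOptions(Utils.splitOptions("-R 1"));
 // print the scheme's full option string, e.g., for logging a setup
 System.out.println(Utils.joinOptions(remove.getOptions()));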

Filter

A filter has two different properties:
  • supervised or unsupervised
    either takes the class attribute into account or not
  • attribute- or instance-based
    e.g., removing a certain attribute or removing instances that meet a certain condition

Most filters implement the OptionHandler interface, which means you can set the options via a String array, rather than setting them each manually via set-methods.
For example, if you want to remove the first attribute of a dataset, you need this filter
 weka.filters.unsupervised.attribute.Remove
with this option
 -R 1
If you have an Instances object, called data, you can create and apply the filter like this:
 import weka.core.Instances;
 import weka.filters.Filter;
 import weka.filters.unsupervised.attribute.Remove;
 ...
 String[] options = new String[2];
 options[0] = "-R";                                    // "range"
 options[1] = "1";                                     // first attribute
 Remove remove = new Remove();                         // new instance of filter
 remove.setOptions(options);                           // set options
 remove.setInputFormat(data);                          // inform filter about dataset **AFTER** setting options
 Instances newData = Filter.useFilter(data, remove);   // apply filter

Filtering on-the-fly

The FilteredClassifier meta-classifier is an easy way of filtering data on the fly. It removes the necessity of filtering the data before the classifier can be trained. Also, the data need not be passed through the trained filter again at prediction time. The following is an example of using this meta-classifier with the Remove filter and J48 for getting rid of a numeric ID attribute in the data:
 import weka.classifiers.meta.FilteredClassifier;
 import weka.classifiers.trees.J48;
 import weka.filters.unsupervised.attribute.Remove;
 ...
 Instances train = ...         // from somewhere
 Instances test = ...          // from somewhere
 // filter
 Remove rm = new Remove();
 rm.setAttributeIndices("1");  // remove 1st attribute
 // classifier
 J48 j48 = new J48();
 j48.setUnpruned(true);        // using an unpruned J48
 // meta-classifier
 FilteredClassifier fc = new FilteredClassifier();
 fc.setFilter(rm);
 fc.setClassifier(j48);
 // train and make predictions
 fc.buildClassifier(train);
 for (int i = 0; i < test.numInstances(); i++) {
   double pred = fc.classifyInstance(test.instance(i));
   System.out.print("ID: " + test.instance(i).value(0));
   System.out.print(", actual: " + test.classAttribute().value((int) test.instance(i).classValue()));
   System.out.println(", predicted: " + test.classAttribute().value((int) pred));
 }

Other handy meta-schemes in Weka are FilteredClusterer (for clusterers) and FilteredAssociator (for associators).

Batch filtering

On the command line, you can enable a second input/output pair (via -r and -s) with the -b option, in order to process the second file with the same filter setup as the first one. This is necessary if you're using attribute selection or standardization; otherwise you end up with incompatible datasets. In code this is fairly easy to do, since one initializes the filter only once, via the setInputFormat(Instances) method with the training set, and then applies the filter subsequently to the training set and the test set. The following example shows how to apply the Standardize filter to a train and a test set.
 Instances train = ...   // from somewhere
 Instances test = ...    // from somewhere
 Standardize filter = new Standardize();
 filter.setInputFormat(train);  // initializing the filter once with training set
 Instances newTrain = Filter.useFilter(train, filter);  // apply filter to training set
 Instances newTest = Filter.useFilter(test, filter);    // apply filter to test set

Calling conventions

The setInputFormat(Instances) method always has to be the last call before the filter is applied, e.g., with Filter.useFilter(Instances, Filter). Why? First, it is the convention for using filters; second, many filters generate the header of the output format in the setInputFormat(Instances) method based on the currently set options (setting options after this call no longer has any effect).
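
For example, the convention dictates this order of calls (a minimal sketch, reusing the Remove filter from above):
 Remove remove = new Remove();
 remove.setOptions(new String[]{"-R", "1"});           // 1. set all options first
 remove.setInputFormat(data);                          // 2. then determine the output format
 Instances newData = Filter.useFilter(data, remove);   // 3. finally apply the filter
 // swapping steps 1 and 2 would mean the options are silently ignored by
 // filters that generate their output format in setInputFormat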

Classification

The necessary classes can be found in this package:
 weka.classifiers

Building a Classifier

Batch

A Weka classifier is rather simple to train on a given dataset. E.g., we can train an unpruned C4.5 tree (J48) on a given dataset data. The training is done via the buildClassifier(Instances) method.
 import weka.classifiers.trees.J48;
 ...
 String[] options = new String[1];
 options[0] = "-U";            // unpruned tree
 J48 tree = new J48();         // new instance of tree
 tree.setOptions(options);     // set the options
 tree.buildClassifier(data);   // build classifier

Incremental

Classifiers implementing the weka.classifiers.UpdateableClassifier interface can be trained incrementally. This conserves memory, since the data doesn't have to be loaded into memory all at once. See the Javadoc of this interface to see which classifiers implement it.

The actual process of training an incremental classifier is fairly simple:
  • Call buildClassifier(Instances) with the structure of the dataset (may or may not contain any actual data rows).
  • Subsequently call the updateClassifier(Instance) method to feed the classifier new weka.core.Instance objects, one by one.

Here is an example using data from a weka.core.converters.ArffLoader to train weka.classifiers.bayes.NaiveBayesUpdateable:
 import weka.classifiers.bayes.NaiveBayesUpdateable;
 import weka.core.Instance;
 import weka.core.Instances;
 import weka.core.converters.ArffLoader;
 import java.io.File;
 ...
 // load data
 ArffLoader loader = new ArffLoader();
 loader.setFile(new File("/some/where/data.arff"));
 Instances structure = loader.getStructure();
 structure.setClassIndex(structure.numAttributes() - 1);
 
 // train NaiveBayes
 NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
 nb.buildClassifier(structure);
 Instance current;
 while ((current = loader.getNextInstance(structure)) != null)
   nb.updateClassifier(current);

A working example can be found in the Weka Examples collection.

Evaluating

Cross-validation

If you only have a training set and no test set, you might want to evaluate the classifier using 10-fold cross-validation. This can easily be done via the Evaluation class. Here we seed the random selection of the folds for the CV with 1. Check out the Evaluation class for more information about the statistics it produces.
 import weka.classifiers.Evaluation;
 import java.util.Random;
 ...
 Evaluation eval = new Evaluation(newData);
 eval.crossValidateModel(tree, newData, 10, new Random(1));
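
The same Evaluation object can then print the collected statistics. A minimal sketch (the per-class details and the confusion matrix require a nominal class attribute):
 System.out.println(eval.toSummaryString("\n=== 10-fold Cross-validation ===\n", false));
 System.out.println(eval.toClassDetailsString());
 System.out.println(eval.toMatrixString());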

Note: The classifier (in our example tree) should not be trained when handed over to the crossValidateModel method. Why? If the classifier does not abide by the Weka convention that a classifier must be re-initialized every time the buildClassifier method is called (in other words, subsequent calls to the buildClassifier method must always yield the same results), you will get inconsistent and worthless results. crossValidateModel takes care of training and evaluating the classifier (it creates a copy of the original classifier that you hand over for each run of the cross-validation).

Train/test set

In case you have a dedicated test set, you can train the classifier and then evaluate it on this test set. In the following example, a J48 is instantiated, trained and then evaluated. Some statistics are printed to stdout:
 import weka.core.Instances;
 import weka.classifiers.Classifier;
 import weka.classifiers.Evaluation;
 import weka.classifiers.trees.J48;
 ...
 Instances train = ...   // from somewhere
 Instances test = ...    // from somewhere
 // train classifier
 Classifier cls = new J48();
 cls.buildClassifier(train);
 // evaluate classifier and print some statistics
 Evaluation eval = new Evaluation(train);
 eval.evaluateModel(cls, test);
 System.out.println(eval.toSummaryString("\nResults\n======\n", false));

Statistics

Some methods for retrieving the results from the evaluation:
  • nominal class
    • correct() - number of correctly classified instances (see also incorrect())
    • pctCorrect() - percentage of correctly classified instances (see also pctIncorrect())
    • kappa() - Kappa statistics
  • numeric class
    • correlationCoefficient() - correlation coefficient
  • general
    • meanAbsoluteError() - the mean absolute error
    • rootMeanSquaredError() - the root mean squared error
    • unclassified() - number of unclassified instances
    • pctUnclassified() - percentage of unclassified instances
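
For example, a few of these methods applied to an Evaluation object eval from the examples above (a minimal sketch):
 System.out.println("Correct: " + eval.correct() + " (" + eval.pctCorrect() + "%)");
 System.out.println("Kappa:   " + eval.kappa());
 System.out.println("MAE:     " + eval.meanAbsoluteError());
 System.out.println("RMSE:    " + eval.rootMeanSquaredError());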

If you want to have the exact same behavior as from the command line, use this call:
 import weka.classifiers.trees.J48;
 import weka.classifiers.Evaluation;
 ...
 String[] options = new String[2];
 options[0] = "-t";
 options[1] = "/some/where/somefile.arff";
 System.out.println(Evaluation.evaluateModel(new J48(), options));

ROC curves/AUC

Since Weka 3.5.1, you can also generate ROC curves/AUC with the predictions Weka recorded during testing. You can access these predictions via the predictions() method of the Evaluation class. See the Generating ROC curve article for a full example of how to generate ROC curves.
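
A minimal sketch, assuming a recent Weka version and an Evaluation object eval from the examples above (in older versions predictions() returns a FastVector instead of an ArrayList):
 import weka.classifiers.evaluation.ThresholdCurve;
 import weka.core.Instances;
 ...
 // AUC directly from the Evaluation object, for the first class label
 System.out.println("AUC: " + eval.areaUnderROC(0));
 // or generate the full curve data from the recorded predictions
 ThresholdCurve tc = new ThresholdCurve();
 Instances curve = tc.getCurve(eval.predictions(), 0);
 System.out.println("AUC from curve: " + ThresholdCurve.getROCArea(curve));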

Classifying instances

In case you have an unlabeled dataset that you want to classify with your newly trained classifier, you can use the following code snippet. It loads the file /some/where/unlabeled.arff, uses the previously built classifier tree to label the instances, and saves the labeled data as /some/where/labeled.arff.
 import java.io.BufferedReader;
 import java.io.BufferedWriter;
 import java.io.FileReader;
 import java.io.FileWriter;
 import weka.core.Instances;
 ...
 // load unlabeled data
 Instances unlabeled = new Instances(
                         new BufferedReader(
                           new FileReader("/some/where/unlabeled.arff")));
 
 // set class attribute
 unlabeled.setClassIndex(unlabeled.numAttributes() - 1);
 
 // create copy
 Instances labeled = new Instances(unlabeled);
 
 // label instances
 for (int i = 0; i < unlabeled.numInstances(); i++) {
   double clsLabel = tree.classifyInstance(unlabeled.instance(i));
   labeled.instance(i).setClassValue(clsLabel);
 }
 // save labeled data
 BufferedWriter writer = new BufferedWriter(
                           new FileWriter("/some/where/labeled.arff"));
 writer.write(labeled.toString());
 writer.newLine();
 writer.flush();
 writer.close();

Note on nominal classes:
  • If you're interested in the distribution over all the classes, use the method distributionForInstance(Instance). This method returns a double array with the probability for each class.
  • The returned double value from classifyInstance (or the index in the array returned by distributionForInstance) is just the index for the string values in the attribute. That is, if you want the string representation for the class label returned above clsLabel, then you can print it like this:
System.out.println(clsLabel + " -> " + unlabeled.classAttribute().value((int) clsLabel));
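
Inside the same labeling loop, the class distribution could be printed like this (a minimal sketch, reusing tree and unlabeled from above):
 double[] dist = tree.distributionForInstance(unlabeled.instance(i));
 for (int n = 0; n < dist.length; n++)
   System.out.println(unlabeled.classAttribute().value(n) + ": " + dist[n]);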

Clustering

Clustering is similar to classification. The necessary classes can be found in this package:
 weka.clusterers

Building a Clusterer

Batch

A clusterer is built in much the same way as a classifier, using the buildClusterer(Instances) method instead of buildClassifier(Instances). The following code snippet shows how to build an EM clusterer with a maximum of 100 iterations.
 import weka.clusterers.EM;
 ...
 String[] options = new String[2];
 options[0] = "-I";                 // max. iterations
 options[1] = "100";
 EM clusterer = new EM();   // new instance of clusterer
 clusterer.setOptions(options);     // set the options
 clusterer.buildClusterer(data);    // build the clusterer

Incremental

Clusterers implementing the weka.clusterers.UpdateableClusterer interface can be trained incrementally (available since version 3.5.4). This conserves memory, since the data doesn't have to be loaded into memory all at once. See the Javadoc for this interface to see which clusterers implement it.

The actual process of training an incremental clusterer is fairly simple:
  • Call buildClusterer(Instances) with the structure of the dataset (may or may not contain any actual data rows).
  • Subsequently call the updateClusterer(Instance) method to feed the clusterer new weka.core.Instance objects, one by one.
  • Call updateFinished() after all Instance objects have been processed, for the clusterer to perform additional computations.

Here is an example using data from a weka.core.converters.ArffLoader to train weka.clusterers.Cobweb:
 import weka.clusterers.Cobweb;
 import weka.core.Instance;
 import weka.core.Instances;
 import weka.core.converters.ArffLoader;
 import java.io.File;
 ...
 // load data
 ArffLoader loader = new ArffLoader();
 loader.setFile(new File("/some/where/data.arff"));
 Instances structure = loader.getStructure();
 
 // train Cobweb
 Cobweb cw = new Cobweb();
 cw.buildClusterer(structure);
 Instance current;
 while ((current = loader.getNextInstance(structure)) != null)
   cw.updateClusterer(current);
 cw.updateFinished();

A working example can be found in the Weka Examples collection.

Evaluating

For evaluating a clusterer, you can use the ClusterEvaluation class. In this example, the number of clusters found is written to output:
 import weka.clusterers.ClusterEvaluation;
 import weka.clusterers.Clusterer;
 ...
 ClusterEvaluation eval = new ClusterEvaluation();
 Clusterer clusterer = new EM();                                 // new clusterer instance, default options
 clusterer.buildClusterer(data);                                 // build clusterer
 eval.setClusterer(clusterer);                                   // the clusterer to evaluate
 eval.evaluateClusterer(newData);                                // data to evaluate the clusterer on
 System.out.println("# of clusters: " + eval.getNumClusters());  // output # of clusters

Or, in the case of density-based clusterers, you can cross-validate the clusterer (note: with MakeDensityBasedClusterer you can turn any clusterer into a density-based one):
 import weka.clusterers.ClusterEvaluation;
 import weka.clusterers.DensityBasedClusterer;
 import weka.core.Instances;
 import java.util.Random;
 ...
 Instances data = ...                                     // from somewhere
 DensityBasedClusterer clusterer = new ...                // the clusterer to evaluate
 double logLikelihood =
    ClusterEvaluation.crossValidateModel(                 // cross-validate
    clusterer, data, 10,                                  // with 10 folds
    new Random(1));                                       // and random number generator with seed 1

Or, if you want the same behavior/print-out from command line, use this call:
 import weka.clusterers.EM;
 import weka.clusterers.ClusterEvaluation;
 ...
 String[] options = new String[2];
 options[0] = "-t";
 options[1] = "/some/where/somefile.arff";
 System.out.println(ClusterEvaluation.evaluateClusterer(new EM(), options));

Clustering instances

The only difference with regard to classification is the method name. Instead of classifyInstance(Instance), it is now clusterInstance(Instance). The method for obtaining the distribution is still the same, i.e., distributionForInstance(Instance).
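
A minimal sketch, assuming clusterer was built as in the EM example above and data is a compatible dataset:
 for (int i = 0; i < data.numInstances(); i++) {
   int cluster = clusterer.clusterInstance(data.instance(i));
   System.out.println("instance " + i + " -> cluster " + cluster);
 }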

Classes to clusters evaluation

If your data contains a class attribute and you want to check how well the generated clusters fit the classes, you can perform a so-called classes-to-clusters evaluation. The Weka Explorer offers this functionality, and it's quite easy to implement. These are the necessary steps:
  • load the data and set the class attribute
 Instances data = new Instances(new BufferedReader(new FileReader("/some/where/file.arff")));
 data.setClassIndex(data.numAttributes() - 1);
  • generate the class-less data to train the clusterer with
 weka.filters.unsupervised.attribute.Remove filter = new weka.filters.unsupervised.attribute.Remove();
 filter.setAttributeIndices("" + (data.classIndex() + 1));
 filter.setInputFormat(data);
 Instances dataClusterer = Filter.useFilter(data, filter);
  • train the clusterer, e.g., EM
 EM clusterer = new EM();
 // set further options for EM, if necessary...
 clusterer.buildClusterer(dataClusterer);
  • evaluate the clusterer with the data still containing the class attribute
 ClusterEvaluation eval = new ClusterEvaluation();
 eval.setClusterer(clusterer);
 eval.evaluateClusterer(data);
  • print the results of the evaluation to stdout
 System.out.println(eval.clusterResultsToString());

Attribute selection

There is no real need to use the attribute selection classes directly in your own code, since a meta-classifier and a filter are already available for applying attribute selection, but the low-level approach is still listed for the sake of completeness. The following examples all use CfsSubsetEval and GreedyStepwise (backwards).

Meta-Classifier

The following meta-classifier performs a preprocessing step of attribute selection before the data gets presented to the base classifier (in the example here, this is J48).
  Instances data = ...  // from somewhere
  AttributeSelectedClassifier classifier = new AttributeSelectedClassifier();
  CfsSubsetEval eval = new CfsSubsetEval();
  GreedyStepwise search = new GreedyStepwise();
  search.setSearchBackwards(true);
  J48 base = new J48();
  classifier.setClassifier(base);
  classifier.setEvaluator(eval);
  classifier.setSearch(search);
  // 10-fold cross-validation
  Evaluation evaluation = new Evaluation(data);
  evaluation.crossValidateModel(classifier, data, 10, new Random(1));
  System.out.println(evaluation.toSummaryString());

Filter

The filter approach is straightforward: after setting up the filter, one just filters the data through the filter and obtains the reduced dataset.
  Instances data = ...  // from somewhere
  AttributeSelection filter = new AttributeSelection();  // package weka.filters.supervised.attribute!
  CfsSubsetEval eval = new CfsSubsetEval();
  GreedyStepwise search = new GreedyStepwise();
  search.setSearchBackwards(true);
  filter.setEvaluator(eval);
  filter.setSearch(search);
  filter.setInputFormat(data);
  // generate new data
  Instances newData = Filter.useFilter(data, filter);
  System.out.println(newData);

Low-level

If neither the meta-classifier nor filter approach is suitable for your purposes, you can use the attribute selection classes themselves.
  Instances data = ...  // from somewhere
  AttributeSelection attsel = new AttributeSelection();  // package weka.attributeSelection!
  CfsSubsetEval eval = new CfsSubsetEval();
  GreedyStepwise search = new GreedyStepwise();
  search.setSearchBackwards(true);
  attsel.setEvaluator(eval);
  attsel.setSearch(search);
  attsel.SelectAttributes(data);
  // obtain the attribute indices that were selected
  int[] indices = attsel.selectedAttributes();
  System.out.println(Utils.arrayToString(indices));
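
If you also want the reduced dataset itself, and not just the selected indices, the same object can produce it. A minimal sketch using the reduceDimensionality method of weka.attributeSelection.AttributeSelection:
  Instances reduced = attsel.reduceDimensionality(data);
  System.out.println(reduced.numAttributes() + " attributes remain after selection");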

Note on randomization

Most machine learning schemes, like classifiers and clusterers, are susceptible to the ordering of the data. Using a different seed for randomizing the data will most likely produce a different result. For example, the Explorer, or a classifier/clusterer run from the command line, uses only a seeded java.util.Random number generator, whereas weka.core.Instances.getRandomNumberGenerator(int) (which some of the example classes use) also takes the data into account for seeding. Unless one runs 10-fold cross-validation 10 times and averages the results, one will most likely get different results.
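
A minimal sketch of the two kinds of seeding (both methods exist on weka.core.Instances):
 import java.util.Random;
 ...
 // seeded only by the supplied number, as e.g. the Explorer does:
 data.randomize(new Random(42));
 // seeded by the supplied number and the data itself:
 data.randomize(data.getRandomNumberGenerator(42));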

