Reposted from: http://www.iteye.com/news/1731

Usable open-source Java spiders built on top of Lucene are rare: spindle has not been updated in a long time and its feature set is incomplete, so I used its source as a reference and rewrote an extensible WebCrawler. I am publishing it here in the spirit of open source and shared progress, and I welcome criticism and corrections; feel free to email me with any comments or suggestions (kaninebruno@hotmail.com).

The code below is based on lucene-2.3.1, htmlparser-1.6, je-analysis-1.5.3, and my own patched cpdetector-1.0.5. Download locations:

- htmlparser: http://sourceforge.net/project/showfiles.php?group_id=24399
- je-analysis: http://www.jesoft.cn/je-analysis-1.5.3.jar
- lucene needs no introduction; the patched cpdetector-1.0.5 is attached to this post.

spindle's official site: http://www.bitmechanic.com/projects/spindle/

Java code:
```java
package com.huizhi.kanine.util;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.UnknownHostException;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashSet;

import jeasy.analysis.MMAnalyzer;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.DateTools;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.store.RAMDirectory;

import org.htmlparser.Parser;
import org.htmlparser.PrototypicalNodeFactory;
import org.htmlparser.filters.AndFilter;
import org.htmlparser.filters.HasAttributeFilter;
import org.htmlparser.filters.NodeClassFilter;
import org.htmlparser.tags.BaseHrefTag;
import org.htmlparser.tags.FrameTag;
import org.htmlparser.tags.LinkTag;
import org.htmlparser.tags.MetaTag;
import org.htmlparser.util.EncodingChangeException;
import org.htmlparser.util.NodeIterator;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;
import org.htmlparser.visitors.HtmlPage;

import cpdetector.io.ASCIIDetector;
import cpdetector.io.CodepageDetectorProxy;
import cpdetector.io.JChardetFacade;
import cpdetector.io.ParsingDetector;
import cpdetector.io.UnicodeDetector;
```
```java
/**
 * @author Zhang Bo (张波)
 * E-mail: kaninebruno@hotmail.com
 * Created On: 2008-03-30
 */
public class SiteCapturer implements Runnable {

    /* Base (seed) URL */
    protected URL mSource;

    /* Location where the index files are stored */
    protected String mTarget;

    /**
     * URLs waiting to be parsed; every newly detected link is added here.
     * URLs are taken out linearly, first-in first-out (FIFO).
     */
    protected ArrayList mPages;

    /* URLs already parsed, kept to avoid fetching a link twice */
    protected HashSet mFinished;

    protected Parser mParser;

    /* Buffer size for StringBuffer */
    protected final int TRANSFER_SIZE = 4096;

    /* Line separator of the current platform */
    protected static String lineSep = System.getProperty("line.separator");

    /* Number of worker threads, 2 by default */
    protected int mthreads;

    protected ArrayList threadList;

    /* IndexWriter backed by the file system */
    protected IndexWriter FSDWriter;

    /* IndexWriter backed by RAM */
    protected IndexWriter RAMWriter;

    protected IndexSearcher indexSearcher;

    protected RAMDirectory ramDirectory;

    /* Analyzer used to tokenize page content */
    protected Analyzer luceneAnalyzer;

    /* Character encoding used when parsing pages */
    protected String charset;

    /* Number of pages captured so far */
    protected int count = 0;

    /* Base port */
    protected int mPort;

    /* Base host */
    protected String mHost;

    /* Whether to check the index for the current URL, to avoid re-fetching it */
    protected boolean mCheck;

    /* Write lock for index operations */
    public static final Object indexLock = new Object();

    public SiteCapturer() {
        mSource = null;
        mTarget = null;
        mthreads = 2;
        mCheck = false;
        mPages = new ArrayList();
        mFinished = new HashSet();
        mParser = new Parser();
        PrototypicalNodeFactory factory = new PrototypicalNodeFactory();
        factory.registerTag(new LocalLinkTag());
        factory.registerTag(new LocalFrameTag());
        factory.registerTag(new LocalBaseHrefTag());
        mParser.setNodeFactory(factory);
    }
```
```java
    public String getSource() {
        return mSource.toString();
    }

    public void setSource(String source) {
        if (source.endsWith("/"))
            source = source.substring(0, source.length() - 1);
        try {
            mSource = new URL(source);
        } catch (MalformedURLException e) {
            System.err.println("Invalid URL : " + getSource());
        }
    }

    public String getTarget() {
        return (mTarget);
    }

    public void setTarget(String target) {
        mTarget = target;
    }

    public int getThreads() {
        return (mthreads);
    }

    public void setThreads(int threads) {
        mthreads = threads;
    }

    public boolean isMCheck() {
        return mCheck;
    }

    public void setMCheck(boolean check) {
        mCheck = check;
    }
```
```java
    /**
     * Entry point. Initializes mPages and the IndexWriters, coordinates the
     * worker threads that crawl the website, and, once the crawl finishes,
     * merges all index segments into one to optimize retrieval.
     */
    public void capture() {

        mPages.clear();
        mPages.add(getSource());

        int responseCode = 0;
        String contentType = "";

        try {
            HttpURLConnection uc = (HttpURLConnection) mSource.openConnection();
            responseCode = uc.getResponseCode();
            contentType = uc.getContentType();
        } catch (MalformedURLException mue) {
            System.err.println("Invalid URL : " + getSource());
        } catch (IOException ie) {
            if (ie instanceof UnknownHostException) {
                System.err.println("UnknownHost : " + getSource());
            } else if (ie instanceof SocketException) {
                System.err.println("Socket Error : " + ie.getMessage() + " "
                        + getSource());
            } else
                ie.printStackTrace();
        }

        if (responseCode == HttpURLConnection.HTTP_OK
                && contentType.startsWith("text/html")) {
            mPort = mSource.getPort();
            mHost = mSource.getHost();
            charset = autoDetectCharset(mSource);

            /* Directory holding the index files */
            File indexDir = new File(mTarget);
            /* Whether to build the index from scratch; true means rebuild */
            boolean flag = true;
            if (!indexDir.exists()) {
                /* Create the directory if it does not exist */
                indexDir.mkdir();
            } else if (IndexReader.indexExists(mTarget)) {
                /* If an index already exists, append to it */
                flag = false;
                File lockfile = new File(mTarget + File.separator + "write.lock");
                if (lockfile.exists())
                    lockfile.delete();
            }

            luceneAnalyzer = new MMAnalyzer();
            ramDirectory = new RAMDirectory();

            try {
                FSDWriter = new IndexWriter(indexDir, luceneAnalyzer, flag);
                RAMWriter = new IndexWriter(ramDirectory, luceneAnalyzer, true);

                /* The original post had "while (mCheck)" here, which would
                 * loop forever since the body never changes mCheck; an "if"
                 * is what the logic calls for. */
                if (mCheck) {
                    IndexReader indexReader = IndexReader.open(mTarget);
                    indexSearcher = new IndexSearcher(indexReader);
                }

                long start = System.currentTimeMillis();
                threadList = new ArrayList();

                for (int i = 0; i < mthreads; i++) {
                    Thread t = new Thread(this, "K-9 Spider Thread #" + (i + 1));
                    t.start();
                    threadList.add(t);
                }
                while (threadList.size() > 0) {
                    Thread child = (Thread) threadList.remove(0);
                    try {
                        child.join();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }

                long elapsed = System.currentTimeMillis() - start;

                RAMWriter.close();
                FSDWriter.addIndexes(new Directory[] { ramDirectory });
                FSDWriter.optimize();
                FSDWriter.close();

                System.out.println("Finished in " + (elapsed / 1000)
                        + " seconds");
                System.out.println("The Count of the Links Captured is "
                        + count);
            } catch (CorruptIndexException cie) {
                cie.printStackTrace();
            } catch (LockObtainFailedException lofe) {
                lofe.printStackTrace();
            } catch (IOException ie) {
                ie.printStackTrace();
            }
        }
    }
```
```java
    public void run() {
        String url;
        while ((url = dequeueURL()) != null) {
            if (isToBeCaptured(url))
                process(url);
        }
        mthreads--;
    }
```
```java
    /**
     * Decides whether an extracted link qualifies for parsing: its port and
     * host must match the base URL, and its content type must be text/html
     * or text/plain.
     */
    public boolean isToBeCaptured(String url) {
        boolean flag = false;

        HttpURLConnection uc = null;
        int responseCode = 0;
        String contentType = "";
        String host = "";
        int port = 0;

        try {
            URL source = new URL(url);
            String protocol = source.getProtocol();
            if (protocol != null && protocol.equals("http")) {
                host = source.getHost();
                port = source.getPort();
                uc = (HttpURLConnection) source.openConnection();
                uc.setConnectTimeout(8000);
                responseCode = uc.getResponseCode();
                contentType = uc.getContentType();
            }
        } catch (MalformedURLException mue) {
            System.err.println("Invalid URL : " + url);
        } catch (IOException ie) {
            if (ie instanceof UnknownHostException) {
                System.err.println("UnknownHost : " + url);
            } else if (ie instanceof SocketException) {
                System.err.println("Socket Error : " + ie.getMessage() + " "
                        + url);
            } else if (ie instanceof SocketTimeoutException) {
                System.err.println("Socket Connection Time Out : " + url);
            } else if (ie instanceof FileNotFoundException) {
                System.err.println("broken link "
                        + ((FileNotFoundException) ie.getCause()).getMessage()
                        + " ignored");
            } else
                ie.printStackTrace();
        }

        if (port == mPort
                && responseCode == HttpURLConnection.HTTP_OK
                && host.equals(mHost)
                && (contentType.startsWith("text/html") || contentType
                        .startsWith("text/plain")))
            flag = true;

        return flag;
    }
```
```java
    /* Takes a single URL off the mPages queue */
    public synchronized String dequeueURL() {
        while (true) {
            if (mPages.size() > 0) {
                String url = (String) mPages.remove(0);
                mFinished.add(url);

                if (isToBeCaptured(url)) {
                    int bookmark;
                    NodeList list;
                    NodeList robots;
                    MetaTag robot;
                    String content;
                    try {
                        bookmark = mPages.size();
                        /* Fetch all nodes of the page */
                        mParser.setURL(url);
                        try {
                            list = new NodeList();
                            for (NodeIterator e = mParser.elements(); e
                                    .hasMoreNodes();)
                                list.add(e.nextNode());
                        } catch (EncodingChangeException ece) {
                            /* Recover from an encoding change mid-parse */
                            mParser.reset();
                            list = new NodeList();
                            for (NodeIterator e = mParser.elements(); e
                                    .hasMoreNodes();)
                                list.add(e.nextNode());
                        }
                        /*
                         * Handle the robots meta tag as described at
                         * http://www.robotstxt.org/wc/meta-user.html
                         */
                        robots = list
                                .extractAllNodesThatMatch(
                                        new AndFilter(new NodeClassFilter(
                                                MetaTag.class),
                                                new HasAttributeFilter("name",
                                                        "robots")), true);
                        if (0 != robots.size()) {
                            robot = (MetaTag) robots.elementAt(0);
                            content = robot.getAttribute("content")
                                    .toLowerCase();
                            if ((-1 != content.indexOf("none"))
                                    || (-1 != content.indexOf("nofollow")))
                                /* Drop every link this page added. (The original
                                 * post used a forward for-loop with remove(i),
                                 * which skips every other element as the list
                                 * shrinks; removing at a fixed index is correct.) */
                                while (mPages.size() > bookmark)
                                    mPages.remove(bookmark);
                        }
                    } catch (ParserException pe) {
                        pe.printStackTrace();
                    }
                }
                return url;
            } else {
                mthreads--;
                if (mthreads > 0) {
                    try {
                        wait();
                        mthreads++;
                    } catch (InterruptedException ie) {
                        ie.printStackTrace();
                    }
                } else {
                    notifyAll();
                    return null;
                }
            }
        }
    }
```
```java
    /**
     * Processes a single URL: parses the page and adds it to the Lucene
     * index. Automatic charset detection keeps the crawl running even on
     * pages with unusual encodings.
     */
    protected void process(String url) {

        String result[];
        String content = null;
        String title = null;

        /* This check is fairly expensive, so it is off by default */
        if (mCheck) {
            try {
                TermQuery query = new TermQuery(new Term("url", url));
                Hits hits = indexSearcher.search(query);
                if (hits.length() > 0) {
                    System.out.println("The URL : " + url
                            + " has already been captured");
                } else {
                    result = parseHtml(url, charset);
                    content = result[0];
                    title = result[1];
                }
            } catch (IOException ie) {
                ie.printStackTrace();
            }
        } else {
            result = parseHtml(url, charset);
            content = result[0];
            title = result[1];
        }

        if (content != null && content.trim().length() > 0) {
            Document document = new Document();
            document.add(new Field("content", content, Field.Store.YES,
                    Field.Index.TOKENIZED,
                    Field.TermVector.WITH_POSITIONS_OFFSETS));
            document.add(new Field("url", url, Field.Store.YES,
                    Field.Index.UN_TOKENIZED));
            document.add(new Field("title", title, Field.Store.YES,
                    Field.Index.TOKENIZED,
                    Field.TermVector.WITH_POSITIONS_OFFSETS));
            document.add(new Field("date", DateTools.timeToString(new Date()
                    .getTime(), DateTools.Resolution.DAY), Field.Store.YES,
                    Field.Index.UN_TOKENIZED));

            synchronized (indexLock) {
                try {
                    RAMWriter.addDocument(document);
                    /*
                     * When the RAM-resident index grows past the threshold,
                     * flush it to disk. Buffering in memory like this avoids
                     * frequent I/O and speeds up index creation.
                     */
                    if (RAMWriter.ramSizeInBytes() > 512 * 1024) {
                        RAMWriter.close();
                        FSDWriter.addIndexes(new Directory[] { ramDirectory });
                        RAMWriter = new IndexWriter(ramDirectory,
                                luceneAnalyzer, true);
                    }
                    count++;
                    System.out.println(Thread.currentThread().getName()
                            + ": Finished Indexing URL: " + url);
                } catch (CorruptIndexException cie) {
                    cie.printStackTrace();
                } catch (IOException ie) {
                    ie.printStackTrace();
                }
            }
        }
    }
```
```java
    /**
     * Link tag that rewrites the HREF.
     * The HREF is changed to a local target if it matches the source.
     */
    class LocalLinkTag extends LinkTag {
        public void doSemanticAction() {
            String link = getLink();
            if (link.endsWith("/"))
                link = link.substring(0, link.length() - 1);
            int pos = link.indexOf("#");
            if (pos != -1)
                link = link.substring(0, pos);
            /* Add the link to the processing queue */
            if (!(mFinished.contains(link) || mPages.contains(link)))
                mPages.add(link);
            setLink(link);
        }
    }

    /**
     * Frame tag that rewrites the SRC URLs. The SRC URLs are mapped to local
     * targets if they match the source.
     */
    class LocalFrameTag extends FrameTag {
        public void doSemanticAction() {
            String link = getFrameLocation();
            if (link.endsWith("/"))
                link = link.substring(0, link.length() - 1);
            int pos = link.indexOf("#");
            if (pos != -1)
                link = link.substring(0, pos);
            /* Add the link to the processing queue */
            if (!(mFinished.contains(link) || mPages.contains(link)))
                mPages.add(link);
            setFrameLocation(link);
        }
    }

    /**
     * Base tag that doesn't show. The toHtml() method is overridden to return
     * an empty string, effectively shutting off the base reference.
     */
    class LocalBaseHrefTag extends BaseHrefTag {
        public String toHtml() {
            return ("");
        }
    }
```
```java
    /* Auto-detect the page encoding, to avoid garbled Chinese text */
    protected String autoDetectCharset(URL url) {

        CodepageDetectorProxy detector = CodepageDetectorProxy.getInstance();
        /*
         * ParsingDetector inspects the encoding of HTML, XML and similar
         * files or character streams; the constructor argument controls
         * whether detection details are printed (false = silent).
         */
        detector.add(new ParsingDetector(false));
        detector.add(JChardetFacade.getInstance());
        detector.add(ASCIIDetector.getInstance());
        detector.add(UnicodeDetector.getInstance());

        Charset charset = null;
        try {
            charset = detector.detectCodepage(url);
        } catch (MalformedURLException mue) {
            mue.printStackTrace();
        } catch (IOException ie) {
            ie.printStackTrace();
        }
        if (charset == null)
            charset = Charset.defaultCharset();
        return charset.name();
    }

    /* Parses a standard HTML page with the given encoding, ready for indexing */
    protected String[] parseHtml(String url, String charset) {

        String result[] = null;
        String content = null;

        // (The listing in the original post breaks off here; the rest of
        // parseHtml and the closing brace of the class were lost.)
```
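The `LocalLinkTag` and `LocalFrameTag` hooks above both normalize a link the same way before queueing it: strip one trailing slash, then cut everything from the first `#`. Pulled out of the HtmlParser callbacks, the rule is only a few lines; the class and method names here are mine, not from the post:

```java
public class LinkNormalizer {

    /**
     * Mirrors the normalization in LocalLinkTag.doSemanticAction():
     * drop one trailing "/", then drop any "#fragment".
     */
    public static String normalize(String link) {
        if (link.endsWith("/"))
            link = link.substring(0, link.length() - 1);
        int pos = link.indexOf('#');
        if (pos != -1)
            link = link.substring(0, pos);
        return link;
    }

    public static void main(String[] args) {
        System.out.println(normalize("http://example.com/"));       // http://example.com
        System.out.println(normalize("http://example.com/a#top"));  // http://example.com/a
    }
}
```

Two things worth noting. First, because the slash check runs before the fragment cut, `"http://example.com/a/#top"` normalizes to `"http://example.com/a/"` with the slash intact; that quirk is faithful to the original order. Second, the dedup test `mFinished.contains(link) || mPages.contains(link)` does a linear scan over the `ArrayList` frontier; for a large crawl, a `HashSet` mirror of the queue would keep that check O(1).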
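`autoDetectCharset` chains four cpdetector strategies and falls back to the platform default when none of them decides. If cpdetector is not available, the same try-then-fall-back shape can be sketched with nothing but `java.nio`: attempt a strict UTF-8 decode and use a caller-supplied fallback when it fails. This is a simplified stand-in of my own, not what the post's code does:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class CharsetGuess {

    /** Returns "UTF-8" if the bytes decode strictly as UTF-8, else the fallback name. */
    public static String guess(byte[] raw, String fallback) {
        CharsetDecoder utf8 = Charset.forName("UTF-8").newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            utf8.decode(ByteBuffer.wrap(raw));  // throws on any malformed byte
            return "UTF-8";
        } catch (CharacterCodingException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(guess("plain ascii".getBytes(), "GBK"));   // UTF-8
        // 0xC3 is a UTF-8 lead byte with no continuation: malformed
        System.out.println(guess(new byte[] { (byte) 0xC3 }, "GBK")); // GBK
    }
}
```

Real detection (cpdetector, or jchardet on its own) is statistical and far more robust; this sketch only shows why a hard fallback like `Charset.defaultCharset()` is needed at the end of the chain.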
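`dequeueURL` records the queue length (`bookmark`) before parsing a page, and if the page carries `<meta name="robots" content="none">` or `nofollow`, it discards every link that page just contributed. Here is that bookkeeping isolated on a plain `List`, directive check included; the helper names are mine. Removing at a fixed index until the list shrinks back to the bookmark is the safe way to truncate:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RobotsFilter {

    /** True if a robots meta "content" value forbids following links. */
    public static boolean forbidsFollow(String content) {
        String c = content.toLowerCase();
        return c.indexOf("none") != -1 || c.indexOf("nofollow") != -1;
    }

    /** Drops every element queued at or after the bookmark position. */
    public static void truncateFrom(List<String> pages, int bookmark) {
        while (pages.size() > bookmark)
            pages.remove(bookmark);
    }

    public static void main(String[] args) {
        List<String> pages = new ArrayList<String>(
                Arrays.asList("kept", "added1", "added2", "added3"));
        int bookmark = 1;              // queue length before parsing the page
        if (forbidsFollow("NOINDEX, NOFOLLOW"))
            truncateFrom(pages, bookmark);
        System.out.println(pages);     // [kept]
    }
}
```

Per the robotstxt.org convention the post cites, `none` is shorthand for `noindex, nofollow`; a substring check like this also matches `nofollow` inside `follow`-less values only, since `"follow"` alone contains neither token.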
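The trickiest part of the class is the `wait()`/`notifyAll()` dance in `dequeueURL` that lets a worker block while the queue is empty, yet shuts every worker down once all of them are idle. The same protocol, minus the fetching and parsing, fits in a small self-contained class. This is a sketch with my own names, not code from the post:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Minimal FIFO crawl frontier with the idle-detection protocol of
 * SiteCapturer.dequeueURL(): a worker that finds the queue empty goes
 * idle; once every worker is idle, all are woken up to return null.
 */
public class Frontier {
    private final List<String> pages = new ArrayList<String>();
    private final Set<String> finished = new HashSet<String>();
    private int active;

    public Frontier(int workers) { this.active = workers; }

    public synchronized void enqueue(String url) {
        if (!finished.contains(url) && !pages.contains(url)) {
            pages.add(url);
            notifyAll();            // wake any idle workers
        }
    }

    /** Returns the next URL, or null when the crawl is finished. */
    public synchronized String dequeue() throws InterruptedException {
        while (true) {
            if (!pages.isEmpty()) {
                String url = pages.remove(0);
                finished.add(url);
                return url;
            }
            active--;               // this worker goes idle
            if (active == 0) {      // everyone idle: crawl is over
                notifyAll();
                return null;
            }
            wait();                 // sleep until new work or shutdown
            active++;               // woken up: back in play
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Frontier f = new Frontier(2);
        f.enqueue("http://example.com/a");
        f.enqueue("http://example.com/b");
        f.enqueue("http://example.com/c");
        final List<String> seen =
                Collections.synchronizedList(new ArrayList<String>());
        Runnable worker = new Runnable() {
            public void run() {
                try {
                    String url;
                    while ((url = f.dequeue()) != null)
                        seen.add(url);          // stand-in for process(url)
                } catch (InterruptedException ignored) { }
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(seen.size());        // 3
    }
}
```

The key property is that a lone non-idle worker cannot deadlock: when the last active worker sees an empty queue, its `notifyAll()` cascades through the sleepers, each of which re-checks the queue, finds it empty, and returns null in turn. The original achieves this with the shared `mthreads` counter; a `java.util.concurrent` executor with a termination latch would be the modern equivalent.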