The following is the configuration file Spider.properties, found in the config folder under the weblech directory:
# Spider configuration file
#
# All of these settings default to sensible values if not specified.
# Directory in which to save downloaded files, defaults to "."
saveRootDirectory = c:/weblech/sites
# Filename in which to save mailto links
mailtoLogFile = mailto.txt
# Tell the spider to reload HTML pages each time, but not images
# or other files
refreshHTMLs = true
refreshImages = false
refreshOthers = false
# Set the extensions the Spider should use to determine which
# pages are of MIME type text/html. The Spider also learns new
# types as it downloads them.
htmlExtensions = htm,html,shtm,shtml
# Similarly for MIME type image/*
imageExtensions = gif,jpg,jpeg,png,bmp
# URL at which we should start the spider
startLocation = http://www.slashdot.org/
# Whether to do depth first search, or the default breadth
# first search when finding URLs to download
depthFirst = false
# Maximum depth of pages to retrieve (the first page is depth
# 0, links from there depth 1, etc). Setting to 0 is "unlimited"
maxDepth = 2
# Basic URL filtering. URLs must contain this string in order
# to be downloaded by WebLech
urlMatch = slashdot.org
# Basic URL prioritisation. URLs which are "interesting" are
# downloaded first, URLs which are "boring" last.
interestingURLs = pollBooth.pl,faq
boringURLs = article.pl
# User Agent header
userAgent = Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
# Username and password for basic HTTP authentication, if required.
# The same username and password will be used for all authentication
# challenges during a download session.
basicAuthUser = myUser
basicAuthPassword = 1234
# Number of download threads to start
spiderThreads = 1
# How often to checkpoint the Spider. A checkpoint file is named
# "spider.checkpoint" and can be used to start the spider in the
# middle of a run. Setting this value to 0 disables checkpoints.
# Here we checkpoint every 30 seconds
checkpointInterval = 30000
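
The comment at the top of the file notes that every setting falls back to a sensible default when omitted. A quick way to see how that style of fallback works is java.util.Properties, whose getProperty(key, default) returns the default whenever a key is absent. The sketch below is illustrative only, not WebLech's actual configuration loader; the class name SpiderConfigDemo and the relative path config/Spider.properties are assumptions for the example.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Minimal sketch: load Spider.properties and read keys with fallbacks.
// Illustrative only -- this is not WebLech's own loading code.
public class SpiderConfigDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config/Spider.properties")) {
            props.load(in);
        }

        // getProperty(key, default) supplies the fallback when a key is
        // missing, mirroring the "defaults to sensible values" behaviour
        // described in the file's opening comment.
        String saveRoot    = props.getProperty("saveRootDirectory", ".");
        int maxDepth       = Integer.parseInt(props.getProperty("maxDepth", "0"));
        boolean depthFirst = Boolean.parseBoolean(props.getProperty("depthFirst", "false"));
        int threads        = Integer.parseInt(props.getProperty("spiderThreads", "1"));

        System.out.println("saveRootDirectory = " + saveRoot);
        System.out.println("maxDepth          = " + maxDepth);
        System.out.println("depthFirst        = " + depthFirst);
        System.out.println("spiderThreads     = " + threads);
    }
}

To run an actual crawl, WebLech 0.0.3's console launcher is started with the path to this properties file as its argument (the bundled documentation names weblech.ui.TextSpider as the entry point).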