
A simple Scrapy demo


    A demo of using Scrapy, kept as a reference for future data scraping.

1. Items

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class RssItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    description = scrapy.Field()
    lastBuildDate = scrapy.Field()
    generator = scrapy.Field()
    language = scrapy.Field()
    copyright = scrapy.Field()
    pubDate = scrapy.Field()
    items = scrapy.Field()

class NodeItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    description = scrapy.Field()
    author = scrapy.Field()
    comments = scrapy.Field()
    pubDate = scrapy.Field()
    guid = scrapy.Field()

class RowItem(scrapy.Item):
    name = scrapy.Field()
    sex = scrapy.Field()
    addr = scrapy.Field()
    email = scrapy.Field()


class GoodsItem(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()
    link = scrapy.Field()
    commnum = scrapy.Field()
    
class NewsLinkItem(scrapy.Item):
    name = scrapy.Field()
    link = scrapy.Field()
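
For reference, a Scrapy item behaves like a dict, except that only declared fields may be assigned. A minimal sketch (the values are illustrative):

item = GoodsItem()
item['name'] = 'example book'
item['price'] = '19.90'
print(dict(item))  # {'name': 'example book', 'price': '19.90'}
# item['foo'] = 1 would raise KeyError: GoodsItem does not support field: foo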


2. Settings

# -*- coding: utf-8 -*-

# Scrapy settings for demo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'demo'

SPIDER_MODULES = ['demo.spiders']
NEWSPIDER_MODULE = 'demo.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'demo (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'demo.middlewares.DemoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'demo.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'demo.pipelines.DemoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
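
These settings apply project-wide. If one spider needs different values, Scrapy also supports per-spider overrides through the custom_settings class attribute; a minimal sketch (the spider name is illustrative):

import scrapy

class SlowSpider(scrapy.Spider):
    name = 'slow'
    # Overrides the project-wide value for this spider only
    custom_settings = {
        'DOWNLOAD_DELAY': 2.0,
    }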


3. Pipeline

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
import codecs

from demo.dbconnect import DbUtil


class DemoPipeline(object):

    def __init__(self):
        # JSON-lines output for the DangDang goods spider
        self.file = codecs.open("mydata.json", 'w', encoding='utf-8')
        # MySQL sink for the news-link spider
        self.dbutil = DbUtil('127.0.0.1', 'longlh', 'solong1980', 'orders')

    def process_item(self, item, spider):
        # Compare spider names with ==; `is` tests object identity and is
        # unreliable for strings.
        if spider.name == 'DangDangSpider':
            # Each field holds a parallel list of values scraped from one page;
            # write one JSON object per line.
            for i in range(len(item['name'])):
                goods = {'name': item['name'][i],
                         'link': item['link'][i],
                         'price': item['price'][i],
                         'commnum': item['commnum'][i]}
                self.file.write(json.dumps(goods, ensure_ascii=False) + '\n')
        if spider.name == 'webcrawl':
            for i in range(len(item['name'])):
                name = item['name'][i]
                link = item['link'][i]
                print(name)
                print(link)
                self.dbutil.add("insert into tbtable(name,link) values('" + name + "','" + link + "')")
        return item

    def close_spider(self, spider):
        self.file.close()
        self.dbutil.close()
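
The insert statement above is built by string concatenation, which breaks on quotes in the data and is open to SQL injection. A safer sketch, using the pymysql connection that DbUtil stores as .conn (defined in the next section):

sql = "insert into tbtable(name, link) values (%s, %s)"
with self.dbutil.conn.cursor() as cur:
    # pymysql escapes the parameters itself
    cur.execute(sql, (name, link))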


4. DB util

# -*- coding: utf-8 -*-
import pymysql
import logging


class DbUtil():
    def __init__(self, host, user, passwd, db, port=3306):
        self.host = host
        self.port = port
        self.user = user
        self.passwd = passwd
        self.db = db
        try:
            # Keyword arguments keep this working across pymysql versions
            self.conn = pymysql.connect(host=host, port=port, user=user,
                                        password=passwd, database=db,
                                        charset='utf8')
        except Exception as e:
            logging.error(str(e))
            raise Exception("connect fail")

    def cursor(self):
        return self.conn.cursor()

    def select(self, sql):
        # Return the result rows rather than the affected-row count
        cur = self.cursor()
        cur.execute(sql)
        return cur.fetchall()

    def add(self, sql):
        self.conn.query(sql)

    def close(self):
        try:
            self.conn.commit()
        except Exception as e:
            print(str(e))
        finally:
            self.conn.close()
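
Example usage of the helper (the credentials are placeholders):

db = DbUtil('127.0.0.1', 'user', 'password', 'orders')
rows = db.select('select name, link from tbtable')
for name, link in rows:
    print(name, link)
db.close()  # commits, then closes the connection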


5. Spider

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from demo.items import NewsLinkItem


class WebcrawlSpider(CrawlSpider):
    name = 'webcrawl'
    allowed_domains = ['sohu.com']
    start_urls = ['http://sports.sohu.com/nba.shtml']

    rules = (
        Rule(LinkExtractor(allow=('.*?/n.*?shtml',), allow_domains=('sohu.com',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = NewsLinkItem()
        i['name'] = response.xpath('/html/head/title/text()').extract()
        i['link'] = response.xpath('//link[@rel="canonical"]/@href').extract()
        return i
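
The spider is normally run from the project directory with the command scrapy crawl webcrawl. It can also be driven from a script; a minimal sketch using Scrapy's CrawlerProcess:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Loads settings.py, so the DemoPipeline above is active
process = CrawlerProcess(get_project_settings())
process.crawl('webcrawl')
process.start()  # blocks until the crawl finishes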

