Scraping JD Book Information with scrapy-redis (Case Study)
Environment: Windows 10, PyCharm, Redis, scrapy-redis
Requirement: scrape book information from JD (jd.com)
Goal: use scrapy-redis to scrape JD book data, including the top-level book category (b_category), the sub-category URL (s_href), the sub-category name (s_cate), the cover image (book_img), the title (book_name), the author (book_author), the publisher (book_press), the publication date (book_publish_date), and the price (book_price).
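For reference, the fields above could be declared in items.py. The spider below actually passes a plain dict through `meta`, so this `JdBookItem` class is just an optional sketch, not part of the original code:

```python
# items.py -- optional sketch of the fields listed above; the spider below
# uses a plain dict, so this class is illustrative rather than required
import scrapy

class JdBookItem(scrapy.Item):
    b_category = scrapy.Field()         # top-level category name
    s_href = scrapy.Field()             # sub-category URL
    s_cate = scrapy.Field()             # sub-category name
    book_img = scrapy.Field()           # cover image URL
    book_name = scrapy.Field()          # title
    book_author = scrapy.Field()        # author(s)
    book_press = scrapy.Field()         # publisher
    book_publish_date = scrapy.Field()  # publication date
    book_price = scrapy.Field()         # price
```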
Step 1: Create the Scrapy project
```
scrapy startproject JD          # create the project
cd JD
scrapy genspider jdbook jd.com  # generate the spider
```
Step 2: jdbook.py (the main spider)
```python
# -*- coding: utf-8 -*-
import scrapy
from copy import deepcopy
import json
import urllib.parse


class JdbookSpider(scrapy.Spider):
    name = 'jdbook'
    allowed_domains = ['jd.com', 'p.3.cn']
    start_urls = ['https://book.jd.com/booksort.html']

    def parse(self, response):
        # Top-level category list
        dt_list = response.xpath("//div[@class='mc']/dl/dt")
        for dt in dt_list:
            item = {}
            item["b_category"] = dt.xpath("./a/text()").extract_first()
            # Sub-category list
            em_list = dt.xpath("./following-sibling::dd[1]/em")
            for em in em_list:
                item["s_href"] = em.xpath("./a/@href").extract_first()
                item["s_cate"] = em.xpath("./a/text()").extract_first()
                if item["s_href"] is not None:
                    item["s_href"] = "https:" + item["s_href"]
                    yield scrapy.Request(
                        item["s_href"],
                        callback=self.parse_book_list,
                        # deepcopy so each request carries its own snapshot
                        # of the shared dict
                        meta={"item": deepcopy(item)}
                    )

    def parse_book_list(self, response):
        # Parse a book list page
        item = response.meta["item"]
        li_list = response.xpath("//div[@id='plist']/ul/li")
        for li in li_list:
            item["book_img"] = li.xpath(".//div[@class='p-img']//img/@src").extract_first()
            if item["book_img"] is None:
                # Lazy-loaded images keep the real URL in data-lazy-img
                item["book_img"] = li.xpath(".//div[@class='p-img']//img/@data-lazy-img").extract_first()
            item["book_img"] = "https:" + item["book_img"] if item["book_img"] is not None else None
            # extract_first("") avoids an AttributeError when the node is missing
            item["book_name"] = li.xpath(".//div[@class='p-name']/a/em/text()").extract_first("").strip()
            item["book_author"] = li.xpath(".//span[@class='author_type_1']/a/text()").extract()
            item["book_press"] = li.xpath(".//span[@class='p-bi-store']/a/@title").extract_first()
            item["book_publish_date"] = li.xpath(".//span[@class='p-bi-date']/text()").extract_first("").strip()
            item["book_sku"] = li.xpath("./div/@data-sku").extract_first()
            # The price comes from a separate JSON endpoint, keyed by SKU
            yield scrapy.Request(
                "https://p.3.cn/prices/mgets?skuIds=J_{}".format(item["book_sku"]),
                callback=self.parse_book_price,
                meta={"item": deepcopy(item)}
            )

        # Pagination of the list page
        next_url = response.xpath("//a[@class='pn-next']/@href").extract_first()
        if next_url is not None:
            next_url = urllib.parse.urljoin(response.url, next_url)
            yield scrapy.Request(
                next_url,
                callback=self.parse_book_list,
                meta={"item": item}
            )

    def parse_book_price(self, response):
        item = response.meta["item"]
        item["book_price"] = json.loads(response.body.decode())[0]["op"]
        yield item  # hand the finished item to the pipeline configured in Step 3
```
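As written, jdbook is a plain scrapy.Spider with a hard-coded start URL. scrapy-redis can instead feed start URLs from Redis, which is handy for distributing the crawl. A minimal sketch of that variant, assuming the conventional `<name>:start_urls` key:

```python
# Optional variant: read start URLs from Redis instead of start_urls
from scrapy_redis.spiders import RedisSpider

class JdbookSpider(RedisSpider):
    name = 'jdbook'
    allowed_domains = ['jd.com', 'p.3.cn']
    redis_key = 'jdbook:start_urls'  # the spider idles until a URL is pushed here
    # parse / parse_book_list / parse_book_price stay exactly as above
```

Seed the queue with `redis-cli lpush jdbook:start_urls https://book.jd.com/booksort.html`; until that push the spider simply waits for input.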
Step 3: Modify settings.py
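The original post does not reproduce the file, but a minimal scrapy-redis configuration usually looks like the sketch below (REDIS_URL assumes a local Redis; adjust to your setup):

```python
# settings.py -- minimal scrapy-redis additions
# Redis-backed dupefilter and scheduler so requests are shared and persisted
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True  # keep the queue/dupefilter across runs

ITEM_PIPELINES = {
    # Stores scraped items in the Redis list "<spider>:items"
    "scrapy_redis.pipelines.RedisPipeline": 400,
}

REDIS_URL = "redis://127.0.0.1:6379"  # assumption: Redis running locally
```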
Step 4: Run the spider

```
scrapy crawl jdbook
```
Finally, check Redis to confirm the scraped data is there.
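With the RedisPipeline from Step 3, items land in the list `jdbook:items` (the scrapy-redis default of `<spider>:items`, serialized as JSON), so a quick check looks like:

```
redis-cli
keys *                    # expect keys such as jdbook:items and jdbook:dupefilter
lrange jdbook:items 0 0   # peek at the first scraped item
```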

The complete code is available at: jdbook
All done, time to celebrate~~