Easy Web Scraping With Scrapy
Kevin Sahin
Posted on December 18, 2019
In the previous post about Web Scraping with Python we talked a bit about Scrapy. In this post we are going to dig a little bit deeper into it.
Scrapy is a wonderful open source Python web scraping framework. It handles the most common use cases when doing web scraping at scale:
- Multithreading
- Crawling (going from link to link)
- Extracting the data
- Validating
- Saving to different formats / databases
- And many more
The main difference between Scrapy and other commonly used libraries like Requests / BeautifulSoup is that it is opinionated. It allows you to solve the usual web scraping problems in an elegant way.
The downside of Scrapy is that the learning curve is steep: there is a lot to learn. But that is what we are here for :)
In this tutorial we will create two different web scrapers, a simple one that will extract data from an E-commerce product page, and a more "complex" one that will scrape an entire E-commerce catalog!
Basic overview
You can install Scrapy using pip. Be careful though: the Scrapy documentation strongly suggests installing it in a dedicated virtual environment to avoid conflicts with your system packages.
I'm using Virtualenv and Virtualenvwrapper:
mkvirtualenv scrapy_env
and
pip install Scrapy
You can now create a new Scrapy project with this command:
scrapy startproject product_scraper
This will create all the necessary boilerplate files for the project.
├── product_scraper
│ ├── __init__.py
│ ├── __pycache__
│ ├── items.py
│ ├── middlewares.py
│ ├── pipelines.py
│ ├── settings.py
│ └── spiders
│ ├── __init__.py
│ └── __pycache__
└── scrapy.cfg
Here is a brief overview of these files and folders:
- items.py is a model for the extracted data. You can define a custom model (like a Product) that inherits from the Scrapy Item class.
- middlewares.py holds the middlewares used to hook into the request / response lifecycle. For example, you could create a middleware to rotate user-agents, or to use an API like ScrapingBee instead of doing the requests yourself.
- pipelines.py In Scrapy, pipelines are used to process the extracted data: clean the HTML, validate the data, and export it to a custom format or save it to a database.
- /spiders is a folder containing Spider classes. With Scrapy, Spiders are classes that define how a website should be scraped, including which links to follow and how to extract the data from those links.
- scrapy.cfg is a configuration file to change some settings
Scraping a single product
In this example we are going to scrape a single product from a dummy E-commerce website. Here is the first product we are going to scrape:
https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/
We are going to extract the product name, picture and price.
Scrapy Shell
Scrapy comes with a built-in shell that helps you try and debug your scraping code in real time. You can quickly test your XPath expressions / CSS selectors with it. It's a very cool tool to write your web scrapers and I always use it!
You can configure Scrapy Shell to use another console, such as IPython, instead of the default Python console. You will get autocompletion and other nice perks like colorized output.
In order to use it in your Scrapy Shell, you need to add this line to your scrapy.cfg file:
shell = ipython
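The line goes in the [settings] section, so scrapy.cfg ends up looking something like this (the default line was generated by startproject):
[settings]
default = product_scraper.settings
shell = ipython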
Once it's configured, you can start using scrapy shell:
$ scrapy shell --nolog
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x108147eb8>
[s] item {}
[s] settings <scrapy.settings.Settings object at 0x108d10978>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
In [1]:
We can start fetching a URL by simply typing:
fetch('https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/')
This will start by fetching the /robots.txt file.
[scrapy.core.engine] DEBUG: Crawled (404) <GET https://clever-lichterman-044f16.netlify.com/robots.txt> (referer: None)
In this case there isn't any robots.txt, which is why we see a 404 HTTP code. If there were a robots.txt, by default Scrapy would follow its rules.
You can disable this behavior by changing this setting in settings.py:
ROBOTSTXT_OBEY = False
You should then have a log like this:
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/> (referer: None)
You can now see your response object, response headers, and try different XPath expressions / CSS selectors to extract the data you want.
You can see the response directly in your browser with:
view(response)
Note that the page will render badly inside your browser, for lots of different reasons: CORS issues, JavaScript code that didn't execute, or relative URLs for assets that won't work locally.
The Scrapy shell is like a regular Python shell, so don't hesitate to load your favorite scripts/functions into it.
Extracting Data
Scrapy doesn't execute any JavaScript by default, so if the website you are trying to scrape uses a frontend framework like Angular / React.js, you could have trouble accessing the data you want.
Now let's try some XPath expressions to extract the product title and price.
In order to extract the price, we are going to use an XPath expression that selects the first span after the div with the class my-4:
In [16]: response.xpath("//div[@class='my-4']/span/text()").get()
Out[16]: '20.00$'
I could also use a CSS selector:
In [21]: response.css('.my-4 span::text').get()
Out[21]: '20.00$'
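While we are in the shell, we can also check the title selector that our spider will use later (the output matches this product page):
In [22]: response.xpath('//section[1]//h2/text()').get()
Out[22]: 'Taba Cream'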
Creating a Scrapy Spider
With Scrapy, Spiders are classes where you define your crawling (what links / URLs need to be scraped) and scraping (what to extract) behavior.
Here are the different steps used by a spider to scrape a website:
- It starts by looking at the class attribute start_urls, and calls these URLs with the start_requests() method. You could override this method if you need to change the HTTP verb or add some parameters to the request (for example, sending a POST request instead of a GET).
- It will then generate a Request object for each URL, and send the response to the callback function parse().
- The parse() method will then extract the data (in our case, the product price, image and title) and return either a dictionary, an Item object, a Request, or an iterable.
You may wonder why the parse method can return so many different objects. It's for flexibility. Let's say you want to scrape an E-commerce website that doesn't have any sitemap. You could start by scraping the product categories, so this would be a first parse method.
This method would then yield a Request object for each product category, pointing to a new callback method parse2(). For each category you would also need to handle pagination. Then, for each product, a third parse method would do the actual scraping and generate an Item, as sketched below.
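Here is a sketch of what that chain could look like (the selectors and the parse2() / parse3() method names are hypothetical; the real ones depend on the target website's markup):
# Methods of a scrapy.Spider subclass
def parse(self, response):
    # First level: follow each product category
    for href in response.xpath('//a[@class="category-link"]/@href').getall():
        yield response.follow(href, callback=self.parse2)

def parse2(self, response):
    # Second level: follow each product, and handle the pagination
    for href in response.xpath('//a[@class="product-link"]/@href').getall():
        yield response.follow(href, callback=self.parse3)
    next_page = response.xpath('//a[@rel="next"]/@href').get()
    if next_page:
        yield response.follow(next_page, callback=self.parse2)

def parse3(self, response):
    # Third level: the actual scraping, generating an item (here a plain dict)
    yield {
        'title': response.xpath('//h1/text()').get(),
        'price': response.xpath('//span[@class="price"]/text()').get(),
    }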
With Scrapy you can return the scraped data as a simple Python dictionary, but it is a good idea to use the built-in Scrapy Item class.
It's a simple container for our scraped data, and Scrapy will look at this item's fields for many things, like exporting the data to different formats (JSON / CSV...), the item pipeline, etc.
So here is a basic Product class:
import scrapy

class Product(scrapy.Item):
    product_url = scrapy.Field()
    price = scrapy.Field()
    title = scrapy.Field()
    img_url = scrapy.Field()
Now we can generate a spider, either with the command line helper:
scrapy genspider myspider mydomain.com
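This generates a skeleton spider inside the /spiders folder, roughly like the following (the exact template varies with your Scrapy version):
# -*- coding: utf-8 -*-
import scrapy

class MyspiderSpider(scrapy.Spider):
    name = 'myspider'
    allowed_domains = ['mydomain.com']
    start_urls = ['http://mydomain.com/']

    def parse(self, response):
        pass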
Or you can do it manually and put your Spider's code inside the /spiders directory.
There are different types of Spiders in Scrapy to solve the most common web scraping use cases:
- Spider, the one we will use here. It takes a start_urls list and scrapes each one with a parse method.
- CrawlSpider, which follows links defined by a set of rules.
- SitemapSpider, which extracts URLs defined in a sitemap.
- And many more.
# -*- coding: utf-8 -*-
import scrapy
from product_scraper.items import Product

class EcomSpider(scrapy.Spider):
    name = 'ecom_spider'
    allowed_domains = ['clever-lichterman-044f16.netlify.com']
    start_urls = ['https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/']

    def parse(self, response):
        item = Product()
        item['product_url'] = response.url
        item['price'] = response.xpath("//div[@class='my-4']/span/text()").get()
        item['title'] = response.xpath('//section[1]//h2/text()').get()
        item['img_url'] = response.xpath("//div[@class='product-slider']//img/@src").get()
        return item
In this EcomSpider class, there are two required attributes:
- name, which is our Spider's name (you can run it with scrapy crawl ecom_spider from the project directory).
- start_urls, which is the starting URL.
The allowed_domains attribute is optional, but important when you use a CrawlSpider that could follow links on different domains.
Then I've just populated the Product fields using XPath expressions to extract the data I wanted, as we saw earlier, and returned the item.
You can run this code as follows to export the result to JSON (you could also export to CSV):
scrapy runspider ecom_spider.py -o product.json
You should then get a nice JSON file:
[
{
"product_url": "https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/",
"price": "20.00$",
"title": "Taba Cream",
"img_url": "https://clever-lichterman-044f16.netlify.com/images/products/product-2.png"
}
]
Item loaders
There are two common problems that you can face while extracting data from the Web:
- For the same website, the page layout and underlying HTML can be different. If you scrape an E-commerce website, you will often have a regular price and a discounted price, with different XPath / CSS selectors.
- The data can be dirty and need some kind of post-processing; again, for an E-commerce website, it could be the way the prices are displayed ($1.00, $1, $1,00).
Scrapy comes with a built-in solution for this, ItemLoaders.
It's an interesting way to populate our Product object.
You can add several XPath expressions to the same Item field, and the loader will try them sequentially. By default, if several XPaths match, it will load all of the extracted values into a list.
You can find many examples of input and output processors in the Scrapy documentation.
It's really useful when you need to transform/clean the data you extract.
For example: extracting the currency from a price, or converting one unit into another (centimeters into meters, Celsius degrees into Fahrenheit)...
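As a sketch of such a processor (the celsius_to_fahrenheit function and the temperature field are hypothetical, not part of Scrapy):
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose

def celsius_to_fahrenheit(value):
    # Values arrive as strings extracted from the page, e.g. '20'
    return str(float(value) * 9 / 5 + 32)

class TemperatureLoader(ItemLoader):
    # Runs on every value extracted for a hypothetical 'temperature' field
    temperature_in = MapCompose(celsius_to_fahrenheit)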
In our webpage we can find the product title with two different XPath expressions: //title and //section[1]//h2/text().
Here is how you could use an ItemLoader in this case:
from scrapy.loader import ItemLoader

def parse(self, response):
    l = ItemLoader(item=Product(), response=response)
    l.add_xpath('price', "//div[@class='my-4']/span/text()")
    l.add_xpath('title', '//section[1]//h2/text()')
    l.add_xpath('title', '//title')
    l.add_value('product_url', response.url)
    return l.load_item()
Generally you only want the first matching XPath, so you will need to add output_processor=TakeFirst() to your Item field's constructor.
In our case we only want the first matching XPath for each field, so a better approach would be to create our own ItemLoader and declare a default output_processor to take the first matching XPath:
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose

def remove_dollar_sign(value):
    return value.replace('$', '')

class ProductLoader(ItemLoader):
    default_output_processor = TakeFirst()
    price_in = MapCompose(remove_dollar_sign)
I also added a price_in, which is an input processor that deletes the dollar sign from the price. I'm using MapCompose, a built-in processor that takes one or several functions to be executed sequentially, and you can chain as many functions as you like. The convention is to add _in or _out to your Item field's name to attach an input or output processor to it.
There are many more processors; you can learn more about them in the documentation.
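To put it all together, the earlier parse() method only needs to swap ItemLoader for our ProductLoader (a sketch, assuming the loader is saved as product_scraper/productloader.py, which matches the import used in the crawl spider below):
from product_scraper.productloader import ProductLoader
from product_scraper.items import Product

def parse(self, response):
    l = ProductLoader(item=Product(), response=response)
    l.add_xpath('price', "//div[@class='my-4']/span/text()")  # price_in strips the '$'
    l.add_xpath('title', '//section[1]//h2/text()')
    l.add_xpath('title', '//title')  # fallback, TakeFirst keeps the first match
    l.add_value('product_url', response.url)
    return l.load_item()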
Scraping multiple pages
Now that we know how to scrape a single page, it's time to learn how to scrape multiple pages, like the entire product catalog.
As we saw earlier there are different kinds of Spiders.
When you want to scrape an entire product catalog, the first thing you should look at is a sitemap. Sitemaps are built exactly for this: to show web crawlers how the website is structured.
Most of the time you can find one at base_url/sitemap.xml. Parsing a sitemap can be tricky, and again, Scrapy is here to help you with this.
In our case, you can find the sitemap here: https://clever-lichterman-044f16.netlify.com/sitemap.xml
If we look inside the sitemap, there are many URLs that we are not interested in, like the home page and blog posts:
<url>
<loc>
https://clever-lichterman-044f16.netlify.com/blog/post-1/
</loc>
<lastmod>2019-10-17T11:22:16+06:00</lastmod>
</url>
<url>
<loc>
https://clever-lichterman-044f16.netlify.com/products/
</loc>
<lastmod>2019-10-17T11:22:16+06:00</lastmod>
</url>
<url>
<loc>
https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/
</loc>
<lastmod>2019-10-17T11:22:16+06:00</lastmod>
</url>
Fortunately, we can filter the URLs to parse only those that match some pattern; it's really easy. Here we only want the URLs that have /products/ in them:
from scrapy.spiders import SitemapSpider

class ProductSitemapSpider(SitemapSpider):
    name = "sitemap_spider"
    sitemap_urls = ['https://clever-lichterman-044f16.netlify.com/sitemap.xml']
    sitemap_rules = [
        ('/products/', 'parse_product')
    ]

    def parse_product(self, response):
        # ... scrape product ...
        pass
You can run this spider as follows to scrape all the products and export the result to a CSV file:
scrapy runspider sitemap_spider.py -o output.csv
Now what if the website didn't have any sitemap? Once again, Scrapy has a solution for this!
Let me introduce you to the... CrawlSpider.
The CrawlSpider will crawl the target website by starting from a start_urls list. Then, for each URL, it will extract all the links based on a list of Rules.
In our case it's easy: products have the same URL pattern, /products/product_title, so we only need to filter these URLs.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from product_scraper.productloader import ProductLoader
from product_scraper.items import Product

class MySpider(CrawlSpider):
    name = 'crawl_spider'
    allowed_domains = ['clever-lichterman-044f16.netlify.com']
    start_urls = ['https://clever-lichterman-044f16.netlify.com/products/']

    rules = (
        Rule(LinkExtractor(allow=('products', )), callback='parse_product'),
    )

    def parse_product(self, response):
        # ... parse product ...
        pass
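As before, you can run it and export the scraped products (assuming the file is saved as crawl_spider.py):
scrapy runspider crawl_spider.py -o products.json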
As you can see, all these built-in Spiders are really easy to use. It would have been much more complex to do it from scratch.
With Scrapy you don't have to think about the crawling logic, like adding new URLs to a queue, keeping track of already parsed URLs, multi-threading...
Conclusion
In this post we saw a general overview of how to scrape the web with Scrapy and how it can solve your most common web scraping challenges. Of course, we've only scratched the surface: there are many more interesting things to explore, like middlewares, exporters, extensions, and pipelines!
If you've been doing web scraping more "manually" with tools like BeautifulSoup / Requests, it's easy to understand how Scrapy can help save time and build more maintainable scrapers.
I hope you liked this Scrapy tutorial and that it will motivate you to experiment with it.
For further reading don't hesitate to look at the great Scrapy documentation.
You can also check out our web scraping with Python tutorial to learn more about web scraping.
Happy Scraping!