In this article we’re going to build a scraper for an actual freelance gig where the client wants a Python program to scrape data from Stack Overflow to grab new questions (question title and URL). Scraped data should then be stored in MongoDB. It’s worth noting that Stack Overflow has an API, which can be used to access the exact same data. However, the client wanted a scraper, so a scraper is what he got.
- 01/03/2014 – Refactored the spider. Thanks, @kissgyorgy.
- 02/18/2015 – Added Part 2.
- 09/06/2015 – Updated to the latest version of Scrapy and PyMongo – cheers!
If you’re running OS X or a flavor of Linux, install Scrapy with pip (with your virtualenv activated):
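For example (pin a specific version if you need to reproduce an older setup exactly):

```console
$ pip install Scrapy
```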
If you are on a Windows machine, you will need to manually install a number of dependencies. Please refer to the official documentation for detailed instructions as well as this YouTube video that I created.
Once Scrapy is set up, verify your installation by running this command in the Python shell:
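Importing the package is enough to confirm it installed correctly:

```python
>>> import scrapy
>>> scrapy.__version__
```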
If you don’t get an error then you are good to go!
Next, install PyMongo with pip:
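Again, inside your activated virtualenv:

```console
$ pip install pymongo
```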
Now we can start building the crawler.
Let’s start a new Scrapy project:
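Since the rest of the article refers to a project (and directory) named "stack", use that as the project name:

```console
$ scrapy startproject stack
```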
This creates a number of files and folders that include basic boilerplate for you to get started quickly.
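The exact contents can vary slightly between Scrapy versions, but the generated layout looks roughly like this:

```
stack/
├── scrapy.cfg
└── stack/
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```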
The items.py file is used to define storage “containers” for the data that we plan to scrape.
The `StackItem()` class inherits from `Item` (docs), which basically has a number of pre-defined objects that Scrapy has already built for us:
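Out of the box, the generated items.py looks roughly like this (your version may import `scrapy` and subclass `scrapy.Item` directly, which is equivalent):

```python
from scrapy.item import Item, Field


class StackItem(Item):
    # define the fields for your item here like:
    # name = Field()
    pass
```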
Let’s add some items that we actually want to collect. For each question the client needs the title and URL. So, update items.py like so:
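A minimal sketch that adds the two fields the client asked for:

```python
from scrapy.item import Item, Field


class StackItem(Item):
    title = Field()
    url = Field()
```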
Create the Spider
Create a file called stack_spider.py in the “spiders” directory. This is where the magic happens – i.e., where we’ll tell Scrapy how to find the exact data we’re looking for. As you can imagine, this is specific to each individual web page that you wish to scrape.
Start by defining a class that inherits from Scrapy’s `Spider` and then adding attributes as needed:
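A sketch of spiders/stack_spider.py. The query string on the start URL (`pagesize=50&sort=newest`) is one way to ask Stack Overflow for the 50 newest questions; adjust it to taste:

```python
from scrapy import Spider


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]
```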
The first few variables are self-explanatory (docs):
- `name` defines the name of the Spider.
- `allowed_domains` contains the base URLs of the allowed domains for the spider to crawl.
- `start_urls` is a list of URLs for the spider to start crawling from. All subsequent URLs will start from the data that the spider downloads from the URLs in `start_urls`.
Next, Scrapy uses XPath selectors to extract data from a website. In other words, we can select certain parts of the HTML data based on a given XPath. As stated in Scrapy’s documentation, “XPath is a language for selecting nodes in XML documents, which can also be used with HTML.”
You can easily find a specific XPath using Chrome’s Developer Tools. Simply inspect a specific HTML element, copy the XPath, and then tweak (as needed). Developer Tools also lets you test XPath selectors in the JavaScript console via `$x` – i.e., `$x("//img")`:
Again, we basically tell Scrapy where to start looking for information based on a defined XPath. Let’s navigate to the Stack Overflow site in Chrome and find the XPath selectors.
Right click on the first question and select “Inspect Element”:
Now grab the XPath for the question link and test it in the JavaScript console:
As you can tell, it just selects that one question. So we need to alter the XPath to grab all questions. Any ideas? It’s simple: `//div[@class="summary"]/h3`. What does this mean? Essentially, this XPath states: Grab all `<h3>` elements that are children of a `<div>` that has a class of `summary`.
Notice how we are not using the actual XPath output from Chrome Developer Tools. In most cases, the output is just a helpful aside, which generally points you in the right direction for finding the working XPath.
Now let’s update the stack_spider.py script:
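For now the spider just grabs the question summaries with the XPath we worked out above; a sketch:

```python
from scrapy import Spider
from scrapy.selector import Selector


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        # Select every question summary heading on the page
        questions = Selector(response).xpath('//div[@class="summary"]/h3')
```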
Extract the Data
We still need to parse and scrape the data we want, which falls within `<div class="summary"><h3>`. Again, update stack_spider.py like so:
We are iterating through the questions and assigning the `title` and `url` values from the scraped data.
Ready for the first test? Simply run the following command within the “stack” directory:
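The spider is invoked by the `name` we gave it:

```console
$ scrapy crawl stack
```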
Along with the Scrapy stack trace, you should see 50 question titles and URLs outputted. You can render the output to a JSON file with this little command:
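Scrapy infers the JSON format from the file extension here (you can also pass `-t json` explicitly):

```console
$ scrapy crawl stack -o items.json
```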
We’ve now implemented our Spider based on the data that we are seeking. Now we need to store the scraped data within MongoDB.
Store the Data in MongoDB
Each time an item is returned, we want to validate the data and then add it to a Mongo collection.
The initial step is to create the database that we plan to use to save all of our crawled data. Open settings.py, specify the pipeline, and add the database settings:
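`ITEM_PIPELINES` is a standard Scrapy setting; the `MONGODB_*` names below are our own custom settings that the pipeline will read, and the database and collection names are placeholders you can change:

```python
ITEM_PIPELINES = {'stack.pipelines.MongoDBPipeline': 300}

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "stackoverflow"
MONGODB_COLLECTION = "questions"
```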
We’ve set up our spider to crawl and parse the HTML, and we’ve set up our database settings. Now we have to connect the two together through a pipeline in pipelines.py.
Connect to Database
First, let’s define a method to actually connect to the database:
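A sketch of pipelines.py. It reads the custom settings defined above via `scrapy.conf.settings`, which works on the Scrapy 1.x releases this article targets (newer releases prefer passing settings in through `from_crawler`):

```python
import pymongo

from scrapy.conf import settings


class MongoDBPipeline(object):

    def __init__(self):
        # Connect to MongoDB and keep a handle on the target collection
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]
```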
Here, we create a class, `MongoDBPipeline()`, and we have a constructor function to initialize the class by defining the Mongo settings and then connecting to the database.
Process the Data
Next, we need to define a method to process the parsed data:
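Here is the full pipeline file with a `process_item()` method added. `insert_one()` assumes PyMongo 3+; on PyMongo 2.x you would use `insert()` instead:

```python
import pymongo

from scrapy.conf import settings
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):

    def __init__(self):
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        # Validate: drop any item with a missing or empty field
        for field in ('title', 'url'):
            if not item.get(field):
                raise DropItem("Missing {0}!".format(field))
        self.collection.insert_one(dict(item))
        spider.logger.debug("Question added to MongoDB database!")
        return item
```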
We establish a connection to the database, unpack the data, and then save it to the database. Now we can test again!
Again, run the following command within the “stack” directory:
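Same as before:

```console
$ scrapy crawl stack
```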
NOTE: Make sure you have the Mongo daemon – `mongod` – running in a different terminal window.
Hooray! We have successfully stored our crawled data into the database:
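If you want to eyeball the stored documents from a Python shell, a quick check (using the database and collection names from settings.py) might look like:

```python
>>> import pymongo
>>> client = pymongo.MongoClient("localhost", 27017)
>>> for doc in client["stackoverflow"]["questions"].find().limit(3):
...     print(doc["title"], doc["url"])
```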
This is a pretty simple example of using Scrapy to crawl and scrape a web page. The actual freelance project required the script to follow the pagination links and scrape each page using the `CrawlSpider` (docs), which is super easy to implement. Try implementing this on your own, and leave a comment below with the link to the GitHub repository for a quick code review. Need help? Start with this script, which is nearly complete. Then view Part 2 for the full solution!
You can download the entire source code from the GitHub repository. Comment below with questions. Thanks for reading!
Happy New Year!