Intro; Table of Contents; About the Author; About the Technical Reviewer; Acknowledgments; Introduction; Chapter 1: Getting Started; Website Scraping; Projects for Website Scraping; Websites Are the Bottleneck; Tools in This Book; Preparation; Terms and Robots; robots.txt; Technology of the Website; Using Chrome Developer Tools; Set-up; Tool Considerations; Starting to Code; Parsing robots.txt; Creating a Link Extractor; Extracting Images; Summary; Chapter 2: Enter the Requirements; The Requirements; Preparation; Navigating Through "Meat & Fish"; Selecting the Required Information
Extracting the Data; Where to Put the Data?; Why Items?; Running the Spider; Exporting the Results; To CSV; To JSON; To Databases; MongoDB; SQLite; Bring Your Own Exporter; Filtering Duplicates; Silently Dropping Items; Fixing the CSV File; CSV Item Exporter; Caching with Scrapy; Storage Solutions; File System Storage; DBM Storage; LevelDB Storage; Cache Policies; Dummy Policy; RFC2616 Policy; Downloading Images; Using Beautiful Soup with Scrapy; Logging; (A Bit) Advanced Configuration; LOG_LEVEL; CONCURRENT_REQUESTS; DOWNLOAD_DELAY; Autothrottling; COOKIES_ENABLED; Summary
Finding Comments; Converting a Soup to HTML Text; Extracting the Required Information; Identifying, Extracting, and Calling the Target URLs; Navigating the Product Pages; Extracting the Information; Using Dictionaries; Using Classes; Unforeseen Changes; Exporting the Data; To CSV; Quick Glance at the csv Module; Line Endings; Headers; Saving a Dictionary; Saving a Class; To JSON; Quick Glance at the json Module; Saving a Dictionary; Saving a Class; To a Relational Database; To a NoSQL Database; Installing MongoDB; Writing to MongoDB; Performance Improvements; Changing the Parser
Outlining the Application; Navigating the Website; Creating the Navigation; The requests Library; Installation; Getting Pages; Switching to requests; Putting the Code Together; Summary; Chapter 3: Using Beautiful Soup; Installing Beautiful Soup; Simple Examples; Parsing HTML Text; Parsing Remote HTML; Parsing a File; Difference Between find and find_all; Extracting All Links; Extracting All Images; Finding Tags Through Their Attributes; Finding Multiple Tags Based on Property; Changing Content; Adding Tags and Attributes; Changing Tags and Attributes; Deleting Tags and Attributes
Parse Only What's Needed; Saving While Working; Developing on a Long Run; Caching Intermediate Step Results; Caching Whole Websites; File-Based Cache; Database Cache; Saving Space; Updating the Cache; Source Code for This Chapter; Summary; Chapter 4: Using Scrapy; Installing Scrapy; Creating the Project; Configuring the Project; Terminology; Middleware; Pipeline; Extension; Selectors; Implementing the Sainsbury Scraper; What's This allowed_domains About?; Preparation; Using the Shell; def parse(self, response); Navigating Through Categories; Navigating Through the Product Listings
SUMMARY OR ABSTRACT
Closely examine website scraping and data processing: the technique of extracting data from websites in a format suitable for further analysis. You'll review which tools to use, and compare their features and efficiency. Focusing on BeautifulSoup4 and Scrapy, this concise, focused book highlights common problems and suggests solutions that readers can implement on their own.

Website Scraping with Python starts by introducing and installing the scraping tools and explaining the features of the full application that readers will build throughout the book. You'll see how to use BeautifulSoup4 and Scrapy individually or together to achieve the desired results. Because many sites use JavaScript, you'll also employ Selenium with a browser emulator to render these sites and make them ready for scraping. By the end of this book, you'll have a complete scraping application to use and rewrite to suit your needs. As a bonus, the author shows you options for deploying your spiders into the cloud to free your computer from long-running scraping tasks.

What You'll Learn: Install and implement scraping tools individually and together; Run spiders to crawl websites for data from the cloud; Work with emulators and drivers to extract data from scripted sites.

Who This Book Is For: Readers with some previous Python and software development experience, and an interest in website scraping.
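As a quick illustration of the kind of task the book builds up to (a sketch of my own, not code taken from the book's text), extracting every link from a page with requests and BeautifulSoup4 could look like the following; the URL is a placeholder.

# Illustrative sketch only: list every link on a page using requests
# and BeautifulSoup4. The URL below is a placeholder, not from the book.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com")
soup = BeautifulSoup(response.text, "html.parser")

# find_all("a", href=True) returns each anchor tag that carries an href attribute.
for anchor in soup.find_all("a", href=True):
    print(anchor["href"])

Scrapy and Selenium apply the same idea at larger scale: a spider crawls whole sites, and a browser driver renders JavaScript-heavy pages before they are parsed.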