Scrapy gives us access to two main spider classes: the generic Spider, which we have used many times before in other videos, and the CrawlSpider, which works in a slightly different way. We can give it a rule set and have it follow links automatically, passing the ones that match back to our parse function via a callback. This makes full-website data scraping incredibly easy. In this video I explain how to use the CrawlSpider, what Rule and LinkExtractor do and how to use them, and demo how it all works.
Support Me:
# Patreon: [ Link ] (NEW)
# Amazon UK: [ Link ]
# Hosting: Digital Ocean: [ Link ]
# Gear Used: [ Link ] (NEW)
-------------------------------------
Disclaimer: These are affiliate links and as an Amazon Associate I earn from qualifying purchases
-------------------------------------