This project will develop a custom scraping application that scrapes 30 websites daily, routing traffic through a proxy network (either StormProxy or Crawlera) and loading the scraped data into Microsoft Azure Storage.
The application must be able to work around bot-detection measures such as IP blocking and rate limiting.
The most complicated site that needs to be scraped is: [url removed, login to view]
It is complicated because the application must select one location at a time, scrape the site, then go back and select a different location and scrape again. There are 3,000 locations, each of which must be individually selected before the site is scraped.
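The select-then-scrape cycle described above can be sketched as a simple loop. This is a minimal sketch, not the final implementation: the location IDs, proxy endpoint, and page-fetch logic are all placeholders, since the real site URL is not shown here. In production, `scrape_location` would issue the request through the proxy network (e.g. with `requests` or a headless browser) and the results would be uploaded to Azure Storage.

```python
import time

# Placeholder location identifiers; the real site exposes roughly 3,000.
LOCATIONS = ["loc-001", "loc-002", "loc-003"]

# Placeholder proxy endpoint (StormProxy or Crawlera credentials would go here).
PROXY = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}

# Delay between locations; set to a few seconds in production to reduce
# the chance of triggering rate limiting or bot detection.
DELAY_SECONDS = 0


def scrape_location(location_id):
    """Simulate selecting one location and scraping the resulting page.

    A real implementation would send the location selection through the
    proxy, e.g.:
        requests.get(SITE_URL, params={"location": location_id},
                     proxies=PROXY, timeout=30)
    and the returned HTML would later be written to Azure Blob Storage.
    """
    return {"location": location_id,
            "html": f"<html>data for {location_id}</html>"}


def run():
    """Iterate over every location, scraping each one in turn."""
    results = []
    for loc in LOCATIONS:
        results.append(scrape_location(loc))
        time.sleep(DELAY_SECONDS)
    return results


if __name__ == "__main__":
    print(len(run()))
```

The same loop structure applies at full scale; only the location list, the fetch call, and the storage step change.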