I need a web scraper written for the following URL:
[login to view URL]
All pages will need to be retrieved, not just page one. The data on this site changes and page 2 does not always exist; however, any additional pages that do exist must be scraped.
The number of rows will vary; the rows are separated by line segments.
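The "keep going until a page is missing" requirement above can be sketched as a simple loop. The deliverable is Perl (WWW::Mechanize would do the fetching there); the sketch below is in Python purely to illustrate the stop condition, and the page-fetching callback is a placeholder, since the real URL and its paging parameter are hidden behind the login link.

```python
def fetch_all_pages(fetch_page, max_pages=50):
    """Collect pages starting at 1 until the first missing or empty page.

    fetch_page(n) is assumed to return the HTML for page n, or None/""
    when that page does not exist today. max_pages is a safety cap so an
    unattended cron run can never loop forever.
    """
    pages = []
    for n in range(1, max_pages + 1):
        body = fetch_page(n)
        if not body:
            break          # page 2 (or later) does not exist today; stop
        pages.append(body)
    return pages
```

In the real script, `fetch_page` would issue the HTTP request and return None on a 404 or an empty listing, so a vanished page 2 simply ends the run.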
The output should be a pipe (|) delimited file with the following column mappings:
origin_city --> data located in the "Pickup" column before the comma (,) but after the hyphen (-); if there is no hyphen, it is the data before the comma
origin_state --> data located in the "Pickup" column after the comma (,)
ship_date --> the data from the "Pickup On" column converted to YYYY-MM-DD format; if it says "Ready", use the current day's date in YYYY-MM-DD format
destination_city --> data located in the "DESTINATION" column before the (,)
destination_state --> data located in the "DESTINATION" column after the (,)
receive_date --> leave blank
trailer_type --> data located in the "Truck" column
load_size --> put the word "Full"
weight --> leave blank
length --> leave blank
width --> leave blank
height --> leave blank
trip_miles --> leave blank
pay_rate --> leave blank
contact_phone --> leave blank
contact_name --> leave blank
tarp_required --> leave blank
comment --> data located in the "Pickup" column, below the origin_city and origin_state; it starts with the word "NOTES", and all data from the word "NOTES" onward must be included
if the data below the origin_city and origin_state contains the comment "HOLD UP UNTIL FURTHER NOT", do not include that data
load_number --> leave blank
The first line of the output should contain all of the column headers.
Any field that contains no data should be left blank.
Please do not put words like "null" or "blank" in empty columns.
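The two trickiest mappings above, the Pickup city/state split and the ship_date conversion, can be sketched in a few lines. Again the deliverable is Perl; this Python sketch only illustrates the rules, and the `%m/%d/%Y` input date format is an assumption that must be checked against the live board.

```python
from datetime import date, datetime

def split_pickup(pickup):
    """Pickup-column rules: city is the text before the comma (and after
    the hyphen when one is present); state is the text after the comma."""
    parts = pickup.split(",", 1)
    state = parts[1].strip() if len(parts) > 1 else ""
    city = parts[0]
    if "-" in city:
        city = city.split("-", 1)[1]   # keep only the text after the hyphen
    return city.strip(), state

def ship_date(raw, fmt="%m/%d/%Y"):
    """'Pickup On' value converted to YYYY-MM-DD; 'Ready' means today.
    fmt is a placeholder for whatever format the site actually uses."""
    if raw.strip().lower() == "ready":
        return date.today().isoformat()
    return datetime.strptime(raw.strip(), fmt).date().isoformat()
```

For example, `split_pickup("AREA - DALLAS, TX")` yields `("DALLAS", "TX")`, and a bare `"HOUSTON"` yields `("HOUSTON", "")` so the state field is simply left blank, as required.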
Below is a sample output of the first 5 columns using sample data:
The deliverable will be a Perl .pl file that must run on
Ubuntu Linux and must use Modern::Perl. The Perl .pl file
should be called '[login to view URL]' and the output file should be
called '[login to view URL]'
It will be scheduled in cron to run unattended every 15 minutes.
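The every-15-minutes schedule could be expressed with a crontab entry along these lines; the script path and log location are placeholders, since the real filename is behind the login link above.

```shell
# m   h  dom mon dow  command
*/15  *   *   *   *   /usr/bin/perl /home/user/scraper.pl >> /home/user/scraper.log 2>&1
```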
Please specify what language/OS/modules you plan to use.
Also, please include the word "raccoon" in your bid so I know that
you read this description.
Thank you for your invitation! I can provide you with a Perl script that uses WWW::Mechanize and HTML::TreeBuilder::LibXML to parse the target page.
9 freelancers are bidding an average of $155 for this job
How do you do, client? My name is Valentine, a web scraping expert who knows the value of time, works hard, and always delivers on time. I have already completed projects similar to yours, such as a Bet 365 and a Facebook tool. …
Hello, nice to meet you! I have read your requirements carefully and I am very interested in your project. I am confident about this project, as I am a professional scraping expert with over 5 years of experience. …
Hi, sir. I am a web scraping expert with a lot of experience on projects like yours. I am confident I can complete this task in a few days. I am ready for your project and will provide the best service for you. …
Hello there, how are you doing? I have read the description, and I have great experience with similar jobs involving these skills: Linux, Perl, Web Scraping. Please start a chat so we can have a detailed discussion. …
I have done a lot of web scraping, and looking at the description you posted, I realized what makes this project different from others. I can scrape a dynamically loaded page, even if there is more than one page in …
Hi, dear! I've read your description carefully and can do it to a high quality. (raccoon) I have many excellent experiences in Linux application development and web scraping. And I'm also very familiar with web scraping …
raccoon dog is running with PHP. Hi, I've read your complete requirements, and I will produce a notable result for you because I have done such projects before. I want to discuss the complete details with you to understand …