I need a web scraper that gathers events from a single specific website. The scraper will convert each event found on the site's online calendar into a file containing that event's details.
At this URL you will find a "calendar" listing upcoming events:
[login to view URL]
The scraper must be written in Perl or Ruby. The program should take two command line arguments.
* The first argument should be an integer number of days into the future to scan. For example, if this argument were 0, only today's events would be scanned; if it were 2, all events occurring today, tomorrow, and the day after would be scanned.
* The second command line argument should be the name of a local directory where output files are placed.
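The two-argument contract above can be sketched in Ruby as follows. This is a minimal sketch under stated assumptions: the method name `date_window` and the directory-creation step are illustrative, not part of the spec; only the "integer days ahead" and "output directory" arguments come from the requirements.

```ruby
require 'date'

# Inclusive list of dates to scan: 0 means today only;
# 2 means today, tomorrow, and the day after.
def date_window(days_ahead, from = Date.today)
  (from..(from + days_ahead)).to_a
end

if ARGV.length == 2
  days_ahead = Integer(ARGV[0])             # first arg: days into the future
  out_dir    = ARGV[1]                      # second arg: output directory
  Dir.mkdir(out_dir) unless Dir.exist?(out_dir)
  puts date_window(days_ahead).map(&:iso8601).join(", ")
end
```

Invoked as `./scrape.rb 2 /tmp/scrapefiles`, this would scan a three-day window starting today.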
Each "event" found on the calendar should produce one output file. The output filename should consist of the date on which the event occurs (in the format YYYY-MM-DD), plus any additional characters needed to make the filename unique, with the suffix ".yaml". An example of a valid filename might be "[login to view URL]", indicating the second event occurring on February 17, 2019. The output file format should be YAML; an example of a valid output file is attached.
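One possible uniqueness scheme (an assumption; the spec only requires the YYYY-MM-DD date plus some distinguishing characters) is to append an increasing counter until the name is free:

```ruby
require 'date'

# Build a unique "YYYY-MM-DD-N.yaml" filename for an event date,
# given the set of filenames already taken in the output directory.
def output_filename(date, taken)
  n = 1
  n += 1 while taken.include?("#{date.iso8601}-#{n}.yaml")
  "#{date.iso8601}-#{n}.yaml"
end
```

Under this scheme, the second event on February 17, 2019 would be written to "2019-02-17-2.yaml".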
Each YAML file is built up from scraping the event detail page such as this one: [login to view URL]
In the attached example, the scraped data elements are circled in red. Note that the detail page shown in the example actually lists multiple events and should therefore generate multiple output YAML files (one for each date/time).
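The one-file-per-occurrence behavior can be sketched as below. The field names ("title", "date", "start_time") are placeholders; the real keys come from the attached example file, which is not reproduced here. A detail page listing several date/times yields several hashes, and therefore several output files.

```ruby
require 'yaml'

# Serialize one event occurrence to a YAML file in the output directory.
def write_event_file(dir, filename, event)
  File.write(File.join(dir, filename), event.to_yaml)
end

# Two occurrences scraped from a single detail page (placeholder data):
occurrences = [
  { "title" => "Sample Event", "date" => "2019-02-17", "start_time" => "19:00" },
  { "title" => "Sample Event", "date" => "2019-02-18", "start_time" => "19:00" },
]
```

Each hash in `occurrences` would then be written out with its own unique date-based filename.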
For example, suppose that the script you create is called [login to view URL] and you invoke it with this command line on a Linux server:
./[login to view URL] 2 /tmp/scrapefiles
It would generate perhaps six YAML files in the /tmp/scrapefiles directory.
I'm one of the best Perl web-scraping experts here, which is why I can provide you a working script in less than a day for just $100. It will read two command-line parameters and output YAML files just as you want. Yo…
18 freelancers are bidding an average of $157 for this job
Hi, the project description is clear to me. I'm ready to produce a Perl script for you.
Hi there, I am a web-scraping expert from Bosnia & Herzegovina, Europe. I have carefully gone through your requirements and would like to help you with this project! I can start immediately and finish it within th…
Hi there! I see you are looking for a Perl expert who can build a web scraper for you. Here I am! I can offer you 10+ years of working experience and a wide range of projects completed successfully. Here are so…
Hi, I am a Ruby developer and DevOps engineer. Skills: Ruby; system administration; Docker, VirtualBox, Nanobox, Kubernetes; hosting and maintaining any platform; Git, Bitbucket; MySQL, PostgreSQL; web scrap…
Hi there, the requirements look quite clear and straightforward to implement. I just have a single question: assuming we start the scraper on March 6 as [login to view URL] 4 /some-path, that means events on March 6, …
Hello, I will create the web scraper in Ruby. Please send me a message so we can discuss more. I have 8+ years of experience and have done similar jobs for many web applications. Thanks!
Hello, I have gone through the JD and can work on the scraper. I have done a similar job before and have 9+ years of experience in RoR. Please send me a message so we can discuss further. Thank you!
Hello, after reading your project details I believe I'm suitable for this project, as I'm an expert in it with more than 7 years' experience. Please feel free to contact me. I look forward to hearing from you.
Hi, I have gone through your requirement to scrape websites. I am an EXPERT in building scraping tools/scripts, so I can SURELY work on your project. I have 4 YEARS of EXPERIENCE developing PHP/Python…
Hi, I'm Colin, a Ruby developer. I have worked on many scraper projects written in Ruby with the support of gems like Nokogiri, headless Chromium/Firefox, PhantomJS, or plain HTTP. I hope we can work together on…
Hello there, I have been writing scraper scripts and desktop apps for my clients for quite a while now, either here or on Upwork. You can look at one of the latest scraping jobs I've done for another American f…
Hi, glad to see your project. This is Kang from Shanghai, China. I'm a web application developer with a lot of experience building web-scraping tools. I've carefully read your description and clearly unders…
Hello, I can do what you are looking for. I use Ruby for web scraping (the Watir and/or Mechanize gems).
Certifications & Achievements • Certified ScrumMaster® • Certified PRINCE2® Project Manager • Ex-Startup Founder with reasonable Exit • Product/Project Management Experience • Agile Coach for cultivating Agile Cult…
Hi, we have gone through the details. We can automate things as well as scrape the content from the website for you, delivering the output in YAML format for further import, but we can do this script…
Hi, I can achieve this using Perl. I have 7 years of experience in Perl. I would also like to know the platform you are using to run the script. Let me know your preferences. Thanks.