THttpScan recursively analyzes HTML pages and reports every link it finds (HTML, mailto, JPG, MPEG, MP3, and so on) to a text file.
Starting from an initial URL, THttpScan extracts the links on each HTML page it visits. Newly found HTML links are added to a download queue; THttpScan then downloads each queued page, extracts its links in turn, and so on.
- the LinkScan property limits scanning to the initial site or to the initial URL path,
- the LinkReport property restricts reporting to links belonging to the current site, or to links under the subfolders of the initial link,
- the DepthSearchLevel property limits how many levels of pages are scanned, starting from the initial page; this is especially useful when scanning is not restricted to a single web site.
By combining the LinkScan and LinkReport properties with a high DepthSearchLevel value, you can easily scan a whole site or just one subdirectory of a web site.
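The queue-based crawl described above can be sketched in Python. This is only an illustration of the technique, not THttpScan's actual implementation: the `fetch` callback, the `max_depth` parameter (playing the role of DepthSearchLevel), and the `same_site` flag (a simplified stand-in for LinkScan) are all hypothetical names introduced here.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_depth=2, same_site=True):
    """Breadth-first crawl. Every new link is reported, but a link is
    only followed (queued for download) if it is within max_depth of
    the start page and, when same_site is set, on the starting host.
    `fetch(url)` must return the page's HTML as a string."""
    start_host = urlparse(start_url).netloc
    queue = deque([(start_url, 0)])   # (url, depth) download queue
    seen = {start_url}
    found = []
    while queue:
        url, depth = queue.popleft()
        parser = LinkExtractor()
        parser.feed(fetch(url))
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute in seen:
                continue
            seen.add(absolute)
            found.append(absolute)          # reported regardless of scope
            on_site = urlparse(absolute).netloc == start_host
            if depth < max_depth and (on_site or not same_site):
                queue.append((absolute, depth + 1))
    return found
```

Note the distinction the sketch preserves: off-site or too-deep links are still reported (as LinkReport would allow), they are simply never downloaded and expanded.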
Events fire for each link found and each page read, returning the URL, meta tags, document type, referrer, host name, and more.
Depending on the line speed, thousands of links may be extracted from a starting URL in a few minutes.
The most common options can be set directly through the component's properties.
PM me for more information.