I need an application that spiders my web sites and indexes the text, and then goes out to the web to find possible duplicate (copied) pages of that content.
I think the easiest solution is to have the application store the text in a MySQL database and update the database after each new spider run over my websites. It can then use that information to find copies of the content.
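As a rough sketch of what I have in mind for the storage side, something like the PHP below could run after each crawl. The table name, columns and connection details are only placeholders, not requirements:

<?php
// Minimal sketch: store the visible text of each crawled page in MySQL so that
// later duplicate checks can compare against it. The table name, columns and
// credentials below are assumptions, not part of the original request.
//
// Assumed schema:
//   CREATE TABLE pages (
//       url          VARCHAR(255) PRIMARY KEY,
//       body_text    MEDIUMTEXT,
//       text_hash    CHAR(32),        -- MD5 fingerprint for quick comparison
//       last_crawled DATETIME
//   );

$pdo = new PDO('mysql:host=localhost;dbname=spider', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

/**
 * Fetch a page, reduce it to plain text and insert or update it in the database.
 */
function indexPage(PDO $pdo, $url)
{
    $html = @file_get_contents($url);
    if ($html === false) {
        return; // skip pages that cannot be fetched
    }

    // Strip markup and collapse whitespace so only the visible text remains.
    $text = trim(preg_replace('/\s+/', ' ', strip_tags($html)));
    $hash = md5($text);

    $stmt = $pdo->prepare(
        'INSERT INTO pages (url, body_text, text_hash, last_crawled)
         VALUES (:url, :body, :hash, NOW())
         ON DUPLICATE KEY UPDATE
             body_text    = VALUES(body_text),
             text_hash    = VALUES(text_hash),
             last_crawled = NOW()'
    );
    $stmt->execute(array(':url' => $url, ':body' => $text, ':hash' => $hash));
}

// Re-run after every crawl of the site, e.g.:
indexPage($pdo, 'http://www.example.com/');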
If you have other ideas on how to tackle this problem, I'm open to suggestions.
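For the duplicate-detection side, one possible (not required) approach is to compare overlapping word sequences ("shingles") between a stored page and a candidate page found on the web. A rough PHP sketch, with made-up example text:

<?php
// Minimal sketch of one way to score how similar an external page is to a
// stored page. The shingle approach is only one possible technique; the
// threshold and shingle size below are arbitrary example values.

/**
 * Break text into overlapping shingles of $size consecutive words.
 */
function shingles($text, $size = 5)
{
    $words = preg_split('/\s+/', strtolower(strip_tags($text)), -1, PREG_SPLIT_NO_EMPTY);
    $set = array();
    for ($i = 0; $i + $size <= count($words); $i++) {
        $set[implode(' ', array_slice($words, $i, $size))] = true;
    }
    return $set;
}

/**
 * Fraction of the original page's shingles that also appear in the candidate.
 * Values close to 1.0 suggest the candidate is a copy.
 */
function copyScore($originalText, $candidateText)
{
    $a = shingles($originalText);
    $b = shingles($candidateText);
    if (count($a) === 0) {
        return 0.0;
    }
    return count(array_intersect_key($a, $b)) / count($a);
}

// Example usage with placeholder text; in practice $originalText would come
// from the MySQL index and $candidateText from a fetched external page.
$originalText  = 'This is the original article text that lives on my own site.';
$candidateText = 'This is the original article text that lives on my own site, reposted elsewhere.';

$score = copyScore($originalText, $candidateText);
if ($score > 0.5) {
    echo "Possible copied content (score: $score)\n";
}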
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment: deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others, including desktop software or software the buyer intends to distribute: a software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.
3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).
The application is meant to run on a PHP-enabled Windows 2000 server.