Stanford InfoLab Publication Server

The SCAM Approach To Copy Detection in Digital Libraries

Shivakumar, N. and Garcia-Molina, H. (1995) The SCAM Approach To Copy Detection in Digital Libraries. Technical Report. Stanford InfoLab. (Publication Note: D-lib Magazine, November 1995.)


Full text: PDF (46Kb)

Abstract

Scenario 1: Your local publishing company, Books'R'Us, decides to publish its latest book on the Internet in an effort to cut down on printing costs and book distribution expenses. Customers pay for the digital books using sophisticated electronic payment mechanisms such as DigiCash, First Virtual, or InterPay. When payment is received, the book distribution server at Books'R'Us sends a digital version of the book electronically to the paying customer. Books'R'Us expects to make higher profits on the digital book due to lower production and distribution costs and larger markets on the Internet. It turns out, however, that very few books are sold, since digital copies of the Books'R'Us book have been widely circulated on UseNet newsgroups and bulletin boards, and have been available for free on alternate ftp sites and Web servers. Books'R'Us retracts its digital publishing commitment, blaming the ease of re-transmission of digital items on the Internet, and returns to traditional paper-based publishing.

Scenario 2: Sheng wants to buy a new Pentium portable, and hence wants to read articles and reviews on the different brands available before choosing one to buy. She searches information services such as Dialog, Lycos, Gloss, and Webcrawler, and follows UseNet newsgroups to find articles on the different portables available, turning up nearly 1500 articles. When she starts reading them, she finds that most are duplicates or near-duplicates of one another and contribute no new information to her search. She realizes this is because most databases maintain their own local copies of articles, perhaps in different formats (Word, Postscript, HTML), or perhaps have mirror sites that contain the same set of articles. Sheng then trudges through the articles one by one, wishing that somebody would build a system that removes exact and near-duplicates automatically, so that she only needs to read each distinct article once. Around article number 150, Sheng decides not to buy a certain brand, since she learns from the articles that it has had problems with its color display since its release. But she has to continue looking at articles on that model, since they are already part of the result set. She adds to her wish list a dynamic search system in which she could discard any article that has more than a certain overlap with an article she has previously discarded.

In this article, we will give a brief overview of some proposed mechanisms that address each of the problems illustrated by the above two scenarios.
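As a rough illustration of the "dynamic search" filtering Sheng wishes for in Scenario 2, the sketch below drops any search result whose overlap with a previously discarded article exceeds a threshold. This is a minimal stand-in using word shingles and Jaccard overlap, not the copy-detection mechanism proposed in the report itself; the function names, the shingle length k, and the 0.6 threshold are all illustrative assumptions.

def shingles(text, k=4):
    # Break a text into the set of its k-word shingles (overlapping word windows).
    words = text.lower().split()
    if len(words) <= k:
        return {" ".join(words)}
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap(a, b, k=4):
    # Jaccard overlap between the shingle sets of two texts, in [0.0, 1.0].
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def filter_results(results, discarded, threshold=0.6):
    # Keep only results whose overlap with every previously discarded
    # article stays at or below the threshold.
    return [doc for doc in results
            if all(overlap(doc, d) <= threshold for d in discarded)]

if __name__ == "__main__":
    discarded = ["the brand X portable has had color display problems since release"]
    results = [
        "the brand X portable has had color display problems since its release",
        "brand Y ships a new Pentium portable with longer battery life",
    ]
    print(filter_results(results, discarded))   # only the brand Y article survives

A real system would compare compact document signatures rather than raw texts, so that each new result can be checked cheaply against everything already seen or discarded.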

Item Type: Techreport (Technical Report)
Uncontrolled Keywords: SCAM, Copy detection, Plagiarism, Copyright
Subjects: Computer Science > Digital Libraries
Projects: Digital Libraries
Related URLs: Project Homepage: http://www-diglib.stanford.edu/diglib/pub/
ID Code: 92
Deposited By: Import Account
Deposited On: 25 Feb 2000 16:00
Last Modified: 08 Dec 2008 13:10
