Stanford InfoLab Publication Server

Finding replicated web collections

Cho, J., Shivakumar, N., and Garcia-Molina, H. (1999) Finding replicated web collections. Technical Report, Stanford InfoLab. (Publication note: ACM International Conference on Management of Data (SIGMOD 2000), Dallas, Texas, May 14-19, 2000.)


Full text: PDF (334 KB)

Abstract

Many web documents (such as Java FAQs) are replicated across the Internet, and often entire hyperlinked document collections (such as the Linux manuals) are replicated many times. In this paper, we make the case for identifying replicated documents and collections to improve web crawlers, archivers, and the ranking functions used in search engines. The paper describes how to efficiently identify replicated documents and hyperlinked document collections. The challenge is to identify these replicas in an input data set of several tens of millions of web pages and several hundred gigabytes of textual data. We also present two real-life case studies in which we used replication information to improve a crawler and a search engine. We report results for a data set of 25 million web pages (about 150 gigabytes of HTML data) crawled from the web.
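The paper itself is the authoritative description of the technique; as a rough illustration of the general idea of content-based replica detection, the Python sketch below groups pages as likely replicas when their sets of hashed word n-grams ("shingles") overlap heavily. The function names, the window size w, and the similarity threshold are illustrative assumptions, not values taken from the paper.

    # A minimal sketch of shingle-based replica detection. Each page is
    # reduced to a set of hashed w-word windows ("shingles"); pages whose
    # shingle sets overlap heavily are reported as likely replicas.
    # All names and thresholds here are illustrative, not from the paper.

    import hashlib
    from itertools import combinations

    def shingles(text: str, w: int = 4) -> set[int]:
        """Hash every w-word window of the page text to a 64-bit integer."""
        words = text.split()
        out = set()
        for i in range(len(words) - w + 1):
            window = " ".join(words[i : i + w])
            digest = hashlib.md5(window.encode("utf-8")).digest()
            out.add(int.from_bytes(digest[:8], "big"))
        return out

    def resemblance(a: set[int], b: set[int]) -> float:
        """Jaccard overlap of two shingle sets; 1.0 means identical text."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    def replica_pairs(pages: dict[str, str], threshold: float = 0.9):
        """Yield URL pairs whose textual resemblance exceeds the threshold."""
        sigs = {url: shingles(text) for url, text in pages.items()}
        for (u1, s1), (u2, s2) in combinations(sigs.items(), 2):
            if resemblance(s1, s2) >= threshold:
                yield u1, u2

    if __name__ == "__main__":
        pages = {
            "http://a.example/linux-howto":  "how to configure the linux kernel step by step",
            "http://b.example/mirror/howto": "how to configure the linux kernel step by step",
            "http://c.example/unrelated":    "recipes for sourdough bread and other baking",
        }
        for u1, u2 in replica_pairs(pages):
            print("replica:", u1, "<->", u2)

The all-pairs comparison above is only workable for small inputs; at the scale the paper reports (tens of millions of pages), one would instead sort or join on the shingle fingerprints themselves, so that only pages sharing a fingerprint are ever compared.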

Item Type: Technical Report
Uncontrolled Keywords: Web, database, mirror, replica, copy detection, clustering
Subjects: Computer Science > Databases and the Web
Projects: Digital Libraries
Related URLs: Project Homepage: http://www-diglib.stanford.edu/diglib/pub/
ID Code: 394
Deposited By: Import Account
Deposited On: 25 Feb 2000 16:00
Last Modified: 27 Dec 2008 21:11
