Projects In Brief
Tools that search particular information sources and return the results to users.
Stanford Digital Library PalmPilot Infrastructure:
Provides support for error and event logging, memory management, and communication infrastructure to aid the quick development of digital library applications for 3Com's PalmPilot personal digital assistant (PDA).
This project allows users of palm-sized computers to explore the World-Wide Web.
Screens as small as the PalmPilot's, and bandwidth as narrow as cheap
radio links, require a radical rethinking of user interfaces to information
repositories such as the World-Wide Web. This project develops such
new approaches to browsing.
The Simple Digital Library Interoperability Protocol (SDLIP; pronounced S-D-Lip) is a protocol
for integrating multiple, heterogeneous information sources. It was developed jointly by Stanford University, UC
Berkeley, UC Santa Barbara, the San Diego Supercomputer Center, and the California Digital Library Project. Clients
use SDLIP to request searches over information sources. The result documents are returned synchronously,
or they are streamed from service to client as they become available. Implementations can be constructed over HTTP-
or CORBA-based transports. In fact, any search service can be accessible through both kinds of transports at the
same time. Implementations for the IETF's
HTTP-based DASL protocol and for CORBA are available.
Detailed information about SDLIP is available,
as well as a streaming video.
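To make the client-side picture concrete, here is a minimal sketch of how a client might construct a search request for an HTTP-based transport. The endpoint path and parameter names (`/search`, `query`, `maxResults`) are illustrative assumptions, not the actual SDLIP wire format:

```python
from urllib.parse import urlencode

def build_search_request(base_url, query, max_results=10):
    """Build a hypothetical SDLIP-style search URL over an HTTP transport.

    NOTE: the path and parameter names below are illustrative stand-ins,
    not the real SDLIP protocol syntax.
    """
    params = urlencode({"query": query, "maxResults": max_results})
    return f"{base_url}/search?{params}"
```

A CORBA-based implementation would expose the same logical operation through an IDL-defined interface instead of a URL; the point of the protocol is that the same search service can sit behind both transports at once.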
To help users search over heterogeneous information services that
support non-uniform query languages, our approach is to let users compose Boolean queries in one
unified front-end language and to translate them into each target's native format
according to that target's syntax and capabilities.
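The translation step can be sketched as follows: the unified front-end query is parsed into a Boolean AST, and a per-target renderer emits the native syntax. Both target syntaxes below (infix keywords vs. prefix operators) are hypothetical examples, not the formats of any particular service:

```python
# A Boolean query is either a bare term (str) or a tuple (op, left, right).

def to_infix(node):
    """Render the AST for a hypothetical target using infix keywords,
    e.g. (library AND (digital OR electronic))."""
    if isinstance(node, str):
        return node
    op, left, right = node
    return f"({to_infix(left)} {op} {to_infix(right)})"

def to_prefix(node):
    """Render the same AST for a hypothetical target that wants
    prefix operators, e.g. &(cat, dog)."""
    if isinstance(node, str):
        return node
    op, left, right = node
    symbol = {"AND": "&", "OR": "|"}[op]
    return f"{symbol}({to_prefix(left)}, {to_prefix(right)})"

query = ("AND", "library", ("OR", "digital", "electronic"))
```

A capability-aware translator would additionally rewrite or drop operators a target cannot support (for example, approximating NOT for a service that lacks negation).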
This project addresses the problem of search engine overload, and
the problem of multimedia Web elements not being searchable.
We develop value metrics that allow us to search and filter documents
based on 'document value', rather than on query/text similarity alone.
This project searches for new kinds of indicators that measure document
value, develops methods for accumulating value data for large numbers
of documents, and experiments with new ways of using value information
to improve user interactions with information.
Our WebBase project explores how tens of millions of Web pages can be
effectively collected, stored, searched, and mined. As part of this project
we are building smart crawlers, and a storage system that holds pages
obtained from the Web. The WebBase will be a tool for researchers building
unique indexes into the Web. Researchers will be able to have the system
deliver Web pages to feature analysis programs at very high data rates.
WebBase will then build special indexes over the computed page features.
These indexes can subsequently be used for search. In addition, the WebBase
project is exploring how Web content can be multicast over high-speed networks.
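The index-building step described above can be sketched as a pipeline: stored pages are streamed through a researcher-supplied feature-analysis function, and WebBase builds an inverted index from each computed feature back to the pages it came from. The page store and the word-level analyzer here are toy stand-ins for illustration:

```python
def build_feature_index(pages, analyzer):
    """Run a feature-analysis function over stored pages and build an
    inverted index mapping each computed feature to the page URLs.

    NOTE: 'pages' as a dict of url -> text is a toy stand-in for the
    WebBase page repository; real analyzers would run at high data rates
    over a streamed page feed.
    """
    index = {}
    for url, text in pages.items():
        for feature in analyzer(text):
            index.setdefault(feature, set()).add(url)
    return index

def word_features(text):
    """A trivial example analyzer: the set of lowercase words on a page."""
    return set(text.lower().split())

pages = {
    "http://a.example": "digital library",
    "http://b.example": "library science",
}
index = build_feature_index(pages, word_features)
```

Searching the resulting index is then a dictionary lookup per feature, which is what makes the precomputed-feature approach attractive for building unique, specialized indexes into the Web.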