Yang, Cheng (2002) Efficient Acoustic Index for Music Retrieval with Various Degrees of Similarity. In: ACM Multimedia, 2002, December 1-6, 2002, Juan Les Pins, France.
Content-based music retrieval research has mostly focused on symbolic data rather than acoustic data. Since no general-purpose transcription algorithm exists that can convert acoustic data into musical scores, new methods are needed for music retrieval over acoustic data. In this paper, we review existing methods for content-based music retrieval, discuss different definitions of music similarity, and present a new framework for music indexing and retrieval. The framework is based on an earlier prototype we developed, with significant improvements. In our framework, known as MACSIS, each audio file is broken into small segments and converted into feature vectors. All vectors are stored in a high-dimensional indexing structure based on LSH (locality-sensitive hashing), a probabilistic indexing scheme that uses multiple hashing instances in parallel. At retrieval time, matches for small audio segments are retrieved from the index and pieced together using the Hough Transform, and the results serve as the basis for ranking candidate matches.
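The pipeline the abstract describes (segment features → parallel LSH tables → segment matches → Hough-style alignment) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, number of tables, and hyperplane hashing scheme are all assumptions chosen for brevity.

```python
import random
from collections import defaultdict

# Illustrative parameters (not from the paper): feature dimension,
# number of parallel hash tables, and bits per hash key.
DIM, NUM_TABLES, NUM_PLANES = 8, 4, 6

random.seed(0)

def make_planes():
    # One set of random hyperplanes per hash table.
    return [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_PLANES)]

PLANES = [make_planes() for _ in range(NUM_TABLES)]

def lsh_key(vec, planes):
    # Sign of the projection onto each random hyperplane gives one bit
    # of the hash key (a standard LSH family for cosine similarity).
    return tuple(1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def build_index(segments):
    # segments: list of (time, feature_vector) pairs for the database.
    # Each vector is inserted into every table in parallel, so a query
    # only needs to collide in one of them to be found.
    tables = [defaultdict(list) for _ in range(NUM_TABLES)]
    for t, vec in segments:
        for table, planes in zip(tables, PLANES):
            table[lsh_key(vec, planes)].append(t)
    return tables

def match_offsets(query_segments, tables):
    # Hough-transform-style voting: each (query time, database time)
    # match votes for the offset db_time - query_time. A true match
    # piles votes onto a single offset bin; spurious collisions
    # scatter across many bins.
    votes = defaultdict(int)
    for qt, vec in query_segments:
        for table, planes in zip(tables, PLANES):
            for dt in table.get(lsh_key(vec, planes), []):
                votes[dt - qt] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```

For example, indexing 20 random segment vectors and querying an excerpt taken from time 3 onward should make the offset-3 bin win the vote by a wide margin, since identical vectors collide in every table.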
Item Type: Conference or Workshop Item (Paper)
Keywords: content-based music retrieval; acoustic index; music similarity
Project Homepage: http://infolab.stanford.edu/midas/midas.html