Music retrieval is an active research topic in the field of information retrieval. This paper analyzes the causes of the limited applicability and low efficiency of existing methods, discusses feature-sequence representations for user humming and multi-channel MIDI, and proposes HMRM (Homophonic MIDI Retrieval Model), a new content-based retrieval model for multi-channel MIDI. The new model represents a multi-channel piece of music as a hidden Markov model, computes the similarity between the hummed query and each object in the database, and returns the query results ranked by similarity. Experiments show that, for users of different humming skill levels, HMRM achieves an average recognition rate of 73.3%, a relatively high precision.
Music retrieval is a hot topic in the information retrieval field. This study analyzes the limits of existing methods and the reasons for their low efficiency, and implements techniques to represent humming and homophonic music as feature sequences. It proposes a new content-based Homophonic Music Retrieval Model (HMRM). HMRM represents music data as hidden Markov models, calculates the similarity between a user's humming and the objects in the database, and returns the top-5 pieces to users in rank order. Experiments show that the average success rate reaches 73.3%, which indicates a relatively high precision.
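The retrieval scheme the abstract describes can be sketched in miniature: each database song is modeled as a discrete hidden Markov model, the hummed query is quantized into a symbol sequence, and songs are ranked by the forward-algorithm likelihood of the query under each model. This is only an illustrative sketch, not the paper's actual implementation; the toy models, the `forward_likelihood` and `rank_songs` helpers, and the 0/1 observation alphabet (e.g. pitch moving down/up) are all assumptions for demonstration.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | HMM) via the forward algorithm.

    start[i]    : initial probability of hidden state i
    trans[i][j] : transition probability from state i to state j
    emit[i][o]  : probability that state i emits symbol o
    """
    n = len(start)
    # Initialize forward variables with the first observation.
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    # Propagate through the remaining observations.
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return sum(alpha)

def rank_songs(query, song_models, top_k=5):
    """Score the hummed query against every song's HMM and rank."""
    scores = [
        (name, forward_likelihood(query, *model))
        for name, model in song_models.items()
    ]
    return sorted(scores, key=lambda s: -s[1])[:top_k]

# Two toy 2-state song models (hypothetical parameters):
# song "A" tends to emit symbol 0, song "B" tends to emit symbol 1.
song_models = {
    "A": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.9, 0.1]]),
    "B": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.1, 0.9]]),
}

ranked = rank_songs([0, 0, 0], song_models)  # query hums symbol 0 three times
```

A query of repeated 0-symbols scores far higher under model "A", so "A" tops the ranking; in a full system the observation alphabet would come from quantized pitch or interval contours extracted from the hummed audio.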