Because the web is huge and web pages are updated frequently, a search engine has to refresh the pages in its index periodically. This is extremely resource-consuming, because the search engine must re-crawl the web and download pages to keep its index fresh. Building on current web-refreshing techniques, we present a cooperative scheme between web servers and search engines for maintaining the freshness of the web repository. The web server publishes metadata, defined in a standard XML format, that describes its web site. Before updating a web page, the crawler first reads the metadata file; if the metadata indicates that the page has not been modified, the crawler does not re-download it. The scheme can therefore save bandwidth. A prototype based on the scheme has been implemented, and its cost and efficiency are analyzed.
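To illustrate the idea, the following minimal sketch shows how a crawler could use a server-published metadata file to decide which pages actually need re-downloading. The XML element and attribute names (site, page, url, lastmod) and the example URLs are illustrative assumptions, not the paper's actual metadata definition.

    # Minimal sketch of the cooperative refresh scheme (assumed metadata format).
    import xml.etree.ElementTree as ET

    # Example of the kind of per-site metadata file a web server could publish
    # (element names and timestamps are hypothetical).
    EXAMPLE_METADATA = """<?xml version="1.0" encoding="UTF-8"?>
    <site>
      <page url="http://example.org/index.html" lastmod="2024-05-01T12:00:00Z"/>
      <page url="http://example.org/news.html"  lastmod="2024-05-20T08:30:00Z"/>
    </site>"""

    def pages_to_refresh(metadata_xml, repository):
        """Return URLs whose lastmod in the metadata differs from the copy
        recorded in the search engine's repository (a dict: url -> lastmod)."""
        stale = []
        for page in ET.fromstring(metadata_xml).iter("page"):
            url, lastmod = page.get("url"), page.get("lastmod")
            # Download only pages that are new or whose timestamp changed.
            if repository.get(url) != lastmod:
                stale.append(url)
        return stale

    if __name__ == "__main__":
        # Repository state from the previous crawl; only news.html has changed.
        repo = {
            "http://example.org/index.html": "2024-05-01T12:00:00Z",
            "http://example.org/news.html": "2024-05-10T00:00:00Z",
        }
        print(pages_to_refresh(EXAMPLE_METADATA, repo))
        # -> ['http://example.org/news.html']  (bandwidth saved on index.html)

In this sketch the crawler downloads one small metadata file per site instead of fetching every page, which is where the bandwidth saving described above comes from.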