This paper focuses on comparing the recognition rates of competing recognition algorithms, a problem common to many areas of pattern recognition research. The paper first reviews several traditional recognition rate comparison procedures and discusses their limitations. A new method, the posterior probability calculation (PPC) procedure, is then proposed based on Bayesian techniques. The paper analyzes the basic principle, processing steps, and computational complexity of the PPC procedure. In the Bayesian view, the posterior probability represents the degree of credibility (equivalent to the confidence level) of the comparison results. The posterior probability of correctly selecting or sorting the competing recognition algorithms is derived, and the minimum required sample size is pre-estimated and presented in tabular form. To further illustrate how to use the method, the PPC procedure is applied to justify an empirical choice in one application and to calculate the confidence level on fixed-size datasets in another. These applications demonstrate the advantages of the PPC procedure, and the discussion of the stopping rule further explains the underlying statistical causes. Finally, we conclude that the PPC procedure achieves all the expected functions and is superior to the traditional methods.
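
As a minimal illustration of the underlying Bayesian idea (not the paper's exact PPC derivation), the following Python sketch estimates the posterior probability that the competing algorithms' true recognition rates follow their observed ordering. It assumes independent Beta(1, 1) priors on each recognition rate and a Monte Carlo approximation of the posterior; the function name and the example counts are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def posterior_prob_of_order(successes, trials, n_draws=100_000):
    # Monte Carlo estimate of the posterior probability that the
    # algorithms' true recognition rates follow the observed ordering.
    # successes[k], trials[k]: correct recognitions and test-sample
    # count for algorithm k; each rate gets a uniform Beta(1, 1) prior.
    successes = np.asarray(successes)
    trials = np.asarray(trials)
    # Posterior of each recognition rate is Beta(1 + s, 1 + n - s).
    draws = rng.beta(1 + successes, 1 + trials - successes,
                     size=(n_draws, len(successes)))
    # Sort algorithms by observed (sample) recognition rate, then count
    # how often the posterior draws preserve that descending order.
    order = np.argsort(-successes / trials)
    ordered = draws[:, order]
    return np.mean(np.all(ordered[:, :-1] >= ordered[:, 1:], axis=1))

# Example: three competing recognizers evaluated on 500 samples each.
print(posterior_prob_of_order([462, 451, 430], [500, 500, 500]))

In this sketch the returned value plays the role of the confidence level attached to the comparison result: a larger test set (or a wider gap between observed rates) drives it toward 1, which is the intuition behind pre-estimating the minimum sample size.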