Biometric traits, such as fingerprints, facial images, and teeth impressions, are often used in forensic analysis to identify crime suspects. Matching such biometric traits is not perfect, and recent reports have indicated the need for quantifiable measures of error rates for these possible matches. Often, comparisons between two sets of a trait are scored, with a higher score indicating a higher likelihood that the sets are a match. Adjusting the cutoff above which a match is declared yields a trade-off between false positive and false negative decisions, which can be represented by an ROC curve. In this paper, the authors model such ROC curves conditional on covariates, for example, demographic information about source subjects, quality properties of the underlying biometric measurements, or characteristics of forensic examiners; quantifying how error rates vary with such covariates is often considerably more meaningful in biometrics and forensics than the "raw" error rates based on pooled data. The authors develop a framework for estimating covariate-specific ROC curves that integrates robustness, heteroscedasticity, and stochastic ordering. Stochastic ordering is of particular relevance in this application, since biometric recognition systems are typically calibrated to assign higher scores to matching pairs than to nonmatching pairs. The proposed methodology is demonstrated on the accuracy of face recognition and fingerprint matching, and it also has potential in other application domains such as medical diagnostics.
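The cutoff trade-off described above can be sketched with a simple empirical ROC computation. This is a minimal illustration, not the authors' estimator: the scores below are hypothetical, and the ROC is traced by sweeping the match-declaration threshold and recording the false positive rate (nonmatch pairs declared matches) against the true positive rate (match pairs correctly declared).

```python
import numpy as np

# Hypothetical similarity scores; higher means more likely a match.
match_scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55])
nonmatch_scores = np.array([0.7, 0.5, 0.4, 0.3, 0.2])

def empirical_roc(match_scores, nonmatch_scores, thresholds):
    """For each cutoff t, declare a match when score >= t and record
    the false positive rate (on nonmatch pairs) and the true positive
    rate (on match pairs)."""
    fpr = np.array([(nonmatch_scores >= t).mean() for t in thresholds])
    tpr = np.array([(match_scores >= t).mean() for t in thresholds])
    return fpr, tpr

# Sweep the cutoff from lenient (everything is a match) to strict.
thresholds = np.linspace(0.0, 1.0, 101)
fpr, tpr = empirical_roc(match_scores, nonmatch_scores, thresholds)
```

Because the match scores here stochastically dominate the nonmatch scores, the resulting curve lies above the diagonal (tpr >= fpr at every cutoff); the covariate-specific version in the paper would condition these two score distributions on covariates.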