Findings and methodology are presented for an assessment of the facial image processing software developed by Carnegie Mellon University (CMU).
The assessment was divided into three phases, one for each of the three algorithms (Face Detection, Face Recognition, and Periocular Face Reconstruction). Results were computed with a common software framework to maximize the consistency of the assessments. The process was designed to provide a one-to-one comparison between algorithms: the benchmarks and the algorithm under assessment were processed using the same dataset and the same metrics. Summary results are provided for each phase, with metric values broken out by algorithm and dataset. The CMU-developed, CNN-based software generally outperformed the benchmark comparisons, with a few noted exceptions. The assessment found CMU Ultron performance to be comparable to that of tinyFace; PittPatt performance was inferior to both Ultron and tinyFace, and YOLO had the lowest performance. In the recognition domain, the CMU-developed Native CMU algorithm outperformed the benchmarks across three datasets without any tuning. The report includes 28 references and extensive tables and figures.
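The one-to-one comparison approach described above can be sketched as a small evaluation harness: every algorithm is scored on the same dataset with the same metric, so results are directly comparable. This is a minimal illustration only; the function names, the toy "detectors", and the accuracy metric are assumptions for demonstration, not the report's actual framework.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def run_assessment(algorithms, dataset, labels, metric=accuracy):
    """Score every algorithm on the same dataset with the same metric,
    yielding a one-to-one comparison. `algorithms` maps a name to a
    callable that produces a prediction for one sample."""
    return {
        name: metric([algo(x) for x in dataset], labels)
        for name, algo in algorithms.items()
    }

# Toy example (hypothetical): two "detectors" deciding whether a
# confidence score corresponds to a face (1) or not (0).
dataset = [0.9, 0.2, 0.8, 0.1]
labels = [1, 0, 1, 0]
algorithms = {
    "candidate": lambda x: int(x > 0.5),   # detects both faces
    "benchmark": lambda x: int(x > 0.85),  # misses the 0.8 face
}
scores = run_assessment(algorithms, dataset, labels)
```

Because every entry in `scores` was produced from the same dataset and metric, differences between algorithms reflect the algorithms themselves rather than differences in the evaluation protocol.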