The great methods bake-off: Comparing performance of machine learning algorithms

NCJ Number
307858
Journal
Journal of Criminal Justice Volume: 82 Dated: 2022
Author(s)
Alex Kigerl; Zachary Hamilton; Melissa Kowalski; Xiaohan Mei
Date Published
2022
Annotation

This paper reports a comparison of regression models with more advanced machine learning techniques for developing algorithmic risk assessments, and notes that the findings demonstrate that sample size trumps algorithm type.

Abstract

Risk assessments have been constructed using a variety of algorithms, from bivariate associations, to regression, to advanced machine learning (ML) approaches. While promising greater accuracy, agencies are hesitant to adopt tools using newer ML approaches, noting concerns of bias and transparency. Research is needed to identify optimal scenarios for algorithm use in assessment development. We compared regression models (logistic, boosted, and penalized) to more advanced techniques (neural networks, support vector machines, random forests, and K-nearest neighbors), while also introducing 'stacking', a method that combines algorithms to create an optimized model. Using a multi-state sample of 258,464 youth assessments, we varied prediction scenarios by sample size and base rate. While performance generally improved with greater sample size, a set of 'top performing' algorithms was identified. Among top performers, a 'saturation point' was observed, where algorithm type had little impact when samples exceeded 5,000 subjects. In an era of big data and artificial intelligence, it is tantalizing to explore new approaches. While we do not discourage exploration, our findings demonstrate that sample size trumps algorithm type. Agencies and providers should consider this finding when adopting or developing tools, as algorithms that offer transparency may also be top performers. (Published Abstract Provided)
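
For readers who want a concrete picture of the 'stacking' approach the abstract references, the sketch below combines several of the compared algorithm families using scikit-learn's StackingClassifier. It is a minimal illustration only: the estimators, hyperparameters, and synthetic data are assumptions for demonstration, not the authors' actual models or the multi-state youth-assessment data.

```python
# Minimal sketch of 'stacking' (combining algorithms into one optimized model).
# Illustrative only: synthetic data and placeholder hyperparameters, not the study's pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an assessment dataset with a lower base rate (placeholder only).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Base learners spanning the families compared in the paper:
# regression-based models and more advanced ML techniques.
base_learners = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("penalized", LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    ("boosted", GradientBoostingClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(random_state=0))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
]

# Stacking: cross-validated base-learner predictions feed a meta-learner
# (here a logistic regression) that combines them into a single model.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000), cv=5)
stack.fit(X_train, y_train)

print("Stacked AUC:", roc_auc_score(y_test, stack.predict_proba(X_test)[:, 1]))
```

Evaluating the same stacked model at several training-set sizes (e.g., subsampling before fitting) is one way to observe the kind of 'saturation point' the abstract describes, where gains from algorithm choice shrink as samples grow.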