NCJ Number
91776
Date Published
1983
Length
71 pages
Annotation
A critical review of conceptual and methodological problems associated with the construction of empirically based sentencing guidelines focuses on three steps: collecting data on past sentencing practices, analyzing these data to produce a model, and translating the model into a prescriptive instrument.
Abstract
The author first describes the Gottfredson-Wilkins model of sentencing guidelines, which is used by the U.S. Parole Board and many jurisdictions. Two elements distinguish this model: it provides a range of months or years to be served before release from prison, and it gives judges the option of departing from this range in explicitly justified circumstances. In essence, the model makes explicit a policy the Parole Board had already been following, one that gives great weight to offense seriousness and prior record. At present there is nothing that could be called a theory of how judges actually decide prison terms or sentences, and most researchers working toward guidelines have indiscriminately collected as much data as they could find. They have also given little thought to sample size, to the kinds of cases selected for study, or to validating the statistical model. Model development presents many technical problems, including the need to keep separate at least two outcomes of the sentencing decision (the in-out and the how-long decisions), whether to use a single guideline instrument or an offense-specific approach, and what variables to include. In moving from model to sentencing guidelines, developers to date have overlaid the empirical results with policy considerations, particularly in the decision to incarcerate and in the width of the prescribed 'normal' ranges. The paper discusses techniques for assessing the structure and impact of guidelines, as well as the future role of empirical research in this field. Tables, 68 footnotes, and approximately 70 references are supplied.