Scoring in a Computer Age

NCJ Number
176888
Journal
Polygraph, Volume 28, Issue 1, 1999, Pages 77-81
Author(s)
J Wygant
Date Published
1999
Length
5 pages
Annotation
The extensive use of computer-aided polygraph chart evaluation raises two questions: when is manual scoring still necessary, and what is the best way to resolve a discrepancy between a manual score and a computer result?
Abstract
Examiners are not apt to find answers to these questions in existing research, except to the extent that the value of manual scoring has been well established by numerous studies over the past 25 years. Despite widespread acceptance of the concept of scoring, actual practice falls into four general categories: an examiner may manually score all charts from every examination, even the most obvious; may score only the charts that are not obvious; may score only the exams for which he/she cannot otherwise reach a "global impression" of truth or lie; or may score nothing, relying entirely on "global impressions" or on a computer. Fortunately for the profession, informal surveys of examiners confirm that most score everything manually, even when using a computer. Any examiner who frequently disagrees with the computer results may want to ask colleagues to review a few examinations; if everyone else is getting results that differ from those of the examiner who administered the test, that examiner must analyze why this is happening. Occasional differences can derive from several sources. In using computer-aided evaluation, the examiner must not mix issues; the software was written with the assumption that the examiner is submitting a single-issue test, or at least one in which lying to any issue question presumes lying to all of them. Another consideration within the examiner's control is the care exercised in manually removing distorted segments from the computer's consideration. A further common source of differences is the method by which the charts are evaluated: examiners are generally limited to comparing an issue question to a control question on either side of it, and jumping much further from the issue question makes the comparison more difficult, so it is usually not attempted unless there are no better options. This paper discusses what an examiner should do when faced with a difference between manual and computer evaluation. 2 tables and 2 references
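
The spot comparison the abstract describes (an issue question scored against an adjacent control question, channel by channel, with the per-spot scores summed into a total) can be illustrated with a short sketch. The Python below is a minimal, hypothetical rendering of that idea: the -3 to +3 scale, the ratio cutoffs, and the plus-or-minus 6 decision threshold are common conventions in numerical polygraph scoring, assumed here for illustration and not taken from Wygant's paper.

    def spot_score(relevant: float, control: float) -> int:
        """Score one channel at one spot on a -3..+3 scale.
        A stronger reaction to the control question leans truthful (positive);
        a stronger reaction to the issue question leans deceptive (negative).
        The ratio cutoffs are illustrative assumptions."""
        if min(relevant, control) <= 0:
            return 0  # unusable or artifact-distorted measurement
        ratio = relevant / control
        if ratio >= 2.0:
            return -3  # much stronger reaction to the issue question
        if ratio >= 1.5:
            return -2
        if ratio >= 1.15:
            return -1
        if ratio <= 0.5:
            return 3   # much stronger reaction to the control question
        if ratio <= 0.67:
            return 2
        if ratio <= 0.87:
            return 1
        return 0       # no meaningful difference either way

    def chart_total(spots):
        """Sum per-channel spot scores; each spot is a
        (relevant_amplitude, control_amplitude) pair."""
        return sum(spot_score(r, c) for r, c in spots)

    def call(total: int, cutoff: int = 6) -> str:
        """Map a grand total to a decision; a cutoff of 6 is a common
        convention, used here only as an example."""
        if total >= cutoff:
            return "no deception indicated"
        if total <= -cutoff:
            return "deception indicated"
        return "inconclusive"

    manual_total = chart_total([(8.0, 11.0), (6.0, 13.0), (9.0, 9.5)])
    print(manual_total, call(manual_total))  # 4 inconclusive, with these cutoffs

In this sketch, a manual grand total that lands on the opposite side of the cutoff from a computer algorithm's estimate is exactly the manual-versus-computer discrepancy the paper addresses.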

Downloads

No download available
