This article reports the results of an effort to enable computers to segment U.S. adjudicatory decisions into sentences.
The project created a data set of 80 court decisions from four different domains. Findings indicate that legal decisions are more challenging for existing sentence boundary detection systems than non-legal texts. These systems rest on a number of assumptions that do not hold for legal texts, so their performance is impaired. The project indicates that a general statistical sequence labeling model is capable of learning the definition of a sentence boundary more efficiently from annotated decisions. The project trained a number of conditional random field models that outperform traditional sentence boundary detection systems when applied to adjudicatory decisions. (publisher abstract modified)
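To make the sequence labeling framing concrete, the following is a minimal sketch of sentence boundary detection with a conditional random field, assuming the sklearn-crfsuite package. The feature set, the toy "EOS"/"O" label scheme, and the example tokens are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch: sentence boundary detection framed as token-level sequence
# labeling with a conditional random field (assumes sklearn-crfsuite).
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: the token itself and its neighbors."""
    tok = tokens[i]
    feats = {
        "token.lower": tok.lower(),
        "token.is_punct": tok in {".", "?", "!", ";"},
        "token.is_upper_initial": tok[:1].isupper(),
    }
    if i > 0:
        feats["prev.lower"] = tokens[i - 1].lower()
    if i < len(tokens) - 1:
        feats["next.lower"] = tokens[i + 1].lower()
        feats["next.is_upper_initial"] = tokens[i + 1][:1].isupper()
    return feats

# Toy training example: "EOS" marks a token that ends a sentence, "O" marks
# everything else (periods inside citations such as "v." or "U.S." stay "O").
tokens = ["Smith", "v.", "Jones", ",", "530", "U.S.", "363", "(", "2000", ")",
          ".", "The", "court", "affirmed", "."]
labels = ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O",
          "EOS", "O", "O", "O", "EOS"]

X_train = [[token_features(tokens, i) for i in range(len(tokens))]]
y_train = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

# Predict boundary labels for token sequences.
print(crf.predict(X_train)[0])
```

In this framing, the model learns which punctuation tokens actually close a sentence from their context, rather than relying on hand-written rules that break down on citations, abbreviations, and other patterns common in court decisions.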