People recognize faces of their own race more accurately than faces of other races, a phenomenon known as the "Other-Race Effect" (ORE). Previous studies have shown that training with multiple variable images improves face recognition. Building on multi-image training, the current study took a novel approach to improving own-race and other-race face recognition by testing the role of learning context in recognition accuracy. Learning context was either contiguous, with multiple images of each identity seen in sequence, or distributed, with multiple images of an identity randomly interspersed among images of different identities. In two experiments, East Asian and Caucasian participants learned own-race and other-race faces in either a contiguous or a distributed order. In Experiment 1, participants learned each identity from four highly variable face images. In Experiment 2, identities were learned from one image repeated four times. Both experiments found a robust other-race effect. The effect of learning context, however, depended on the variability of the learned images. Distributed presentation yielded better recognition when people learned from a single repeated image (Exp. 2), but not when they learned from multiple variable images (Exp. 1). Overall, performance was better with multiple-image training than with repeated single-image training. The study concluded that multiple-image training and distributed learning can both improve recognition accuracy, but via distinct processes. The former broadens perceptual tolerance for image variation within a face when diverse images are available for learning. The latter strengthens the representation of differences among similar faces when only a single learning image is available. (publisher abstract modified)