Abstract
Models of visual attention have focused predominantly on bottom-up approaches that ignore structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information becomes available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition.
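The idea sketched in the abstract, pooled low-level feature statistics over the whole image priming where an object is likely to appear, can be illustrated with a minimal numpy sketch. The grid-pooled intensity statistics here stand in for the paper's pooled filter responses, and the linear mapping from the global feature vector to an expected vertical object position is a hypothetical stand-in for a learned contextual model, not the paper's actual estimator.

```python
import numpy as np

def gist_features(image, grid=4):
    """Coarse global statistics: mean intensity per cell of a grid x grid
    partition of the image. A stand-in for pooled low-level filter responses."""
    h, w = image.shape
    cells = image[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()  # length grid*grid feature vector

def location_prior(gist, weights, height):
    """Map the global feature vector to an expected vertical object position
    (hypothetical linear model), then return a Gaussian prior over image rows."""
    y_pred = np.clip(weights @ gist, 0.0, 1.0) * height  # expected row
    rows = np.arange(height)
    prior = np.exp(-0.5 * ((rows - y_pred) / (0.15 * height)) ** 2)
    return prior / prior.max()

def contextual_saliency(bottom_up, prior_rows):
    """Modulate a bottom-up saliency map by the contextual location prior,
    boosting regions consistent with the predicted object location."""
    return bottom_up * prior_rows[:, None]

# Usage sketch: random image and weights in place of learned parameters.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
gist = gist_features(image)                     # shape (16,)
weights = rng.random(16) / 16                   # hypothetical learned mapping
prior = location_prior(gist, weights, 64)       # shape (64,)
saliency = contextual_saliency(rng.random((64, 64)), prior)
```

The key point the sketch captures is that the contextual prior is computed from global statistics alone, before any local image exploration, and then simply reweights the bottom-up saliency map.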
© 2003 Optical Society of America