Seminar: Friday, September 23, 2016

Speaker:
Melchi M. Michel, PhD

Assistant Professor
Department of Psychology
Rutgers University

Title:
Visual Memory and Information Integration across Saccadic Eye Movements

Abstract:
Maintaining a continuous, stable perception of the visual world relies on the ability to integrate information from previous fixations with information from the current one. One essential component of this integration is trans-saccadic memory (TSM), memory for information across shifts in gaze. Low TSM capacity may play a limiting role in tasks requiring efficient trans-saccadic integration, such as multiple-fixation visual search tasks. Another essential component is the ability to integrate information about individual objects across the changes in retinal image location that accompany gaze shifts. I will discuss two recent projects that investigate the role of retinotopic shifts and visual memory in trans-saccadic integration.

In the first project, we used results from rate-distortion theory to derive a memory-limited ideal observer model of visual search. We then used this model, in combination with two visual search tasks, to estimate TSM capacity and to evaluate its relationship to conventional visual short-term memory (VSTM). Our results suggest that TSM plays an important role in visual search tasks, that the effective capacity of TSM may be larger than that of VSTM, and that the TSM capacity of human observers significantly limits performance in multiple-fixation visual search tasks.
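
For context on the modeling approach, the sketch below illustrates the textbook rate-distortion result that such memory-limited models build on: a Gaussian source with variance sigma^2 stored at R bits per item cannot be reconstructed with mean squared error below sigma^2 * 2^(-2R), so a fixed memory capacity directly bounds the fidelity of remembered information. This is a generic Python illustration under assumed capacity values, not the ideal observer model used in the study.

```python
import numpy as np

def gaussian_rd_bound(sigma2, rate_bits):
    """Minimum achievable mean squared error for a Gaussian source with
    variance sigma2 encoded at rate_bits bits per item: D(R) = sigma^2 * 2^(-2R)."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# Illustrative only: a remembered feature with unit variance, stored under
# several hypothetical trans-saccadic memory capacities (bits per item).
sigma2 = 1.0
for capacity in [0.5, 1.0, 2.0, 4.0]:
    d = gaussian_rd_bound(sigma2, capacity)
    print(f"capacity = {capacity:3.1f} bits -> minimum distortion = {d:.3f}")
```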

In the second project, we used a spatiotemporal reverse correlation technique to determine the time course of spatial information integration across a saccadic eye movement. Human observers were asked to make a discrimination judgment regarding a dynamic visual stimulus whose physical location remained fixed across a gaze shift. We found strong evidence that observers anticipate the future retinal location of the target and start integrating from this location approximately 100 ms prior to the onset of the eye movement. Across a series of experiments, we demonstrate that this anticipatory integration is attention-based and predictive, that it is spatially precise, and that it can occur in parallel across multiple attentional targets.
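
To make the reverse-correlation logic concrete, here is a minimal Python sketch (not the speaker's actual analysis) that simulates an observer whose binary judgments are driven by stimulus noise within a limited range of frames, then recovers that temporal weighting by subtracting the mean noise on "no" trials from the mean noise on "yes" trials. The trial count, frame count, and simulated integration window are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_frames = 5000, 20                    # assumed trial and frame counts
noise = rng.normal(size=(n_trials, n_frames))    # per-frame luminance noise

# Simulated observer: only frames 8-12 influence the decision
# (a stand-in for an integration window around the saccade).
true_weights = np.zeros(n_frames)
true_weights[8:13] = 1.0

decision_var = noise @ true_weights + rng.normal(scale=2.0, size=n_trials)
responses = decision_var > 0                     # binary judgment on each trial

# Temporal classification "image": mean noise on 'yes' trials minus 'no' trials.
kernel = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(np.round(kernel, 3))                       # peaks over the frames the observer used
```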