Lowell Thompson
Credentials: Ph.D. Student, Neuroscience
LUCID Research Project: Visual Perception
Research Interests: Psychophysics, Electrophysiology, Neural Network Modeling
Research Page: Lowell Thompson
Research Advisors: Bas Rokers and Ari Rosenberg
The brain integrates multiple sensory signals to create the perception of 3D motion in depth. I’m interested in how the brain extracts, weighs, and combines these signals to produce an optimal motion percept. My research takes advantage of a prevalent agnosia discovered by the Rokers Vision Laboratory, in which human observers are unable to discriminate approaching from receding stimuli in specific regions of the visual field. By combining human behavioral data with electrophysiological recordings in non-human primates from the Rosenberg Lab, we hope to uncover the neural mechanisms responsible for this agnosia and shed light on 3D motion signal integration as a whole.
Psychophysics
A large portion of my current research involves evaluating human and non-human primate behavior related to 3D motion perception. To do so, I make use of classic psychophysical techniques and signal processing models. Stimuli for these tasks are generated using various 3D display technologies, MATLAB, and OpenGL. I selectively manipulate cues to 3D motion that a single eye can process (monocular cues) as well as those requiring binocular integration (binocular cues). The scope of this research is threefold. First, we have evaluated observers’ sensitivity to the individual cues throughout the visual field. Second, we have modeled behavior when both sets of cues are available (natural viewing) using a maximum likelihood estimate based on the individual cue sensitivities, as sketched below. Third, work is ongoing to determine whether sensitivity to these cues throughout the visual field is gravity-centered or retinotopic.
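As a rough illustration of the maximum likelihood prediction mentioned above, the sketch below computes the combined-cue threshold and cue weights implied by standard reliability-weighted integration. The function name and the example thresholds are placeholders for illustration, not values from our experiments.

```python
import numpy as np

def mle_combined_prediction(sigma_mono, sigma_bino):
    """Predict the combined-cue discrimination threshold and cue weights
    under standard maximum-likelihood (reliability-weighted) integration.

    sigma_mono, sigma_bino: thresholds measured for the monocular and
    binocular cues presented in isolation (hypothetical values here).
    """
    # The reliability of each cue is the inverse variance of its estimate.
    r_mono = 1.0 / sigma_mono**2
    r_bino = 1.0 / sigma_bino**2

    # Optimal weights are proportional to reliability.
    w_mono = r_mono / (r_mono + r_bino)
    w_bino = r_bino / (r_mono + r_bino)

    # The combined estimate's variance is the inverse of the summed reliabilities,
    # so the predicted threshold is never worse than the better single cue.
    sigma_combined = np.sqrt(1.0 / (r_mono + r_bino))
    return sigma_combined, w_mono, w_bino

# Example: hypothetical single-cue thresholds at one visual-field location.
print(mle_combined_prediction(sigma_mono=2.0, sigma_bino=1.0))
```

Comparing the predicted combined threshold against the threshold actually measured when both cues are present is one standard test of whether observers integrate the cues optimally.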
Electrophysiology
Our behavioral results inform our investigations into the neural mechanisms underlying 3D motion processing. The same tasks we use to quantify behavioral sensitivity are performed by non-human primates while we record single-unit activity and local field potentials in cortical regions of interest. The middle temporal (MT) and medial superior temporal (MST) areas are both implicated in 3D motion processing; however, the extent to which each area contributes to the processing and integration of monocular and binocular cues remains relatively unknown.
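To give a sense of how single-unit responses can be related to behavioral sensitivity in this kind of task, the sketch below computes an ROC-based neurometric measure from spike counts: the probability that an ideal observer could discriminate approaching from receding motion using one neuron’s responses. This is a common analysis in the motion literature rather than a description of our specific pipeline, and the spike counts are simulated.

```python
import numpy as np

def roc_area(counts_toward, counts_away):
    """Area under the ROC curve for an ideal observer discriminating
    approaching from receding motion using a neuron's spike counts.
    Equivalent to the probability that a randomly drawn 'toward' count
    exceeds a randomly drawn 'away' count (ties count as 0.5)."""
    toward = np.asarray(counts_toward)[:, None]
    away = np.asarray(counts_away)[None, :]
    return np.mean((toward > away) + 0.5 * (toward == away))

# Simulated spike counts from one unit (one count per trial).
rng = np.random.default_rng(0)
toward_counts = rng.poisson(20, size=100)
away_counts = rng.poisson(15, size=100)
print(roc_area(toward_counts, away_counts))  # 0.5 = chance, 1.0 = perfect
```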
Neural Network Modeling
One of the goals of my Ph.D. research, through my involvement in the LUCID training program, is to develop a performance-optimized neural network model of the dorsal visual stream specific to 3D motion perception. Such a model can provide insight into the computations performed in higher-order cortical areas such as MT and MST, which are often difficult to interpret from neural data alone.
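As a rough sketch of the kind of model I have in mind, the example below defines a small spatiotemporal convolutional network (in PyTorch) that maps binocular image sequences to a toward-versus-away decision. The architecture, layer sizes, and class name are illustrative placeholders, not a finalized model of the dorsal stream.

```python
import torch
import torch.nn as nn

class DorsalStreamNet(nn.Module):
    """Toy convolutional model of 3D motion processing.
    Input: binocular image sequences shaped (batch, 2 eyes, time, height, width).
    Output: logits for approaching vs. receding motion."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Early spatiotemporal filtering, loosely analogous to direction-selective V1 units.
            nn.Conv3d(2, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            # A later stage pools over space and time, loosely analogous to MT/MST integration.
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)  # toward vs. away

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 random binocular clips, 16 frames of 64x64 pixels.
model = DorsalStreamNet()
clips = torch.randn(4, 2, 16, 64, 64)
print(model(clips).shape)  # torch.Size([4, 2])
```

Once such a network is trained to perform the same discrimination tasks used in the psychophysics and electrophysiology experiments, its internal units could be compared against recorded MT and MST responses.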