The Auditory Research Group, led by Dr. Baldwin, is dedicated to research in all areas of Applied Auditory Cognition. This includes the design of auditory displays (particularly collision avoidance and navigation systems for air and ground vehicles), communication systems, and strategies for improving speech intelligibility in adverse listening conditions and among listeners with hearing impairments. The laboratory includes two acoustically shielded chambers for recording and testing auditory stimuli, a host of neurophysiological and physiological recording equipment, and a suite of high-quality sound generation, digital recording, analysis, and presentation equipment and software.
Our Driving Simulation facilities include a high-fidelity, motion-based simulator and several lower-fidelity desktop simulators for rapid prototyping. The motion-based simulator is equipped with a digital dashboard and a touch-screen ancillary display console for examining new visual displays, as well as two custom-designed seat pans for presenting vibrotactile signals.
MRES is a group of social and behavioral scientists dedicated to applying our methodological skills to real-world problems. The MRES (pronounced "mysteries") lab consults with government, educational, and private organizations, and also conducts independent research. Our collective interests cover most social and behavioral areas, with a primary focus on clinical, criminal justice, health, education, and science policy concerns.
The Visual Attention and Cognition Lab, led by Dr. Matt Peterson, is concerned with how attention, working memory, and eye movements interact to affect cognition and perception in both well-controlled laboratory settings and more complex environments. Topics of interest include how environmental factors capture attention, how memory guides visual search, how attention affects scene perception, and how working memory is affected by eye movements. Our lab uses a variety of methods to study cognition, including psychophysical methods, high-speed eye tracking, EEG, brain-computer interfaces that utilize machine-learning algorithms to match patterns in ERP signals, transcranial direct-current stimulation (tDCS), and salivary cortisol measures of stress.
The ability to see how other people move is essential for many aspects of daily life, from things as simple as avoiding collisions to detecting suspicious behavior or recognizing someone else's emotions. The research efforts of the Perception & Action Neuroscience Group (PANGlab), led by Dr. James Thompson, are focused on examining how we recognize human movement and make sense of other people's actions, and how we code our own actions in relation to the external environment. We investigate these issues using a combination of behavioral paradigms, virtual reality, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). The goal of the group's research is to further our understanding of how we see and act with others in everyday life, in specialized settings such as surveillance, and in conditions in which human movement recognition may be impaired.
Why do people make errors? How do people interact with robots? We collect data on how and why people make errors and on how they interact with robots, and we build theoretical models of both. These models help us not only understand people but also prevent errors and improve human-robot interaction. Our theories are instantiated in ways that predict what people will do in the future, and these predictions can then be used to change people's behavior.
The SREC Lab focuses on research in social attention and embodied cognition and its application to Social Robotics and Design Thinking. With regard to Social Robotics, the goal is to unravel what sort of information humans use when judging the degree of intentionality underlying the actions of social agents (e.g., robots) and how attributing a mind to others influences attention, perception, and performance. With regard to Design Thinking, the SREC Lab is interested in the role of embodied cognition in designing, and in particular how perception and action processes interact during design thinking. To investigate these questions, we use behavioral measures, eye tracking, and EEG.