MRES is a group of social and behavioral scientists dedicated to applying our methodological skills to real-world problems. The MRES lab (pronounced "mysteries") consults with government, educational, and private organizations and conducts independent research. Our collective interests cover most social and behavioral areas, with a primary focus on clinical, criminal justice, health, education, and science policy concerns.
The Visual Attention and Cognition Lab, led by Dr. Matt Peterson, is concerned with how attention, working memory, and eye movements interact to affect cognition and perception in both well-controlled laboratory settings and more complex environments. Topics of interest include how environmental factors capture attention, how memory guides visual search, how attention affects scene perception, and how working memory is affected by eye movements. Our lab uses a variety of methods to study cognition, including psychophysical methods, high-speed eye tracking, EEG, brain-computer interfaces that utilize machine-learning algorithms to match patterns in ERP signals, transcranial direct-current stimulation (tDCS), and salivary cortisol measures of stress.
The HeART lab focuses on several topics related to Human Factors, such as vigilance, trust, human-machine teaming, and team performance. Under the vigilance thrust, the lab examines factors that underlie performance decrements in sustained attention tasks, studying cognitive resource utilization with a non-invasive imaging technique that monitors cerebral blood flow velocity (Transcranial Doppler Sonography). With regard to trust, the HeART lab is interested in factors that influence trust, such as automation failure and trust repair. With regard to human-machine teaming, the lab examines unique and newly emerging interaction structures between humans and machines. Finally, the HeART lab is also interested in the dynamics of team collaboration and cooperation.
The ability to see how other people move is essential for many aspects of daily life, from tasks as simple as avoiding collisions to detecting suspicious behavior or recognizing someone else's emotions. The research efforts of the Perception & Action Neuroscience Group (PANGlab), led by Dr. James Thompson, focus on how we recognize human movement and make sense of other people's actions, and how we code our own actions in relation to the external environment. We investigate these issues using a combination of behavioral paradigms, virtual reality, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). The goal of the group's research is to further the understanding of how we see and act with others as part of everyday life, in specialized settings such as surveillance, and in conditions in which human movement recognition may be impaired.
Why do people make errors? How do people interact with robots? We collect data on how and why people make errors and how they interact with robots. We then build theoretical models of both behaviors, not only so that we can understand people, but also so that we can help prevent errors and improve human-robot interaction. Our theories are instantiated in ways that predict what people will do in the future, and these predictions can then be used to change people's behavior.
The SREC Lab focuses on research in social attention and embodied cognition and its application to Social Robotics and Design Thinking. With regard to Social Robotics, the goal is to unravel what sort of information humans use when judging the degree of intentionality underlying the actions of social agents (e.g., robots) and how attributing a mind to others influences attention, perception, and performance. With regard to Design Thinking, the SREC lab is interested in the role of embodied cognition in designing, and in particular how perception and action processes interact during design thinking. To investigate these questions, we use behavioral measures, eye tracking, and EEG.