The Auditory Research Group, led by Dr. Baldwin, is dedicated to research in all areas of Applied Auditory Cognition. This includes the design of auditory displays (particularly collision avoidance and navigation systems for air and ground), communication systems, and strategies for improving speech intelligibility in adverse listening conditions and among those with hearing impairments. The laboratory includes two acoustically shielded chambers for recording and testing auditory stimuli, a host of neurophysiological and physiological recording equipment, and a suite of high-quality sound generation, digital recording, analysis, and presentation equipment and software.
Our Driving Simulation facilities include a high-fidelity, motion-based simulator and several lower-fidelity desktop simulators for rapid prototyping. The motion-based simulator is equipped with a digital dashboard and a touch-screen ancillary display console for examining new visual displays, as well as two custom-designed seat pans for presenting vibrotactile signals.
The Applied Performance Research Lab focuses on how human performance is helped or hindered by the design of tools that help us accomplish everyday tasks. Our current research falls into four categories: improving performance in the medical domain, the influence of interruptions on performance, multitasking and display performance, and the design and use of educational software to aid in performance. In the past, we have been involved in work on performance in aviation, cognitive workload, highway transportation, and software comprehension.
Drs. Parasuraman and Greenwood have long investigated how the aging process affects the mind and brain in the context of normal genetic variation. We have recently begun to investigate ways to ameliorate cognitive decline late in life. The ultimate goal of training in healthy older people is to improve cognitive functioning in daily life. We have shown such transfer of training, in concert with changes in functional connectivity and white matter integrity, in healthy older people. We are currently investigating the elements of successful training – working memory, attention, executive functioning – in young people. We also investigate the use of non-invasive brain stimulation as a means of heightening the effects of training on cognition, regional brain activation, functional connectivity, and white matter integrity.
Neuroergonomics research in the Arch Lab is associated with the Center of Excellence in Neuroergonomics, Technology, and Cognition (CENTEC), which was officially launched at George Mason University on July 15, 2010, with Raja Parasuraman as Director. CENTEC is funded by the Air Force Office of Scientific Research (AFOSR) and the Air Force Research Laboratory (AFRL) for a period of five years (July 15, 2010 - July 14, 2015).
MRES is a group of social and behavioral scientists dedicated to applying our methodological skills to real-world problems. The MRES (pronounced "mysteries") lab consults with government, educational, and private organizations and also conducts independent research. Our collective interests cover most social and behavioral areas, with a primary focus on clinical, criminal justice, health, education, and science policy concerns.
The Visual Attention and Cognition Lab, led by Dr. Matt Peterson, is concerned with how attention, working memory, and eye movements interact to affect cognition and perception in both well-controlled laboratory settings and more complex environments. Topics of interest include how environmental factors capture attention, how memory guides visual search, how attention affects scene perception, and how working memory is affected by eye movements. Our lab uses a variety of methods to study cognition, including psychophysical methods, high-speed eye tracking, EEG, brain-computer interfaces that utilize machine-learning algorithms to match patterns in ERP signals, transcranial direct-current stimulation (tDCS), and salivary cortisol measures of stress.
The goals of the Cerebral Hemodynamics Lab are 1) to create a better understanding of how we utilize and maintain cognitive resources in order to reduce accidents in professions that require system monitoring, and 2) to investigate the use of new displays and automation for military systems.
This research group is focused on human-automation interaction and the effects of different levels and types of automation (Parasuraman, Sheridan, & Wickens, 2000) on human operator attention, decision-making, and other aspects of cognition. We are also examining how adaptive automation can be designed effectively so as to be sensitive to changes in operational context and human operator workload. Of particular interest is the development of delegation interfaces (Miller & Parasuraman, 2007) as a form of adaptable automation.
The TRUMAN Lab, led by Dr. Ewart de Visser and Dr. Tyler Shaw, is concerned with how different automated agents interact with humans and the ways in which these agents may affect performance, trust, reliance, and compliance during a task. Topics of interest include adaptive aiding, calibrated trust, human-robot interaction, trust cues, human performance and complacency during nonoptimal conditions, performance within supervisory control human-machine systems, effects of imperfect automation on human trust, individual differences in trust, how varying levels of risk affect trust, and ways to create extreme trust calibration using trust cues.
The ability to see how other people move is essential for many aspects of daily life - from things as simple as avoiding collisions to detecting suspicious behavior or recognizing someone else's emotions. The research efforts of the Perception & Action Neuroscience Group (PANGlab), led by Dr. James Thompson, are focused on examining how we recognize human movement and make sense of other people's actions, and how we code our own actions in relation to the external environment. We investigate these issues using a combination of behavioral paradigms, virtual reality, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). The goal of the group's research is to further the understanding of how we see and act with others as part of everyday life, in specialized settings such as surveillance, and in conditions in which human movement recognition may be impaired.
Why do people make errors? How do people interact with robots? We collect data on how and why people make errors and on how they interact with robots. We then build theoretical models of error-making and of human-robot interaction, not only to understand people, but also to help prevent errors and to improve how people interact with robots. Our theories are instantiated in ways that make predictions about what people will do in the future, and this information can then be used to change people's behavior.
The SREC Lab focuses on research in social attention and embodied cognition and its application to Social Robotics and Design Thinking. With regard to Social Robotics, the goal is to unravel what sort of information humans use when judging the degree of intentionality underlying the actions of social agents (e.g., robots) and how attributing a mind to others influences attention, perception, and performance. With regard to Design Thinking, the SREC Lab is interested in the role of embodied cognition in designing, in particular how perception and action processes interact during design thinking. To investigate these questions, we use behavioral measures, eye tracking, and EEG.