Project Description:
This project addresses a critical bottleneck in human-robot collaboration research: the lack of transparency in robotic decision-making during cooperative tasks. The Neuromotor Control and Learning (NMCL) Lab at the University of Maryland conducts studies using VTEAM, a virtual reality environment in which human participants and a simulated humanoid robot (Baxter) take turns solving Rush Hour-style puzzles of varying difficulty. Although the system supports flexible experimental setup, participants often struggle to understand why the robot selects certain moves, which undermines trust and effective teamwork.

To overcome this challenge, we developed an Explainable Artificial Intelligence (XAI) system that enables Baxter to communicate its reasoning through multiple modalities. Our solution extends the existing Unity3D-based VTEAM framework by letting researchers configure when explanations are delivered (before, during, or after the robot's actions) and how they are presented: as text pop-ups, as speech synthesized with the eSpeak API, or as graphical overlays that draw directional arrows on the game board. Researchers can also set the frequency of explanations (every turn, intermittent turns, or on-demand) and their quality (good, neutral, or bad) to support varied experimental conditions.

The enhanced interface enables researchers to systematically study cognitive load, trust dynamics, and user comprehension under different XAI configurations. By making the robot's decision-making process visible and tunable, our project advances the NMCL Lab's capability to investigate critical questions in cognitive-motor collaboration, laying the foundation for broader applications of explainable robotics in fields such as autonomous driving, aviation, and surgical assistance.
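To make the configuration space concrete, the sketch below shows one way the four experimental knobs (timing, modality, frequency, quality) could be represented as a Unity component. All type and member names here (ExplanationConfig, ShouldExplain, and the enums) are hypothetical illustrations of the options described above, not the actual VTEAM source code.

```csharp
using UnityEngine;

// Hypothetical enumerations mirroring the options named in the description.
public enum ExplanationTiming   { BeforeAction, DuringAction, AfterAction }
public enum ExplanationModality { TextPopup, Speech, GraphicalOverlay }
public enum ExplanationFrequency { EveryTurn, Intermittent, OnDemand }
public enum ExplanationQuality  { Good, Neutral, Bad }

// A researcher would set these public fields in the Unity Inspector
// to define one experimental condition before a session starts.
public class ExplanationConfig : MonoBehaviour
{
    public ExplanationTiming timing = ExplanationTiming.BeforeAction;
    public ExplanationModality modality = ExplanationModality.TextPopup;
    public ExplanationFrequency frequency = ExplanationFrequency.EveryTurn;
    public ExplanationQuality quality = ExplanationQuality.Neutral;

    // Decide whether an explanation should fire on a given turn.
    public bool ShouldExplain(int turnIndex, bool userRequested)
    {
        switch (frequency)
        {
            case ExplanationFrequency.EveryTurn:
                return true;
            case ExplanationFrequency.Intermittent:
                return turnIndex % 2 == 0; // assumed policy: every other turn
            case ExplanationFrequency.OnDemand:
                return userRequested;
            default:
                return false;
        }
    }
}
```

In the actual framework, a scheduler would presumably pair such a component with the selected modality to show the pop-up, synthesize speech, or draw the arrow overlay; the sketch is only meant to show how the experimental conditions map onto a single configurable component.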