
Multimodal AI Socratic Instructor for Smartphone (Supplement)

Over the past three years, our NSF RETTL project team has developed LiDAR Motion Visualizer, a free LiDAR-based smartphone app that lets students see real-time representations (position-time and velocity-time graphs) of their own motion as they move back and forth in front of flat surfaces (e.g., walls or trees). For this NAIRR initiative, we will extend our research by creating a phone-based AI ‘Socratic instructor’ that prompts and guides users to make sense of, and respond appropriately to, instantaneously generated feedback. Users currently see position-time and velocity-time graphs that update every 400 milliseconds as they move through a series of embodied learning tasks. AI will enable us to make that feedback targeted, personalized, multimodal, and just-in-time, grounded in best pedagogical practices.
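
The app's source is not reproduced here, but the graphing described above reduces to a finite-difference computation over timed LiDAR range samples. The Python sketch below, with hypothetical `Sample` and `velocity_series` names, shows one plausible way a velocity-time series could be derived from position samples taken on the app's 400-millisecond cadence.

```python
from dataclasses import dataclass

SAMPLE_INTERVAL_S = 0.4  # the app refreshes its graphs every 400 ms

@dataclass
class Sample:
    t: float  # seconds since the start of the task
    x: float  # LiDAR-measured distance to the flat surface, in meters

def velocity_series(samples: list[Sample]) -> list[tuple[float, float]]:
    """Finite-difference velocity estimate between consecutive samples."""
    series = []
    for prev, curr in zip(samples, samples[1:]):
        dt = curr.t - prev.t
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        v = (curr.x - prev.x) / dt
        # report the velocity at the midpoint of the interval
        series.append(((prev.t + curr.t) / 2, v))
    return series

# Example: a student walking away from a wall at a steady 0.5 m/s
samples = [Sample(t=i * SAMPLE_INTERVAL_S, x=1.0 + 0.5 * i * SAMPLE_INTERVAL_S)
           for i in range(5)]
print(velocity_series(samples))  # each velocity is approximately 0.5
```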

Colleen Megowan-Romanowicz (Principal Investigator, colleen@modelinginstruction.org), Mina Johnson-Glenberg (Co-Principal Investigator, mina.johnson@asu.edu), Rebecca Vieyra (Co-Principal Investigator, rebecca.elizabeth.vieyra@gmail.com), Chrystian Vieyra (Co-Principal Investigator, chrys.vieyra@gmail.com)

We are developing an AI tutor that helps students learn physics through interactive, game-based learning. The project continuously integrates new data, using machine learning to analyze gameplay and provide personalized feedback. As students interact with the tutor, we collect performance data, gameplay patterns, and learning behaviors, and integrate this information to dynamically update the AI models. This ensures the tutor adapts to individual student needs and improves over time.
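
As a rough illustration of this data-integration loop, the Python sketch below buffers recent gameplay events and periodically re-fits a feedback model on them. `AdaptiveTutor`, `MeanErrorModel`, the `match_error` field, and the update cadence are all hypothetical; the project's actual models and pipeline are not specified here.

```python
from collections import deque

class MeanErrorModel:
    """Toy stand-in for the real model: tracks the student's mean
    graph-matching error and phrases feedback as a question."""
    def __init__(self):
        self.mean_error = 0.0

    def fit(self, events):
        errors = [e["match_error"] for e in events]
        self.mean_error = sum(errors) / len(errors)

    def advise(self, event):
        if event["match_error"] > self.mean_error:
            return "That run drifted from the target graph. What would change your slope?"
        return "Close match. Why is your velocity graph flat in the middle?"

class AdaptiveTutor:
    """Buffer recent gameplay events and periodically fold them back
    into the feedback model (continuous integration of data, not the
    software-build sense of the term)."""
    def __init__(self, model, update_every=50, history=1000):
        self.model = model                   # anything with fit(events) / advise(event)
        self.update_every = update_every     # assumed re-fit cadence
        self.events = deque(maxlen=history)  # rolling window of recent interactions
        self._since_update = 0

    def observe(self, event):
        """Record one interaction and return personalized feedback for it."""
        self.events.append(event)
        self._since_update += 1
        if self._since_update >= self.update_every:
            self.model.fit(list(self.events))  # dynamic model update
            self._since_update = 0
        return self.model.advise(event)

tutor = AdaptiveTutor(MeanErrorModel(), update_every=5)
print(tutor.observe({"match_error": 0.3}))
```

The rolling window keeps the model weighted toward the student's recent behavior, which is one plausible way to realize the "adapts over time" behavior described above.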

A key research focus is the development of AI models that evolve through continuous data integration. By using large language model (LLM)-based multimodal algorithms, we enhance the AI tutor’s capabilities, making it responsive to both individual student progress and broader educational trends. The continuous integration of updated models and algorithms ensures that the AI tutor evolves in real time, offering increasingly customized learning experiences.
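
The project does not name a particular LLM or API, but a multimodal request to such a model can be sketched as follows: the student's transcribed speech and a numeric summary of their motion graph are packed into a chat-style message list for whatever backend is chosen. `SYSTEM_PROMPT` and `build_tutor_request` are illustrative names; the role/content schema is the common chat convention, not a specific vendor's API.

```python
import json

SYSTEM_PROMPT = (
    "You are a Socratic physics tutor. Never state the answer; "
    "ask one short question that guides the student toward it."
)

def build_tutor_request(transcript, graph_points, target_slope):
    """Pack the student's spoken utterance and a numeric summary of
    their motion graph into a chat-style message list. The actual
    LLM endpoint is deliberately left open."""
    motion_summary = {
        "position_time_points": graph_points,     # (t, x) pairs from the app
        "target_velocity_m_per_s": target_slope,  # slope the student should produce
    }
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": (
            f"Student said: {transcript!r}\n"
            f"Motion data (JSON): {json.dumps(motion_summary)}"
        )},
    ]

messages = build_tutor_request(
    "I went faster but the line got flatter?",
    graph_points=[(0.0, 1.0), (0.4, 1.1), (0.8, 1.3)],
    target_slope=0.5,
)
```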

This project opens opportunities for collaboration with education technology developers, AI researchers, and content creators to further enhance the AI tutor’s capabilities and reach.

By applying AI to physics education, this project contributes to the field of educational technology and domain science. The AI tutor serves as a testbed for broader applications of AI in STEM education, demonstrating the potential for real-time, data-driven learning tools to improve student outcomes.

The proposed AI system, MAISIS (Multimodal AI Socratic Instructor for Smartphone), will use verbal, affective, and embodied motion inputs, as well as information on the student’s self-generated attempts to correctly match motion graphs from the app, to prompt and guide thinking and learning in a way that foregrounds the construction of a causal conceptual model of motion. The AI will build a multi-dimensional cognitive model of where the learner is in understanding kinematics and provide nuanced feedback that predicts, explains, and responds to the progressively more complex challenges the app presents, based on how the user performs in real time. The need for this AI model emerged from our studies of app use with middle school and high school students, the populations with whom we expect it to have the greatest impact.
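
To make the multi-dimensional cognitive model concrete, here is a hypothetical Python sketch of a fused learner state and a rule that selects a Socratic prompt from it. The fields, affect labels, and thresholds are illustrative assumptions, not the project's actual model; in MAISIS these decisions would presumably be learned rather than hard-coded.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    """Hypothetical fused snapshot of the learner across modalities."""
    verbal: str = ""             # latest transcribed utterance
    affect: str = "neutral"      # e.g., "frustrated", "engaged" (assumed labels)
    match_error: float = 0.0     # distance between produced and target graph
    concept_scores: dict = field(default_factory=lambda: {
        "position": 0.0, "velocity": 0.0, "acceleration": 0.0})

def next_prompt(state: LearnerState) -> str:
    """Select a Socratic prompt from the fused state; thresholds are illustrative."""
    if state.affect == "frustrated":
        return "Let's slow down. What does a flat position graph tell you?"
    if state.match_error > 0.5:
        return "Your graph rose faster than the target. Were you moving too quickly?"
    if state.concept_scores["velocity"] < 0.5:
        return "Before the next run: if you stand still, what should the velocity graph show?"
    return "Nice match. This time, predict the velocity graph before you move."
```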

Learn more about the Multimodal AI Socratic Instructor for Smartphone project at https://www.lidar-motion.net/

This work is supported by supplemental funding to National Science Foundation Grant No. 2114586.