Exploring AI Agents in Education
Project Background
In collaboration with Dr. Todd Cherner at UNC's School of Education, I conducted an exploratory project examining how avatar-based AI agents might enhance educational interactions compared to traditional text-based chatbots. This curiosity-driven initiative stemmed from our interest in whether increased social presence through visual avatars would meaningfully change user engagement with AI systems.
We presented our findings at the SITE 2025 conference through a roundtable discussion titled "Chatbots were so last year: Welcome to the Age of AI Agents!" Our work examined the evolution from early pattern-matching chatbots like ELIZA to modern LLM-powered systems like ChatGPT, and finally to avatar-based AI agents that add a visual, human-like dimension to these interactions.
Platform Comparison
We developed and tested three AI agents using different commercial platforms, each prompted with similar knowledge bases but offering distinct user experiences:
MEITE Mia (Soul Machines)
- Frontend: Fully animated 3D character with fluid movements, dynamic facial expressions, and continuous subtle movements (breathing, blinking) even when not speaking
- Backend: Configured for warmth and professionalism with a structured knowledge base about the MEITE program
AI Todd (HeyGen)
- Frontend: Photo-realistic human appearance with smooth facial expressions and excellent speech-lip synchronization
- Backend: Programmed as "Todd's Assistant" with UNC-specific resources and an emphasis on empathetic yet professional interactions
MEITE Max (D-ID)
- Frontend: Realistic human-like appearance but a limited range of expressions, since animation is driven by pre-recorded video
- Backend: Similar knowledge base but more rigid response patterns and limited handling of conversational context
To systematically evaluate these platforms, we developed a consideration matrix examining functionality (performance, responsiveness), quality (relevance, credibility), behavior (conversation patterns, mannerisms), and aesthetics (visual appearance, voice characteristics).
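The matrix described above can be sketched in code. This is a minimal, hypothetical illustration of how such a rubric might be structured and scored: the four dimensions and their criteria come from the description above, but the 1-5 scale, the averaging scheme, and all sample ratings are assumptions for illustration, not the project's actual evaluation instrument or data.

```python
# Hypothetical sketch of the consideration matrix. Dimension and
# criterion names follow the four categories described above; the
# 1-5 scale and the sample ratings below are illustrative only.

MATRIX = {
    "functionality": ["performance", "responsiveness"],
    "quality": ["relevance", "credibility"],
    "behavior": ["conversation patterns", "mannerisms"],
    "aesthetics": ["visual appearance", "voice characteristics"],
}

def score_platform(ratings: dict) -> dict:
    """Average each dimension's criterion ratings (1-5 scale)."""
    return {
        dim: sum(ratings[dim][c] for c in criteria) / len(criteria)
        for dim, criteria in MATRIX.items()
    }

# Made-up ratings for a single (unnamed) platform:
example = {
    "functionality": {"performance": 4, "responsiveness": 3},
    "quality": {"relevance": 5, "credibility": 4},
    "behavior": {"conversation patterns": 4, "mannerisms": 3},
    "aesthetics": {"visual appearance": 5, "voice characteristics": 4},
}
print(score_platform(example))
```

A structure like this makes side-by-side platform comparison straightforward: score each platform against the same criteria, then compare per-dimension averages rather than a single aggregate number.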
Key Findings & Considerations
Although we did not conduct a formal research study, we showcased the platforms to a wide range of people, including scholars at SITE 2025. Their feedback indicated that Soul Machines afforded the greatest social presence because of its fully animated character, despite the more photorealistic appearance of the other two platforms. Soul Machines and HeyGen demonstrated comparable conversational fluency, while D-ID performed weakest across our evaluation criteria.
Throughout our exploration, we remained mindful of important ethical considerations, particularly the risks of overly anthropomorphizing AI systems. We recognize that creating human-like avatars can lead to unrealistic expectations about AI capabilities and potentially blur important distinctions between human and machine interaction.
The project remains exploratory in nature, focused on understanding the potential and limitations of these emerging technologies rather than advocating for specific implementations. We're particularly interested in contexts where enhanced social presence might be beneficial versus situations where traditional interfaces—or human interaction—would be more appropriate.
Future Directions
This initial exploration has opened several avenues for continued research:
- Designing formal studies to measure differences in engagement and learning outcomes between traditional chatbots and AI agents
- Exploring domain-specific applications within educational contexts
- Developing ethical guidelines and best practices for implementing AI agents that maximize benefits while minimizing concerns
- Continuing to evaluate new platforms and capabilities as this rapidly evolving technology advances
