RAGatar: Enhancing LLM-driven Avatars with RAG for Knowledge-Adaptive Conversations in Virtual Reality
Abstract: We present a virtual reality system that enables users to seamlessly switch between general conversations and domain-specific knowledge retrieval through natural interactions with AI-driven avatars. By combining MetaHuman technology with self-hosted large language models and retrieval-augmented generation, our system demonstrates how immersive AI interactions can enhance learning and training applications where both general communication and expert knowledge are required.
To be released soon. Demo presented at IEEE VR 2025 in Saint-Malo, France.
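Since the system itself is not yet released, the following is only an illustrative sketch of the retrieval-augmented flow the abstract describes: retrieve domain-specific passages, fold them into the prompt, and query a self-hosted LLM. The knowledge base, the keyword-overlap retriever, the endpoint URL, and the request payload are hypothetical placeholders, not details of RAGatar.

```python
# Hedged sketch of a retrieval-augmented query flow for an LLM-driven avatar.
# All names below (KNOWLEDGE_BASE, query_llm, the endpoint URL) are assumptions
# for illustration; the released system may differ substantially.
import json
import urllib.request

# Hypothetical domain-specific knowledge base; a real deployment would use a
# vector store with embedding-based similarity search instead.
KNOWLEDGE_BASE = [
    "MetaHuman avatars are animated in Unreal Engine via facial blendshape curves.",
    "The training scenario requires completing the pre-operation safety checklist.",
    "Protective equipment is mandatory in the virtual assembly hall exercise.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for a proper embedding-based retriever)."""
    terms = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def query_llm(prompt: str,
              endpoint: str = "http://localhost:8080/v1/completions") -> str:
    """Send the augmented prompt to a self-hosted LLM server; the endpoint
    and payload shape are assumptions to adapt to your own deployment."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 256}).encode()
    request = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["text"]

def answer(user_utterance: str) -> str:
    """Augment the user's utterance with retrieved context before the LLM call,
    so the avatar can shift from general chat to domain-specific answers."""
    context = "\n".join(retrieve(user_utterance))
    prompt = (
        "Answer using the context when it is relevant; otherwise reply "
        f"conversationally.\n\nContext:\n{context}\n\n"
        f"User: {user_utterance}\nAvatar:"
    )
    return query_llm(prompt)

if __name__ == "__main__":
    # Requires a self-hosted, OpenAI-compatible completion server (an assumption).
    print(answer("What equipment do I need in the assembly hall?"))
```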