Why I Built This
UnaMentis started because the voice interfaces to the major AI models stopped working for my use case. I wanted truly in-depth, hands-free, intelligent conversation on complex topics, and the providers kept crippling that capability. Rather than stay frustrated, I started building my own solution in December 2025. Within about a week I realized the project had real, substantial value.
The value wasn't just "another ed-tech app." It was about something near and dear to my heart: helping individuals reach their goals and deepen their understanding, outside the constraints of traditional education. AI should adapt to human learners, not the reverse. That conviction drives everything about this project.
The Vision
UnaMentis enables natural, interactive learning conversations that adapt to individual learners. The platform delivers structured curriculum through voice, allowing seamless interruptions for questions and tangents while maintaining session continuity. Learning can happen on the go: while driving, doing chores, or walking, anytime you want to learn.
Key Features
Voice-First Architecture
Built explicitly for spoken interaction with natural turn-taking, interruption handling, and voice activity detection that creates fluid conversations.
Flexible Infrastructure
Swap STT, TTS, and LLM providers without code changes. Supports OpenAI, Anthropic, ElevenLabs, Deepgram, and self-hosted alternatives.
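The provider-swapping idea above can be sketched as a small registry keyed by pipeline stage. This is purely illustrative: the names (`Provider`, `register`, `get_provider`) and the registry design are assumptions, not UnaMentis's actual API.

```python
# Illustrative sketch only; not UnaMentis's real provider interface.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Provider:
    """A pluggable backend for one pipeline stage (STT, TTS, or LLM)."""
    name: str
    kind: str       # "stt" | "tts" | "llm"
    endpoint: str   # where requests for this stage are sent

# Registry keyed by (kind, name). Swapping a provider means changing
# which entry is looked up -- a configuration change, not a code change.
PROVIDERS: Dict[Tuple[str, str], Provider] = {}

def register(p: Provider) -> None:
    PROVIDERS[(p.kind, p.name)] = p

def get_provider(kind: str, name: str) -> Provider:
    return PROVIDERS[(kind, name)]

# Hosted and self-hosted backends registered side by side.
register(Provider("deepgram", "stt", "https://api.deepgram.com"))
register(Provider("elevenlabs", "tts", "https://api.elevenlabs.io"))
register(Provider("ollama", "llm", "http://localhost:11434"))

print(get_provider("llm", "ollama").endpoint)  # -> http://localhost:11434
```

The same lookup works whether the entry points at a hosted API or a local server, which is what makes hosted-to-self-hosted swaps a pure configuration change.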
UMCF Curriculum Format
A purpose-built JSON specification for conversational learning that maps to educational standards (IEEE LOM, SCORM, xAPI) while remaining voice-optimized.
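The actual UMCF schema is not reproduced here; the fragment below is a hypothetical sketch of what a voice-optimized curriculum unit with standards mappings might look like. Every field name is an assumption.

```json
{
  "topic": "Photosynthesis",
  "segments": [
    {
      "goal": "Explain the light-dependent reactions",
      "spoken_summary": "Short, listenable prose rather than slide text",
      "checkpoints": ["Can the learner name the inputs and outputs?"]
    }
  ],
  "standards_mappings": {
    "xapi_verbs": ["experienced", "answered"]
  }
}
```

The point of the sketch is the shape, not the fields: content authored as spoken prose with conversational checkpoints, plus a mapping layer back to established educational standards.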
Self-Hosting Options
Full self-hosting capabilities with support for Ollama, llama.cpp, and Piper enable organizations to maintain complete data control and privacy.
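A fully self-hosted deployment might wire each pipeline stage to a local service, as in this hypothetical configuration fragment (the keys are illustrative, not UnaMentis's actual config schema; 11434 is Ollama's default port):

```yaml
# Hypothetical self-hosted configuration; keys are assumptions.
llm:
  provider: ollama            # or llama.cpp
  endpoint: http://localhost:11434
tts:
  provider: piper             # local neural TTS, no cloud calls
```

With every endpoint on localhost, no audio or conversation data leaves the organization's own infrastructure.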
Founded By
Richard Amerman