MiniCPM-o: Gemini 2.5 Flash-Level MLLM for Mobile Devices with Vision, Speech, and Full-Duplex Multimodal Live Support
OpenBMB has introduced MiniCPM-o, a multimodal large language model (MLLM) designed to run on mobile devices. Positioned at a Gemini 2.5 Flash level of capability, the model combines vision, speech, and full-duplex multimodal live interaction, bringing advanced AI functionality directly to users' smartphones for a seamless, interactive experience across modalities.
Its vision support lets the model interpret and respond to visual inputs, while speech support enables voice-based interaction. The distinguishing feature is full-duplex multimodal live support: the model can listen and respond at the same time, sustaining real-time, continuous, bidirectional communication across modalities rather than completing one turn before starting the next. Together, these capabilities are intended to deliver sophisticated on-device assistance that understands and interacts with users through a combination of visual and auditory cues in a live setting.
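For readers who want a feel for how such a model might be used, the sketch below shows one plausible way to run a single-turn image-plus-text query through Hugging Face Transformers. The repository id ("openbmb/MiniCPM-o-2_6") and the `model.chat(...)` interface are assumptions drawn from how OpenBMB's earlier MiniCPM releases are typically loaded, not details confirmed by this announcement; the official model card documents the exact API, including how to enable the speech and full-duplex live-streaming modes.

```python
# Minimal sketch: single-turn image + text query with MiniCPM-o via Hugging Face
# Transformers. The model id and the .chat() call are assumptions based on prior
# MiniCPM releases; consult the official model card before relying on them.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "openbmb/MiniCPM-o-2_6"  # assumed repository id

# trust_remote_code=True lets Transformers load the custom model class shipped
# with the checkpoint (the usual pattern for MiniCPM-family models).
model = AutoModel.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# One user turn containing an image and a text question.
image = Image.open("street_scene.jpg").convert("RGB")
msgs = [{"role": "user", "content": [image, "What is happening in this picture?"]}]

# .chat() is the conversational entry point exposed by earlier MiniCPM checkpoints;
# the speech and full-duplex live modes would go through a separate streaming interface.
answer = model.chat(msgs=msgs, tokenizer=tokenizer)
print(answer)
```

This illustrates only the offline vision-language path; the full-duplex live capability described above implies a streaming loop in which audio and video frames are fed to the model continuously while it generates speech output in parallel.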