Cohere for AI Releases Open-Source Aya Vision Models for Computer Vision-Based Tasks
Cohere For AI, the firm’s open research division, released new state-of-the-art (SOTA) vision models on Tuesday. Dubbed Aya Vision, the artificial intelligence (AI) models are available in two parameter sizes. The company’s latest frontier models address the inconsistent performance of existing large language models (LLMs) across different languages, especially on multimodal tasks. Aya Vision models can generate outputs in 23 languages and can perform both text-based and image-based tasks; however, they cannot generate images. Cohere has made the AI models available on open-source repositories as well as via WhatsApp.
Cohere Releases Aya Vision AI Models
In a blog post, the AI firm detailed the new vision models. Aya Vision is available in 8B and 32B parameter sizes. These models can generate text, translate both text and image content across 23 languages, analyse images and answer queries about them, and caption images. Both models can be accessed via Cohere’s Hugging Face page and on Kaggle.
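For developers who want to experiment with the weights, something like the following minimal sketch should work. It assumes the 8B checkpoint is published under the CohereForAI/aya-vision-8b identifier on Hugging Face and is supported by the Transformers image-text-to-text pipeline; both are assumptions, so check the model card for the exact ID and requirements:

```python
# Minimal sketch: querying an Aya Vision checkpoint via Hugging Face Transformers.
# The model ID and pipeline support are assumptions; verify against Cohere's model card.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="CohereForAI/aya-vision-8b")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/artwork.jpg"},  # any image URL
            {"type": "text", "text": "Describe this painting in French."},
        ],
    }
]

# Generate a multilingual answer about the image.
outputs = pipe(text=messages, max_new_tokens=200)
print(outputs[0]["generated_text"])
```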
Additionally, general users can try out Cohere’s models via a dedicated WhatsApp chat account. The company says the Aya Vision models are useful when people come across images or artworks they would like to learn more about.
Based on the company’s internal testing, the Aya Vision 8B model outperforms the Qwen2.5-VL 7B, Gemini Flash 1.5 8B, and Llama 3.2 11B Vision models on the AyaVisionBench and m-WildVision benchmarks. Notably, the AyaVisionBench benchmark was also developed by Cohere, and its details have been made publicly available.
As for the Aya Vision 32B model, the company claims it outperforms the Llama 3.2 90B Vision and Qwen2-VL 72B models on the same benchmarks.
Cohere attributes the models’ frontier performance to several algorithmic innovations: the Aya Vision models were trained on synthetic annotations, multilingual data was scaled up through translation and rephrasing, and multiple multimodal models were merged in separate steps. The developers report that each of these steps significantly improved performance.
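Cohere has not published the exact merging recipe, but a common form of model merging is weight-space averaging of checkpoints that share an architecture. The sketch below illustrates that generic technique only, not Cohere’s actual method:

```python
# Illustrative sketch of weight-space model merging (a generic technique,
# not Cohere's published recipe). Assumes all checkpoints share one architecture.
import torch

def merge_state_dicts(state_dicts, coeffs):
    """Weighted average of parameter tensors across same-architecture models."""
    assert abs(sum(coeffs) - 1.0) < 1e-6, "merge coefficients should sum to 1"
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(c * sd[name].float() for c, sd in zip(coeffs, state_dicts))
    return merged

# Usage: merge two fine-tuned variants, weighting the first more heavily.
# sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
# model_a.load_state_dict(merge_state_dicts([sd_a, sd_b], [0.7, 0.3]))
```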
Notably, developers can access the open weights of the Aya Vision models from Kaggle and Hugging Face; however, the models are released under a Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) licence, which allows academic and research use but prohibits commercial use cases.