OneAIChat Unveils Multimodal AI Aggregator Platform With GPT-4, Gemini and Other Models

OneAIChat, an Indian startup, unveiled its new multimodal artificial intelligence (AI) aggregation platform on Tuesday. The Mangalore-based startup offers a single platform through which users can access multiple large language models (LLMs) at the same time, which the company says will let them seamlessly interact with the various AI models and compare their answers. By leveraging the capabilities of multiple models, the platform produces output in text, image, and video formats. Access to the platform requires purchasing a single subscription plan.

The OneAIChat platform has been pre-launched today as a web-based service. The aggregator features OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude 3, as well as AI models from Cohere and Mistral, although the company did not specify which Mistral LLMs are being used. The company says the platform will be accessible globally. At the time of writing, we were unable to access the website, which appeared to be suffering from an outage.

There are some platform-specific features that users can take advantage of. OneAIChat has introduced a Focus Categories feature that lets users pose topic-specific queries to the AI models. It is unclear whether the company has assigned specific LLMs to certain topics or whether it curates answers from all of them together. Some of the categories highlighted by the startup include health, audio/music, faith, marketing, video, art & design, and mathematics.

Apart from this, OneAIChat said that its platform is aimed at streamlining content creation. The AI models enable the generation of blog articles, product listings, social media posts, essays, and more. Notably, these outputs come straight from the AI models themselves. Further, being a multimodal platform, it also offers image, video, and audio clip generation, although the company did not specify which AI models will handle video and music generation.

OneAIChat’s platform will charge a single subscription fee for access to all of the AI models. However, pricing details have not been revealed yet, and it is not known which models the subscription will include. Given that all of the AI models mentioned above, except those from Mistral and Cohere, come in both free and paid versions, the cost savings from the subscription could not be determined. Mistral offers open-source AI models that do not require a subscription to run, whereas Cohere is available only to paid users.



Apple Releases Open Source MLX Framework for Efficient Machine Learning on Apple Silicon

Apple recently released MLX, or ML Explore, the company’s machine learning (ML) framework for Apple Silicon computers. The framework is specifically designed to simplify the process of training and running ML models on computers powered by Apple’s M1, M2, and M3 series chips. The company says that MLX features a unified memory model. Apple has also demonstrated the use of the framework, which is open source, allowing machine learning enthusiasts to run it on their own laptops and desktops.

According to details shared by Apple on code hosting platform GitHub, the MLX framework has a C++ API along with a Python API that is closely based on NumPy, the Python library for scientific computing. Users can also take advantage of higher-level packages that enable them to build and run more complex models on their computer, according to Apple.
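Since the Python API closely follows NumPy, basic usage reads much like NumPy code. Below is a minimal, illustrative sketch based on Apple’s documentation (it assumes MLX has been installed as the mlx Python package; exact function and package names may differ between releases):

import mlx.core as mx
import mlx.nn as nn

# NumPy-style array creation and arithmetic; results are computed lazily
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones((3,))
c = a * 2 + b
mx.eval(c)  # force evaluation of the lazily recorded computation

# Function transformations, such as automatic differentiation
def loss_fn(x):
    return mx.sum(mx.square(x))

grad_fn = mx.grad(loss_fn)
print(grad_fn(a))  # gradient of sum(x^2) is 2*x

# A higher-level package (mlx.nn) provides building blocks such as layers
layer = nn.Linear(3, 2)
print(layer(a))

Computation in MLX is lazy, so arrays are only materialised when they are actually needed or when evaluation is explicitly requested.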

MLX simplifies the process of training and running ML models on a Mac. Developers were previously forced to rely on a translator to convert and optimise their models (using Core ML). That step has now been replaced by MLX, which allows users of Apple Silicon computers to train and run their models directly on their own devices.

[Image: a big red sign with the text MLX, generated by Stable Diffusion in MLX. Photo Credit: GitHub/Apple]

Apple says that MLX’s design follows other popular frameworks used today, including ArrayFire, Jax, NumPy, and PyTorch. The firm has touted its framework’s unified memory model: MLX arrays live in shared memory, and operations on them can be performed on any of the supported device types (currently the CPU and GPU) without the need to create copies of the data.
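In practical terms, the same arrays can be handed to operations scheduled on either the CPU or the GPU without copying. The snippet below is a rough sketch in the spirit of the example in Apple’s MLX documentation (it assumes the stream keyword that MLX operations accept for selecting a device; it is not a verbatim copy of Apple’s code):

import mlx.core as mx

# Arrays are allocated in unified memory; no host/device copies are made
a = mx.random.normal((4096,))
b = mx.random.normal((4096,))

# The same arrays can be used by operations run on either device
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)

mx.eval(c_cpu, c_gpu)  # evaluate both lazily recorded computations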

The company has also shared examples of MLX in action, performing tasks like image generation using Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says that MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16, with up to 40 percent higher throughput than the latter.

The tests were conducted on a Mac powered by an M2 Ultra chip, the company’s fastest processor to date — MLX is capable of generating 16 images in 90 seconds, while PyTorch would take around 120 seconds to perform the same task, according to the company.

Other examples of MLX in action include generating text using Meta’s open source LLaMA language model, as well as the Mistral large language model. AI and ML researchers can also run OpenAI’s open source Whisper speech recognition models on their computer using MLX.

The release of Apple’s MLX framework could make ML research and development easier on the company’s hardware, and could eventually allow developers to build better tools for apps and services that run on-device ML features efficiently on a user’s computer.

