Meta AI Gets Upgraded With Llama 3 to Add New Features, Better Integration

Meta AI is getting a major upgrade. On Thursday, the social media giant announced two new Llama 3 artificial intelligence (AI) models, Llama 3 8B and 70B, which are said to offer improved capabilities over their predecessors. Alongside the new models, the company upgraded its native AI assistant to run on Llama 3. According to Meta, the chatbot can now respond more efficiently, generate images faster, and even animate images. The company is also making the assistant more accessible by integrating it into more of its interfaces and rolling it out to additional countries.

In a newsroom post, the company said that Meta AI is now powered by Llama 3 and can be used for free as long as a user has an account on any of its platforms. The social media giant announced the chatbot at Meta Connect 2023 and soon began adding it to Facebook, Instagram, and WhatsApp in the US. The AI assistant was recently expanded to India, and the tech giant has now revealed that it is rolling it out to more regions, including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.

While the Llama 3 integration makes the assistant's responses more efficient and improves image generation quality, Meta is also adding new capabilities and integrating the chatbot more widely. On the features side, Meta AI can now generate images in real time: as soon as you begin typing a prompt, you can see the AI producing an image, and as you keep typing and refining your description, the image updates to match. Beyond speeding up generation, this lets users preview the image and make changes on the fly for better results.

Another new feature is image animation. The tech giant is also offering image editing capabilities: if you do not like a generated image, you can ask the AI to change it or iterate on it in a new style. Users will also be able to animate an image and turn it into a GIF.

Meta is also making the chatbot available at more user touch-points. Users will now find the AI assistant in the Facebook feed beneath posts, where a small Meta AI logo appears with a couple of suggested queries, and the user can ask questions about the topic of the post or video. The assistant is also being integrated into search across Meta's platforms, so users can type a question into the search bar on Facebook, Instagram, WhatsApp, or Messenger and have the AI answer it. It can also be queried about real-time information such as flight prices and stock market updates. Notably, Meta's AI uses both Google and Bing to surface results, but users cannot choose which search engine it draws from.

Finally, the social media giant has also launched a new meta.ai website where users can chat with the assistant, ask it to solve a math problem, or have it generate content. The platform appears aimed at people who want to use the AI assistant without opening their social media accounts, particularly in a professional setting. Users will also be able to save conversations there for future reference.



Apple Releases Open Source MLX Framework for Efficient Machine Learning on Apple Silicon

Apple recently released MLX — or ML Explore — the company's machine learning (ML) framework for Apple Silicon computers. The framework is specifically designed to simplify the process of training and running ML models on computers powered by Apple's M1, M2, and M3 series chips. The company says that MLX features a unified memory model. Apple has also demonstrated the use of the framework, which is open source, allowing machine learning enthusiasts to run it on their own laptops and desktops.

According to details shared by Apple on the code hosting platform GitHub, the MLX framework has a C++ API along with a Python API that is closely based on NumPy, the Python library for scientific computing. Users can also take advantage of higher-level packages that enable them to build and run more complex models on their computers.
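As a rough illustration of what that NumPy-style Python API looks like, the short sketch below creates a few arrays, combines them, and forces evaluation. The calls follow MLX's documented basics, but the snippet is an indicative example rather than one of Apple's own samples.

import mlx.core as mx

# Create arrays much like numpy.array and numpy.ones.
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones((3,))

# Element-wise operations look like NumPy; MLX builds them lazily.
c = a * b + 2.0
d = mx.matmul(a.reshape(1, 3), b.reshape(3, 1))

# Computation actually runs when the results are evaluated (or printed).
mx.eval(c, d)
print(c, d)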

MLX simplifies the process of training and running ML models on a computer. Developers previously had to rely on a translator, Core ML, to convert and optimise their models; MLX instead lets users with Apple Silicon computers train and run their models directly on their own devices.
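To give a sense of what training a model directly on an Apple Silicon machine can look like, here is a brief hypothetical sketch that uses MLX's higher-level neural network and optimizer packages. The module, loss, and optimizer names follow MLX's published Python API, but the tiny model and random data are purely illustrative, not an Apple sample.

import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

# Toy two-layer network defined with MLX's higher-level mlx.nn package.
class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(4, 32)
        self.l2 = nn.Linear(32, 1)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

model = TinyMLP()

def loss_fn(model, x, y):
    return nn.losses.mse_loss(model(x), y)

# Returns both the loss value and the gradients of the model's parameters.
loss_and_grad = nn.value_and_grad(model, loss_fn)
optimizer = optim.SGD(learning_rate=1e-2)

x = mx.random.normal((64, 4))  # random stand-in data for illustration
y = mx.random.normal((64, 1))

for step in range(100):
    loss, grads = loss_and_grad(model, x, y)
    optimizer.update(model, grads)                 # apply the gradient step
    mx.eval(model.parameters(), optimizer.state)   # force lazy computation to run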

Apple shared this image of a big red sign with the text MLX, generated by Stable Diffusion in MLX
Photo Credit: GitHub/ Apple

Apple says that MLX's design follows other popular frameworks used today, including ArrayFire, Jax, NumPy, and PyTorch. The firm has touted its framework's unified memory model — MLX arrays live in shared memory, and operations on them can be performed on any supported device type (currently, Apple supports the CPU and GPU) without creating copies of the data.
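The sketch below shows what that unified memory model can look like in practice, modelled on the kind of snippet MLX's documentation uses: the same arrays can be handed to an operation on the CPU or on the GPU simply by choosing a stream, with no explicit copies. Treat it as an indicative example rather than an official excerpt.

import mlx.core as mx

# Arrays live in unified memory; no device placement is needed when creating them.
a = mx.random.normal((4096,))
b = mx.random.normal((4096,))

# The same arrays can be used by operations on either device type.
# The target is chosen per operation via a stream, without copying data.
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)

# MLX is lazy, so evaluation triggers the actual computation.
mx.eval(c_cpu, c_gpu)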

The company has also shared examples of MLX in action, performing tasks like image generation using Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says that MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16 — with up to 40 percent higher throughput than the latter.

The tests were conducted on a Mac powered by an M2 Ultra chip, the company’s fastest processor to date — MLX is capable of generating 16 images in 90 seconds, while PyTorch would take around 120 seconds to perform the same task, according to the company.

Other examples of MLX in action include generating text using Meta's open source LLaMA language model, as well as the Mistral large language model. AI and ML researchers can also use OpenAI's open source Whisper tool to run speech recognition models on their computers using MLX.

The release of Apple's MLX framework could make ML research and development easier on the company's hardware, and could eventually allow developers to build better tools for apps and services that run on-device ML features efficiently on a user's computer.



Meta to Release Open Source AI Model, Llama, to Compete Against OpenAI, Google’s Bard

Meta is releasing a commercial version of its open-source artificial intelligence model Llama, the company said on Tuesday, giving start-ups and other businesses a powerful free-of-charge alternative to pricey proprietary models sold by OpenAI and Google.

The new version of the model, called Llama 2, will be distributed by Microsoft through its Azure cloud service and will run on the Windows operating system, Meta said in a blog post, referring to Microsoft as “our preferred partner” for the release.

The model, which Meta previously provided only to select academics for research purposes, also will be made available via direct download and through Amazon Web Services, Hugging Face and other providers, according to the blog post and a separate Facebook post by Meta CEO Mark Zuckerberg.
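For developers, the Hugging Face route looks much like pulling any other hosted model. The sketch below assumes the transformers library and a Hugging Face account that has been granted access to the weights, and it uses a commonly cited repository name as an assumed identifier; it is an illustrative example, not official instructions from Meta or its partners.

# Hypothetical sketch: loading a Llama 2 chat checkpoint from Hugging Face with
# the transformers library. The repository name below is an assumption for
# illustration; Meta requires accepting its license before weights can be fetched.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, why do open-source AI models matter?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))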

“Open source drives innovation because it enables many more developers to build with new technology,” Zuckerberg wrote. “I believe it would unlock more progress if the ecosystem were more open.”

Making a model as sophisticated as Llama widely available and free for businesses to build atop threatens to upend the early dominance established in the nascent market for generative AI software by players like OpenAI, which Microsoft backs and whose models it already offers to business customers via Azure.

The first Llama was already competitive with models that power OpenAI’s ChatGPT and Google’s Bard chatbot, while the new Llama has been trained on 40 percent more data than its predecessor, with more than 1 million annotations by humans to fine-tune the quality of its outputs, Zuckerberg said.

“Commercial Llama could change the picture,” said Amjad Masad, chief executive at software developer platform Replit, who said more than 80 percent of projects there use OpenAI’s models.

“Any incremental improvement in open-source models is eating into the market share of closed-source models because you can run them cheaply and have less dependency,” said Masad.

The announcement follows plans by Microsoft’s largest cloud rivals, Alphabet’s Google and Amazon, to give business customers a range of AI models from which to choose.

Amazon, for instance, is marketing access to Claude – AI from the high-profile startup Anthropic – in addition to its own family of Titan models. Google, likewise, has said it plans to make Claude and other models available to its cloud customers.

Until now, Microsoft has focused on making technology available from OpenAI in Azure.

Asked why Microsoft would support an offering that might degrade OpenAI’s value, a Microsoft spokesperson said giving developers choice in the types of models they use would help extend its position as the go-to cloud platform for AI work.

Internal memo

For Meta, a flourishing open-source ecosystem of AI tech built using its models could stymie rivals’ plans to earn revenue off their proprietary technology, the value of which would evaporate if developers could use equally powerful open-source systems for free.

A leaked internal Google memo titled “We have no moat, and neither does OpenAI” lit up the tech world in May after it forecast just such a scenario.

Meta is also betting that it will benefit from the advancements, bug fixes and products that may grow out of its model becoming the go-to default for AI innovation, as it has over the past several years with its widely-adopted open source AI framework PyTorch.

As a social media company, Zuckerberg told investors in April, Meta has more to gain by effectively crowd-sourcing ways to reduce infrastructure costs and maximize creation of new consumer-facing tools that might draw people to its ad-supported services than it does by charging for access to its models.

“Unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary,” Zuckerberg said.

“For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make.”

Releasing Llama into the wild also comes with risks, however, as it supercharges the ease with which unscrupulous actors may build products with little regard for safety controls.

In April, Stanford researchers took down a chatbot they had built for $600 using a version of the first Llama model after it generated unsavory text.

Meta executives say they believe public releases of technologies actually reduce safety risks by harnessing the wisdom of the crowd to identify problems and build resilience into the systems.

The company also says it has put in place an “acceptable use” policy for commercial Llama that prohibits “certain use cases,” including violence, terrorism, child exploitation and other criminal activities.

© Thomson Reuters 2023


