Meta Reportedly Working on AI-Enabled Camera-Integrated Earphones Dubbed Camerabuds

Meta is reportedly working on a new wearable form factor powered by artificial intelligence (AI). Dubbed Camerabuds, the device is said to be a pair of earphones with a camera attached to each earbud. The cameras would capture information from the wearer's surroundings, which the AI would use to answer the user's queries. The report claims that the project is still at an early stage and that the company has not yet finalised the device's design. Further, the engineering team has reportedly raised several concerns about such a wearable.

According to a report by The Information (via Engadget), Meta is exploring a new wearable device that can identify objects and translate languages using AI. The device is essentially a pair of earphones or headphones with outward-facing cameras attached to each earbud. These cameras will capture the visual information that the AI will process in real time to answer the wearer's questions.

The device will use multimodal AI with real-time processing, similar to the company's Ray-Ban Meta smart glasses. Based on the information shared, the Camerabuds appear to be a more portable device focused primarily on two-way communication. It is not clear whether verbal responses will be lag-free or on par with those of the company's AI-powered smart glasses.

While this is the plan, the report mentions that Meta CEO Mark Zuckerberg has not been happy with any of the designs so far, despite seeing several iterations. Further, the engineering team is also said to have raised several concerns about such a device, including battery life, heat dissipation, privacy issues, challenges for users with long hair, and more.

The report also pointed out that Meta has a track record of planning big hardware projects and abruptly shutting them down. For instance, it discontinued its Portal line of smart displays in 2022. Earlier, it also cancelled its camera-fitted smartwatch project. However, Meta has seen significant success with its smart glasses and likely wants to double down with another similar product. Notably, the company is also investing heavily in AI.


WhatsApp Begins Testing Camera Zoom Control Feature, Sticker Creation Shortcuts

WhatsApp is rolling out two new features to beta testers on iOS that add new capabilities to the popular messaging platform. The first feature makes it much easier to zoom when using the in-app camera on WhatsApp, while the second feature allows users to quickly create new stickers from their Camera Roll, or use Meta AI to generate stickers, via new shortcuts. Both of these features are expected to make their way to all users on iOS and Android smartphones.

Feature tracker WABetaInfo spotted a new zoom control feature on WhatsApp beta for iOS 24.9.10.75. Users who have signed up to receive beta versions of WhatsApp for iOS via TestFlight can now update to the latest version to access a new camera zoom control button that will let them switch between different zoom options when clicking images or recording videos on WhatsApp.

WhatsApp’s new zoom controls (left) and sticker shortcuts
Photo Credit: WABetaInfo

WhatsApp currently allows users to pinch in and out on the viewfinder while using the in-app camera, or swipe up while holding down the capture button. Neither of these options is as straightforward or intuitive as a dedicated zoom button, which is expected to roll out to all users in the future.

Earlier this week, the Meta-owned messaging service rolled out another feature to beta testers with the WhatsApp beta for iOS 24.9.10.74 update, according to details shared by WABetaInfo. Users who have installed this version will see Create and Use AI shortcuts when the sticker selection panel is open on WhatsApp.

The first shortcut will allow users to use an image from their Camera Roll to generate a new WhatsApp sticker using the application’s built-in sticker editor. Gadgets 360 was able to confirm that this option is also available to some users on the stable version of WhatsApp. The Use AI shortcut, on the other hand, allows users to generate stickers with Meta AI, the company’s artificial intelligence (AI) service.

Other features recently spotted in testing on beta versions of WhatsApp include a new ‘Recently Online’ list that, as the name suggests, shows contacts who were recently active on the app. Meanwhile, the service recently updated its colour palette on iOS, displaying green buttons and text throughout the app. It could eventually add support for alternative accent colours, which were first spotted in development in January.



Meta AI Gets Upgraded With Llama 3 to Add New Features, Better Integration

Meta AI is getting a massive upgrade. On Thursday, the social media giant announced two new Llama 3 artificial intelligence (AI) models, the Llama 3 8B and 70B, which are said to offer improved capabilities over their predecessors. Alongside, the company also upgraded its native AI assistant to run on the Llama 3 models. Meta AI can now respond more efficiently, generate images faster, and even animate images, as per the company. The chatbot is also becoming more accessible through integration into more interfaces, and it is being expanded to more countries.

In a newsroom post, the company said that Meta AI is now powered by Llama 3 and can be used for free as long as a user has an account on any of its platforms. The social media giant announced the chatbot at Meta Connect 2023 and soon began adding it to Facebook, Instagram, and WhatsApp in the US. The AI assistant was recently expanded to India, and the tech giant has now revealed that it is rolling it out to more regions: Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.

While the Llama 3 integration makes the AI assistant more efficient at responding and better at image generation, Meta is also adding new capabilities and integrating the chatbot more widely. On features, Meta AI can now generate images in real time: once you begin typing your prompt, you can see the AI generating an image, and as you keep typing and describing the image in more detail, the image keeps changing to match the prompt. Apart from making generation feel faster, this lets users preview the image and make real-time changes for better results.

Another new feature is image animation. The tech giant is also offering image editing capabilities: if you do not like the generated image, you can ask the AI to make changes to it or iterate on it in a new style. Further, users will also be able to animate the image and turn it into a GIF.

Meta is also making the chatbot available at more user touchpoints. Users will now find the AI assistant in the Facebook feed under posts: a small Meta AI logo with a couple of suggested queries will pop up, letting the user ask questions about the topics shown in the post. The assistant is also being integrated into search across Meta's platforms, so users can type a question into the search bar on Facebook, Instagram, WhatsApp, or Messenger and have the AI answer it. Real-time information such as flight prices and stock market updates can also be queried. Notably, Meta AI uses both Google and Bing to fetch results; however, users cannot control which search engine it uses.

Finally, the social media giant has also launched a new meta.ai website where users can chat with the assistant, ask it to solve a math problem, or generate content. The new platform is likely aimed at those who want to use the AI assistant without opening a social media account, especially in a professional setting. Users will also be able to save conversations here for future reference.



Meta Llama 3 AI Models With 8B and 70B Parameters Launched, Said to Outperform Google’s Gemini 1.5 Pro

Meta introduced the next generation of its artificial intelligence (AI) models, Llama 3 8B and 70B, on Thursday. Short for Large Language Model Meta AI, Llama 3 comes with improved capabilities over its predecessor. The company also adopted new training methods to optimise the efficiency of the models. Interestingly, while the largest Llama 2 model had 70 billion parameters, the company said its largest Llama 3 models will contain more than 400 billion. Notably, a report last week revealed that Meta would unveil its smaller AI models in April and its larger models later in the summer.

Those interested in trying out the new AI models are in luck, as Meta is taking a community-first approach with Llama 3. The new foundation models will be open source, just like previous models. Meta stated in its blog post, “Llama 3 models will soon be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, and with support from hardware platforms offered by AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm.”

The list includes all major cloud, hosting, and hardware platforms, which should make it easier for enthusiasts to get their hands on the AI models. Further, Meta has also integrated Llama 3 into its own Meta AI assistant, which can be accessed via Facebook Messenger, Instagram, and WhatsApp in supported countries.
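For readers who want to try the models themselves, below is a minimal sketch of loading the 8B Instruct variant through Hugging Face's transformers library. It assumes you have been granted access to the gated meta-llama repository and are on a recent transformers release with chat-aware text-generation pipelines.

```python
# A minimal sketch, assuming access to the gated meta-llama repo on Hugging Face.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # 8B instruct variant
    torch_dtype=torch.bfloat16,                   # halves memory vs float32
    device_map="auto",                            # place weights on available GPUs
)
messages = [{"role": "user", "content": "Summarise Llama 3 in one sentence."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```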

Coming to performance, the social media giant shared benchmark scores of Llama 3 for both its pre-trained and instruct models. For reference, the pre-trained model is the general-purpose base model, whereas the instruct models are fine-tuned to follow instructions for specific tasks. Based on data shared by the company, the pre-trained Llama 3 70B outscored Google's Gemini 1.0 Pro on the MMLU (79.5 vs 71.8), BIG-Bench Hard (81.3 vs 75.0), and DROP (79.7 vs 74.1) benchmarks, whereas the 70B Instruct model outscored the Gemini 1.5 Pro model on the MMLU, HumanEval, and GSM-8K benchmarks.

Meta has opted for a decoder-only transformer architecture for the new AI models but has made several improvements over the predecessor. Llama 3 uses a tokeniser with a vocabulary of 128K tokens, and the company has adopted grouped query attention (GQA) to improve inference efficiency. GQA lets groups of query heads share a smaller set of key and value heads, which shrinks the memory footprint of attention during inference without a significant loss in quality. The social media giant pre-trained the models on more than 15T tokens, which it claims were sourced from publicly available data.
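To illustrate the idea behind GQA, here is a small, self-contained PyTorch sketch in which eight query heads share two key/value heads. The head counts and tensor sizes are illustrative assumptions, not Llama 3's actual configuration.

```python
# Toy grouped query attention: 8 query heads share 2 key/value heads.
import torch
import torch.nn.functional as F

batch, seq_len, d_model = 1, 16, 64
n_q_heads, n_kv_heads = 8, 2              # each KV head serves 4 query heads
head_dim = d_model // n_q_heads

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)  # far fewer KV heads
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)  # -> smaller KV cache

# Expand KV heads so each group of query heads attends to its shared KV head.
group = n_q_heads // n_kv_heads
k = k.repeat_interleave(group, dim=1)     # (batch, n_q_heads, seq_len, head_dim)
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v)  # ordinary attention from here on
print(out.shape)                          # torch.Size([1, 8, 16, 8])
```

The saving comes from storing and streaming only `n_kv_heads` sets of keys and values during inference instead of one per query head.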



Meta Showcases Next-Generation AI Chipset to Build Large-Scale AI Infrastructure

Meta on Wednesday unveiled the next-generation Meta Training and Inference Accelerator (MTIA), part of its family of custom chipsets for artificial intelligence (AI) workloads. The upgrade comes almost a year after the company introduced its first AI chips. These accelerators will power the tech giant's existing and future products and services, as well as the AI features embedded in its social media platforms. In particular, Meta highlighted that the chipset will be used to serve its ranking and recommendation models.

Making the announcement via its blog post, Meta said, “The next generation of Meta’s large-scale infrastructure is being built with AI in mind, including supporting new generative AI (GenAI) products and services, recommendation systems, and advanced AI research. It’s an investment we expect will grow in the years ahead as the compute requirements to support AI models increase alongside the models’ sophistication.”

The new AI chip offers significant improvements in both performance and power efficiency thanks to changes in its architecture, as per Meta. The next generation of MTIA doubles the compute and memory bandwidth of its predecessor, and it can serve the recommendation models Meta uses to personalise content for users on its social media platforms.

On the chipset's hardware, Meta said the system has a rack-based design that holds up to 72 accelerators: three chassis each contain 12 boards, and each board houses two accelerators (3 × 12 × 2 = 72). The processor clocks at 1.35GHz, much faster than its predecessor's 800MHz, and it can run at a higher power envelope of 90W. The fabric between the accelerators and the host has also been upgraded to PCIe Gen 5.

The software stack is where the company has made major improvements. The chipset is designed to be fully integrated with PyTorch 2.0 and related features. “The lower level compiler for MTIA takes the outputs from the frontend and produces highly efficient and device-specific code,” the company explained.
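As a rough illustration of what "fully integrated with PyTorch 2.0" can mean in practice, the sketch below registers a custom torch.compile backend. The `mtia_backend` name and its trivial body are hypothetical; Meta has not published the public interface of its MTIA compiler.

```python
# A hypothetical sketch of a custom torch.compile backend; a real MTIA backend
# would hand the captured FX graph to a device-specific compiler instead.
import torch

def mtia_backend(gm: torch.fx.GraphModule, example_inputs):
    # Placeholder: just run the captured graph eagerly on the CPU.
    return gm.forward

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
compiled = torch.compile(model, backend=mtia_backend)
print(compiled(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```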

Sharing early results, Meta said, “The results so far show that this MTIA chip can handle both the low complexity (LC) and high complexity (HC) ranking and recommendation models that are components of Meta's products. Across these models, there can be a ~10x-100x difference in model size and the amount of compute per input sample. Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs. Realizing these gains is an ongoing effort and we continue to improve performance per watt as we build up and deploy MTIA chips in our systems.”

With the rise of AI, many tech companies are now focusing on building custom AI chipsets tailored to their particular workloads. Such processors can be far more efficient at their target tasks than general-purpose hardware, helping these companies run products such as general-purpose AI chatbots and task-specific AI tools at scale.



[Exclusive] WhatsApp Starts Testing Meta AI in India With Select Users

WhatsApp, the widely used messaging app owned by Meta, is finally joining the AI club. The Meta AI icon is now showing up in the main chat list for some users in India. Meta AI is powered by the company's Large Language Model Meta AI, or Llama for short, an advanced artificial intelligence technology developed by Meta. For WhatsApp users, Meta AI can hold conversations about almost anything, from answering a query or making recommendations to simply chit-chatting.

Gadgets 360 got brief access to this AI-enabled feature on Wednesday, after which it vanished, suggesting that access may be limited for now. The screenshots below show that the Meta AI chat opens with a verified badge and a “with Llama” label. The chat window says, “Ask Meta AI anything,” and displays a number of suggested prompts stacked in a carousel format that can be swiped to reveal more. As visible in the screenshot, prompts like “imagine a car race on Mars”, “imagine a holographic bus”, “healthy life goals”, and more are offered. Notably, the Meta AI icon is placed in the top right corner along with the Camera and New Chat options; the icon is reminiscent of Microsoft's Cortana assistant.

The Meta AI feature is available in a limited set of countries and currently supports only English. When starting a chat with Meta AI, the platform displays a notice: “Messages from Meta AI and other characters are generated by artificial intelligence (AI), using a service from Meta, in response to the prompts you send to the AI.” The platform also clarifies that Meta AI can only read and reply to messages that mention @Meta AI, meaning the tool does not have access to other chats. The notice adds, “As always, your personal messages and calls remain end-to-end encrypted, meaning not even WhatsApp or Meta can see or listen to them.”

How to start a chat with AI from Meta on WhatsApp

The Meta AI feature within WhatsApp can answer questions, offer recommendations, and chat about your interests. To start a chat:

  • Tap the circular icon on the top right of the main chat list on your WhatsApp
  • Read and accept the terms (if prompted)
  • Select a suggested prompt from the screen or type your own
  • Tap the send button, and you’ve initiated the conversation

Interestingly, WhatsApp also takes feedback from users on Meta AI. Users can tap and hold the AI-generated responses and tap on ‘Good response’ or ‘Bad response’. Users can also type a reason and submit it.

Meta on the WhatsApp FAQ page mentions that “some messages generated by AIs might not be accurate or appropriate.”

Meta recently announced Llama 2, the next generation of its large language model for generative AI assistants, and it is expected to release Llama 3 in the coming weeks.


WhatsApp Spotted Working on AI-Powered Image Editor, Ask Meta AI Feature

WhatsApp is currently working on a feature that will let users edit their images using an editing tool that is powered by artificial intelligence (AI), according to details shared by a feature tracker. Users might be able to quickly modify an image’s background, restyle it, or ‘expand’ it using AI, when the feature is rolled out to users in the future. Meanwhile, the company is also working on a feature that will let users ask questions to the company’s ‘Meta AI’ service directly from the search bar.

According to WABetaInfo, a feature tracker with a good track record of unearthing new features on the messaging app before they are rolled out, the latest WhatsApp beta for Android 2.24.7.13 update contains code for an AI-powered image editor. The feature is still in development and cannot currently be tested, even by users who have signed up to receive beta versions of the app.

WhatsApp’s upcoming AI-powered features
Photo Credit: WABetaInfo

In a screenshot published by WABetaInfo, an early version of the feature is visible on the interface shown when sending images on WhatsApp for Android. A green icon appears at the top, to the left of the HD icon, and tapping it displays three options: Backdrop, Restyle, and Expand. As the feature is still being developed, it is currently unclear what each of these options does.

Meanwhile, the more recent WhatsApp beta for Android 2.24.7.14 version contains details of another feature discovered by WABetaInfo. The feature tracker has spotted the ability to use the search bar at the top of the app to ask queries to Meta AI — the company’s generative AI assistant for Meta products designed to compete with OpenAI’s ChatGPT.

It's worth noting that both these features are still in development, so you won't be able to test them out even after updating to the latest version of the app. They are likely to be refined and improved, then rolled out to testers on the beta channel, before reaching all users. They are also expected to make their way to iOS, providing feature parity across both mobile platforms.



Meta Rolls Out ‘Look and Ask With Meta AI’ Feature on Ray-Ban Smart Glasses, Announces Early Access Programme

Meta is now allowing select customers to put on its Ray-Ban smart glasses and try out new AI-powered experiences as part of an early access programme. The Facebook parent has announced initial user tests for the smart glasses and intends to gather feedback on new features ahead of a wider release. Meta is also introducing updates to improve the Ray-Ban smart glasses experience, which is powered by the Meta AI assistant, bringing smarter and more helpful responses.

Earlier this month, Meta announced a host of new features for its AI services across platforms. In an update to the same blog post on Tuesday, the company introduced a few new features for the Ray-Ban Meta smart glasses. Those who sign up for early access can try out multimodal AI capabilities, which allow the smart glasses to perceive visual information by looking at a scene and answering related queries.

According to Meta, its AI assistant on the glasses can take a picture of what you’re seeing, either via voice command or the dedicated capture button. It can also come up with a witty caption for the photo. Users could pick up an object while wearing the Meta Ray-Ban smart glasses and ask for information on the same, or look at a sign in a different language and ask the AI-powered glasses to translate it to English. The company, however, has warned users that its multimodal AI might make mistakes and will be improved over time with the help of feedback.

Meta CEO Mark Zuckerberg demonstrated the look and ask feature on the AI smart glasses in an Instagram post. In the video, taken from the first-person perspective from the glasses, Zuckerberg picks out a striped, dark shirt and asks Meta AI to suggest pants to go with it.

Additionally, Meta says it is rolling out Bing-powered real-time information capabilities on the Meta AI-powered smart glasses. “You can ask Meta AI about sports scores or information on local landmarks, restaurants, stocks and more,” the company said in its update.

The look and ask with Meta AI feature takes a picture when the wearer says “look” and delivers an audio response to the related query. Do note that all pictures taken and processed by the AI are stored and used to train Meta AI and other Meta products, which could raise privacy concerns. Meta says that the information collected, used, and retained will comply with its Privacy Policy.
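Functionally, the flow resembles the simple capture-answer-speak loop sketched below. All three helpers are hypothetical stubs, since Meta has not published a developer API for the glasses.

```python
# A conceptual sketch of the "look and ask" flow; every helper is a stand-in.
def capture_photo() -> bytes:
    return b"<jpeg bytes from the glasses camera>"    # the "look" capture step

def multimodal_answer(image: bytes, prompt: str) -> str:
    return "Those are tulips."                        # stand-in for the AI backend

def speak(text: str) -> None:
    print(f"(audio) {text}")                          # stand-in for speaker output

def look_and_ask(prompt: str) -> None:
    frame = capture_photo()                  # photo is taken when prompted to "look"
    speak(multimodal_answer(frame, prompt))  # answer is read back to the wearer

look_and_ask("What kind of flowers are these?")
```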

The early access programme is now live for Ray-Ban Meta smart glasses owners in the US and interested users can enroll for the same on the Meta View app on iOS and Android. To sign up, tap the settings button in the bottom right of the Meta View app, swipe down and tap Early Access. You’d also have to make sure that the smart glasses and the Meta View app have received the latest update.

Ray-Ban Meta smart glasses were launched in September, alongside the Meta Quest 3 and other Meta products. The glasses are powered by the Qualcomm Snapdragon AR1 Gen 1 Platform and come with a 12-megapixel camera, an LED light, and 32GB of inbuilt storage.

Ray-Ban Meta smart glasses with standard lenses are priced at $299 (roughly Rs. 24,999), while the pair with Polarized lenses and transition lenses cost $329 (roughly Rs. 27,400) and $379 (roughly Rs. 31,500), respectively. The glasses are available to buy in 15 countries, including the US, Canada, Australia, and European markets. Meta has not announced a launch date for the Indian market yet.



Meta Brings Standalone Text-to-Image Generation Tool to Web; AI Enhancements to Instagram, Facebook

Meta unveiled a host of new enhancements for its AI experiences across Facebook, Instagram, Messenger, and WhatsApp on Wednesday. The company’s virtual assistant, Meta AI, which was launched in September, will now give more detailed and accurate responses to queries. The Facebook parent is also expanding its text-to-image generation tool, Imagine, as a standalone AI experience on Web, outside of chats.

In its Newsroom post announcing the new AI updates, Meta detailed a standalone Imagine tool for image generation. Initially embedded only within Meta's messaging platforms, Imagine can now be accessed on the Web for free, though only in the US to start. “Today, we're expanding access to imagine outside of chats, making it available in the US to start at imagine.meta.com,” Meta said in the blog. The image creation tool runs on the company's image foundation model, Emu.

Imagine with Meta is free to use on the Web
Photo Credit: Meta

Meta is also bringing new updates and capabilities to core AI experiences on its platforms. The Meta AI virtual assistant is now more helpful, the company claims, generating more detailed responses on mobile and more accurate summaries of search results. “We’ve even made it so you’re more likely to get a helpful response to a wider range of requests,” the blog said. A Meta AI interaction can be triggered by starting a new message and selecting “Create an AI chat” on Meta’s messaging platforms, or by typing “@MetaAI” in a group chat followed by the query.

Outside of chats, Meta AI's large language model will bring new experiences to Facebook and Instagram, such as AI-generated comment suggestions on posts, community chat topic suggestions in groups, and more.

Imagine with Meta, the text-to-image generation tool, is also getting a new ‘reimagine’ feature on Messenger and Instagram that lets your friends riff on a Meta AI-generated image shared by you in messages and create entirely new images. Additionally, the company is also rolling out Instagram Reels in Meta AI chats, wherein the AI assistant will recommend and share reels for relevant video requests. AI-powered improvements are coming to Facebook, too. Meta is working on AI features that would draft birthday greetings, edit feed posts, write up a dating profile, or set up a new group.

In the coming weeks, Meta will also roll out invisible watermarking to its Imagine with Meta AI image generation tool to boost AI transparency and curb misleading AI-generated content. “The invisible watermark is applied with a deep learning model. While it's imperceptible to the human eye, the invisible watermark can be detected with a corresponding model,” the blog said. According to Meta, the watermark will withstand common image manipulations like cropping, editing, or screenshotting.
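Conceptually, learned invisible watermarking pairs an embedding network with a matching detection network. The untrained toy modules below only sketch that pattern under stated assumptions; they are not Meta's actual watermarking model.

```python
# Toy encode/detect watermarking sketch; both modules are untrained placeholders.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, n_bits: int = 32):
        super().__init__()
        self.fc = nn.Linear(n_bits, 3 * 64 * 64)   # bits -> image-shaped residual

    def forward(self, image: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
        residual = self.fc(bits).view(-1, 3, 64, 64)
        return image + 0.01 * torch.tanh(residual)  # keep the perturbation tiny

class Detector(nn.Module):
    def __init__(self, n_bits: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, n_bits))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(image))       # per-bit probabilities

image = torch.rand(1, 3, 64, 64)
bits = torch.randint(0, 2, (1, 32)).float()
marked = Embedder()(image, bits)      # visually indistinguishable from the input
recovered = Detector()(marked)        # trained jointly, this would decode the bits
```

Trained jointly, the embedder learns residuals the detector can still recover after cropping or re-encoding, which is the robustness property Meta describes.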

