Snapchat Introduces Editable Chats, Emoji Reactions and More Features

Snapchat is introducing new features that let users interact with the platform in new ways and improve existing experiences. A couple of the features are powered by artificial intelligence (AI), and all of them are being rolled out gradually. Some will be available only to paid Snapchat+ subscribers. The additions include Editable Chats, Emoji Reactions, My AI Reminders, an AI-powered generator for custom Bitmoji outfits, and more.

Snapchat New Features

The social media platform announced the new features in a newsroom post. Making the announcement, the company said, “Every day, Snapchatters create more than 5 billion Snaps on average to communicate visually with their friends. Now, we’re adding new features to help Snapchatters connect even more quickly, express themselves in new ways, and use My AI to stay organized amid busy lives and schedules.”

Editable Chats is a new addition that lets users edit their messages for up to five minutes after sending them. The feature works similarly to WhatsApp’s message editing. It is currently limited to Snapchat+ subscribers but may later be expanded to all users.
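The five-minute window can be modelled as a simple timestamp check. The sketch below is purely our illustration of the reported behaviour, not Snapchat’s implementation; only the five-minute constant comes from the report.

```python
from datetime import datetime, timedelta

EDIT_WINDOW = timedelta(minutes=5)  # the five-minute limit reported above

def can_edit(sent_at: datetime, now: datetime) -> bool:
    """A message stays editable for EDIT_WINDOW after it was sent."""
    return timedelta(0) <= (now - sent_at) <= EDIT_WINDOW

sent = datetime(2024, 5, 2, 12, 0, 0)
can_edit(sent, sent + timedelta(minutes=3))  # True: inside the window
can_edit(sent, sent + timedelta(minutes=6))  # False: window has closed
```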

Snapchat users can currently react to a message only with a Bitmoji. With the Emoji Reactions feature, they will be able to use any emoji to react to messages. Another interesting addition is My AI Reminders, which lets users set a reminder by sending My AI a single message containing the name, date, and time of an event. Once sent, My AI starts an in-app countdown and sends a notification when the counter hits zero. Snap Map is also getting a minor upgrade: users will be able to react to other users on the map with emojis.
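The reminder countdown described above amounts to a remaining-seconds calculation that triggers a notification at zero. This is a hypothetical illustration of that behaviour, not Snap’s code, and the event used is made up.

```python
from datetime import datetime

def seconds_remaining(event_at: datetime, now: datetime) -> int:
    """Seconds left on the in-app countdown; the notification fires at zero."""
    return max(0, int((event_at - now).total_seconds()))

event = datetime(2024, 5, 10, 15, 0)  # hypothetical "Dentist, May 10, 3 pm"
seconds_remaining(event, datetime(2024, 5, 10, 14, 0))  # 3600: one hour to go
seconds_remaining(event, datetime(2024, 5, 10, 16, 0))  # 0: time to notify
```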

Apart from these, two new AI features are also coming to Snapchat. The first is an AI-powered Bitmoji outfit generator: users can write a short description of the outfit they want, and the AI will generate a selection of matching garments with unique patterns, colours, and designs. The second feature, rolled out a few days ago, is dubbed the ‘90s AI Lens. An AI Lens is essentially an AI-powered filter with creative effects that are applied automatically once a photo is taken; this particular one adds a ‘90s aesthetic.

Check out our Latest News and Follow us at Facebook

Original Source

Apple CEO Tim Cook Hints at “Some Very Exciting” Generative AI Announcements Soon

Apple might reveal its artificial intelligence (AI) plans earlier than expected. It was believed that the Cupertino-based tech giant would unveil the AI features it is building at its Worldwide Developers Conference, scheduled for June 10. However, CEO Tim Cook has now said that information about generative AI may be shared with users soon, as per a report. With Apple’s Let Loose event coming up on May 7, there is a slim possibility that the company will hint at the features it plans to introduce later this year.

According to a report by CRN, Cook made the statements during the company’s quarterly earnings call. Apple has reportedly suffered a revenue decline of 4 percent year-on-year, bringing revenue to $90.8 billion (roughly Rs. 7.5 lakh crore). Addressing stakeholders at the beginning of the call, Cook said, “We continue to feel very bullish about our opportunity in generative AI. We are making significant investments, and we’re looking forward to sharing some very exciting things with our customers soon.”

The announcement underscores the iPhone maker’s intention to go big on AI. Cook also said the company’s innovation with its processors and neural engines gives it a strategic advantage over rivals in integrating the technology into its devices. He also reportedly spoke about an “unwavering focus on privacy”, hinting that the AI features will likely run on-device.

In the last few months, Apple’s AI ambitions have made headlines multiple times. The company has acquired at least two different companies, Darwin AI and Datakalab, working in the AI space. Apart from that, researchers employed by the tech giant have also published several papers on AI models with computer vision, on-device operations, and multimodal capabilities.

Earlier reports have also suggested some of the AI-powered features users might see later this year. The Safari browser is expected to play a key role: it is rumoured to get an ‘Intelligent Search’ feature that summarises open articles and web pages. An AI-powered Web Eraser feature has also surfaced, which can delete banner ads and other elements on a web page based on the user’s preferences. These features are expected to be showcased at WWDC 2024 when Apple unveils iOS 18 and macOS 15.


Affiliate links may be automatically generated – see our ethics statement for details.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.



Anthropic Launches Claude iOS App to Bring the AI Assistant to the iPhone

Anthropic’s artificial intelligence (AI)-powered chatbot Claude is now making its way to the iPhone. The company announced the launch of its iOS app on Wednesday and said it is generally available worldwide. This is the first time the AI assistant has moved beyond the web interface to a dedicated smartphone app. Alongside, the company announced a new Team subscription plan that lets businesses purchase access to Claude for their entire staff. Notably, Anthropic released its Claude 3 AI models in March.

In a post on X (formerly known as Twitter), Anthropic announced the launch of the iOS app. The mobile app functions the same way as the web interface, and we found it quite optimised and user-friendly. It syncs seamlessly with web chats, letting you pick up a conversation on the app after leaving it midway on the web interface.

Claude iOS app
Photo Credit: Anthropic

 

The iPhone app also comes with vision capabilities. With permission from the user, the app can access the camera and the photo library of the iPhone to offer real-time analysis of the images. Users can potentially click a picture of an object and ask the AI to identify it.

While the app can be downloaded for free, it comes with the same restrictions as the free web interface. You only get access to the Haiku and Sonnet AI models, and there is a daily message limit that varies depending on server load. Notably, Claude AI comes in three variants — Haiku, Sonnet, and Opus. Haiku is the fastest but least capable, Sonnet balances speed and intelligence, and Opus is the slowest but most capable.
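The speed-versus-capability trade-off between the three models can be captured in a small lookup. The rankings below paraphrase Anthropic’s public positioning of the tiers rather than any benchmark numbers, and the helper function is purely our illustration.

```python
# Relative rankings (3 = best); these paraphrase Anthropic's positioning of
# the three Claude 3 tiers, not official benchmark figures.
CLAUDE_MODELS = {
    "haiku":  {"speed": 3, "capability": 1},
    "sonnet": {"speed": 2, "capability": 2},
    "opus":   {"speed": 1, "capability": 3},
}

def pick_model(priority: str) -> str:
    """Return the model name that ranks highest on the given priority."""
    return max(CLAUDE_MODELS, key=lambda name: CLAUDE_MODELS[name][priority])

pick_model("speed")       # 'haiku'
pick_model("capability")  # 'opus'
```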

If you do not want to hit the daily limit, you will have to pay $20 (roughly Rs. 1,700) a month for the Pro subscription. This unlocks all three AI models, a higher message limit, and priority access during high-traffic periods. Alongside the iOS app, Anthropic has also introduced a new Team plan for enterprises, which businesses can purchase for their staff. It offers everything in the Pro tier, a higher usage limit than Pro, and access to the 200K-token context window, which is designed to process long documents. The Team plan costs $30 (roughly Rs. 2,500) per user per month, with a minimum of five seats.



Google Introduces Med-Gemini Family of Multimodal Medical AI Models, Claimed to Outperform GPT-4

Google introduced a new family of artificial intelligence (AI) models focused on the medical domain on Tuesday. Dubbed Med-Gemini, the models are not publicly available, but the tech giant has published a pre-print research paper that highlights their capabilities and methodology. The company claims the AI models surpass GPT-4 models in benchmark testing. One notable feature is their long-context capability, which allows them to process and analyse lengthy health records and research papers.

The research paper is currently in the pre-print stage and is published on arXiv, an open-access online repository of scholarly papers. Jeff Dean, Chief Scientist, Google DeepMind and Google Research, said in a post on X (formerly known as Twitter), “I’m very excited about the possibilities of these models to help clinicians deliver better care, as well as to help patients better understand their medical conditions. AI for healthcare is going to be one of the most impactful application domains for AI, in my opinion.”

Med-Gemini AI models are built on top of the Gemini 1.0 and Gemini 1.5 LLMs. There are four models in total — Med-Gemini-S 1.0, Med-Gemini-M 1.0, Med-Gemini-L 1.0, and Med-Gemini-M 1.5. All of them are multimodal and can work with text, images, and video. The models are also integrated with web search, which the company claims has been improved through self-training to make them “more factually accurate, reliable, and nuanced” on complex clinical reasoning tasks.

Further, the company claims the models are fine-tuned for better long-context processing. Higher-quality long-context processing means the chatbot can give accurate, precise answers even when a question is imperfectly phrased or when it has to work through a long document of medical records.

As per data shared by Google, Med-Gemini models outperformed OpenAI’s GPT-4 models on text-based reasoning tasks in the GeneTuring dataset. Med-Gemini-L 1.0 also scored 91.1 percent accuracy on MedQA (USMLE), outperforming Google’s own older Med-PaLM 2 model by 4.5 percentage points. Notably, the models are not available publicly or in beta testing; the company will likely improve them further before releasing them.
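The benchmark figures quoted above imply a score for the older model: with a 4.5-percentage-point margin, Med-PaLM 2’s MedQA accuracy works out as follows. This is our arithmetic using only the numbers in the report.

```python
med_gemini_l = 91.1  # Med-Gemini-L 1.0 accuracy on MedQA (USMLE), percent
margin = 4.5         # reported lead over Med-PaLM 2, in percentage points

# Implied Med-PaLM 2 score on the same benchmark
med_palm_2 = round(med_gemini_l - margin, 1)  # 86.6
```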



Amazon Q AI Assistant Now Available for Enterprise Customers, Amazon Q Apps Out in Preview

Amazon announced the wider availability of its latest artificial intelligence (AI)-powered assistant for enterprises, Amazon Q, on Tuesday. The e-commerce giant first announced the business-focused chatbot in November 2023, promising both generative and analytical assistance based on a business’ in-house data. Later, Amazon made the AI tool available to a limited number of users. Now, the company has made it generally available and added some new features. Notably, enterprises will get two separate chatbots — Amazon Q Developer and Amazon Q Business — for different sets of tasks.

In a newsroom post, Amazon’s cloud computing arm Amazon Web Services (AWS) announced that both the Amazon Q Developer and Amazon Q Business chatbots are now available to enterprise customers who use AWS services. The AI bot trains on and learns from a business’ data and workflows, and can help with coding-related and business analytics-related tasks.

Amazon Q Developer is designed as an assistant for software developers, helping them by “performing tedious and repetitive tasks” such as coding, testing, upgrading applications, troubleshooting, security scanning and fixes, and optimising AWS resources. The company says the time saved can be used by professionals to develop unique experiences for end users and deploy them faster.

For analytical assistance, the tech giant is making its Amazon Q Business available. This tool is designed to accept and process natural language queries and prompts. Based on the knowledge from the business’ database stored in AWS servers, it can answer questions, generate summaries and content, write reports, and provide data-driven analysis. Amazon Q is also being added to Amazon QuickSight, AWS’s unified Business Intelligence service for the cloud.

Apart from these, Amazon has another feature that is not yet available to all users. Dubbed Amazon Q Apps, it lets employees build AI-powered apps based on the company’s data. Even employees with no prior coding knowledge can build apps by providing a simple description of the app and the tasks it should perform; the AI chatbot then creates the app end-to-end. This feature is currently in preview.



Apple Safari Browser to Reportedly Get Major AI Upgrade With Article Summarisation, Web Eraser Features

Apple’s Safari web browser is reportedly getting a major upgrade and could soon sport artificial intelligence (AI)-powered features. Earlier this month, a report claimed that Safari could be one of the first Apple apps to get AI features. Now, another report has stated that the company is currently running internal tests on multiple new features for the Safari 18 build that will come with iOS 18 and macOS 15. Additionally, the tech giant is also said to be working on a system-wide visual lookup feature.

According to a report by AppleInsider, the next update to Safari could bring a minor interface revamp, a feature to summarise articles, a tool to block content on a web page, and even an AI-powered assistant. Citing unnamed people familiar with the development, the report stated that Apple is currently evaluating the performance and viability of these features. The publication also shared images of the features.

Safari browser’s new features

The most notable feature mentioned in the report is Intelligent Search. It is said to draw on Apple’s on-device AI technology, specifically the large language model Ajax, to summarise web pages and articles. In the shared examples, the AI condenses the text into topic headlines and short paragraphs describing each topic. Notably, Google offers similar features with its Gemini AI, as does Microsoft with Copilot. It is not known whether Intelligent Search will also offer other features such as text generation.

Safari browser’s article summarisation feature
Photo Credit: AppleInsider

 

Another AI-powered feature Apple is working on for Safari 18 is reportedly called Web Eraser. This is a content-blocking tool that can remove any element from a web page, including banner ads, images, and even text. Safari is said to remember the elements a user removes even after the session ends: opening the same page later automatically reapplies the removals and offers the option to revert to the unblocked view.
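The persistence behaviour described here can be sketched as a per-page store of removed elements that is consulted on the next visit. The storage format and the CSS-style selector strings are illustrative assumptions on our part, not Apple’s design.

```python
# Per-URL record of elements the user has erased (hypothetical format).
erased: dict = {}

def erase(url: str, selector: str) -> None:
    """Remember that this element was removed from this page."""
    erased.setdefault(url, set()).add(selector)

def on_page_load(url: str) -> set:
    """Selectors to hide again when the same page is reopened."""
    return erased.get(url, set())

erase("https://example.com", "div.banner-ad")
on_page_load("https://example.com")   # {'div.banner-ad'}: reapplied on revisit
on_page_load("https://other.example") # set(): nothing erased on other pages
```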

Safari’s web eraser feature
Photo Credit: AppleInsider

 

Apart from these, the browser is also getting a minor interface change. A new page controls menu is reportedly being added to the address bar. It will contain options to activate the above-mentioned features, as well as various other tools that are currently spread across different menus in Safari. Based on the screenshot, it also houses the ‘Aa’ option and the zoom control. Notably, the new Safari interface is reportedly the same in both iOS 18 and macOS 15, suggesting the tech giant might be unifying the browser experience across devices.

Safari browser’s new controls page
Photo Credit: AppleInsider

 

Finally, the report highlighted that Apple is working on an enhancement to its visual lookup feature, which currently lives in the Photos app and identifies plants, pets, and landmarks in photos. The report claims Apple is trying to make the feature system-wide so it works on any screen, including web pages in Safari. The under-development enhancement is said to be powered by AI.



OneAIChat Unveils Multimodal AI Aggregator Platform With GPT-4, Gemini and Other Models

OneAIChat, an Indian startup, unveiled its multimodal artificial intelligence (AI) aggregation platform on Tuesday. The Mangalore-based startup offers a single interface through which users can access multiple large language models (LLMs) at the same time, letting them seamlessly interact with and compare answers from various AI models. Leveraging the capabilities of multiple models, the platform produces output in text, image, and video formats. Access requires a single subscription plan.

The OneAIChat platform was pre-launched on Tuesday as a web-based service. It features OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude 3, and AI models from Cohere and Mistral, though the company did not specify which Mistral LLMs are being used. The company says the platform will be accessible globally. At the time of writing, we were unable to access the website, which appeared to be suffering an outage.

There are some platform-specific features users can take advantage of. OneAIChat has introduced a Focus Categories feature that lets users get topic-specific answers from the AI models. It is unclear whether the company has assigned specific LLMs to certain topics or whether it curates answers from all of them together. Categories highlighted by the startup include health, audio/music, faith, marketing, video, art & design, and mathematics.

Apart from this, OneAIChat says its platform is aimed at streamlining content creation. The AI models can generate blog articles, product listings, social media posts, essays, and more; notably, these outputs come straight from the models themselves. Being multimodal, the platform also offers image, video, and audio clip generation, although the company did not specify which AI models handle video and music generation.

OneAIChat will charge a single subscription fee covering all the AI models, but pricing has not been revealed yet, nor have details of which models the subscription includes. Given that all of the above-mentioned AI models except those from Mistral and Cohere have both free and paid versions, the cost saving from the subscription cannot yet be determined. Mistral offers open-source models that require no subscription to run, whereas Cohere’s models are available only to paid users.



OpenAI Signs Deal With Financial Times to Use Its Content for Training AI Models

The Financial Times has signed a deal with OpenAI to license its content for the development of AI models and allow ChatGPT to answer queries with summaries attributable to the newspaper, the latest media tie-up for the Microsoft-backed startup.

Financial terms of the agreement, announced on Monday, were not disclosed. It follows similar deals by OpenAI over the past few months with the Associated Press, global news publisher Axel Springer, France’s Le Monde and Spain-based Prisa Media.

The latest deal will help the startup enhance the ChatGPT chatbot with archived content from the FT and the firms will work together to develop new AI products and features for FT readers, the newspaper and OpenAI said in a statement.

The summaries generated by ChatGPT off FT content will also link back to the newspaper, according to the companies.

“We’re keen to explore the practical outcomes regarding news sources and AI through this partnership,” said FT Group CEO John Ridding.

ChatGPT, which kickstarted the GenAI boom in late 2022, can mimic human conversation and perform tasks such as creating summaries of long text, writing poems and even generating ideas for a theme party.

Some outlets are already using generative AI for their content. BuzzFeed has said it will use AI to power personality quizzes on its site, and the New York Times used ChatGPT to create a Valentine’s Day message-generator last year.

© Thomson Reuters 2024



Google RealFill, an AI-Powered Generative Image Completion Model, Spotted in Trademark Listing

Google RealFill could be the tech giant’s latest bid to turn artificial intelligence (AI)-powered image generation into a user-facing application. Recently, a research paper and a website under the name RealFill were spotted online; the model performs image completion and inpainting based on reference images to create a target image. The company also appears to have applied for trademarks on a logo designed for the AI model-based product. Notably, the new AI model uses computer vision and pattern recognition algorithms and was trained using random masking techniques.

A GitHub page and a pre-print paper for the AI model were spotted recently by Android Authority. The publication also found trademark applications filed under the name of Google LLC in the listings of the US Patent and Trademark Office (USPTO) and the European Union Intellectual Property Office (EUIPO). Based on these, it appears the tech giant has not only reached the end of the research phase for the AI model but also plans to introduce it as a commercial product.

According to its GitHub page, RealFill is described as a “novel generative approach for image completion that fills in missing regions of an image with the content that should have been there.” Essentially, the AI model can scan multiple images of a subject in the same setting and use them as references to generate a specified target image. As a tool, it could be used when a user has taken multiple pictures of a subject but failed to get the perfect shot: the AI can process those images and generate a shot that was never actually captured.

RealFill is a generative AI model that uses computer vision to understand the subject and environment of the reference photos, processing aspects such as dimensions, colours, and shapes, along with a contextual understanding of the various objects. Using this information, it can create a target image from a new viewpoint and fill in details that may not have been present in the reference images.
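The “random masking” training mentioned earlier can be illustrated with a toy example: patches of a reference image are hidden, and the model learns to restore them. The patch size, mask ratio, and fixed seed below are arbitrary assumptions for the demo, not values from the paper.

```python
import numpy as np

def random_patch_mask(height, width, patch=4, ratio=0.25, seed=0):
    """Boolean mask over an image: False marks patches hidden for training."""
    rng = np.random.default_rng(seed)
    mask = np.ones((height, width), dtype=bool)
    for y in range(0, height, patch):
        for x in range(0, width, patch):
            if rng.random() < ratio:       # hide roughly `ratio` of patches
                mask[y:y + patch, x:x + patch] = False
    return mask

mask = random_patch_mask(16, 16)
image = np.ones((16, 16))
masked_image = np.where(mask, image, 0.0)  # hidden pixels zeroed out
# A model trained this way learns to reconstruct the zeroed regions.
```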

While it is difficult to predict Google’s plans for RealFill, last year the company shipped a Google Pixel 8 feature dubbed Best Take that processes multiple shots of a group photo and lets users pick the best expression from each image for the final photo. Compared to RealFill, that feature appears very basic, but given the overlap, the tech giant might be readying the AI model as a feature for Pixel phones.



EyeEm Stock Photo Marketplace Updates Its Policy to Use Uploaded Content to Train AI Models

EyeEm, the stock photo marketplace, has updated its terms and conditions to specify that the company, which was acquired by Spain-based Freepik in 2023, can use content uploaded to the platform to train artificial intelligence (AI) and machine learning (ML) models. The platform has also told users that if they do not consent, they should not add any photos to the EyeEm community and should delete all their existing images. However, another section of the terms and conditions states that if an account is deleted, the company will not pay out a photographer’s accumulated licence shares.

According to a report by TechCrunch, the company notified its users of the change via email. The email reportedly highlighted that EyeEm was adding a new clause to its terms and conditions giving it the right to use users’ content to train AI. It also specified that users had 30 days to opt out by removing all of their content. Notably, the publication reported that EyeEm’s photo library contained 160 million photos and had 1.5 lakh users at the time of acquisition.

Section 8.1 of EyeEm’s Terms & Conditions covers the ‘Grant of Rights’ from its community, including a non-exclusive, worldwide, transferable, and sublicensable right for the company’s commercial activities. It now includes a new paragraph that states, “This specifically includes the sublicensable and transferable right to use your Content for the training, development and improvement of software, algorithms and machine learning models. In case you do not agree to this, you should not add your Content to EyeEm Community.”

While the company does offer an opt-out, it requires users to delete all their content on the platform. Notably, many users are struggling to find a batch-delete option, leaving them to remove images one by one. Even after users delete images on their end, copies already available in the marketplace or shared with distribution partners will not disappear immediately. In Section 13, the company states, “Complete deletion from EyeEm Market and distribution partner platforms may take up to 180 days from the date of your deletion request.”

But there is more. In the same section, the company adds, “All license agreements entered into before complete deletion and the rights of use granted thereby remain unaffected by the request for deletion or the deletion.” This means that deleting content does not revoke licences already granted: any licence agreements entered into before complete deletion remain in force.

And finally, there is another caveat. In Section 10.3, pertaining to Licence Share and Payout, EyeEm states, “If your account is deleted, you lose the right to payouts for all accumulated and future Licenses Shares. Please therefore make sure that you submit a Payment Request prior to the deletion of your account.”

Taken together, the sections highlighted above show that the company has placed multiple conditions on users who do not consent to their images being used for AI training. Notification by email, a short 30-day window, and hurdles such as a cumbersome deletion process, the requirement to submit a form for marketplace removal, and the forfeiture of payouts on account deletion are sure to create confusion and make opting out difficult.

Notably, the European Union, in its General Data Protection Regulation (GDPR)’s description of consent mentions, “Consent must be unambiguous, which means it requires either a statement or a clear affirmative act. Consent cannot be implied and must always be given through an opt-in, a declaration or an active motion, so that there is no misunderstanding that the data subject has consented to the particular processing.”

