Google Gemini to Get Spotify Integration via New Extension for AI Assistant: Report

Google Gemini might soon be able to interact with the music and audiobook streaming platform Spotify. The tech giant introduced the Gemini assistant for compatible smartphones earlier this year, but its functionality was initially quite limited. While the artificial intelligence (AI) chatbot could perform generative AI tasks, it was not as deeply integrated with Google’s own apps and third-party apps as its predecessor, Google Assistant. It has since added support for various Google apps and even rolled out a YouTube Music extension in May. Now, a report claims that a Spotify extension could be rolled out soon.

Google Gemini to Reportedly Get a Spotify Extension

According to a report by Android Authority, evidence of the Spotify extension was found within the Google app. In an app teardown exercise, the publication found strings of code that refer to the existence of the feature. It also appears that users might be able to sign into their Spotify account using the Gemini AI assistant. Notably, the code was found in version 15.22.29.29.arm64 of the Google app for Android devices.

Starting to play on Spotify
Spotify requires sign in

As seen above, these are the strings of code found by the publication. In both code strings, the internal codename ‘robin’ is said to refer to Gemini; Google has been using this name for its AI model ever since it was known as Bard. The first string, “Starting to play on Spotify”, is likely the text users will see once the AI processes a playback prompt.

The second string, “Spotify requires sign in”, likely refers to the case where the user is not logged into their Spotify account. It is currently unclear whether users will have to log in manually by opening the Spotify app, or whether they will be able to provide their credentials directly through Gemini.

While the code within the Google app is a sign that the feature is in development, it is unlikely to roll out anytime soon. The tech giant currently appears to be running only preliminary tests with the code.

If the tests are successful, the Spotify extension will likely be released to beta testers first, well before a stable version arrives. The entire process could easily take months. However, this is merely speculation, and we will have to wait for Google to provide an official update.


Google AI Overviews Now Showing for Just 15 Percent of Searched Queries: Report

Google recently found itself in hot water over the AI Overviews feature after it reportedly began showing incorrect and unhelpful answers to searched queries. The tech giant also issued a statement claiming it was working to address the issues. Now, a new report has found that the visibility of AI Overviews has dropped drastically, with the feature appearing for just 15 percent of searched queries. Further, most of the time, the artificial intelligence feature reportedly appears only in a truncated format.

Google’s AI Overviews witness a sharp drop in visibility

According to a report by enterprise SEO platform BrightEdge, the drop in the appearance of AI Overviews in Google Search results began in mid-April. At present, the AI feature is said to be showing up for just 15 percent of queries. Beyond the reduced frequency, the report also found that the majority of the AI-collated responses now appear in a truncated format, where only a collapsed view of the answer is visible.

The dwindling visibility becomes even more prominent when compared to the feature’s pre-release appearance. Before its official launch at the Google I/O event, AI Overviews, then known as the Search Generative Experience (SGE) and offered as an opt-in feature, appeared in 84 percent of searched queries, as per the report. One likely reason for the reduced visibility is believed to be the recent issues with the AI feature.

Google’s AI Overviews faced criticism for hallucinations

Google launched AI Overviews in the US for all users in early May, and soon after, some users began reporting incorrect and odd responses to certain queries. For instance, an X (formerly Twitter) user posted screenshots in which, upon searching for “cheese not sticking to pizza”, the AI suggested adding non-toxic glue to the sauce. In the following days, many users found similar issues with the feature.

Last week, the tech giant responded to the criticism and claimed it was working “quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies.” A separate report also claimed that Google was manually turning off AI Overviews for web queries.


YouTube Testing a Dream Screen Feature for Shorts That Will Generate Images for Green Screens

YouTube is testing a new artificial intelligence (AI) feature for Shorts, its short-form vertical video format. Dubbed Dream Screen, the feature adds a custom, AI-generated green screen background to videos. It is likely aimed at creators who want distinctive backgrounds that help their videos stand out or that align thematically with their content. Dream Screen is an experimental feature from the video streaming giant and is currently available only to a select group of Shorts creators.

YouTube Shorts gets an AI background feature

The Google-owned video-sharing platform posted about this feature on its support page on Monday. The company said, “We’re experimenting with a new feature, Dream Screen, that uses AI to generate image green screen backgrounds for Shorts.” YouTube did not specify which AI model was being used for this feature.

Once they get access to Dream Screen, users can write a text prompt describing what they want in the background. Highlighting one example, the post stated that users can request a “fancy hotel pool on a tropical island,” and the AI will instantly generate it. Once generated, the image can be added as the video’s background.

The short post described the feature but left several aspects unexplained. For example, it did not mention whether the background needs to be added at the recording stage or whether it can also be applied to a pre-recorded video. Further, it is not known whether users will need a physical green screen for the effect to work, or whether it can be added digitally, similar to virtual backgrounds in Google Meet. Notably, the company also did not specify any restrictions on the AI-generated images. It is possible that these details will appear on the Dream Screen feature page.

As an experimental feature (one Google is still testing or running in beta), Dream Screen is only available to a select group of Shorts creators, though the company has highlighted that more creators will get the tool later in 2023. Separately, the video streaming platform has made its Playables feature available to all users; it offers access to more than 75 free-to-play games that were previously available only to YouTube Premium subscribers.


Asus ROG Zephyrus G16 (2024) Updated With AMD Ryzen AI 9 Processor: Specifications

Asus announced a refreshed version of its ROG Zephyrus G16 gaming laptop at the ongoing Computex 2024 on Tuesday. Under the hood, the laptop now gets the recently launched AMD Ryzen AI 9 HX 370 processor with a built-in Neural Processing Unit (NPU) and artificial intelligence (AI) capabilities. Asus says the refreshed ROG Zephyrus G16 (2024) is a “true AI PC” and is equipped to handle AI-enabled applications.

Asus ROG Zephyrus G16 (2024) features, specifications

The refreshed Asus ROG Zephyrus G16 (2024) comes with a 16-inch OLED display with a 2.5K resolution and a 120Hz refresh rate. In a blog post, Asus announced that the laptop is equipped with G-Sync technology and supports Dolby Vision HDR. It also gets a MUX Switch, enabling the user to switch between the integrated and discrete GPU.

It now gets an AMD Ryzen AI 9 HX 370 chip under the hood, paired with an Nvidia GeForce RTX 4070 mobile GPU, 32GB of LPDDR5X RAM, and 2TB of PCIe 4.0 NVMe SSD storage. Being an AI PC, it features a dedicated Copilot key along with AI functions that are currently in preview. In total, it promises up to 402 TOPS of AI processing power: 31 TOPS from the CPU and iGPU, 50 TOPS from the NPU, and up to 321 TOPS from the GPU.

The laptop has a four-speaker system and supports Dolby Atmos. It also gets AI noise-cancelling technology and Hi-Res certification for headphone users. Asus says the refreshed ROG Zephyrus G16 (2024) supports Wi-Fi 7 and Bluetooth 5.4. There is also a 1080p full-HD IR camera with Windows Hello support.

In terms of connectivity, it comes with USB 3.2 Gen 2 Type-A and Type-C ports, as well as a single USB 4 Type-C port. There is also a dedicated 3.5mm headphone jack, an HDMI 2.1 port, and an SD card reader. It is backed by a 90Wh 4-cell Li-ion battery and can be charged with a 200W adapter.

The Asus ROG Zephyrus G16 (2024) measures 35.4cm x 24.6cm x 1.49cm and weighs 1.85kg.



Tecno Camon 30 5G Series Gets Upgraded With AI Assistant Ella-GPT That Supports Over 70 Languages

The Tecno Camon 30 5G series is getting an artificial intelligence (AI)-powered upgrade, a month after it was launched in India. The company announced on Tuesday that it is expanding its Ella-GPT assistant, which debuted with the Phantom V Flip 5G, to more smartphones. Both the standard Tecno Camon 30 5G and the Camon 30 Premier 5G already offer two generative AI features: Ask AI and AI Generate. Notably, the smartphones are equipped with MediaTek Dimensity chipsets and run Android 14-based HiOS 14 out-of-the-box.

Tecno Camon 30 Series Updated With AI Features

Ella-GPT is an AI assistant built on OpenAI’s GPT-3.5, the model that also underpins ChatGPT. It can perform all the general tasks expected of an AI chatbot: answering questions, generating text, offering near real-time translations, and helping users generate ideas for creating content.

The chatbot supports more than 70 languages and also accepts voice as input. The company says the AI assistant is adept at handling users’ day-to-day tasks and offering personalised assistance. Notably, the AI assistant was first launched with the Tecno Phantom V Flip 5G in 2023.

Apart from Ella-GPT, the Tecno Camon 30 5G series also offers two other AI features. Ask AI allows users to draft messages and check pre-written text for grammatical errors, and it can be used to generate content across different formats. The feature also integrates with the Google Chrome browser to offer generative capabilities while browsing websites.

The Camon 30 5G series also has the AI Generate feature in the Notepad app, which can be used to generate unique images from random strokes and outlines. The generated images are rendered in a sketch style.

Tecno Camon 30 5G, Camon 30 Premier 5G Specifications

The Tecno Camon 30 5G features a 6.78-inch full-HD+ AMOLED screen with a 120Hz refresh rate. On the other hand, the Tecno Camon 30 Premier 5G sports a 6.77-inch 1.5K LTPO AMOLED screen with a refresh rate of 120Hz. While the MediaTek Dimensity 7020 SoC powers the former, the latter is equipped with the Dimensity 8200 Ultimate chipset.

For optics, both smartphones carry a 50-megapixel primary camera. Additionally, the standard model gets a 2-megapixel depth sensor, whereas the Premier model carries a 50-megapixel telephoto sensor with 3X optical zoom and a 50-megapixel ultra-wide-angle camera. On the front, both handsets feature a 50-megapixel camera for selfies.


Gigabyte AI Top Unveiled at Computex 2024, to Enable End-to-End Local AI Training

Gigabyte unveiled its end-to-end artificial intelligence (AI) solution for training large language models (LLMs) locally on a device during AMD’s Computex 2024 event. The full-stack AI solution includes AI Top Utilities, AI Top Hardware, and AI Top Tutor, which together cover various aspects of training open-source AI models. AI Top Utilities is training software with support for multiple open-source AI models, whereas AI Top Hardware comprises the company’s AI-focused products. AI Top Tutor is for those who need assistance in understanding how to make the most of this solution.

Gigabyte AI Top offers solutions to train on-device AI

Making the announcement during the Computex 2024 event, the company unveiled AI Top as an all-encompassing solution that aims to “Train Your Own AI on Your Desk”. In a press release, the company detailed the various aspects of the offering, which spans three divisions: software support, hardware support, and consultation and technical support. Notably, the announcement comes after the Gigabyte AI PC was introduced at the Consumer Electronics Show (CES) 2024.

Gigabyte AI Top Utility (Photo Credit: Gigabyte)

The AI Top Utility is a digital interface that allows local AI model training using new workflows. The company claims the software offers a user-friendly interface and real-time progress monitoring. It supports multiple open-source AI models with up to 236 billion parameters. The company claims the platform is more cost-effective and delivers faster results than cloud-based alternatives. It can also offload data to system memory and SSDs to work around the limits of GPU VRAM capacity, as illustrated in the sketch below.
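Gigabyte has not published technical details of how AI Top Utility implements this offloading. As a rough, hypothetical sketch of the general technique it describes, open-source training frameworks such as DeepSpeed let users spill optimiser state and model parameters out of GPU VRAM into system RAM or an NVMe SSD through a configuration along these lines (the paths and values are placeholders, not Gigabyte's settings):

```python
# Hypothetical DeepSpeed-style ZeRO-3 configuration illustrating the general idea of
# offloading training state beyond GPU VRAM; this is not Gigabyte's implementation.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                      # partition parameters, gradients and optimiser state
        "offload_param": {
            "device": "nvme",            # spill model parameters to an NVMe SSD
            "nvme_path": "/local_nvme",  # placeholder path to a fast local SSD
        },
        "offload_optimizer": {
            "device": "cpu",             # keep optimiser state in system RAM
        },
    },
}
# In practice, a dict like this would be passed to deepspeed.initialize(model=..., config=ds_config).
```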

Next is AI Top Hardware, the company’s line of hardware offerings. It features a series of AI-optimised products that are power efficient, can handle AI training workloads, and include upgradeable components. A primary product in this series is the AI Top Motherboard (TRX50), with a configurable form factor, memory type and slots, graphics interface, and more. An AI Top Graphics Card, SSD, and PSU are also part of the lineup.

The last offering in this solution is AI Top Tutor. Positioned as “on-desk AI coaching”, it is essentially the company’s AI-powered consultation and technical support system, offering insights, setup guidance, and troubleshooting help. The company claims the coaching system will help both beginners and professionals start on-device AI projects.

The company has not announced the pricing or availability of the Gigabyte AI Top solutions.


AMD Ryzen 9000, Ryzen AI 300 Series Processors With AI Capabilities Unveiled

AMD announced the next generation of its processors to fuel the rising artificial intelligence (AI) wave at its Computex 2024 event on Sunday. The company unveiled four new Ryzen 9000 series chipsets for gamers and heavier workflows and two new Ryzen AI 300 series chipsets to power AI PCs. These CPUs are built on AMD’s latest Zen 5 architecture and come with integrated GPUs and Neural Processing Units (NPUs). The chipmaker claimed that the Ryzen 9000 series desktop processors could deliver 16 percent better performance than their predecessors.

AMD Ryzen AI 300 Series Processors Unveiled

The Ryzen AI 300 series features the Ryzen AI 9 HX 370 and the Ryzen AI 9 365 CPUs. Following the naming convention introduced in 2022, the HX suffix marks the top-tier processor in the lineup. The Ryzen AI 9 HX 370 chipset comprises 12 Zen 5 cores and 24 threads with a max clock speed of 5.1GHz. It features Radeon 890M graphics and 36MB of cache.

Meanwhile, the Ryzen AI 9 365 chipset has 10 Zen 5 cores and 20 threads with a max clock speed of 5.0GHz. It gets Radeon 880M graphics and a 34MB cache. Apart from using the Zen 5 architecture for the CPU, AMD also used its XDNA 2 architecture to build the NPU that powers the chips’ AI experiences. Both processors have NPUs capable of 50 tera operations per second (TOPS).

These chipsets will be available in July and can be seen in some of the Copilot+ PCs showcased at Microsoft’s Surface event. The first of these will be the Asus Vivobook S 15 and HP OmniBook.

AMD Ryzen 9000 Series chipsets unveiled for gamers, creators

During the keynote session, AMD also introduced the Ryzen 9000 series chipsets, which include the Ryzen 9 9950X, Ryzen 9 9900X, Ryzen 7 9700X, and Ryzen 5 9600X CPUs. The Ryzen 9 9950X is the most powerful processor in the series, with 16 high-performance Zen 5 cores, 32 threads, and 80MB of combined L2+L3 cache. It has a base clock speed of 4.3GHz and a max clock speed of 5.7GHz.

AMD says the Ryzen 9000 series desktop processors, designed for gamers, can deliver high frame rates, smooth gameplay, and improved performance across a wide range of AAA and esports titles. The company also claimed that the processors will offer faster 3D rendering, animation design, and product visualisation. These will also launch in July. However, AMD has not revealed prices for any of the chipsets.



Amazon Fire TV Devices to Get AI-Powered Search Feature for Personalised Content Recommendations

Amazon Fire TV devices are getting an artificial intelligence (AI)-powered feature upgrade that will make it easier for users to discover new shows and movies. The e-commerce giant is integrating its in-house AI model to power the search feature, which the company says will allow users to search for content based on genre, plot, and more. Calling it a personalised content recommendation feature, Amazon has begun rolling out the AI search feature in the US in English.

Amazon Fire TV’s AI Search Feature: How it Works

In a newsroom post, Amazon cited Nielsen’s 2023 State of Play report to highlight that the “average streaming customer spends more than 10 minutes searching for options each time they access their streaming services”.

Most people have likely faced a situation where they are in the mood to watch something new but struggle to find something that captures their attention. The process can take a while, considering most streaming platforms have a large catalogue of movies and TV shows. The tech giant aims to solve this problem by adding AI capabilities to its Search feature.

The update doesn’t change the appearance of the search feature. However, owners of a Fire TV device with Fire OS 6 or later will now be able to find relevant results for complex search queries such as “Show me psychological thrillers with surprise endings.” This is made possible by one of Amazon’s in-house large language models, though the company did not specify which model powers the feature.
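Amazon has not explained how its model maps such conversational queries to titles. As a purely illustrative sketch of the general approach, semantic search systems typically embed the query and catalogue descriptions as vectors and rank titles by similarity; the embed() stub and the tiny catalogue below are hypothetical placeholders, not Amazon's implementation:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real system would use an LLM or embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical catalogue entries with short plot descriptions
catalogue = {
    "Title A": "a psychological thriller whose twist ending upends everything",
    "Title B": "a lighthearted family comedy about a cross-country road trip",
}

query_vec = embed("psychological thrillers with surprise endings")
ranked = sorted(catalogue, key=lambda t: cosine(query_vec, embed(catalogue[t])), reverse=True)
print(ranked)  # titles ordered by semantic similarity to the query
```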

Where to Use Amazon Fire TV’s New AI Search Feature

With this new capability, users will be able to search for movies and shows based on topics, genres, plots, characters, actors, and even quotes. The feature will show results from Prime Video as well as the user’s other subscription libraries (such as Netflix, Disney+ Hotstar, and others), so they only see content that is already available to them at no extra cost. The AI search feature also supports Alexa, so users can ask for content recommendations by voice instead of typing.

The feature is rolling out to users in the US in English on select Fire TV devices running Fire OS 6 or later, and will reach all eligible devices in the coming weeks. Amazon did not share a timeline for a global release or for support in additional languages.



Anthropic Rolls Out ‘Tool Use’ Designed to Provide More Accurate Responses For Claude-3 AI Models

Anthropic is rolling out a new feature for Claude-3, its family of artificial intelligence (AI) models. Dubbed ‘Tool Use’ (or Function Calling), this feature enables Claude to interact with external tools and Application Programming Interfaces (APIs) to perform a wide variety of tasks. This way, the AI chatbot can handle tasks that are more specific to the user, such as finding the best meeting time based on attendee availability or analysing large financial datasets to offer forecasts and actionable insights.

The new Tool Use feature is a form of AI agent, similar to OpenAI’s GPTs, Google’s recently announced Gems, and Microsoft’s custom Copilots (built via Copilot Studio). Essentially, these are mini chatbots created by connecting an external data source, making them specialists in one particular task, unlike generalist chatbots that can do a little bit of everything but with limited accuracy.

In Anthropic’s case, however, these AI agents work slightly differently from their competitors’. Instead of being set up through natural language prompts, tools must be defined programmatically: developers describe each tool and its inputs through the API so that Claude knows when and how to call it. While this might not be accessible to everyone, those with sufficient coding knowledge can create powerful function-calling tools for various purposes.
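Anthropic documents tools as JSON schemas passed to its Messages API. Below is a minimal, hypothetical sketch using the Anthropic Python SDK; the get_invoice_total tool, its schema, and the query are invented for illustration, and the exact model name available may differ by account:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical tool definition: the name, description and input schema are illustrative only
tools = [{
    "name": "get_invoice_total",
    "description": "Look up the total amount for an invoice by its ID.",
    "input_schema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string", "description": "The invoice identifier."},
        },
        "required": ["invoice_id"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "How much was invoice INV-1042?"}],
)

# If Claude decides the tool is needed, the response contains a tool_use block with the
# arguments it wants to pass; the caller runs the tool and returns the result in a
# follow-up message so Claude can compose its final answer.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```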

Claude itself does not have access to the internet and is trained on offline data, so users can also use Tool Use to feed it information about, say, a recent sporting event or a workplace conference and have it analyse that data. The feature is now generally available across all Claude-3 models on the Anthropic Messages API, Amazon Bedrock, and Google Cloud’s Vertex AI.

In a blog post, the AI firm highlighted several business-related use cases for the tool. These range from finding particular details in a large database of invoices, to reducing data entry workloads, to responding to technical customer queries instantly by accessing product details. The tools can also serve personal use cases.

For example, since Tool Use also accepts images as input, users can share an album of pictures of themselves in different outfits along with details of outfits they are considering buying, and ask the AI whether those would suit them. They can also pose more complex requests, such as asking for a five-day office outfit plan. Interestingly, the company claims Claude-3 can handle hundreds of simple tools, and a smaller number of complex tools, simultaneously.
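As a companion sketch, the Messages API also accepts images as base64-encoded content blocks alongside text. The file name, prompt, and model below are hypothetical; this is only one way such an outfit query might be framed:

```python
import base64
import anthropic

client = anthropic.Anthropic()

# Hypothetical image file; a real request could attach several such image blocks
with open("outfit.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data}},
            {"type": "text",
             "text": "Based on this photo, would a navy blazer suit me for office wear?"},
        ],
    }],
)

print(response.content[0].text)  # Claude's text reply to the image-plus-text prompt
```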

With this release, most of the major AI firms now offer AI agents alongside their chatbots. If you wish to try these agents, the GPT Store is a good place to start, as OpenAI has made it available globally, free of cost.



Apple Could Introduce AI Notification Summary, Conversational Voice for Siri at WWDC 2024: Report

Apple is reportedly working on an artificial intelligence (AI)-powered notification summary feature that it could introduce when it unveils iOS 18 at the Worldwide Developers Conference (WWDC) 2024. The Cupertino-based tech giant is likely to unveil several “practical” AI features when it hosts its annual developer-focused event on June 10. Another planned feature will reportedly make Siri more conversational, adding natural and more emotive speech to the virtual assistant. Apart from these, AI functionality could also make its way to Apple’s Safari, Photos, and Notes apps.

Apple to Upgrade Siri With AI Features

AppleInsider reports that the tech giant is planning to upgrade Siri in a major way by incorporating AI capabilities into it. Citing people familiar with the matter, the publication states that Apple has internally codenamed its AI initiative “Greymatter” and is working on introducing a feature called “Greymatter Catch Up”. It is essentially a notification summarisation feature, but the publication claims it will work through Siri.

Siri might also be able to handle complex commands better, the report claimed. It is said to be getting a new smart response framework and an on-device large language model (LLM), which will understand the context of a request and adjust its replies accordingly. The report also expects Siri’s language to become more conversational with the inclusion of the LLM.

Another focus area for Apple is to integrate AI features into its existing apps to make the user experience smoother. For instance, a previous report claimed that the Safari browser might get a web page summarisation feature.

This could be similar to the features offered by Google’s Gemini and Microsoft’s Copilot. Additionally, a Web Eraser feature might also be introduced that can remove any element from a web page, including banner ads, images, and text.

The Photos app could also get an AI tool dubbed ‘Clean Up’, which is said to offer ‘Photoshop-grade’ editing capabilities to users. The feature can reportedly remove unwanted background objects from an image.

An AI-based real-time audio transcription feature could also be added to the Notes app. A report detailing this feature claimed that users will be able to read, edit, copy, and share these transcripts later. To help users catch up on long transcriptions or notes, a text summarisation feature is also reportedly being added to the Notes app.

Reportedly, these in-house AI features are being powered by Apple’s Ajax LLM, which the company is keeping under wraps for now.


