Google’s Gemini Assistant Could Soon Play Music From Third-Party Apps: Report

Google added its native artificial intelligence (AI) chatbot to the Android operating system as an on-device voice assistant earlier this year, giving users the option to replace Google Assistant with Gemini as the default assistant on their smartphones. However, the AI chatbot has certain drawbacks compared to its counterpart: it cannot set alarms, add reminders, or launch apps. A new report now suggests that the tech giant might soon give it the ability to play music with a simple voice command.

The new feature was spotted in the settings of the Gemini app for Android by a PiunikaWeb report (via AssembleDebug). Based on the screenshots shared, Gemini Settings now shows a Music option, second from the bottom, with the description “Select preferred services to play music”. Another screenshot shows that opening this option leads to a page titled “Choose your default media provider”.

Based on the screenshot, this second settings page is currently empty. The report did not mention whether the new setting was spotted in the code of the latest stable version of the app or in a beta build, but it highlighted that the functionality cannot be used yet, as the page does not list any third-party apps that can be connected to the Gemini Assistant. However, the report added that the feature could go live in a future update.

It is unclear exactly how the feature might work, but going by Google Assistant's implementation, it should be able to play music from services such as Spotify or YouTube Music based on voice commands. Gemini could also offer song identification, where a user asks the AI to listen to music playing nearby, or to someone humming, and then name the track and play it through the chosen music streaming app. It is unlikely to support playlist creation, however.

Earlier this year, a report highlighted that Google is working on bringing Gemini to Google Assistant-powered headphones. This would allow these wearables to use Gemini as their voice assistant when connected to smartphones that are running the AI chatbot as the default assistant. Currently, despite Gemini being set up on the smartphone, such devices fall back to Google Assistant when prompted.



ChatGPT App Could Soon Be Set as the Default Assistant on Android Phones: Report

The rise of generative AI applications like OpenAI’s ChatGPT and Microsoft’s Copilot has made standard AI voice assistants like Siri and Google Assistant feel obsolete. Where advanced chatbots can hold human-like conversations, respond to queries on a wide range of topics, and even pull real-time information from the Internet, the AI assistants on phones can handle only limited tasks. The ChatGPT app on both iOS and Android already goes a long way towards substituting the default assistant on the device. Now, OpenAI’s wildly successful chatbot could soon properly replace Google Assistant on Android smartphones.

A report by Android Authority says that code within the latest version of the ChatGPT Android app suggests it could be set as the default assistant on an Android device.

According to the report, ChatGPT version 1.2023.352, which was released last month, includes a new activity named ‘com.openai.voice.assistant.AssistantActivity.’ The activity remains disabled by default, but it can be manually enabled and launched. Once launched, it shows up on the device screen as an overlay with the same animation as the ChatGPT app’s voice chat mode, the report claims. “This overlay appears over other apps and doesn’t take up the entire screen like the in-app voice chat mode. So, presumably, you could talk to ChatGPT from any screen by invoking this assistant,” it adds.

It’s clear, however, that assistant mode is a work in progress. The animation that plays when launching the activity reportedly doesn’t finish and the activity shuts down before you can interact with the chatbot. The report also says that the code required for the ChatGPT app to work as a “default digital assistant app” exists only partially. The ChatGPT app also seems to be missing necessary declarations and metadata tags that would allow it to be set as the default assistant on a device.
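For context, Android hands out the default-assistant slot through its system role mechanism, and an app only shows up as a candidate once it ships the voice-interaction declarations the report says are still missing. The Kotlin sketch below is purely illustrative (it is not code from the ChatGPT app, and the helper name is ours); it shows how an app on Android 10 or later could check whether it holds the assistant role and send the user to the system screen where a default assistant is chosen.

```kotlin
import android.app.role.RoleManager
import android.content.Context
import android.content.Intent
import android.os.Build
import android.provider.Settings

// Illustrative only: check whether this app is the device's default assistant
// (Android 10+ role API) and, if not, open the system screen where the user
// picks a default digital assistant app.
fun promptForAssistantRole(context: Context) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) return

    val roleManager = context.getSystemService(RoleManager::class.java) ?: return
    val holdsRole = roleManager.isRoleAvailable(RoleManager.ROLE_ASSISTANT) &&
        roleManager.isRoleHeld(RoleManager.ROLE_ASSISTANT)

    if (!holdsRole) {
        // The user, not the app, assigns the assistant role from system settings.
        context.startActivity(
            Intent(Settings.ACTION_VOICE_INPUT_SETTINGS)
                .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        )
    }
}
```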

The AI assistant wars on mobile phones are about to kick off, with Google Assistant and Siri scrambling to catch up to modern chatbots. The ChatGPT app rolled out its voice chat feature for all free users on Android and iOS in November, effectively allowing the app to act as a voice assistant. Bear in mind, however, that free ChatGPT users cannot access real-time information from the Web on the app, so you can’t ask the chatbot about the latest sports scores or the weather forecast in your city, for example. You can, however, do that on the GPT-4 powered Bing app or the new standalone Copilot app from Microsoft, which launched on both Android and iOS last week.

While Android users don’t yet have a way to bring up the ChatGPT app easily with a gesture, the way they would summon Google Assistant, iPhone 15 Pro users can simply bind the app to the dedicated Action Button and start conversing with a single press. Google, meanwhile, is hard at work bringing Bard, its own generative AI chatbot, to Google Assistant. The company also recently announced Gemini, its most powerful AI model to date, which will compete with OpenAI’s GPT-4 model.

Apple, on the other hand, seems to be the one lagging behind in the AI assistant race. The iPhone maker is reportedly working on an AI-infused iOS 18 that will likely power its next lineup of smartphones. The default voice assistant on the upcoming iPhone 16 is said to get a major AI update, with the Siri team reportedly rejigged in Q3 2023 to work on large language models (LLMs) and artificial intelligence-generated content (AIGC).



Google Assistant’s Quick Phrases Comes to Pixel Buds Pro: Here’s How to Use It

After making it available on other Pixel devices such as smartphones, tablets and Nest devices, Google has finally brought the Google Assistant Quick Phrases feature to its Pixel Buds Pro. Launched more than a year ago, the Pixel Buds Pro were already quite smart thanks to several built-in Google features such as Translate. Quick Phrases, another Google Assistant feature, now makes it easier to use your phone without having to say “Hey Google” before your voice command.

What are Quick Phrases?

Quick Phrases are an easier way to use voice commands: users can simply say the command, instead of using a wake-up phrase (or hotword) like “Hey Google” before speaking the command or task for Google Assistant. The feature has been available on Google’s Pixel phones since the Pixel 6 series, and it has only officially arrived on the Pixel Buds Pro today.

While the feature sounds exciting, it is mainly used to complete a task rather than start one. On Pixel devices, users can simply say “accept” or “decline” to deal with an incoming call. Alarms can be snoozed by just saying the word “snooze”, and a timer can be stopped by simply saying “stop”.

What’s new?

This may sound very convenient, and to an extent it is, but Quick Phrases remain far more limited in number and versatility than regular “Hey Google” voice commands. For now, Pixel devices only support a handful of commands, which include answering, declining and silencing calls, and snoozing and stopping alarms and timers. The new Pixel Buds Pro update lets you use the first three quick phrases to control calls without summoning Google Assistant first.

How do I enable it?

Enabling Quick Phrases on your Pixel Buds Pro is quite easy. Open the Settings app and search for “Quick Phrases”. Once in that section, simply toggle on the setting for each quick phrase scenario you want to use. You can also check which devices support which quick phrase scenario in the same section.

There are a couple of downsides to activating it. There is a good chance the phone may accept an unknown incoming call if you are mid-conversation with someone and happen to use one of the quick phrases in a sentence. Someone else's voice nearby could trigger the same behaviour, and Google Assistant may also act on a word that merely sounds similar to one of the three quick phrases. In addition, users will need to select only one language in Google Assistant settings, and the supported languages currently include only English, French, German, Italian, Japanese, and Spanish.



Pixel 8, Pixel 8 Pro Could Offer AI-Powered Camera and Video Features, Pixel Superfan Surveys Suggest

Pixel 8 and Pixel 8 Pro could be equipped with AI-powered camera and video features that could improve the quality of group photos, according to details revealed in Pixel Superfans surveys. The handsets will arrive later this year as the successors to the Pixel 7 series of smartphones that were launched in 2022. The search giant is also tipped to introduce a feature that will allow Pixel owners to remove background noise from videos using artificial intelligence (AI), while enhancing other sounds.

According to details leaked by Mishaal Rahman on X (formerly known as Twitter), Pixel Superfans have begun to receive surveys in connection with the company’s sports partnerships titled “Superfans: Future of Pixel Sports Survey”. These surveys might include hints of features that the company is developing in time for the launch of the Pixel 8 series later this year. 

One of the surveys contains hints that Google is working on a feature that would allow users to remove noise from their videos, according to Rahman. It mentions the ability to “eliminate” the shouting from a spectator sitting next to the user who captured a video, with an “AI video noise removal” feature on the smartphone. Readers might recall that a recently leaked promo video hinted at the arrival of an “Audio Magic Eraser” feature. 

The same survey reportedly suggests the company could also be working on a feature that would “enhance” the reaction sounds from friends and family during a sports match while simultaneously eliminating background noise from a stadium using the same noise removal tool that is backed by AI.

Another hint from the survey points to the development of a feature that will improve group photos captured on Pixel phones. The feature described in the survey mentions the use of the “phone’s AI” to make a “perfect group photo” even if one of the subjects was distracted. The survey suggests that the phone will do this by “merging everyone’s best shot” using AI on the smartphone.

According to an Android Central report that cites Rahman’s Patreon post, the upcoming Pixel smartphones could also boast a feature that will allow Pixel 8 owners to quickly reply to messages using their voice. Google Assistant already lets users send messages using voice commands, but the upcoming smartphones could take that a step further by letting users dictate replies to incoming message notifications.

Rahman found references to a voice-based response feature while sifting through Android code that suggests users would be able to say “Hey Google, reply” and then dictate a response to the Google Assistant. The Android expert has a strong track record when it comes to unearthing new features on Android, and if the information shared is accurate, it could ease the process of responding to notifications — without touching your phone.



‘It Could Evolve Into Jarvis’: Race Towards ‘Autonomous’ AI Agents and Copilots Grips Silicon Valley

Around a decade after virtual assistants like Siri and Alexa burst onto the scene, a new wave of AI helpers with greater autonomy is raising the stakes, powered by the latest version of the technology behind ChatGPT and its rivals.

Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on the advances in AI. The new assistants – often called “agents” or “copilots” – promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.

“High level, we want this to become something like your personal AI friend,” said developer Div Garg, whose company MultiOn is beta-testing an AI agent.

“It could evolve into Jarvis, where we want this to be connected to a lot of your services,” he added, referring to Tony Stark’s indispensable AI in the Iron Man films. “If you want to do something, you go talk to your AI and it does your things.”

The industry is still far from emulating science fiction’s dazzling digital assistants; Garg’s agent browses the web to order a burger on DoorDash, for example, while others can create investment strategies, email people selling refrigerators on Craigslist or summarize work meetings for those who join late.

“Lots of what’s easy for people is still incredibly hard for computers,” said Kanjun Qiu, CEO of Generally Intelligent, an OpenAI competitor creating AI for agents.

“Say your boss needs you to schedule a meeting with a group of important clients. That involves reasoning skills that are complex for AI – it needs to get everyone’s preferences, resolve conflicts, all while maintaining the careful touch needed when working with clients.”

Early efforts are only a taste of the sophistication that could come in future years from increasingly advanced and autonomous agents as the industry pushes towards an artificial general intelligence (AGI) that can equal or surpass humans in myriad cognitive tasks, according to Reuters interviews with about two dozen entrepreneurs, investors and AI experts.

The new technology has triggered a rush towards assistants powered by so-called foundation models including GPT-4, sweeping up individual developers, big-hitters like Microsoft and Google parent Alphabet plus a host of startups.

Inflection AI, to name one startup, raised $1.3 billion (roughly Rs. 10,663 crore) in late June. It is developing a personal assistant it says could act as a mentor or handle tasks such as securing flight credit and a hotel after a travel delay, according to a podcast by co-founders Reid Hoffman and Mustafa Suleyman.

Adept, an AI startup that’s raised $415 million (roughly Rs. 3,404 crore), touts its business benefits; in a demo posted online, it shows how you can prompt its technology with a sentence, and then watch it navigate a company’s Salesforce customer-relationship database on its own, completing a task it says would take a human 10 or more clicks.

Alphabet declined to comment on agent-related work, while Microsoft said its vision is to keep humans in control of AI copilots, rather than autopilots.

Step 1: Destroy humanity

Qiu and four other agent developers said they expected the first systems that can reliably perform multi-step tasks with some autonomy to come to market within a year, focused on narrow areas such as coding and marketing tasks.

“The real challenge is building systems with robust reasoning,” said Qiu.

The race towards increasingly autonomous AI agents has been supercharged by the March release of GPT-4 by developer OpenAI, a powerful upgrade of the model behind ChatGPT – the chatbot that became a sensation when released last November.

GPT-4 facilitates the type of strategic and adaptable thinking required to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who has a focus on AI agents.

Early demonstrations of agents capable of comparatively complex reasoning came from individual developers who created the BabyAGI and AutoGPT open-source projects in March, which can prioritize and execute tasks such as sales prospecting and ordering pizza based on a pre-defined objective and the results of previous actions.
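BabyAGI and AutoGPT themselves are open-source Python projects; the snippet below is only a rough sketch of the loop described above (take an objective, execute the next task, then let the model propose and prioritise new tasks from the result). It is written in Kotlin purely for illustration, and callModel is a hypothetical stand-in for whichever LLM API an agent would actually call.

```kotlin
// Hypothetical stand-in for an LLM call; the real projects talk to the OpenAI API.
fun callModel(prompt: String): String = TODO("call an LLM here")

fun runAgent(objective: String, maxSteps: Int = 10) {
    // Seed the task list, then execute and re-plan until it is empty or we hit the cap.
    val tasks = ArrayDeque(listOf("Work out the first concrete steps toward the objective"))
    val results = mutableListOf<String>()

    repeat(maxSteps) {
        val task = tasks.removeFirstOrNull() ?: return // nothing left to do

        // Execute the current task in light of the objective and earlier results.
        val result = callModel("Objective: $objective\nTask: $task\nEarlier results: $results")
        results += result

        // Let the model propose and prioritise follow-up tasks based on that result.
        callModel("Objective: $objective\nLatest result: $result\nList any new tasks, one per line.")
            .lines()
            .map { it.trim() }
            .filter { it.isNotEmpty() }
            .forEach { tasks.addLast(it) }
    }
}
```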

Today’s early crop of agents are merely proofs of concept, according to eight developers interviewed, and often freeze or suggest something that makes no sense. If given full access to a computer or payment information, an agent could accidentally wipe a computer’s drive or buy the wrong thing, they say.

“There’s so many ways it can go wrong,” said Aravind Srinivas, CEO of ChatGPT competitor Perplexity AI, who has opted instead to offer a human-supervised copilot product. “You have to treat AI like a baby and constantly supervise it like a mom.”

Many computer scientists focused on AI ethics have pointed out near-term harm that could come from the perpetuation of human biases and the potential for misinformation. And while some see a future Jarvis, others fear the murderous HAL 9000 from 2001: A Space Odyssey.

Computer scientist Yoshua Bengio, known as a “godfather of AI” for his work on neural networks and deep learning, urges caution. He fears future advanced iterations of the technology could create and act on their own, unexpected, goals.

“Without a human in the loop that checks every action to see if it’s not dangerous, we might end up with actions that are criminal or could harm people,” said Bengio, calling for more regulation. “In years from now these systems could be smarter than us, but it doesn’t mean they have the same moral compass.”

In one experiment posted online, an anonymous creator instructed an agent called ChaosGPT to be a “destructive, power-hungry, manipulative AI.” The agent developed a 5-step plan, with Step 1: “Destroy humanity” and Step 5: “Attain immortality”.

It didn’t get too far, though, seeming to disappear down a rabbit hole of researching and storing information about history’s deadliest weapons and planning Twitter posts.

The US Federal Trade Commission, which is currently investigating OpenAI over concerns of consumer harm, did not address autonomous agents directly, but referred Reuters to previously published blogs on deepfakes and marketing claims about AI. OpenAI’s CEO has said the startup follows the law and will work with the FTC.

‘Dumb as a rock’

Existential fears aside, the commercial potential could be large. Foundation models are trained on vast amounts of data such as text from the internet using artificial neural networks that are inspired by the architecture of biological brains.

OpenAI itself is very interested in AI agent technology, according to four people briefed on its plans. Garg, one of the people it briefed, said OpenAI is wary of releasing its own open-ended agent into the market before fully understanding the issues. The company told Reuters it conducts rigorous testing and builds broad safety protocols before releasing new systems.

Microsoft, OpenAI’s biggest backer, is among the big guns taking aim at the AI agent field with its “copilot for work” that can draft solid emails, reports and presentations.

CEO Satya Nadella sees foundation-model technology as a leap from digital assistants such as Microsoft’s own Cortana, Amazon’s Alexa, Apple’s Siri and the Google Assistant – which, in his view, have all fallen short of initial expectations.

“They were all dumb as a rock. Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work,” he told the Financial Times in February.

An Amazon spokesperson said that Alexa already uses advanced AI technology, adding that its team is working on new models that will make the assistant more capable and useful. Apple declined to comment.

Google said it’s constantly improving its assistant as well and that its Duplex technology can phone restaurants to book tables and verify hours.

AI expert Edward Grefenstette also joined the company’s research group Google DeepMind last month to “develop general agents that can adapt to open-ended environments”.

Still, the first consumer iterations of quasi-autonomous agents may come from more nimble startups, according to some of the people interviewed.

Investors are pouncing

Jason Franklin of WVV Capital said he had to fight to invest in an AI-agents company from two former Google Brain engineers. In May, Google Ventures led a $2 million (roughly Rs. 16.4 crore) seed round in Cognosys, developing AI agents for work productivity, while Hesam Motlagh, who founded the agent startup Arkifi in January, said he closed a “sizeable” first financing round in June.

There are at least 100 serious projects working to commercialize agents, said Matt Schlicht, who writes a newsletter on AI.

“Entrepreneurs and investors are extremely excited about autonomous agents,” he said. “They’re way more excited about that than they are simply about a chatbot.”

© Thomson Reuters 2023



Google Assistant Now Lets Users Search, Play Podcasts by Guest, Episode Name

Google Assistant is getting new skills to make the virtual assistant more useful when you want to listen to your favourite podcast. The search giant has added new voice commands to Assistant that let users search for and play a specific episode of a podcast. Prior to this, asking Assistant to play a podcast would simply start the latest episode. The new commands can be used with three filters to improve your podcast listening experience.

Users will now be able to access specific episodes of their favourite podcasts by simply asking Google Assistant for the episode they want. The latest post on Google’s official blog outlines the new Assistant commands. Until now, users were only able to play the latest episode of a podcast by saying “Hey Google, play [podcast name]”. The new feature, however, lets them jump right into a specific episode using these three filters:

1. By guest: For example, “Play the Archetypes with Meghan”

2. By topic: “Play the crime thriller”

3. By episode: “Play the Glory Edim’s Well-Read Black Girl episode 4”

Since this is a Google Assistant voice command, it should also work across all Assistant-supported devices, such as speakers. The blog post also talks about other Assistant tips that could be useful for holiday travellers and hosts. For example, you can now get quick updates on your orders by simply saying “Hey Google, when is my Amazon order arriving?”.

Recently, Google also rolled out new features to Search and Maps. Google is bringing a Live View augmented reality search to certain cities across the world. This feature lets you view landmarks and locations by pointing your smartphone camera at the world around you; as per the company, this will help users find places that are not immediately visible in their view. You will also be able to see information such as timings and prices using the AR overlays.



Fossil Gen 6 Hybrid Smartwatch to Launch on June 27 With Up to 2 Weeks of Battery Life

The Fossil Gen 6 smartwatch range was launched in August last year with a Qualcomm Snapdragon 4100+ SoC and an SpO2 sensor. Now, Fossil is preparing to launch a hybrid variant of the Gen 6 smartwatch on June 27 with a claimed battery life of up to two weeks, depending on style and usage. The Fossil Gen 6 Hybrid will also come with a preview feature for calls and texts, health tracking sensors, and more. The new Fossil wearable will fuse the classic style of an analogue watch with the features of a smartwatch.

The watchmaking company has announced, through a microsite on its official website, that it will launch the new Fossil Gen 6 Hybrid smartwatch on June 27, combining the classic styling of an analogue watch with the smart features of a smartwatch.

The company has also revealed a few specifications of the smartwatch. The Fossil Gen 6 Hybrid will offer a battery life of up to two weeks, depending on style and usage, along with previews for calls and texts. Wearers will be able to access Alexa features through the smartwatch when it is within Bluetooth range of their smartphone. It will come with an SpO2 sensor, a heart rate sensor, and more. The company claims that the display of the Fossil Gen 6 Hybrid will be easy to read indoors and outdoors, day or night.

Unfortunately, as of now, this is all the information that has been provided by the company about the Fossil Gen 6 Hybrid smartwatch.

To recall, Fossil Gen 6 smartwatch was launched in August 2021. The smartwatch comes with a circular dial in 42mm and 44mm sizes. It features 1GB RAM and 8GB inbuilt storage. The Fossil wearable comes with a 1.28-inch AMOLED display with 416×416 pixels resolution. It is powered by a Qualcomm Snapdragon 4100+ SoC that, the company claims, offers 30 percent increased performance over the previous generation smartwatch. The smartwatch comes with Bluetooth v5 connectivity, a speaker, and a microphone for making and receiving calls.

The Fossil Gen 6 smartwatch also comes with a magnetic charging dock that can charge it to up to 80 percent in a little over 30 minutes. The claimed battery life of the smartwatch is over 24 hours in extended mode. It comes with an SpO2 sensor, heart rate monitoring, and built-in wellness applications. It runs Wear OS 2 and gets Google Assistant support. The smartwatch also carries a 3ATM water resistance rating.



Android Auto for Mobile Screens App Being Pulled Down; Replaced by Google Assistant Driving Mode: Report

Google has started removing the Android Auto for phone screens experience from smartphones after about seven years, according to a report. Back in August last year, Google had confirmed that it would be shutting down the standalone Android Auto for Phone Screens application from Android 12 onwards. Earlier this month, a popup message reportedly appeared for some users saying that Android Auto for phone screens would stop working soon. At the time, Google had not specified a shutdown date, but reports of the app disappearing have now started coming in.

According to a report from 9To5Google that cites posts on Reddit, Google has started shutting down Android Auto for phone screens. The application was launched back in 2015 and is now meeting its end about seven years later. The report added that Google Assistant Driving Mode will be the replacement for Android Auto for phone screens, and the move is reportedly seen as an attempt to push users towards Driving Mode.

In August 2021, Google had announced in a statement that it would move users who want a phone-based experience to Google Assistant Driving Mode starting with Android 12. Driving Mode is what Google calls its built-in mobile driving experience.

A recent report said that the Android Auto for Phone Screens application, which was used to bring Android Auto features to older cars, was being shut down. According to the report, a popup message had appeared in the application saying that the feature would soon stop working. At that time, Google had not provided any details or an exact shutdown date.

Google had recently announced that it will release a bunch of new features for Android Auto later in 2022. These could include a new user interface and support for suggested replies based on Google Assistant's contextual suggestions.



Android Auto New Interface, Suggested Replies Soon; Cars With Google Built-In to Get YouTube, Other Video Streaming Apps

Google has announced that Android Auto, the platform that lets drivers access music, media and navigation apps on car infotainment screens, as well as cars that have Google built-in, will be getting a string of new features later this year. New Android Auto features include a revamped user interface and support for suggested replies based on Google Assistant’s contextual suggestions. Those with cars that have Google built-in will be able to watch videos through the YouTube app in the coming months.

As per the announcement made by Google at I/O 2022, Android Auto will get a new user interface that will essentially put all the important functionalities that drivers prioritise in their cars — navigation, media and communication — on one single screen. Google says that this change will help in making the driving experience safer. The new look, which is expected to roll out later this summer, will show maps, media player and communications apps on the same page.

The apps will be placed adjacent to each other in a split screen mode. Google says that the new Android Auto interface design is able to adapt to different screen sizes — widescreen, portrait and more. This will mitigate the need to return to the home screen and/or scroll through a list of apps in order to open a desired functionality.

Currently, it is difficult for someone using Maps for navigation on Android Auto to return to the home screen and open another app, say WhatsApp, to check messages. Doing so pushes the Maps navigation interface into the background and increases the chances of missing an important turn. With navigation and media always on screen, the chances of missing a turn while shuffling through other apps should be reduced.

As for the second feature, Google seems to have found a way to further integrate Google Assistant into Android Auto. Using the virtual assistant’s contextual suggestions, drivers can choose suggested replies to messages, share arrival times with a friend, or even play recommended music more efficiently in the car. This feature will be available alongside the existing voice reply functionality.

For cars that come with Google built-in, the company is preparing to roll out two new functionalities in the coming months. Building on its previous announcement of bringing YouTube to cars with Google built-in, Google said more video streaming apps, including Tubi and Epix Now, will join the queue. This will help drivers to watch videos directly from their car display. Although the details are not clear, it seems that drivers will only be able to watch videos when their cars are parked, and not while they are driving.

The second feature for cars that have Google built-in gives drivers the ability to browse the web directly from the car display and cast their own content from their smartphones to the car screen.



Google Maps ‘Immersive View’, Android 13 Beta 2, and Everything Else Announced at Google I/O 2022

Google Maps is getting an all-new experience called ‘immersive view’, to deliver an enhanced digital model of buildings and streets around the world. The new mode uses advances in computer vision and artificial intelligence (AI) to deliver a rich viewing experience to users, Google said while announcing the new feature at its I/O 2022 consumer keynote on Wednesday. Google also announced the second beta of Android 13, its upcoming operating system. The company also announced updates coming to Google Workspace including automated transcriptions, portrait light, and portrait restore on Google Meet. Additionally, there were announcements related to Google Assistant, YouTube, skin tone representation, and others that you will find in detail here.

Google Maps updates

With ‘immersive view’, Google Maps provides a rich view of a neighborhood, landmark, restaurant, or popular venue you search for. It fuses together billions of Street View and aerial images, alongside advances in computer vision and AI, to build a rich digital model of the area, the company said.

“Whether you’re traveling somewhere new or scoping out hidden local gems, immersive view will help you make the most informed decisions before you go,” Google said.

In addition to the immersive experience, the new view includes a time slider that users can use to check what the area looks like at different times of day and in various weather conditions. Users can also glide down to street level to look at nearby restaurants and see information such as live busyness and nearby traffic.

‘Immersive view’ uses Google Cloud to deliver the digital view to users, so it is device agnostic and can work on any phone or device, Google said. The experience has initially started rolling out in Los Angeles, London, New York, San Francisco, and Tokyo.

Google Maps is also expanding eco-friendly routing to more places including Europe. It was launched in the US and Canada in the recent past and has been used by people to travel 86 billion miles, saving over an estimated half a million metric tons of carbon emissions, Google claimed.

Further, Google is making its Live View available to developers through the new ARCore Geospatial API. It brings augmented reality (AR) to display arrows and directions on top of real-world viewing to help users navigate indoor areas such as airports, malls, and train stations.

Android 13 beta 2 release

Among other announcements, Google at the I/O 2022 consumer keynote announced the release of Android 13 beta 2. The update carries a list of improvements and enhancements over the first beta release that debuted for select Pixel devices last month. Google also announced features including a unified Security & Privacy settings page, new media controls featuring album artwork, and a new photo picker that lets users select the exact photos and videos they want to grant an app access to. Android 13 will also include optimisations for tablets, including better multitasking capabilities and an updated taskbar with the ability to switch from a single full-screen view into split screen.
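The photo picker mentioned above is the system-provided one; as a rough illustration (not Google's announcement material), an app built against the AndroidX activity library can launch it along these lines, with loadIntoUi standing in as a hypothetical helper for whatever the app does with the returned URI.

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class GalleryActivity : ComponentActivity() {

    // Register the system photo picker; the callback receives a content URI for
    // the single item the user chose, with no broad storage permission needed.
    private val pickMedia = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri: Uri? ->
        if (uri != null) loadIntoUi(uri) // only the selected item is shared with the app
    }

    private fun loadIntoUi(uri: Uri) {
        // Hypothetical helper: display or process the picked photo or video here.
    }

    fun onPickPhotoClicked() {
        // Restrict the picker to images; ImageAndVideo and VideoOnly are also available.
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```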


Google Meet updates

Google Meet is getting portrait restore, which uses Google AI to help improve video quality even if a user is sitting in a dimly lit room, using an old webcam, or has poor Wi-Fi connectivity. Google said that the feature can enhance video automatically.

In addition to the portrait restore feature, Google Meet is getting portrait light that is claimed to use machine learning to simulate studio-quality lighting in video feed. Users can also adjust the lighting position and brightness.

Google Meet is also getting de-reverberation, which uses machine learning to filter out echoes in spaces with hard surfaces. The company claims that the feature helps you sound “like you’re in a mic-ed up conference room…even if you’re in your basement.”

Additionally, Google Meet is getting live sharing to sync content being shared in a virtual call and allow participants to control the media. Developers can also use live sharing APIs to integrate Meet into their apps.

Google is also bringing automated transcription later this year and meeting summarisation next year to enhance conversations on Google Meet.

Google Workspace updates

Google Workspace is getting Google Meet’s automated transcriptions to help users transcribe conversations directly in their documents. Google is also extending auto-summaries to Spaces to provide a digest of long conversations. Auto-summaries were introduced on Google Docs earlier this year.


Further, Google is adding security protections that were previously part of Gmail to Google Slides, Docs, and Sheets. The company said these protections will use automatic alerts to help keep users from opening documents containing phishing links and malware.

Google Assistant updates

At the I/O 2022 consumer keynote, Google announced Look and Talk, which is rolling out in the US on the Nest Hub Max to help people access Google Assistant without saying the “OK Google” or “Hey Google” hotword. Users will just need to look at the screen and then ask for what they need. The feature uses Face Match and Voice Match, based on machine learning and AI algorithms, to recognise users and quickly enable voice interactions. Google said that the feature will be available as an opt-in offering.

Once enabled, you can use Look and Talk to interact with Google Assistant by just looking at the screen of your Nest Hub Max.

“Video from these interactions is processed entirely on-device, so it isn’t shared with Google or anyone else,” the company said.

 

Google also noted that the feature works across a range of skin tones and for people with diverse backgrounds.

In addition to the Look and Talk feature, Google is expanding quick phrases to Nest Hub Max to let users skip saying “Hey Google” for their most common daily tasks. This means that you will be able to turn on the lights in your room by saying, “Turn on the living room lights,” without saying the hotword first.

Google Assistant is also getting new speech and language models that can help understand the nuances of human speech — like when someone is pausing, but not finished speaking, the company said.

YouTube updates

Google at the I/O keynote announced that it is bringing auto-translated captions on YouTube to mobile, letting users view video captions in as many as 16 languages. Google is also bringing auto-translated captions to all Ukrainian YouTube content next month.

Skin tone updates

Google is releasing a new skin tone scale called the Monk Skin Tone (MST) Scale, based on research conducted by Harvard professor and sociologist Dr. Ellis Monk. This will help its products become more inclusive of the spectrum of skin tones, the company said.

The new 10-shade skin tone scale will be integrated within various Google products over the coming months. Google is also releasing the scale openly to allow others in the tech industry to incorporate a vast range of skin tones into their experiences.

 

One of the key Google products that will start using the MST Scale is Google Search, which will show an option to further refine results by skin tone. Creators, brands and publishers will also be able to use an inclusive schema to label their content with attributes like skin tone, hair colour and hair texture.

Google Photos will also use the MST Scale to allow users to enhance their photos using a new set of Real Tone filters. These filters will be rolling out on Google Photos across Android, iOS, and the Web in the coming weeks.
