Android 15 Beta 2 With Private Space, Advanced Anti-Theft Protection and More Released

Android 15 Beta 2 was announced at Google I/O 2024 on Wednesday, a day after the company’s annual developer conference kicked off. The latest beta version of its next major operating system (OS) update is now available to download on Google Pixel smartphones and will arrive on select smartphones from other manufacturers over the coming weeks. This year, Google is focussing on adding new security and privacy features to Android 15, and several of them are already available to beta testers in this release.

During a Google I/O developer session, the company showed off the first security and privacy-oriented feature coming to smartphones with Android 15: Private Space. It will allow users to hide certain apps (such as banking, finance, dating, or social media apps) in a secure location on their smartphone. Apps in the Private Space can be updated via a separate Play Store app and will also have access to their own storage area that is not accessible to other apps on the phone.

Google says that the Private Space feature is located in the default app drawer on Android 15. Users can scroll down to the end of the app list to reveal the secured apps. This segregated app list can optionally be protected by a separate passcode or a biometric lock, and users can also choose to hide its existence entirely.

With Android 15, Google is also upgrading its anti-theft protections, making it more difficult for thieves to use a stolen phone that has been reset without the previously used Google account credentials. Android 15 will also ask users for their biometrics when they attempt to increase the screen timeout, access passkeys, or disable Find My Device.

Another powerful anti-theft feature coming to Android 15 is “Theft Detection Lock”, which locks the smartphone when an “abrupt motion that could indicate theft” is detected. Meanwhile, “Offline Device Lock” will automatically lock the phone if someone turns off its access to the internet — something a thief might do after stealing a smartphone.

Remote Lock will let users send a command from a different number (when their handset is stolen) to lock and/or wipe their phone remotely without logging in to Find My Device. Theft Detection Lock, Remote Lock, and Offline Device Lock are aimed at disincentivising smartphone theft and will arrive later this year on handsets running Android 10 and newer versions, according to Google.

Other features coming to Android 15 include the ability to automatically re-enable Bluetooth the following day after it has been switched off; we can assume this feature is designed to help Google ensure its Find My Device network continues operating as intended. Google has also added improvements to the widget picker in Android 15, as well as a redesigned volume control panel that offers better reachability during one-handed use.

Android 15 Beta 2 is available for download on the Pixel 8, Pixel 8 Pro, Pixel 7a, Pixel 7, Pixel 7 Pro, Pixel 6a, Pixel 6 Pro, Pixel 6, Pixel Fold, and Pixel Tablet. Select smartphones from a handful of manufacturers will also be eligible to install Android 15 beta versions, and you can access the complete list of compatible handsets here.



Google DeepMind to Use SynthID to Watermark Gemini and Veo’s AI-Generated Content

Google made a large number of artificial intelligence (AI)-based announcements late Tuesday during its I/O 2024 keynote session. These include new AI models, upgrades to existing foundation models, integration of AI features into Google’s products, and more. The tech giant also focused on AI safety and expanded the use of its native watermarking technology for AI-generated content, dubbed SynthID. The toolkit will now embed watermarks in text generated by the Gemini app and web client, as well as in videos generated by Veo.

SynthID was first unveiled by Google DeepMind in August 2023 as a beta project aimed at correctly labelling AI-generated content. The need for such a solution arose from the growing number of instances where synthetically created media was passed off as real and used to spread misinformation or commit cybercrimes such as phishing. The tech giant first deployed the technology in November 2023, when it was used to watermark AI-generated audio created through its Lyria model. The toolkit added the watermark to the audio as a waveform, making it imperceptible to the human ear yet detectable.

Now, Google is expanding the use of SynthID to include text and video generation. It will watermark text generated using the Gemini app and the website. For this, the toolkit targets the generation process itself. Every text-based AI model is trained on tokens, which can be words, syllables, or phrases. During training, the model also learns the order in which these tokens should follow one another to produce the most coherent response.

SynthID introduces “additional information in the token distribution at the point of generation by modulating the likelihood of tokens being generated.” In effect, it assigns scores to certain words in a block of generated text. To detect whether a piece of text was AI-generated, the toolkit compares the observed scores against the adjusted probability scores to determine whether the source could be an AI model. DeepMind highlighted in a post that this technique works best when an AI generates long, creative text, as that gives the probability model more signal to check how the text was created. For shorter, factual responses, however, the detection may not be as accurate.
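To make the idea more concrete, below is a minimal Python sketch of logit-biasing text watermarking in the spirit of what DeepMind describes. It uses a simple keyed “green list” scheme rather than Google’s actual tournament-sampling method, and the vocabulary, key, and bias strength are assumptions made purely for illustration.

```python
# Minimal sketch of keyed logit-bias watermarking (a simplified "green list" scheme,
# not Google's actual SynthID method). Vocabulary, key, and bias are toy assumptions.
import hashlib
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]  # toy vocabulary
KEY = b"secret-watermark-key"  # shared by the generator and the detector

def green_mask(prev_token: str) -> np.ndarray:
    """Pseudorandomly mark roughly half the vocabulary as 'green', keyed on the previous token."""
    digests = [hashlib.sha256(KEY + prev_token.encode() + tok.encode()).digest() for tok in VOCAB]
    return np.array([d[0] % 2 == 0 for d in digests])

def sample_watermarked(logits: np.ndarray, prev_token: str, bias: float = 2.0) -> str:
    """Modulate the token distribution: nudge probability mass toward 'green' tokens, then sample."""
    biased = logits + bias * green_mask(prev_token)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return VOCAB[np.random.choice(len(VOCAB), p=probs)]

def detection_score(tokens: list[str]) -> float:
    """Fraction of tokens falling in the 'green' set; hovers near 0.5 for unwatermarked text."""
    hits = [green_mask(prev)[VOCAB.index(cur)] for prev, cur in zip(tokens, tokens[1:])]
    return float(sum(hits)) / len(hits)
```

In a sketch like this, watermarked output scores well above 0.5 while human-written text stays close to 0.5, which mirrors why long, creative passages (more tokens, more freedom in word choice) are easier to flag than short, factual ones.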

The company is also expanding SynthID to videos generated by the recently unveiled Veo model. Google said the technology will embed watermarks directly into the pixels of every video frame; the watermark is imperceptible to the human eye but shows up when a detection system is used.
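For the video side, the announcement only says the mark lives in the pixels themselves and is invisible to viewers. As a rough intuition (and nothing like Google’s actual, far more robust watermark), a naive per-frame scheme might look like the following sketch, where a payload bit is hidden in the least significant bit of every pixel; all sizes and values here are arbitrary assumptions.

```python
# Toy least-significant-bit watermark, purely to illustrate "imperceptible but detectable"
# pixel-level marking. SynthID's real video watermark is a much more robust method.
import numpy as np

def embed_frame_mark(frame: np.ndarray, payload_bit: int) -> np.ndarray:
    """Write one payload bit into the least significant bit of every 8-bit channel value."""
    marked = frame.copy()
    marked &= np.uint8(0xFE)         # clear each channel value's LSB
    marked |= np.uint8(payload_bit)  # set it to the payload bit (0 or 1)
    return marked

def detect_frame_mark(frame: np.ndarray) -> int:
    """Recover the payload bit by majority vote over all pixel LSBs."""
    return int(round(float((frame & 1).mean())))

# Usage: mark four dummy 720p frames with alternating bits and read them back.
clip = np.random.randint(0, 256, size=(4, 720, 1280, 3), dtype=np.uint8)
marked = np.stack([embed_frame_mark(f, i % 2) for i, f in enumerate(clip)])
print([detect_frame_mark(f) for f in marked])  # -> [0, 1, 0, 1]
```

Flipping only the lowest bit changes each channel value by at most 1 out of 255, which is why such marks are invisible to the eye yet trivial for a detector that knows where to look; a naive scheme like this, however, is easily destroyed by re-encoding, which is the robustness problem a production watermark has to solve.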

In the coming months, Google plans to open-source SynthID text watermarking through its Responsible Generative AI toolkit. It will also publish a detailed research paper explaining the text watermarking technology.


Google I/O 2024: DeepMind Showcases Real-Time Computer Vision-Based AI Interaction With Project Astra

Google I/O 2024’s keynote session let the company showcase the lineup of artificial intelligence (AI) models and tools it has been working on for a while. Most of the introduced features will make their way to public previews in the coming months. However, the most interesting technology previewed at the event will not be available for a while. Developed by Google DeepMind, the new AI assistant, called Project Astra, showcased real-time, computer vision-based AI interaction.

Project Astra is an AI model that can perform tasks well beyond what existing chatbots can do. Google follows a system where it uses its largest and most powerful AI models to train its production-ready models. Highlighting one such model currently in training, Google DeepMind co-founder and CEO Demis Hassabis showcased Project Astra. Introducing it, he said, “Today, we have some exciting new progress to share about the future of AI assistants that we are calling Project Astra. For a long time, we wanted to build a universal AI agent that can be truly helpful in everyday life.”

Hassabis also listed a set of requirements the company has set for such AI agents. They need to understand and respond to the complex and dynamic real-world environment, and they need to remember what they see to build context and take action. Further, an agent needs to be teachable and personal, so it can learn new skills and hold conversations without delays.

With that description, the DeepMind CEO showed a demo video in which a user holds up a smartphone with its camera app open. The user speaks with an AI and it instantly responds, answering various vision-based queries. The AI was also able to use the visual information for context and answer related questions that required generative capabilities. For instance, the user showed the AI some crayons and asked it to describe them with alliteration. Without any lag, the chatbot replied, “Creative crayons colour cheerfully. They certainly craft colourful creations.”

But that was not all. Later in the video, the user points towards a window, through which some buildings and roads can be seen. When asked about the neighbourhood, the AI promptly gives the correct answer. This shows the capability of the model’s computer vision processing and the massive visual dataset it would have taken to train it. But perhaps the most interesting demonstration came when the AI was asked about the user’s glasses. They had appeared on screen only briefly, a few seconds earlier, and had already left the frame, yet the AI remembered their position and guided the user to them.

Project Astra is not available in either public or private preview. Google is still working on the model, figuring out use cases for the feature and deciding how to make it available to users. The demonstration would have been the most remarkable AI feat so far, but OpenAI’s Spring Update event a day earlier took away some of its thunder. During that event, OpenAI unveiled GPT-4o, which showcased similar capabilities along with emotive voices that made the AI sound more human.


Google I/O 2024: Text-to-Image AI Model Imagen 3 Unveiled, Gets Improved Image Generation Capabilities

Google made several new announcements at its annual developer-focused Google I/O 2024 event. Among the many artificial intelligence (AI)-focused announcements made during the keynote session, one was particularly surprising. The tech giant introduced the next generation of its text-to-image AI model, Imagen 3. The new model arrives just months after its predecessor, Imagen 2, which came out in December 2023 and was upgraded as recently as last month. The company said the new model can generate detailed, photorealistic images while closely following the prompt.

Imagen 3 was introduced by Douglas Eck, Senior Research Director at Google DeepMind. Unveiling it, he said, “Today, I’m so excited to introduce Imagen 3. It is our most capable image generation model yet. It understands prompts written the way people write. The more creative and detailed you are, the better. Plus, this is our best model yet for rendering text which has been a challenge for image generation models.”

The AI model’s ability to understand prompts is said to have been heavily improved, allowing it to follow the prompt closely, capture small details, and generate a faithful image. This appears to be a common thread across the event’s AI announcements, as several of the models are now said to understand prompts better. Google added that Imagen 3 will be available in multiple versions, each optimised for a specific type of task, ranging from generating quick sketches to creating high-resolution images.

To enable Imagen 3 to capture small details and specific instructions such as camera angles or compositions in long, complex prompts, Google trained the AI model on images whose captions contain detailed descriptions, allowing it to pick up on even smaller nuances. It can also generate a variety of textures and can render text within images.

Focusing on safety, every image generated by Imagen 3 will carry a SynthID watermark. The toolkit embeds a digital watermark directly into the pixels of the image, so it remains detectable even after cropping, sharing, or other alterations to the image. Imagen 3 is currently available in private preview inside ImageFX for select creators and is expected to arrive in public preview in the coming months. Google is also working on adding inpainting and outpainting editing options, and the model will soon be made available to the tech giant’s enterprise customers.



Google I/O 2024: Google Unveils AI Video Generator Veo, Takes on OpenAI’s Sora

Google I/O 2024’s keynote session was a 112-minute-long affair during which the company made several major announcements focused on artificial intelligence (AI). The announcements ranged from new AI models to the integration of AI into Google products, but perhaps the most interesting introduction was Veo, an AI-powered video generation model that can produce 1080p-resolution videos. The tech giant said the tool can generate videos that go beyond the one-minute mark. Notably, OpenAI unveiled its own video AI model, dubbed Sora, in February.

During the event, Demis Hassabis, co-founder and CEO of Google DeepMind, unveiled Veo. Announcing the AI model, he said, “Today, I’m excited to announce our newest and most capable generative video model called Veo. Veo creates high-quality 1080p videos from text, image and video prompts. It can capture the details of your instructions in different visual and cinematic styles.”

The tech giant claims that Veo can closely follow prompts, understanding the nuance and tone of a phrase and then generating a video to match it. The model can produce videos in different styles, such as timelapses, close-ups, fast tracking shots, aerial shots, and shots with various lighting and depth-of-field effects. Apart from video generation, it can also edit videos when the user provides an initial video and a prompt to add or remove something. Further, it can generate videos beyond the one-minute mark, either from a single prompt or via multiple sequential prompts.

To solve the problem of consistency in video generation models, Veo uses latent diffusion transformers. This helps reduce instances of characters, objects, or entire scenes flickering, jumping, or morphing unexpectedly between frames. Google highlighted that videos created by Veo will be watermarked using SynthID, the company’s in-house tool for watermarking and identifying AI-generated content. The model will soon be available to select creators via the VideoFX tool in Google Labs.

Veo’s similarities with OpenAI’s Sora

While neither of the AI models is available to the public yet, the two share several similarities. Veo can generate 1080p videos for a duration that can surpass one minute, whereas OpenAI’s Sora can generate videos of up to 60 seconds. Both models can generate videos from text prompts, images, and videos. Both are based on diffusion models and are capable of generating videos across multiple shots, styles, and cinematography techniques. Both Sora and Veo also come with AI-generated content labels: Sora uses the Coalition for Content Provenance and Authenticity (C2PA) standard, while Veo uses Google’s native SynthID.



Google I/O 2024: Google Photos to Get an AI-Powered ‘Ask Photos’ Feature With Intelligent Search Capabilities

Google Photos received a surprise upgrade at the Google I/O 2024 event’s keynote session on Tuesday. The session, led by CEO Sundar Pichai, witnessed several major artificial intelligence (AI) announcements, including new upgrades for Gemini 1.5 Pro, new Google Search features, the introduction of new image and video AI models, and more. Interestingly, the tech giant also unveiled Ask Photos, a new AI-powered intelligent chatbot for Google Photos that makes searching for a particular image in the library easier.

During the event, Pichai highlighted that the company is now building more powerful search experiences within Google products using Gemini’s capabilities. One such example is Google Photos, which was one of the tech giant’s first products to get AI capabilities. Before the new update, AI tools in Photos could only understand basic keywords and certain subjects to help find the photos users were looking for. With the new intelligent search tool Ask Photos, the process should get much easier.

Ask Photos is powered by Gemini and fine-tuned to work as a search engine. It can understand natural-language prompts and can read a large number of photos, understanding their subjects, backgrounds, and even the digital information in their metadata. “With Ask Photos, you can ask for what you’re looking for in a natural way, like: ‘Show me the best photo from each national park I’ve visited.’ Google Photos can show you what you need, saving you from all that scrolling,” the company said in a post.

Further, it can also answer questions based on this information. For example, a user can ask about the theme of an office party, and the AI will check the images and share the information; it can even tell the user the colour of the shirt they wore that day. The tech giant claims the tool can also perform tasks that go beyond searching and answering queries. The AI can create a highlight of a recent trip by suggesting top pictures and writing personalised captions for each of them, in case the user wants to share them on social media.

Google is also focusing on the privacy of users’ data. Since Ask Photos works across a user’s photo gallery, it has access to private and sensitive data, but the tech giant said this data will never be used for ads. The company will also not review conversations and personal data in Ask Photos except to address abuse or harm, and the data will not be used to train any AI product outside of Google Photos, the company said.


Google I/O 2024: Search With AI-Powered Multi-Step Reasoning, Planning and Video Search Features Unveiled

Google I/O 2024 began with multiple major artificial intelligence (AI) announcements. On Tuesday, the tech giant held the day 1 keynote session, where it introduced new AI models, integrated AI with Google products, and teased new capabilities for Pixel smartphones and Android 15. During the event, the company also announced several new features for Google Search. The Search Generative Experience (SGE), previously available to only some users, is now being launched in the US as AI Overviews, and new multimodal capabilities for the search engine were also unveiled.

AI Overviews

Last year, Google unveiled SGE as a generative AI-led search experience where users could get a snapshot of AI-curated information at the top of the results page. This was an experimental feature available only to some users. The search giant is now rolling out the feature, rebranded as AI Overviews, to everyone in the US. Google says the feature will expand to more countries soon and reach one billion users by the end of this year.

Built on Gemini’s capabilities, AI Overviews shows answers to ‘how-to’ queries in a simple text format, with information curated from across the web. It surfaces the most relevant answers at the top of the page and also helps users find the right products when shopping online. The AI shows links to the sources of the information alongside an overview of the topic.

The company will soon introduce two additional format options for AI Overviews: Simpler and Break it down. The Simpler format will simplify the language to help children and those without technical knowledge understand a topic, while the Break it down format will divide the topic into smaller concepts so users can work through its complexity step by step. These formats will first be added as an experimental feature in Search Labs and will be available for English queries in the US.

New Google Search features

Apart from AI Overviews, Google introduced three new AI-powered features for Search. First, Google Search is getting multi-step reasoning capabilities that will let it understand complex questions. The search engine will show results that account for every requirement in the question. For instance, if a user wants to find the best gym with introductory offers within walking distance, Search will be able to understand each requirement and show nearby, highly rated gyms with introductory offers. The tech giant says it will use high-quality sources to find this information.

Google Search is also getting a new planning feature. Gemini integration will allow Search to show results for queries such as meal plans or trip itineraries, taking each of the user’s criteria into consideration and showing only relevant results. “Search for something like ‘create a 3 day meal plan for a group that’s easy to prepare,’ and you’ll get a starting point with a wide range of recipes from across the web,” the company said. Further, users will be able to make granular adjustments to such queries after the results appear; for example, they can opt for vegetarian or microwavable recipes.

Finally, Google is bringing Gemini’s multimodal capabilities to Search, so users will soon be able to ask questions with videos. To expand the scope of Google Search, the company will let users upload a video of something they have a query about; asking a text question alongside the video will allow the AI to process the footage and answer the query. This will be a useful tool for asking about things that are difficult to describe. While multi-step reasoning and planning are available via Search Labs, video search will be added soon; both are currently limited to English queries in the US.



Google I/O 2024 Set to Take Place on May 14: Android 15, Pixel 8a, More Expected

Google is hosting its next I/O developer conference on May 14. The company followed its tradition of posting a ‘Break the Loop’ puzzle before revealing the dates for its developer-focused annual conference. This will be an in-person event, and it will be live-streamed across all of Google’s official channels. The annual event is expected to focus on Google’s latest advancements in AI and will presumably detail the new features set to arrive in Android 15, Chrome, and Google’s other services like Gmail and Google Photos. The company is also expected to tease some new hardware, including the Google Pixel 8a.

The tech giant revealed the date for this year’s I/O conference on Thursday (March 14) through a dedicated event website. The live event will take place on May 14 at Shoreline Amphitheatre in Mountain View, California, the regular venue for the I/O conference. As always, Google is expected to livestream the keynote via the I/O website and its YouTube channels, while the developer sessions could be limited to attendees. Similar to past years, developers can register for the event for free and get email updates about the schedule and content.

Besides developer-focused talks about apps and product development, Google’s annual event is anticipated to reveal its next steps in the fast-moving AI field, including possible announcements about Android XR and Gemini. We are also likely to learn about new features in Chrome, Android 15, and other Google products.

Google showed off the Pixel 7a at last year’s event, so the company is expected to provide a look at the upcoming Pixel 8a smartphone during the May 14 event. The phone is said to feature a 6.1-inch display with a 90Hz refresh rate and could run on Google’s Tensor G3 SoC alongside 8GB of RAM. It is expected to be more expensive than its predecessor.

Last year’s I/O also saw the launch of the Pixel Fold, Google’s first foldable phone, so we can also expect potential teasers for the Pixel Fold 2 and Pixel 9 series this time.


