Google I/O 2024: Text-to-Image AI Model Imagen 3 Unveiled, Gets Improved Image Generation Capabilities

Google made several new announcements at its annual developer-focused Google I/O 2024 event. Among the many artificial intelligence (AI)-focused announcements made during the keynote session, one was particularly surprising: the tech giant introduced the next generation of its text-to-image AI model, Imagen 3. The new model arrives just months after its predecessor, Imagen 2, which launched in December 2023 and was upgraded last month. The company said the new model can generate detailed, photorealistic images while closely following the prompt.

Imagen 3 was introduced by Douglas Eck, Senior Research Director at Google DeepMind. Unveiling it, he said, “Today, I’m so excited to introduce Imagen 3. It is our most capable image generation model yet. It understands prompts written the way people write. The more creative and detailed you are, the better. Plus, this is our best model yet for rendering text which has been a challenge for image generation models.”

The AI model's prompt understanding is said to be significantly improved, allowing it to follow instructions closely, capture small details, and generate a faithful image. Better prompt comprehension appears to be a common thread across the event's AI announcements. Google added that Imagen 3 will be available in multiple versions, each optimised for a specific type of task, ranging from generating quick sketches to creating high-resolution images.

To help Imagen 3 capture small details and specific instructions, such as camera angles or compositions in long, complex prompts, Google trained the model on images paired with detailed captions, allowing it to pick up on even smaller nuances. It can also generate a variety of textures and render text within images.

Focusing on safety, every image generated by Imagen 3 will carry a SynthID watermark. SynthID embeds a digital watermark directly into the pixels of the image, designed to survive cropping, sharing, and other alterations. The AI model is expected to arrive in public preview in the coming months, and Google is also working on adding inpainting and outpainting editing options. Imagen 3 is currently available in private preview inside ImageFX for select creators and will soon be made available to the tech giant's enterprise customers.


Google I/O 2024: Google Unveils AI Video Generator Veo, Takes on OpenAI’s Sora

Google I/O 2024's keynote session was a 112-minute-long affair where the company made several major announcements focused on artificial intelligence (AI). The announcements ranged from new AI models to the integration of AI into Google products, but perhaps one of the most interesting introductions was Veo, an AI-powered video generation model that can generate videos at 1080p resolution. The tech giant said the AI tool can generate videos that go beyond the one-minute mark. Notably, OpenAI unveiled its own video AI model, dubbed Sora, in February.

During the event, Demis Hassabis, co-founder and CEO of Google DeepMind, unveiled Veo. Announcing the AI model, he said, “Today, I’m excited to announce our newest and most capable generative video model called Veo. Veo creates high-quality 1080p videos from text, image and video prompts. It can capture the details of your instructions in different visual and cinematic styles.”

The tech giant claims that Veo can closely follow prompts, understanding the nuance and tone of a phrase and generating a video to match. The AI model can produce videos in different styles, such as timelapses, close-ups, fast tracking shots, aerial shots, and shots with various lighting and depth-of-field effects. Apart from generation, the model can also edit videos when the user provides an initial video and a prompt to add or remove something. Further, it can generate videos beyond the one-minute mark, either through a single prompt or via multiple sequential prompts.

To address the problem of consistency in video generation models, Veo uses latent diffusion transformers. This helps reduce instances of characters, objects, or the entire scene flickering, jumping, or morphing unexpectedly between frames. Google highlighted that videos created with Veo will be watermarked using SynthID, the company's in-house tool for watermarking and identifying AI-generated content. The model will soon be available to select creators via the VideoFX tool in Google Labs.
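
For context, the sketch below illustrates the general idea behind a latent diffusion transformer: a transformer denoises the latent tokens of all video frames jointly, so every frame is generated with awareness of the others, which is what helps keep characters and scenes consistent. This is a minimal, illustrative example only; the toy sizes, update rule, and architecture are assumptions and do not reflect Veo's actual implementation.

```python
# Illustrative sketch of latent diffusion with a transformer denoiser.
# NOT Veo's implementation -- shapes, schedule, and model size are toy assumptions.
import torch
import torch.nn as nn

frames, tokens_per_frame, dim = 16, 64, 256  # assumed toy sizes

# A single transformer operates over ALL frame tokens at once, so each frame
# attends to every other frame while being denoised.
denoiser = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=4,
)
to_noise = nn.Linear(dim, dim)  # predicts the noise to subtract at each step


def denoise_step(latents: torch.Tensor, step_size: float) -> torch.Tensor:
    """One iterative denoising step over the joint spatio-temporal sequence."""
    b, f, t, d = latents.shape
    seq = latents.reshape(b, f * t, d)           # flatten frames into one sequence
    predicted_noise = to_noise(denoiser(seq))    # transformer sees all frames jointly
    return (seq - step_size * predicted_noise).reshape(b, f, t, d)


latents = torch.randn(1, frames, tokens_per_frame, dim)  # start from pure noise
with torch.no_grad():
    for _ in range(4):  # a real model runs many more, carefully scheduled steps
        latents = denoise_step(latents, step_size=0.1)
print(latents.shape)  # torch.Size([1, 16, 64, 256])
```

After the denoising loop, a separate decoder would map the cleaned-up latents back to pixels; Google has not published those details for Veo.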

Veo’s similarities with OpenAI’s Sora

While neither AI model is available to the public yet, the two share several similarities. Veo can generate 1080p videos that run beyond one minute, whereas OpenAI's Sora can generate videos of up to 60 seconds. Both models can generate videos from text prompts, images, and videos. Both are based on diffusion models and are capable of producing videos with multiple shots, styles, and cinematography techniques. Both Sora and Veo also label AI-generated content: Sora uses the Coalition for Content Provenance and Authenticity (C2PA) standard, while Veo uses Google's native SynthID.


Google I/O 2024: Google Photos to Get an AI-Powered ‘Ask Photos’ Feature With Intelligent Search Capabilities

Google Photos received a surprise upgrade at the Google I/O 2024 event’s keynote session on Tuesday. The session, led by CEO Sundar Pichai, witnessed several major artificial intelligence (AI) announcements, including new upgrades for Gemini 1.5 Pro, new Google Search features, the introduction of new image and video AI models, and more. Interestingly, the tech giant also unveiled Ask Photos, a new AI-powered intelligent chatbot for Google Photos that makes searching for a particular image in the library easier.

During the event, Pichai highlighted that the company is now building more powerful search experiences within Google products using Gemini's capabilities. One such example is Google Photos, one of the tech giant's first platforms to get AI capabilities. Before the new update, the AI tools in Photos could only understand basic keywords and certain subjects to help users find the photos they were looking for. With the new intelligent search tool Ask Photos, this process could become much easier.

Ask Photos is powered by Gemini and is fine-tuned to work as a search engine. It can understand natural-language prompts and can read and understand a large number of photos by their subject, background, and even digital information in the metadata. “With Ask Photos, you can ask for what you're looking for in a natural way, like: 'Show me the best photo from each national park I've visited.' Google Photos can show you what you need, saving you from all that scrolling,” the company said in a post.

Further, it can also answer questions based on this information. For example, a user can ask about the theme of an office party, and the AI will check the images and share the information; it can even tell the user the colour of the shirt they wore that day. The tech giant claims the AI tool can perform tasks that go beyond searching and answering queries: it can also create a highlight of a recent trip by suggesting top pictures and writing personalised captions for each of them, in case the user wants to share them on social media.

Google is also focusing on the privacy of users' data. Since Ask Photos is trained on users' photo galleries, it has access to private and sensitive data, but the tech giant said this data will never be used for ads. The company will also not review conversations and personal data in Ask Photos, except to address abuse or harm. The data will also not be used to train any AI product outside of Google Photos, the company said.

Google I/O 2024: Search With AI-Powered Multi-Step Reasoning, Planning and Video Search Features Unveiled

Google I/O 2024 began with multiple major artificial intelligence (AI) announcements. On Tuesday, the tech giant held the day 1 keynote session where it introduced new AI models, integrated AI with Google products, and teased new capabilities for Pixel smartphones and Android 15. During the event, the company also announced several new features for Google Search. The Search Generative Experience (SGE), available to only some users, is now being launched in the US as AI Overviews. New multimodal capabilities for the Search engine were also unveiled.

AI Overviews

Last year, Google unveiled SGE as a generative AI-led search experience where users could get an AI-curated snapshot of information at the top of the results page. It was an experimental feature available only to some users. The Search giant is now rolling out the feature, rebranded as AI Overviews, to everyone in the US. The feature is also confirmed to expand to more countries soon and is expected to reach one billion users by the end of this year.

Integrated with Gemini's capabilities, AI Overviews shows answers to 'how-to' queries in a simple text format, with information curated from across the web. It surfaces the most relevant answers at the top of the page and also helps users find the right products when shopping online. The AI gives an overview of the topic along with links to the sources of the information.

The company will soon introduce two additional format options for AI Overviews: Simpler and Break it down. The Simpler format simplifies the language to help children and those without technical knowledge understand a topic, while the Break it down format divides the topic into smaller concepts so users can work through its complexity step by step. These options will first be added as an experimental feature in Search Labs and will be available for English queries in the US.

New Google Search features

Apart from AI Overviews, Google introduced three new AI-powered features for Search. First, Google Search is getting multi-step reasoning capabilities that let it understand complex questions. The search engine will show results that address every requirement in the question. For instance, if a user wants to find the best gym with introductory offers within walking distance, Search will be able to parse each requirement and show nearby, highly rated gyms that have introductory offers. The tech giant says it will use high-quality sources to find this information.

Google Search is also getting a new planning feature. Gemini integration will allow Search to respond to requests such as creating meal plans or planning a trip, taking each of the user's criteria into consideration and showing only relevant results. “Search for something like 'create a 3 day meal plan for a group that's easy to prepare,' and you'll get a starting point with a wide range of recipes from across the web,” the company said. Further, users will be able to make granular adjustments after the results are shown, for example opting for vegetarian or microwavable recipes.

Finally, Google is bringing Gemini's multimodal capabilities to Search. Users will soon be able to ask questions using videos: they can upload a video of something they have a query about, ask a text question alongside it, and the AI will process the video to answer the query. This should be a useful way to ask about things that are difficult to describe. While multi-step reasoning and planning are already available via Search Labs, video search will be added soon; both are currently limited to English queries in the US.


Google Teases Computer Vision, Conversational Capabilities of Gemini AI Ahead of Google I/O Event

Google shared a video on its social media platforms on Monday, teasing new capabilities of its artificial intelligence (AI)-powered chatbot Gemini. The video was released just a day before the company's annual developer-focused Google I/O event, where the tech giant is expected to make several announcements around AI and unveil new features and possibly new AI models. Besides that, centre stage is likely to be taken by Android 15 and Wear OS 5, which could be unveiled during the event.

In a short video posted on X (formerly known as Twitter), Google's official account teased new capabilities of its in-house AI chatbot. The 50-second-long video highlighted marked improvements in speech, giving Gemini a more emotive voice and modulations that make it sound more human. The video also highlighted new computer vision capabilities: the AI could pick up on visuals on the screen and analyse them.

Gemini could also access the smartphone's camera, a capability it does not possess at present. The user moved the camera around the space and asked the AI to describe what it saw. Almost without any lag, the chatbot described the setting as a stage and, when prompted, even recognised the Google I/O logo and shared information about it.

The video shared no further details about the AI, instead asking people to watch the event to learn more. Some questions might be answered during the event, such as whether Google is using a new large language model (LLM) for computer vision or an upgraded version of Gemini 1.5 Pro, and what else the AI can do with its computer vision. Notably, there are rumours that the tech giant might introduce Gems, chatbot agents that can be designed for particular tasks, similar to OpenAI's GPTs.

While Google's event is expected to introduce new features to Gemini, OpenAI held its Spring Update event on Monday and unveiled its latest GPT-4o AI model, which adds features to ChatGPT similar to those in the video shared by Google. The new AI model gives the chatbot conversational speech, computer vision, real-time language translation, and more.

Apple’s Siri Assistant Could Get a Massive AI-Charged Revamp at WWDC 2024: Report

At the upcoming Worldwide Developers Conference (WWDC) 2024, Apple could introduce the biggest revamp to its native virtual assistant, Siri, since its launch. The Cupertino-based tech giant is rumoured to unveil its artificial intelligence (AI) strategy and introduce new features for its devices. As per a new report, the central piece of this move will be making Siri smarter and more efficient. The iPhone maker is expected to either use in-house AI models or license models from a third party to improve Siri's capabilities.

According to a report by the New York Times, top executives at Apple made the decision last year that its virtual assistant needs a major revamp to stay relevant. The realisation came as AI chatbots such as OpenAI’s ChatGPT showcased the diverse range of tasks they can complete. The inclusion of the contextual understanding of language, which allowed users to make vague queries and still get the right response, was also considered a significant upgrade. Citing unnamed people familiar with the matter, the report highlighted that Apple is working on adding AI capabilities to Siri.

The report highlighted that improving Siri has become a “tent pole project” at Apple’s Cupertino headquarters, which refers to a “once-in-a-decade” initiative in the company. It is said that the company is now gearing up to showcase the new Siri at the WWDC 2024 event on June 10. Two focus areas to improve Siri include conversational language and versatility of tasks, the report mentioned. However, it is believed that the tech giant does not want its virtual assistant to turn into another AI-powered chatbot.

It is believed that instead of turning Siri into a generalist chatbot capable of generating poetry and essays, Apple will keep its output controlled and limited to the tasks it already performs, but with significant improvements. Users might be able to ask follow-up questions without repeating all the information, something Siri cannot do currently, and the assistant might also be able to perform more tasks across the device. Further details are not known at present.

However, it is said that Apple intends to keep Siri private and run it entirely on-device. This means the iPhone maker will rely on the device's neural processing unit (NPU) to power the computing and minimise latency. This is interesting given that an earlier report claimed Apple is also working on building AI chips for its data centres.

The New York Times report claims Apple's decision not to rely on cloud servers is driven by cost-effectiveness. As an example, it said OpenAI spends 12 cents (roughly Rs. 10) for every 1,000 words generated by ChatGPT due to cloud computing costs, an expense Apple might be able to avoid by keeping the feature on the device.

OpenAI GPT-4o With Real-Time Responses and Video Interaction Announced, GPT-4 Features Now Available for Free

OpenAI held its much-anticipated Spring Update event on Monday where it announced a new desktop app for ChatGPT, minor user interface changes to ChatGPT’s web client, and a new flagship-level artificial intelligence (AI) model dubbed GPT-4o. The event was streamed online on YouTube and was held in front of a small live audience. During the event, the AI firm also announced that all the GPT-4 features, which were so far available only to premium users, will now be available to everyone for free.

OpenAI’s ChatGPT desktop app and interface refresh

Mira Murati, the Chief Technology Officer of OpenAI, kickstarted the event and launched the new ChatGPT desktop app, which comes with computer vision and can look at the user's screen. Users will be able to turn this feature on and off, and the AI will analyse and assist with whatever is shown. The CTO also revealed that ChatGPT's web version is getting a minor interface refresh. The new UI has a minimalist appearance, and users will see suggestion cards when entering the website. The icons are also smaller, and the side panel can be hidden, making a larger portion of the screen available for conversations. Notably, ChatGPT can now also access the web and provide real-time search results.

GPT-4o features

The main attraction of the OpenAI event was the company's newest flagship-grade AI model, GPT-4o, where the 'o' stands for 'omni'. Murati highlighted that the new model is twice as fast, 50 percent cheaper, and has five times higher rate limits compared with the GPT-4 Turbo model.

GPT-4o also offers significant improvements in the latency of responses and can generate real-time responses even in speech mode. In a live demo of the AI model, OpenAI showcased that it can converse in real time and react to the user. GPT-4o-powered ChatGPT can now also be interrupted to answer a different question, which was impossible earlier. However, the biggest enhancement in the unveiled model is the inclusion of emotive voices.

Now, when ChatGPT speaks, its responses contain various voice modulations, making it sound more human and less robotic. A demo showed that the AI can also pick up on human emotions in speech and react to them. For instance, if a user speaks in a panicked voice, it will respond in a concerned tone.

Improvements have also been made to its computer vision and, based on the live demos, it can now process and respond to live video feeds from the device's camera. It can watch a user solve a mathematical equation and offer step-by-step guidance, correcting the user in real time if they make a mistake. Similarly, it can process large blocks of code, analyse them instantly, and share suggestions for improvement. Users can also open the camera and speak with their faces visible, and the AI can detect their emotions.

Finally, another live demo showed that ChatGPT, powered by the latest AI model, can perform live voice translations and switch between multiple languages in quick succession. While OpenAI did not mention a subscription price for access to GPT-4o, it said the model will be rolled out in the coming weeks and will also be available as an API.
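
For developers, the API access mentioned above would amount to a standard chat completions call with the new model name. The snippet below is a minimal sketch using OpenAI's Python SDK; it assumes the "gpt-4o" model identifier has been enabled for your account and that an OPENAI_API_KEY environment variable is set.

```python
# Minimal sketch of calling GPT-4o through OpenAI's Python SDK.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",  # model name from the announcement; availability may vary
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise the GPT-4o announcement in two sentences."},
    ],
)
print(response.choices[0].message.content)
```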

GPT-4 is now available for free

Apart from all the new launches, OpenAI has also made the GPT-4 AI model, including its features, available for free. People on the free tier of the platform will be able to access features such as GPTs (mini chatbots designed for specific use cases), the GPT Store, the Memory feature, through which the AI can remember the user and specific information about them for future conversations, and advanced data analysis, all without paying anything.

Samsung Galaxy S21, Galaxy Z Fold 3, Z Flip 3 to Get Only Two Galaxy AI Features With One UI 6.1 Update

The Samsung Galaxy S21 series is set to receive the One UI 6.1 update, along with the company's Galaxy Z Fold 3 and Galaxy Z Flip 3 foldable phones. However, unlike more recent models from the South Korean smartphone maker, these three handsets will only gain support for two new Galaxy AI features. Over the past few months, Samsung has been rolling out One UI 6.1 to recent Galaxy Z and Galaxy S series phones with support for up to 10 AI-powered features.

The company confirmed on Friday that Samsung Galaxy S21 series, Galaxy Z Fold 3, and Galaxy Z Flip 3 users will get access to two Galaxy AI features: Circle to Search and Chat Assist. The company’s announcement also contains a footnote stating that it will provide Galaxy AI features “for free until the end of 2025” — the same message that was shown on the company’s website when the Galaxy S24 series was launched earlier this year. 

With the One UI 6.1 update, owners of the Samsung Galaxy S21 series, Galaxy Z Fold 3, and Galaxy Z Flip 3 will be able to use the Circle to Search feature that is currently exclusive to smartphones from Samsung and Google. Users can long-press the navigation pill to summon an overlay that lets them draw around, scribble on, or highlight a part of the screen to perform a visual lookup, without leaving the app they are using.

Chat Assist, another AI feature designed to help users compose texts in different languages via seamless translation of incoming and outgoing messages, is also coming to all three handsets. Samsung says the feature is designed to work with third-party apps, which means users won't have to open a separate translation app while messaging another user. Chat Assist is also designed to help users change the tone of their messages, according to the company.

Samsung’s announcement that it will update the Galaxy S21 series, Galaxy Z Fold 3, and Galaxy Z Flip 3 with these two features confirms that the company’s other Galaxy AI functionality won’t be making its way to these older handsets. AI features that won’t be a part of the update include Interpreter, Live Translate, Note Assist, Transcript Assist, Browsing Assist, Generative Edit, Edit Suggestion and AI-Generated Wallpaper.


Apple Said to Be Nearing Deal With OpenAI to Put ChatGPT on iPhone

Apple Inc. has closed in on an agreement with OpenAI to use the startup’s technology on the iPhone, part of a broader push to bring artificial intelligence features to its devices, according to people familiar with the matter.

The two sides have been finalizing terms for a pact to use ChatGPT features in Apple’s iOS 18, the next iPhone operating system, said the people, who asked not to be identified because the situation is private. Apple also has held talks with Alphabet Inc.’s Google about licensing that company’s Gemini chatbot. Those discussions haven’t led to an agreement, but are ongoing.

An OpenAI accord would let Apple offer a popular chatbot as part of a flurry of new AI features that it’s planning to announce next month. Bloomberg reported in April that the discussions with OpenAI had intensified. Still, there’s no guarantee that an agreement will be announced imminently.

Representatives for Apple, OpenAI and Google declined to comment.

Apple plans to make a splash in the artificial intelligence world in June, when it holds its annual Worldwide Developers Conference. As part of the push, the company will run some of its upcoming artificial intelligence features via data centers equipped with its own in-house processors, Bloomberg has reported.

Last year, Apple Chief Executive Officer Tim Cook said he personally uses OpenAI’s ChatGPT but added that there were “a number of issues that need to be sorted.” He promised that new AI features would come to Apple’s products on a “very thoughtful basis.”

On Apple’s earnings conference call last week, he argued that Apple would have an edge in AI.

“We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software and services integration,” Cook said during the earnings call.

© 2024 Bloomberg L.P.


Apple Said to Use In-House Server Chips to Power AI Tools Coming to iPhone, iPad, and Mac Computers This Year

Apple Inc. will deliver some of its upcoming artificial intelligence features this year via data centers equipped with its own in-house processors, part of a sweeping effort to infuse its devices with AI capabilities. The company is placing high-end chips — similar to ones it designed for the Mac — in cloud-computing servers designed to process the most advanced AI tasks coming to Apple devices, according to people familiar with the matter. Simpler AI-related features will be processed directly on iPhones, iPads and Macs, said the people, who asked not to be identified because the plan is still under wraps.

The move is part of Apple’s much-anticipated push into generative artificial intelligence — the technology behind ChatGPT and other popular tools. The company is playing catch-up with Big Tech rivals in the area but is poised to lay out an ambitious AI strategy at its Worldwide Developers Conference on June 10.

Apple’s plan to use its own chips and process AI tasks in the cloud was hatched about three years ago, but the company accelerated the timeline after the AI craze — fueled by OpenAI’s ChatGPT and Google’s Gemini — forced it to move more quickly.

The first AI server chips will be the M2 Ultra, which was launched last year as part of the Mac Pro and Mac Studio computers, though the company is already eyeing future versions based on the M4 chip.

Apple shares briefly reached a session high of $184.59 in New York trading after Bloomberg reported the details. The stock is down more than 4% for the year. A representative for Cupertino, California-based Apple declined to comment.

Relatively simple AI tasks — like providing users a summary of their missed iPhone notifications or incoming text messages — could be handled by the chips inside of Apple devices. More complicated jobs, such as generating images or summarizing lengthy news articles and creating long-form responses in emails, would likely require the cloud-based approach — as would an upgraded version of Apple’s Siri voice assistant.

The move, coming as part of Apple’s iOS 18 rollout in the fall, represents a shift for the company. For years, Apple prioritized on-device processing, touting it as a better way to ensure security and privacy. But people involved in the creation of the Apple server project — code-named ACDC, or Apple Chips in Data Centers — say that components already inside of its processors can safeguard user privacy. The company uses an approach called Secure Enclave that can isolate data from a security breach.

For now, Apple is planning to use its own data centers to operate the cloud features, but it will eventually rely on outside facilities — as it does with iCloud and other services. The Wall Street Journal reported earlier on some aspects of the server plan.

Luca Maestri, Apple’s chief financial officer, hinted at the approach on an earnings call last week. “We have our own data center capacity and then we use capacity from third parties,” he said after being asked about the company’s AI infrastructure. “It’s a model that has worked well for us historically, and we plan to continue along the same lines going forward.”

Handling AI features on devices will still be a big part of Apple’s AI strategy. But some of those capabilities will require its most recent chips, such as the A17 Pro launched in last year’s iPhone 15 Pro and the M4 chip that debuted in the iPad Pro earlier this week. Those processors include significant upgrades to the so-called neural engine, the part of the chip that handles AI tasks.

Apple is rapidly upgrading its product line with more powerful chips. In a first, it’s bringing a next-generation processor — the M4 — to its entire range of Mac computers. The Mac mini, iMac and MacBook Pro will get the M4 later this year, and the chip will go into the MacBook Air, Mac Studio and Mac Pro next year, Bloomberg News reported in April.

Taken together, the plans lay the groundwork for Apple to weave AI into much of its product line. The company will focus on features that make life easier for users as they go about their day — say, by making suggestions and offering a customized experience. Apple isn’t planning to roll out its own ChatGPT-style service, though it’s been in discussions about offering that option through a partnership.

Just last week, Apple said the ability to run AI on its devices will help it stand out from rivals.

“We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software and services integration,” Chief Executive Officer Tim Cook said during the earnings call.

Without getting into specifics, Cook said that Apple’s in-house semiconductors would give it an edge in this still-nascent field. He added that the company’s privacy focus “underpins everything we create.”

The company has invested hundreds of millions of dollars in the cloud-based initiative over the past three years, according to the people. But there are still gaps in its offerings. For users who want a chatbot, Apple has held discussions with Alphabet Inc.’s Google and OpenAI about integrating one into the iPhone and iPad.

Talks with OpenAI have recently intensified, suggesting that a partnership is likely. Apple also could offer a range of options from outside companies, people familiar with the discussions have said.

© 2024 Bloomberg L.P.

