Google, Meta Bid Millions for Hollywood Content as AI Licensing Race Heats Up: Report

Google and Meta have reportedly joined OpenAI in the artificial intelligence (AI) content licensing race. OpenAI has been striking numerous deals with news publishers and other websites to access their data for training its AI models. As per a report, Google and Meta have now entered the market to negotiate content licensing deals with Hollywood studios. The tech giants reportedly intend to use this content to train their video generation models. Notably, Google recently unveiled its AI video model Veo.

According to a report by Bloomberg, both Google and Meta are seeking access to the large content libraries of Hollywood studios to train their respective AI video models. While Google likely wants these partnerships for Veo, Meta has not publicly announced any such model. However, the report claims that the social media giant might be working on a video model internally.

Citing people familiar with the matter, the report highlighted that both companies have offered tens of millions of dollars to partner with studios. While the Hollywood studios are said to be interested in forging partnerships, they are also concerned about losing control of how their content will be used by the Silicon Valley giants.

Hollywood studios reportedly give mixed responses

As per the report, Netflix and Walt Disney have refused to license their content to the companies. However, they have expressed interest in forming other types of partnerships; it is not known what these would involve. On the other hand, Warner Bros Discovery has reportedly shown a willingness to license some specific programmes to train the AI models, but not its full content library.

It is believed that the recent incident involving Hollywood actor Scarlett Johansson, in which she accused OpenAI of creating a voice for ChatGPT that sounds very similar to hers, has also contributed to raising concerns among Hollywood studios.

Meanwhile, OpenAI has been successful in securing some content licensing deals with media publications. It has reportedly signed a deal with News Corp, the parent company of the Wall Street Journal, Barron’s, New York Post, The Daily Telegraph, and others. Both Google and OpenAI have also signed deals to access real-time content from Reddit.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Adobe Reportedly Paying for Videos to Train a New Text-to-Video AI Model

Adobe is reportedly working on building an artificial intelligence (AI)-powered text-to-video generation model. To train its AI model, the company is said to be purchasing videos from photographers and artists. This data will be used in addition to the platform’s existing library of stock images and videos. Interestingly, the software giant is paying an average of $2.62 (roughly Rs. 220) for every minute of submitted video. Notably, earlier this year, Adobe unveiled Project Music GenAI Control, an AI music-generation tool that is still under development.

According to a report by Bloomberg, the company has offered its network of photographers and artists as much as $120 (roughly Rs. 10,000) to submit videos, which will then be used to train its AI video model. Citing documents it has seen, the publication added that the company has requested videos of “people engaged in everyday actions such as walking or expressing emotions including joy and anger”. It appears Adobe might use the procured videos to teach its model human expressions and natural motion.

Further, the report highlighted that the software giant has requested as many as 100 short clips of people expressing emotions and shots of human anatomy such as feet, hands, or eyes. It also has asked for videos where people are “interacting with objects” such as smartphones and gym equipment.

The documents seen by the publication also warn collaborators not to submit copyrighted material, nudity, or any other offensive content. The payout for this task averages $2.62 per minute, but the report states that some submitters could be paid as much as $7.25 (roughly Rs. 600) for a minute of submitted video.

The development also highlights the increasing costs companies are incurring to procure data for training their AI models as publicly available data sources are used up. While some tech companies have resorted to ethically procuring data, others have been accused of stealing copyrighted data from social media platforms. A recent report claimed that OpenAI used more than a million hours of transcribed YouTube videos to train GPT-4.



For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.



Higgsfield AI Introduces Diffuse App, an Image-to-Video Generator for Smartphones

Higgsfield AI, a video AI company, has launched its first artificial intelligence (AI)-powered app for smartphones. Dubbed Diffuse, the mobile-first app is an image-to-video generator that can turn a selfie into a video featuring that character. The company claims the characters it creates come with lifelike motion. The app is being gradually rolled out to select markets, and users will be able to find it on both Android and iOS. The AI tool will likely compete with OpenAI’s Sora, which is reported to launch later this year.

The company’s official account took to X (formerly known as Twitter) to make the announcement. Higgsfield AI introduced itself as a video AI company focused on creating video content for social media. This vision likely comes from the firm’s CEO, Alex Mashrabov, who previously led the generative AI division at Snap. As per the post, the company is building a foundational model from scratch for its tools, which will be entirely video-focused.

In a separate post, the company also introduced Diffuse, the first mobile app built by the company, which is currently available in preview mode. “Pick out a video from the content library, select your selfie and Diffuse will generate a personalized character in that video’s style. Or use Prompt Builder to create a video from scratch using text, images or video. Diffuse offers a deep level of personalization, creative control and fine-tuning so anyone can create exactly what they want (with safeguards of course),” the post added. The app is currently available in select markets which include India, South Africa, the Philippines, Canada, and countries in Central Asia.

We were able to locate the app on the App Store, but it did not show up on the Google Play Store. This could change in the coming days as the company gradually rolls out the app. Since it is in preview mode, the app currently generates videos of just 2 seconds. The AI firm says its ultimate goal is to achieve realistic, detailed, and fluid video, all on a mobile device.

For this, it is building its foundational model from scratch. The video model uses transformer architectures, which also underpin OpenAI’s ChatGPT. Higgsfield AI further highlights that it was able to train its AI model efficiently on a limited number of GPUs thanks to a proprietary framework developed in-house. The company has not revealed when it might release the full version of the app to the public.



Stability AI Releases Stable Video 3D, an AI Model That Can Render 3D Videos From 2D Images

Stability AI released a new 3D video rendering model dubbed Stable Video 3D (SV3D) on Monday. The artificial intelligence (AI) model generates videos, but unlike popular video generators such as OpenAI’s Sora, Runway AI, and Pika 1.0, it does not take text inputs. Instead, SV3D takes a single image as input and turns the 2D photo into an orbital 3D video. The company has made the new AI model available for both commercial and non-commercial usage.

The announcement was made by the official account of Stability AI on X (formerly known as Twitter) in a post where it said, “Today, we are releasing Stable Video 3D, a generative model based on Stable Video Diffusion. This new model advances the field of 3D technology, delivering greatly improved quality and multi-view.” The announcement came just a month after the AI firm announced Stable Diffusion 3, which improves performance in multi-subject prompts.

The Stable Video 3D AI model is being made available in two variants: SV3D_u and SV3D_p. The former is capable of generating orbital videos from single image inputs, but it does not use camera conditioning. This means that while the objects in the 2D image will be converted into 3D renders, there will not be any camera movement. The more capable variant, SV3D_p, accommodates both single images and orbital views, allowing it to create fully rendered 3D videos along specified camera paths.

As per the company, the AI model solves the inconsistency issues faced by older generation models such as Stable Zero123. SV3D leverages Neural Radiance Fields (NeRF) and mesh representations to improve the quality and consistency of rendered videos. “Additionally, in order to reduce the issue of baked-in lighting, Stable Video 3D employs a disentangled illumination model that is jointly optimized along with 3D shape and texture,” Stability AI said in a detailed blog post.

Stable Video 3D is now available for both commercial and non-commercial usage. Commercial usage requires a Stability AI membership, which starts at $20 (roughly Rs. 1,650) a month for the Professional tier. For non-commercial usage, users can download the model weights from Hugging Face.

