Instagram to Test Features That Blur Messages Containing Nudity in Move to Boost Teen Safety

Instagram will test features that blur messages containing nudity to safeguard teens and prevent potential scammers from reaching them, its parent Meta said on Thursday as it tries to allay concerns over harmful content on its apps.

The tech giant is under mounting pressure in the United States and Europe over allegations that its apps are addictive and have fueled mental health issues among young people.

Meta said the protection feature for Instagram’s direct messages would use on-device machine learning to analyze whether an image sent through the service contains nudity.

The feature will be turned on by default for users under 18 and Meta will notify adults to encourage them to turn it on.

“Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone chooses to report them to us,” the company said.

Unlike Meta’s Messenger and WhatsApp apps, direct messages on Instagram are not encrypted but the company has said it plans to roll out encryption for the service.

Meta also said it was developing technology to help identify accounts potentially engaged in sextortion scams, and that it was testing new pop-up messages for users who might have interacted with such accounts.

In January, the social media giant said it would hide more content from teens on Facebook and Instagram, making it more difficult for them to come across sensitive content related to suicide, self-harm and eating disorders.

Attorneys general of 33 US states, including California and New York, sued the company in October, saying it repeatedly misled the public about the dangers of its platforms.

In Europe, the European Commission has sought information on how Meta protects children from illegal and harmful content.

© Thomson Reuters 2024


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Meta Collaborates With PTI to Expand Its Third-Party Fact-Checking Programme in India

Meta has partnered with the Press Trust of India (PTI) to curb viral misinformation on its platforms, including Facebook, WhatsApp, Instagram, and Threads. As part of the collaboration, PTI will become an independent fact-checker for the company and help identify, review, and rate misinformation. With this move, Meta now has 12 fact-checking partners in India, covering content in 16 Indian languages. Notably, the announcement came just days before the start of India’s general elections.

In a press release dated April 1, Meta announced the partnership and said, “Today, we are expanding our third-party fact-checking program in India to include Press Trust of India (PTI), a dedicated fact-checking unit within the newswire’s editorial department. The partnership will enable PTI to identify, review and rate content as misinformation across Meta platforms.”

With this partnership, Meta now has 12 fact-checking partners in India, including AFP-Hub, The Quint, NewsChecker, India Today Fact Check, Factly, and more. The social media giant said this makes India the country with the most third-party fact-checking partners globally across Meta’s platforms. The partners also cover regional-language content in 16 Indian languages besides English, including Hindi, Bengali, Telugu, Kannada, and Malayalam.

Meta’s fact-checking programme started in 2016 with the aim of addressing viral misinformation, in particular hoaxes that have no basis in fact. The third-party fact-checking partners keep tabs on such viral content and identify, review, and rate it. Whenever a fact-checker rates a piece of content as false, altered, or partly false, the company says it reduces its distribution so that fewer people see it. It also notifies users who try to share such content about the fact-checker’s rating and adds a warning label that links to the partner’s article with more information about the topic.
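The rate-then-demote-then-label flow described above can be sketched in a few lines. This is purely illustrative: the function, field names, and rating strings are assumptions for the example, not Meta's actual moderation API.

```python
# Ratings that, per the article, trigger reduced distribution and a label.
DEMOTED_RATINGS = {"false", "altered", "partly false"}

def apply_fact_check(post: dict, rating: str, article_url: str) -> dict:
    """Return a copy of the post with distribution and labeling updated.

    Mirrors the described flow: demote distribution for flagged ratings
    and attach a warning label linking to the fact-checker's article.
    """
    post = dict(post)  # do not mutate the caller's object
    if rating in DEMOTED_RATINGS:
        post["distribution"] = "reduced"  # fewer people see it
        post["warning_label"] = {
            "rating": rating,
            "more_info": article_url,  # link to the partner's article
        }
    return post

checked = apply_fact_check(
    {"id": 1, "distribution": "normal"},
    "false",
    "https://example.com/fact-check",
)
print(checked["distribution"])  # reduced
```

Posts whose rating falls outside the demoted set pass through unchanged, matching the article's point that only false, altered, or partly false verdicts affect distribution.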

The company highlighted that it only collaborates with agencies that are certified through the non-partisan International Fact-Checking Network (IFCN). Meta now has nearly 100 fact-checking partners globally that review and rate misinformation in more than 60 languages.



Mark Zuckerberg Seeks to Avoid Personal Liability in Lawsuits Blaming Him for Kids’ Instagram Addiction

Mark Zuckerberg is seeking to avoid being held personally liable in two dozen lawsuits accusing Meta Platforms Inc. and other social media companies of addicting children to their products. The Meta chief executive officer made his case at a hearing Friday in California federal court, but the judge didn’t immediately make a decision. A ruling in Zuckerberg’s favor would dismiss him as a personal defendant in the litigation with no impact on the allegations against Meta.

Holding him personally responsible may be a challenge because of a corporate law tradition of shielding executives from liability, especially at larger companies where decision-making is often layered. A loss for the billionaire who launched Facebook with friends as a Harvard undergraduate two decades ago could encourage claims against other CEOs in mass personal injury litigation.

Zuckerberg faces allegations from young people and parents that he was repeatedly warned that Instagram and Facebook weren’t safe for children, but ignored the findings and chose not to share them publicly.

The cases naming Zuckerberg are a small subset of a collection of more than 1,000 suits in state and federal courts by families and public school districts against Meta along with Alphabet Inc.’s Google, ByteDance Ltd.’s TikTok and Snap Inc. US District Judge Yvonne Gonzalez Rogers in Oakland, who is overseeing the federal cases, recently allowed some claims to proceed against the companies while dismissing others.

Plaintiffs contend that as the face of Meta, Zuckerberg has a responsibility to “speak fully and truthfully on the risks Meta’s platforms pose to children’s health.”

“With great power comes great responsibility,” plaintiffs’ lawyers said in a court filing, quoting the Spider-Man comics in a footnote. “Unfortunately, Mr. Zuckerberg has not lived up to that maxim.”

Zuckerberg, the world’s fourth-richest person, has argued that he can’t be held personally responsible for actions at Meta just because he’s the CEO. His lawyers also claim that Zuckerberg didn’t have a duty to disclose the safety findings that were allegedly reported to him.

“There is ample legal precedent establishing that being an executive does not confer liability for alleged conduct of a corporation,” a Meta spokesperson said in a statement, adding that the claims against Zuckerberg should be dismissed in their entirety.

At the hearing, Rogers pressed the plaintiffs about whether Zuckerberg was required to disclose safety information absent a “special relationship” with the users of his products. Plaintiffs had argued that the Meta CEO had a responsibility to Facebook and Instagram users given his “outsize role in the company,” but Rogers challenged them to point to a specific law that would support their argument.

Rogers appeared more sympathetic to plaintiffs’ arguments that Zuckerberg could be held liable for personally concealing information as a corporate officer at Meta, asking Zuckerberg’s lawyers how he avoids potential personal liability if there’s an understanding that Meta itself had a duty to disclose the safety information.

The judge also discussed with lawyers how laws covering corporate officer responsibility, which vary among states, apply to Zuckerberg.

Zuckerberg, who is Meta’s most significant shareholder and maintains sole voting control at the company, is also at risk of being held personally liable in a separate 2022 lawsuit over the Cambridge Analytica data privacy scandal brought by the attorney general of the District of Columbia in Washington.

Pinning blame on an executive for unlawful conduct typically hinges on showing their involvement in relevant day-to-day decisions or their knowledge of the practices at issue. It’s generally easier to assign executive liability at smaller companies, where an individual’s direct participation in decision-making can be clearer. At large companies, liability comes down to proving control over decision-making.

Social media companies have come under increased scrutiny for their impact on young people’s mental health and role in spreading sexually explicit content. At a Senate hearing last month, US Senator Josh Hawley, a Missouri Republican, pressed Zuckerberg on whether he should personally compensate victims of sexual exploitation online. Zuckerberg then offered a rare apology to the victims’ families.

The case is In Re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, 22-md-03047, US District Court, Northern District of California (Oakland).

© 2024 Bloomberg L.P.


(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)


Meta to Label AI-Generated Images on Facebook, Instagram, Threads

Meta announced that it will begin labelling artificial intelligence (AI)-generated images across its platforms, including Facebook, Instagram, and Threads. The announcement, made on February 6, came a day after the company’s oversight board said Meta should change its policy on AI-generated content and focus on preventing the harm it may cause, in response to a complaint over a digitally altered video of US President Joe Biden that surfaced online. Meta said that while it already labels photorealistic images created by its own AI models, it will now work with other companies to label all AI-generated images shared on its platforms.

In a newsroom post on Tuesday, Meta’s President of Global Affairs, Nick Clegg, underlined the need to label AI-generated content to protect users and curb disinformation, and said the company has already started working with industry players on a solution. He said, “We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.” The social media giant also revealed that it can currently label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. It has been labelling images created by Meta’s own AI models as “Imagined with AI”.

To correctly identify AI-generated images, detection tools require a common identifier in all such images. Many firms working with AI have begun adding invisible watermarks and embedding information in image metadata to make it apparent that an image was not created or captured by a human. Meta said it can detect AI images from the companies listed above because they follow the industry-approved technical standards.

But there are a few issues with this approach. First, not every AI image generator uses such tools to make it apparent that its images are not real. Second, Meta has noticed there are ways to strip out the invisible watermark. To address this, the company said it is working with industry partners to create a unified watermarking technology that is not easily removable. Last year, Meta’s AI research wing, Fundamental AI Research (FAIR), announced it was developing a watermarking mechanism called Stable Signature that embeds the marker directly into the image-generation process. Google DeepMind has also released a similar tool, called SynthID.
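The metadata approach, and why it is fragile, can be shown with a toy example. The sketch below writes a provenance tag into a PNG text chunk using only the standard library, then reads it back. The `ai_provenance` keyword is invented for illustration; real systems use standards such as IPTC metadata or C2PA content credentials, and this is not Meta's implementation. The fragility is visible in the structure: any tool that copies the pixel chunks but drops the text chunk silently removes the tag.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk is: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def tag_image(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt provenance chunk right after the IHDR chunk."""
    assert png.startswith(PNG_SIG)
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    cut = 8 + 12 + ihdr_len  # signature + IHDR (length/type/CRC add 12 bytes)
    text = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return png[:cut] + _chunk(b"tEXt", text) + png[cut:]

def read_tags(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt keyword/value pairs."""
    tags, pos = {}, 8
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            k, _, v = data.partition(b"\x00")
            tags[k.decode("latin-1")] = v.decode("latin-1")
        pos += 12 + length
    return tags

# Build a minimal 1x1 grayscale PNG to tag.
ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = _chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
png = PNG_SIG + ihdr + idat + _chunk(b"IEND", b"")

tagged = tag_image(png, "ai_provenance", "generated")
print(read_tags(tagged).get("ai_provenance"))  # generated
```

Because the tag lives alongside the image data rather than inside it, re-encoding or screenshotting the picture discards it, which is why the article's second approach, watermarks embedded into the pixels during generation (Stable Signature, SynthID), is the harder-to-remove alternative.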

But this covers only images. AI-generated audio and video have also become commonplace. Meta acknowledged that similar detection technology for audio and video has not been built yet, although development is underway. Until a way to automatically detect and identify such content emerges, the company has added a feature that lets users disclose when they share AI-generated video or audio; once disclosed, the platform will add a label to it.

Clegg also said that if people do not disclose such content and Meta finds that it was digitally altered or created, it may apply penalties to the user. Further, if the shared content is high-risk and could deceive the public on matters of importance, Meta might add an even more prominent label to give users context.



Facebook Messenger Turns End-to-End Encryption on by Default for Individual Chats

Facebook Messenger is finally rolling out support for end-to-end encryption (E2EE) by default for individual chats and calls, the company announced on Wednesday. In the coming weeks and months, Facebook parent Meta says, existing conversations will be upgraded to E2EE and new chats will be protected by the technology as well. The company says that E2EE Messenger chats will offer the same features as previously unencrypted conversations, including the ability to unsend messages, set chat themes, and send custom message reactions.

In a post detailing the launch of the new features, Messenger head Loredana Crisan said that both one-on-one chats and calls on the messaging app will now be protected by end-to-end encryption. Meta collaborated with experts, governments, academics, and advocates to strike a balance between privacy and safety, according to Crisan.

Just like on WhatsApp, which is also owned by Meta, chats on Messenger can no longer be accessed by the company, with one exception: Meta will be able to see the contents of an E2EE conversation when a participant reports it. WhatsApp offers the same reporting mechanism.

In January 2022, Meta updated Secret Conversations — its opt-in E2EE chats feature on Messenger — with support for features that are available on regular chats. These include the ability to send GIFs and stickers in chats. Users can also set chat themes in secret conversations. Enabling the 24-hour disappearing message mode in E2EE chats will also alert users when another participant takes a screenshot, according to Meta.

Messenger’s E2EE chats have been updated with support for features found on regular chats
Photo Credit: Meta

Meta has been working on enabling encrypted chats by default for years; the first public indication of the effort came when CEO Mark Zuckerberg said the firm was adding support for default E2EE chats on both Instagram and Messenger.

The company says it has implemented the Signal Protocol (used by Signal, widely considered the gold standard among encrypted messaging apps) alongside its own Labyrinth Protocol.

However, not all users will see their conversations upgraded to E2EE chats immediately. Crisan notes that “it may take some time for Messenger chats to be updated with default end-to-end encryption”, which suggests that the rollout could take a considerable amount of time.

It is worth noting that features like optional E2EE for chats on Instagram are yet to roll out to users in some regions, including India. Gadgets 360 has reached out to the company for details of the rollout to users in the country. Meta is expected to enable E2EE chats by default on Instagram once the Messenger rollout is complete.



Meta Brings Standalone Text-to-Image Generation Tool to Web; AI Enhancements to Instagram, Facebook

Meta unveiled a host of new enhancements for its AI experiences across Facebook, Instagram, Messenger, and WhatsApp on Wednesday. The company’s virtual assistant, Meta AI, which was launched in September, will now give more detailed and accurate responses to queries. The Facebook parent is also expanding its text-to-image generation tool, Imagine, into a standalone AI experience on the Web, outside of chats.

In its Newsroom post announcing the updates, Meta detailed a standalone Imagine tool for image generation. Initially embedded only within Meta’s messaging platforms, Imagine can now be accessed on the Web for free. “Today, we’re expanding access to imagine outside of chats, making it available in the US to start at imagine.meta.com,” Meta said in the blog. The image-creation tool runs on the company’s image foundation model, Emu, and will initially be available only in the US.

Imagine with Meta is free to use on the Web
Photo Credit: Meta

Meta is also bringing new updates and capabilities to core AI experiences on its platforms. The Meta AI virtual assistant is now more helpful, the company claims, generating more detailed responses on mobile and more accurate summaries of search results. “We’ve even made it so you’re more likely to get a helpful response to a wider range of requests,” the blog said. A Meta AI interaction can be triggered by starting a new message and selecting “Create an AI chat” on Meta’s messaging platforms, or by typing “@MetaAI” in a group chat followed by the query.

Outside of chats, Meta AI’s large language model will bring new experiences to Facebook and Instagram, such as AI-generated post comment suggestions, community chat topic suggestions in groups, and more.

Imagine with Meta, the text-to-image generation tool, is also getting a new ‘reimagine’ feature on Messenger and Instagram that lets friends riff on a Meta AI-generated image you share in messages and create entirely new images. The company is also rolling out Instagram Reels in Meta AI chats, where the assistant will recommend and share reels for relevant video requests. AI-powered improvements are coming to Facebook too: Meta is working on features that would draft birthday greetings, edit feed posts, write up a dating profile, or help set up a new group.

Meta will also roll out invisible watermarking to its new Imagine with Meta AI image-generation tool in the coming weeks, to boost AI transparency and curb misleading AI-generated content. “The invisible watermark is applied with a deep learning model. While it’s imperceptible to the human eye, the invisible watermark can be detected with a corresponding model,” the blog said. The watermark is designed to withstand image manipulations like cropping, editing, and screenshotting.


