Facebook Parent Meta to Modify Cross-Check Feature for VIP Posts

Meta on Friday said it will modify its criticised special handling of posts by celebrities, politicians and other Facebook and Instagram users with large audiences, taking steps to prevent business interests from swaying decisions.

The tech giant promised to implement in full or in part most of the 32 changes to its “cross-check” programme recommended by an independent review board that it funds as a sort of top court for content or policy decisions.

“This will result in substantial changes to how we operate this system,” Meta global affairs president Nick Clegg said in a blog post.

“These actions will improve this system to make it more effective, accountable and equitable.”

Meta declined, however, to publicly label which accounts get preferred treatment in content filtering decisions, nor will it create a formal, open process for getting into the programme.

Labelling users in the cross-check programme might make them targets for abuse, Meta reasoned.

The changes came in response to the oversight panel in December calling for Meta to overhaul the cross-check system, saying the programme appeared to put business interests over human rights when giving special treatment to rule-breaking posts by certain users.

“We found that the programme appears more directly structured to satisfy business concerns,” the panel said in a report at the time.

“By providing extra protection to certain users selected largely according to business interests, cross-check allows content which would otherwise be removed quickly to remain up for a longer period, potentially causing harm.”

Meta told the board that the programme is intended to avoid content-removal mistakes by providing an additional layer of human review to posts by high-profile users that initially appear to break rules, the report said.

“We will continue to ensure that our content moderation decisions are made as consistently and accurately as possible, without bias or external pressure,” Meta said in its response to the oversight board.

“While we acknowledge that business considerations will always be inherent to the overall thrust of our activities, we will continue to refine guardrails and processes to prevent bias and error in all our review pathways and decision making structures.”


Snapchat Removes Only Dozens of Children From Its Platform Every Month in Britain: Ofcom Data

Snapchat is kicking dozens of children in Britain off its platform each month compared with tens of thousands blocked by rival TikTok, according to internal data the companies shared with Britain’s media regulator Ofcom and which Reuters has seen.

Social media platforms such as Meta‘s Instagram, ByteDance‘s TikTok, and Snap‘s Snapchat require users to be at least 13 years old. These restrictions are intended to protect the privacy and safety of young children.

Ahead of Britain’s planned Online Safety Bill, aimed at protecting social media users from harmful content such as child pornography, Ofcom asked TikTok and Snapchat how many suspected under-13s they had kicked off their platforms in a year.

According to the data seen by Reuters, TikTok told Ofcom that between April 2021 and April 2022, it had blocked an average of around 180,000 suspected underage accounts in Britain every month, or around 2 million in that 12-month period.

In the same period, Snapchat disclosed that it had removed approximately 60 accounts per month, or just over 700 in total.
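
As a quick arithmetic check (using the approximate figures cited above, not exact counts), the monthly averages line up with the stated yearly totals:

```python
# Rough check that the reported monthly averages match the stated yearly totals.
# Figures are the approximations the companies shared with Ofcom, as cited above.
tiktok_monthly = 180_000   # suspected underage accounts TikTok blocked per month (approx.)
snapchat_monthly = 60      # accounts Snapchat removed per month (approx.)
months = 12

print(f"TikTok over 12 months: ~{tiktok_monthly * months:,}")      # ~2,160,000 -> "around 2 million"
print(f"Snapchat over 12 months: ~{snapchat_monthly * months:,}")  # ~720 -> "just over 700"
```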

A Snap spokesperson told Reuters the figures misrepresented the scale of work the company did to keep under-13s off its platform. The spokesperson declined to provide additional context or to detail specific blocking measures the company has taken.

“We take these obligations seriously and every month in the UK we block and delete tens of thousands of attempts from underage users to create a Snapchat account,” the Snap spokesperson said.

Recent Ofcom research suggests both apps are similarly popular with underage users. Children are also more likely to set up their own private account on Snapchat, rather than use a parent’s, when compared to TikTok.

“It makes no sense that Snapchat is blocking a fraction of the number of children that TikTok is,” said a source within Snapchat, speaking on condition of anonymity.

Snapchat does block users from signing up with a date of birth that puts them under the age of 13. Reuters could not determine what protocols are in place to remove underage users once they have accessed the platform and the spokesperson did not spell these out.
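
As a rough illustration of how a date-of-birth gate at sign-up can work in principle (a generic sketch, not Snapchat's actual implementation), the check below uses the platforms' stated minimum age of 13:

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13  # minimum age stated by Snapchat, TikTok and Instagram

def is_old_enough(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Return True if the supplied date of birth corresponds to a user aged 13 or over."""
    today = today or date.today()
    # Whole years, subtracting one if this year's birthday has not yet passed.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= MINIMUM_AGE

# A sign-up form would reject this date of birth outright.
print(is_old_enough(date(2012, 6, 1), today=date(2023, 3, 1)))  # False -> block the sign-up
```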

Ofcom told Reuters that assessing the steps video-sharing platforms were taking to protect children online remained a primary area of focus, and that the regulator, which operates independently of the government, would report its findings later this year.

At present, social media companies are responsible for setting the age limits on their platforms. However, under the long-awaited Online Safety Bill, they will be required by law to uphold these limits, and demonstrate how they are doing it, for example through age-verification technology.

Companies that fail to uphold their terms of service face being fined up to 10 percent of their annual turnover.

In 2022, Ofcom’s research found 60 percent of children aged between eight and 11 had at least one social media account, often created by supplying a false date of birth. The regulator also found Snapchat was the most popular app for underage social media users.

Risks to young children

Social media poses serious risks to young children, child safety advocates say.

According to figures recently published by the NSPCC (National Society for the Prevention of Cruelty to Children), Snapchat accounted for 43 percent of cases in which social media was used to distribute indecent images of children.

Richard Collard, associate head of child safety online at the NSPCC, said it was “incredibly alarming” how few underage users Snapchat appeared to be removing.

Snapchat “must take much stronger action to ensure that young children are not using the platform, and older children are being kept safe from harm,” he said.

Britain, like the European Union and other countries, has been seeking ways to protect social media users, in particular children, from harmful content without damaging free speech.

Enforcing age restrictions is expected to be a key part of its Online Safety Bill, along with ensuring companies remove content that is illegal or prohibited by their terms of service.

A TikTok spokesperson said its figures spoke to the strength of the company’s efforts to remove suspected underage users.

“TikTok is strictly a 13+ platform and we have processes in place to enforce our minimum age requirements, both at the point of sign up and through the continuous proactive removal of suspected underage accounts from our platform,” they said.

© Thomson Reuters 2023


Metaverse, AI Next Digital Markets to Come Under Regulatory Scanner, Says EU Antitrust Chief

The metaverse, shared virtual worlds accessible via the Internet, is the next digital market to attract regulatory scrutiny, EU antitrust chief Margrethe Vestager said on Thursday.

The metaverse has come into sharper focus since Facebook changed its name to Meta Platforms in late 2021 to reflect its bet on the new sector as the successor to the mobile Internet.

That move has in turn triggered concerns about Meta’s possible dominance. Alphabet’s Google and Microsoft are also active in generative artificial intelligence, which the industry sees as the new bright spot.

“It’s already time for us to start asking what healthy competition would look like in the metaverse,” Vestager said at a conference organised by Keystone Strategy.

Vestager asked whether it would change the equation when there are competing digital realities and language AI models like ChatGPT.

“Do we need to do more on something new? And obviously we have started that work,” she said.

She said regulatory scrutiny of digital markets has been escalating worldwide in the last three years.

“And there’s a much wider political debate that digital markets need careful attention. I think all jurisdictions are moving forward in one form or another,” Vestager said.

She said some antitrust enforcers were more advanced than others.

“We are moving at different speeds. We will not get the same legal framework. And maybe that is not a bad thing. Because that will allow us to hone our toolkits in the process of mutual learning,” Vestager said.

© Thomson Reuters 2023


Meta Steps Up Chatbot Buzz, Announces Research Tool LLaMA as Rival to Microsoft’s ChatGPT, Google’s LaMDA

Meta Platforms introduced a research tool for building artificial intelligence-based chatbots and other products, seeking to create a buzz for its own technology in a field lately focused on internet rivals Google and Microsoft.

The tool, LLaMA, is Meta’s latest entry in the realm of large language models, which “have shown a lot of promise in generating text, having conversations, summarizing written material and more complicated tasks like solving math theorems or predicting protein structures,” Chief Executive Officer Mark Zuckerberg said in a Facebook post on Friday.

For now LLaMA isn’t in use in Meta’s products, which include social networks Facebook and Instagram, according to a spokesperson. The company plans to make the technology available to AI researchers.

“Meta is committed to this open model of research,” Zuckerberg wrote.

Large language models are massive AI systems that suck up enormous volumes of digital text — from news articles, social media posts or other internet sources — and use that written material to train software that predicts and generates content on its own when given a prompt or query. The models can be used for tasks like writing essays, composing tweets, generating chatbot conversations and suggesting computer programming code. 
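
For readers who want a concrete picture of that prompt-in, text-out loop, the sketch below uses a small, publicly available checkpoint via the Hugging Face transformers library (an illustrative example only; it does not use LLaMA, which Meta distributes separately to researchers):

```python
# Minimal prompt -> generated-text example with a small open checkpoint.
# Requires: pip install transformers torch  (the model is downloaded on first use)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model, used purely for illustration

prompt = "Large language models are trained on huge volumes of text so that they can"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```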

The technology has become popular, and controversial, in recent months as more companies have started to build them and introduce tests of products based on the models, spotlighting a new area of competition among tech giants. Microsoft is investing billions in OpenAI, the maker of GPT-3, the large language model that runs the ChatGPT chatbot. The software maker this month unveiled a test version of its Bing search engine running on OpenAI’s chat technology, which raised immediate concerns over its sometimes-inappropriate responses.

Alphabet‘s Google has a model called LaMDA, or Language Model for Dialogue Applications. The internet search and advertising leader is testing a chat-based, AI-powered search product called Bard, which also still has some glitches.

Meta previously launched a large language model called OPT-175B, but LLaMA is a newer and more advanced system. Another model Meta released late last year, Galactica, was quickly pulled back after researchers discovered it was routinely sharing biased or inaccurate information with people who used it.

Zuckerberg has made AI a top priority inside the company, often talking about its importance to improving Meta’s products on earnings conference calls and in interviews. While LLaMA is not being used in Meta products now, it’s possible that it will be in the future. Meta for now relies on AI for all kinds of functions, including content moderation and ranking material that appears in user feeds. 

Making the LLaMA model open-source allows outsiders to see more clearly how the system works, tweak it to their needs and collaborate on related projects. Last year, Big Science and Hugging Face released BLOOM, an open-source LLM that was intended to make this kind of technology more accessible.
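
Because BLOOM's weights and configuration are published openly, anyone can download a small variant and inspect how the model is put together, which is the kind of transparency described above. A brief sketch, assuming the transformers library and the publicly hosted bigscience/bloom-560m checkpoint:

```python
# Loading an openly released language model and inspecting its architecture.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("bigscience/bloom-560m")
print(config)  # layer count, hidden size, vocabulary size and other architectural details

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
print(sum(p.numel() for p in model.parameters()))  # roughly 560 million parameters
```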

© 2023 Bloomberg LP


China Directs Tech Companies Not to Offer Access to ChatGPT on Their Platforms: Report

In yet another clampdown on big tech companies, China has instructed them not to offer access to ChatGPT services on their platforms, either directly or via third parties, people with direct knowledge of the matter told Nikkei Asia.

Beijing’s clampdown on ChatGPT, the hugely popular AI-powered chatbot, comes as little surprise to many in China’s tech industry.

A Chinese state media outlet blasted the chatbot for spreading US government ‘misinformation’, amid growing alarm in Beijing over its uncensored replies to user queries, reported Nikkei Asia.

On Monday, state-owned media outlet China Daily said in a post on Weibo, China’s heavily censored equivalent of Twitter, that the chatbot “could provide a helping hand to the US government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests.”

Tencent Holdings and Ant Group, the fintech affiliate of Alibaba Group Holding, have been instructed not to offer access to ChatGPT services on their platforms.

The sources added that tech companies will also need to report to regulators before they launch their own ChatGPT-like services.

ChatGPT, developed by Microsoft-backed startup OpenAI, is not officially available in China but some internet users have been able to access it using a virtual private network (VPN), reported Nikkei Asia.

There have also been dozens of “mini programs” released by third-party developers on Tencent’s WeChat social media app that claim to offer services from ChatGPT.

Under regulatory pressure, Tencent has suspended several such third-party services regardless of whether they were connected to ChatGPT or were in fact copycats, people familiar with the matter told Nikkei.

This is not the first time that China has blocked foreign websites or applications. Beijing has banned dozens of prominent US websites and apps.

Between 2009 and 2010, it moved to block Google, Facebook, YouTube, and Twitter. Between 2018 and 2019, it instituted bans on Reddit and Wikipedia.

The latest move by regulators comes amid an official backlash against ChatGPT. Sources in the tech industry say they are not surprised by such a clampdown, reported Nikkei Asia.

“Our understanding from the beginning is that ChatGPT can never enter China due to issues with censorship, and China will need its own versions of ChatGPT,” said one executive from a leading tech company.

An executive from another leading Chinese tech player said that even without a direct warning his company would not use ChatGPT, reported Nikkei Asia.

“We have already been a target of the Chinese regulator [amid the tech industry crackdown in recent years], so even if there were no such ban, we would never take the initiative to add ChatGPT to our platforms because its responses are uncontrollable,” the person said.

“There will inevitably be some users who ask the chatbot politically sensitive questions, but the platform would be held accountable for the results.”

Since ChatGPT took the tech world by storm, Chinese tech giants, including Tencent, Alibaba and Baidu, have rushed to unveil their own plans for developing ChatGPT-like services.

These companies have been cautious about wording their announcements, however, with all of them stressing that their services are ChatGPT-like but do not integrate ChatGPT itself, reported Nikkei Asia.

China’s clampdown on ChatGPT comes as tensions between the world’s two largest economies continue to escalate.

US Secretary of State Antony Blinken said earlier this week that new information suggests Beijing could provide “lethal support” to Russia in the Ukraine war, triggering concerns over a new Cold War. The Chinese Foreign Ministry said the claims were false and accused Washington of spreading lies.


Around 95 Percent of WhatsApp Users in India Receive Pesky Calls, SMS From Online Businesses: Survey

Around 76 percent of respondents claimed to have noticed a rise in pesky calls or SMS messages based on their conversations with WhatsApp business accounts and their activity on Facebook or Instagram, online survey firm LocalCircles said on Wednesday.

According to the survey, conducted between February 1 and 20, 95 percent of the WhatsApp users surveyed by LocalCircles in India indicated that they get one or more pesky messages each day, and 41 percent claimed to get four or more such messages daily.

When contacted, a Meta spokesperson said that WhatsApp has built systems that allow it to more quickly suspend a business from sending messages when people provide feedback that they have had a low-quality experience.

“If a business receives excessive negative feedback we may limit or remove a business’ access to WhatsApp,” the Meta spokesperson said.

LocalCircles said that, to understand whether users were having such an experience after the privacy policy changes and to quantify the magnitude of such instances, it asked respondents whether they were seeing an increase in unsolicited commercial messages on WhatsApp based on their conversations with WhatsApp business accounts or their activity on Facebook or Instagram.

“Nearly three fourth or 76 percent of the 12,215 WhatsApp users who responded to this question stated that they are seeing an increase in pesky or unsolicited commercial messages based on their conversations with WhatsApp business accounts and their activity on Facebook/Instagram, the platforms owned by Meta in addition to WhatsApp,” the survey report said.

The survey claims to have received over 51,000 responses from citizens located in 351 districts of India.

LocalCircles said the finding indicates that the majority of WhatsApp users surveyed are using the tools available to them, such as blocking or archiving, yet the spam messages continue, suggesting that senders are switching numbers or that there are simply too many of them for the menace to stop.

“Majority or 73 percent out of 12,673 respondents to this question indicated that they exercise the option to block the numbers from where the unsolicited commercial messages come,” the report said.

The Meta spokesperson said that user choice is key and that WhatsApp provides users the option to block a business account.

“We would like to share that at WhatsApp, users’ choice is at the core of what we do. Messaging is the new way to get business done that is better than SMS, e-mail, and the phone, which have become overloaded and spammy. Our rule is that whether people want to talk to a business or not, the choice is that of our users,” the spokesperson said.

The spokesperson added that WhatsApp constantly works with businesses to ensure messages are helpful and expected.

“We allow businesses to only send a certain number of messages per day… We’ve recently added the ability for businesses to create a simple way for customers to opt out of receiving certain types of messages right within the chat,” the spokesperson said.
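
WhatsApp has not published the exact limits it applies, but the general idea of capping business-initiated messages per day can be sketched with a simple counter keyed by business and date. A hypothetical illustration (the try_send function and the DAILY_LIMIT value are invented for this sketch, not WhatsApp's implementation):

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 1000  # hypothetical cap on business-initiated messages per day

_sent_today = defaultdict(int)  # (business_id, date) -> messages sent so far today

def try_send(business_id: str, message: str) -> bool:
    """Allow the send only if the business is still under today's cap."""
    key = (business_id, date.today())
    if _sent_today[key] >= DAILY_LIMIT:
        return False  # over the daily limit: reject (or queue) the message
    _sent_today[key] += 1
    # ... actual delivery would happen here ...
    return True

print(try_send("acme-retail", "Your order has shipped"))  # True while under the cap
```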


Twitter Becomes First Social Media Platform to Allow Cannabis Ads in the US

Twitter on Wednesday became the first social media platform to allow cannabis companies to market their brands and products in the United States.

The company had earlier only allowed advertising for hemp-derived CBD topical products, while other social media platforms including Facebook, Instagram and TikTok follow a “no cannabis advertising policy” as pot remains illegal at the federal level.

However, more states in the United States are moving towards allowing the sale of recreational cannabis, with 21 already on board.

Twitter said it will permit cannabis companies to advertise as long as they hold the proper licenses, pass its approval process, target only jurisdictions where they are licensed to operate and, most importantly, do not target people below 21 years of age.
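
Those conditions amount to a straightforward eligibility check that an ad platform could apply before approving a cannabis campaign. A hypothetical sketch of that rule set (the CannabisAdvertiser fields and ad_is_allowed function are invented for illustration and are not Twitter's actual ads API):

```python
from dataclasses import dataclass, field

@dataclass
class CannabisAdvertiser:
    has_valid_license: bool
    approved_by_platform: bool
    licensed_jurisdictions: set = field(default_factory=set)

def ad_is_allowed(advertiser: CannabisAdvertiser, target_jurisdiction: str, audience_min_age: int) -> bool:
    """Apply the criteria described above: license, platform approval, jurisdiction and a 21+ audience."""
    return (
        advertiser.has_valid_license
        and advertiser.approved_by_platform
        and target_jurisdiction in advertiser.licensed_jurisdictions
        and audience_min_age >= 21
    )

# Example: licensed in California, targeting California with a 21-and-over audience.
advertiser = CannabisAdvertiser(True, True, {"CA"})
print(ad_is_allowed(advertiser, "CA", audience_min_age=21))  # True
```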

“This is a pretty massive win for legal cannabis marketers,” multistate cannabis and medical marijuana company Cresco Labs said.

Most pot companies were quick to embrace the changes suggested by Twitter. Trulieve Cannabis Corp already launched a multistate campaign on the platform on Wednesday.

“This change speaks to the growing acceptance of cannabis as a mainstream wellness category, and we are hopeful it will serve as a catalyst for other social media platforms to follow suit,” said Kate Lynch of Curaleaf, the biggest cannabis company operating in the United States.

After enjoying a sales surge during the early stages of the pandemic, the US cannabis industry showed signs of slowing in the face of regulatory and economic challenges, including falling prices and an illicit market poaching its customers.

Curaleaf recently reduced its payroll by 10 percent, which equated to less than 4 percent of its workforce, and exited the majority of its operations in three US states.

© Thomson Reuters 2023


Meta Oversight Board to Review More Cases, Expedite Decision-Making Process

Meta Platforms’ Oversight Board announced on Tuesday that it will review more types of content moderation cases and expedite some decisions, as it aims to expand its work.

The Oversight Board was created in late 2020 to review Facebook and Instagram‘s decisions on taking down or leaving up certain content and make rulings on whether to uphold or overturn the social media company’s actions. Since then, the board has published 35 case decisions, it said in a blog post.

The board, which has 22 members, said it will now begin publishing decisions on some cases on an expedited basis. Rulings could come as quickly as 48 hours after accepting a case, while others could take up to 30 days.

Standard decisions, in which the Oversight Board reviews Meta‘s content moderation actions in depth, can take up to 90 days.

Publishing more decisions and increasing the pace will “let us tackle more of the big challenges of content moderation, and respond more quickly in situations with urgent real-world consequences,” the board said in the blog post.

Unlike standard decisions, expedited cases will be reviewed by a panel of board members instead of the full board and will not consider public comments.

The board will also begin publishing summary decisions to analyse cases in which Meta changed its mind about whether to leave up or take down posts. Such cases could help Meta avoid similar mistakes in the future and may be useful for researchers and civil society, the board said.

It also said Tuesday it will add Kenji Yoshino, a constitutional law professor at New York University School of Law, as a new board member.

© Thomson Reuters 2023


Meta’s Chief Business Officer Marne Levine, Appointed in 2021, Leaves Company

Meta Platforms said on Monday Chief Business Officer Marne Levine was leaving the owner of Facebook after a 13-year stint.

Fifty-two-year-old Levine, appointed as the company’s first chief business officer in 2021, has served in various other executive positions at the social media company, including chief operating officer of Instagram.

The company said it expanded Nicola Mendelsohn’s role as head of the global business group and named Justin Osofsky as head of online sales, operations and partnerships, ahead of Levine’s departure.

Mendelsohn will handle the company’s relationships with top marketers and agencies for all of its apps, while Osofsky will be leading sales and operations focused on growing small- and medium-sized businesses on Meta‘s platforms.

The changes come at a time when Meta has promised to cut its costs for the year by $5 billion (nearly Rs. 41,320 crore), to a range of $89 billion (nearly Rs. 7,35,450 crore) to $95 billion (nearly Rs. 7,85,000 crore), calling 2023 the “Year of Efficiency”.

A few days earlier, it was reported that Meta had asked many of its managers and directors to transition to individual contributor jobs or leave the company.

The process is known internally as a “flattening,” people familiar with the matter said. Individual contributors aren’t in charge of others, and instead focus on tasks like coding, designing and research.

Back in November, Meta — owner of Facebook and Instagram — fired 13 percent of its workforce in its first major layoff. Meta Chief Executive Officer Mark Zuckerberg explained during the company’s earnings report that he still felt the organisation was too slow-moving and bloated. He called 2023 the “Year of Efficiency” and vowed to cut middle managers and underperforming projects.

In January, Meta announced the appointment of Vikas Purohit as the director of Meta’s Global Business Group in India to lead the strategy and delivery of the charter, focused on the country’s largest advertisers and agency partners.

 


Google to Spread Misinformation Prebunking in Europe; Initiative in the Works in India

After seeing promising results in Eastern Europe, Google will initiate a new campaign in Germany that aims to make people more resilient to the corrosive effects of online misinformation.

The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works.

It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies.

“There’s a real appetite for solutions,” said Beth Goldberg, head of research and development at Jigsaw, an incubator division of Google that studies emerging social challenges. “Using ads as a vehicle to counter a disinformation technique is pretty novel. And we’re excited about the results.”

While belief in falsehoods and conspiracy theories isn’t new, the speed and reach of the internet has given them a heightened power. When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence.

It’s a challenge with few easy solutions. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is another response, but it only drives misinformation elsewhere, while prompting cries of censorship and bias.

Prebunking videos, by contrast, are relatively cheap and easy to produce and can be seen by millions when placed on popular platforms. They also avoid the political challenge altogether by focusing not on the topics of false claims, which are often cultural lightning rods, but on the techniques that make viral misinformation so infectious.

Those techniques include fear-mongering, scapegoating, false comparisons, exaggeration and missing context. Whether the subject is COVID-19, mass shootings, immigration, climate change or elections, misleading claims often rely on one or more of these tricks to exploit emotions and short-circuit critical thinking.

Last fall, Google launched the largest test of the theory so far with a prebunking video campaign in Poland, the Czech Republic and Slovakia. The videos dissected different techniques seen in false claims about Ukrainian refugees. Many of those claims relied on alarming and unfounded stories about refugees committing crimes or taking jobs away from residents.

The videos were seen 38 million times on Facebook, TikTok, YouTube and Twitter — a number that equates to a majority of the population in the three nations. Researchers found that compared to people who hadn’t seen the videos, those who did watch were more likely to be able to identify misinformation techniques, and less likely to spread false claims to others.

The pilot project was the largest test of prebunking so far and adds to a growing consensus in support of the theory.

“This is a good news story in what has essentially been a bad news business when it comes to misinformation,” said Alex Mahadevan, director of MediaWise, a media literacy initiative of the Poynter Institute that has incorporated prebunking into its own programs in countries including Brazil, Spain, France and the US.

Mahadevan called the strategy a “pretty efficient way to address misinformation at scale, because you can reach a lot of people while at the same time address a wide range of misinformation.”

Google’s new campaign in Germany will include a focus on photos and videos, and the ease with which they can be presented as evidence of something false. One example: Last week, following the earthquake in Turkey, some social media users shared video of the massive explosion in Beirut in 2020, claiming it was actually footage of a nuclear explosion triggered by the earthquake. It was not the first time the 2020 explosion had been the subject of misinformation.

Google will announce its new German campaign Monday ahead of next week’s Munich Security Conference. The timing of the announcement, coming before that annual gathering of international security officials, reflects heightened concerns about the impact of misinformation among both tech companies and government officials.

Tech companies like prebunking because it avoids touchy topics that are easily politicized, said Sander van der Linden, a University of Cambridge professor considered a leading expert on the theory. Van der Linden worked with Google on its campaign and is now advising Meta, the owner of Facebook and Instagram, as well.

Meta has incorporated prebunking into many different media literacy and anti-misinformation campaigns in recent years, the company told The Associated Press in an emailed statement.

They include a 2021 program in the US that offered media literacy training about COVID-19 to Black, Latino and Asian American communities. Participants who took the training were later tested and found to be far more resistant to misleading COVID-19 claims.

Prebunking comes with its own challenges. The effects of the videos eventually wear off, requiring the use of periodic “booster” videos. Also, the videos must be crafted well enough to hold the viewer’s attention, and tailored for different languages, cultures and demographics. And like a vaccine, it’s not 100 percent effective for everyone.

Google found that its campaign in Eastern Europe varied from country to country. While the effect of the videos was highest in Poland, in Slovakia they had “little to no discernible effect,” researchers found. One possible explanation: The videos were dubbed into the Slovak language, and not created specifically for the local audience.

But together with traditional journalism, content moderation and other methods of combating misinformation, prebunking could help communities reach a kind of herd immunity when it comes to misinformation, limiting its spread and impact.

“You can think of misinformation as a virus. It spreads. It lingers. It can make people act in certain ways,” Van der Linden told the AP. “Some people develop symptoms, some do not. So: if it spreads and acts like a virus, then maybe we can figure out how to inoculate people.”

 

