UN chief calls for new era of social media integrity in bid to stem misinformation

Alarm over the potential threat posed by the rapid development of generative artificial intelligence (AI) must not obscure the damage already being done by digital technologies that enable the spread of online hate speech, as well as mis- and disinformation, UN Secretary-General António Guterres said.

The policy brief argues that digital platforms should be integral players in upholding the accuracy, consistency and reliability of information shared by users.

“My hope is that it will provide a gold standard for guiding action to strengthen information integrity,” he wrote in the introduction.

Connecting and dividing

Digital platforms – which include social media channels, search engines and messaging apps – are connecting billions of people across the planet, with some three billion users of Facebook alone.

They have brought many benefits, from supporting communities in times of crisis and struggle, to helping to mobilize global movements for racial justice and gender equality. They are also used by the UN to engage people worldwide in pursuit of peace, dignity and human rights on a healthy planet.

Yet these same digital platforms are being misused to subvert science and spread disinformation and hate, fuelling conflict, threatening democracy and human rights, and undermining public health and climate action.

“These risks have further intensified because of rapid advancements in technology, such as generative artificial intelligence,” the UN chief said in the report, adding that “it has become clear that business as usual is not an option”.

Deceitful, dangerous and deadly

Although misinformation, disinformation and hate speech are related and overlap, they are distinct phenomena.

Hate speech refers to abusive or threatening language directed at a group or person on the basis of their race, colour, religion, ethnicity, nationality, or similar grounds.

The difference between mis- and disinformation is intent, though the distinction can be difficult to determine. In general, misinformation refers to the unintentional spread of inaccurate information, while disinformation is not only inaccurate but intended to deceive.

Regardless, they have all proved to be dangerous and even deadly.

“While traditional media remain an important source of news for most people in conflict areas, hatred spread on digital platforms has also sparked and fuelled violence,” the report said. “Some digital platforms have faced criticism of their role in conflicts, including the ongoing war in Ukraine.”

Adolescent girls use cellphones and tablets in the Za’atari camp for Syrian refugees (file). © UNICEF/UN051302/Herwig

Safer digital space

Given the threat, the Secretary-General has called for coordinated international action to make the digital space safer and more inclusive while also protecting human rights.

Constructive responses have largely been lacking. Some tech companies have done far too little to prevent their platforms from contributing to the spread of violence and hatred, while Governments have sometimes resorted to drastic measures – including internet shutdowns and bans – that lack any legal basis and infringe on human rights.

Code of Conduct

The report puts forward a framework for global action through a Code of Conduct for information integrity on digital platforms, which outlines potential guardrails while safeguarding the rights to freedom of expression and information.

It will build on principles that include respect for human rights, support for independent media, increased transparency, user empowerment and strengthened research and data access.

The Secretary-General also provided recommendations that could inform the Code of Conduct.

They include a call for Governments, tech companies and other stakeholders to refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.

Governments should also guarantee a free, viable, independent, and plural media landscape, with strong protections for journalists.

Meanwhile, digital platforms should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages.

All stakeholders should take urgent and immediate measures to ensure that all AI applications are safe, secure, responsible and ethical, and comply with human rights obligations, he added.

Advertisers and digital platforms should ensure that ads are not placed next to online mis- or disinformation or hate speech, and that ads containing disinformation are not promoted.

Our common future

The policy brief is the latest in a series of 11 reports based on proposals contained in Our Common Agenda, the Secretary-General’s 2021 report that outlines a vision for future global cooperation and multilateral action.

They are intended to inform discussions ahead of the SDG Summit in September, marking the midpoint towards achieving the Sustainable Development Goals by 2030, and the related Summit of the Future next year.
