Shafaq News/ More than 200 civil advocacy groups are calling on Big Tech to bolster their fight against artificial intelligence-fueled misinformation as billions of voters head to the polls this year in elections around the globe.
The coalition of activists wrote to the CEOs of Meta, Reddit, Google and X, and eight other tech executives Tuesday, urging them to adopt more aggressive policies that could stem the tide of dangerous political propaganda.
These extra steps are critical in 2024 given that more than 60 countries are holding national elections, the groups charged in their letter, a copy of which was obtained exclusively by The Technology 202.
“So many elections are happening around the world this year and social media platforms are one of the most essential ways that people typically connect with information,” said Nora Benavidez, senior counsel of the digital rights group Free Press. The companies need “to increase platform integrity measures for this moment.”
The organizations — which include the civil rights group Color of Change and the LGBTQ+ advocacy group GLAAD — also pushed the tech giants to beef up their policies on political ads, including prohibiting deepfakes and labeling any AI-generated content in them.
For months, advocates have been warning that the rise in AI-generated audio clips and videos is already sowing confusion in elections around the world. Politicians, for instance, have been able to dismiss potentially damning pieces of evidence — such as hotel trysts or recordings of them criticizing their opponents — as AI-generated fakes. Such risks could lead to real-world harm in politically volatile democracies, experts say.
Tech companies such as Meta, Google and Midjourney have insisted that they are working on systems to identify AI-generated content with a watermark. Just last week, Meta said it would expand its AI-labeling policy to apply to a wider range of video, audio and images.
But experts say tech companies are unlikely to catch all the misleading AI-generated content proliferating on their networks or fix the underlying algorithms that make some of those posts go viral in the first place.
“People … are not on high alert” when they consume social media in typical passive fashion, said Benavidez. “That’s one of the problems.”
“Social media has diminished our curiosity and increased that siloed echo chamber effect,” she added.
The groups also called on the tech companies to be more transparent about the data powering their AI models and lambasted them for weakening policies and systems meant to fight political misinformation over the last couple of years.
X, for instance, has reversed some of its rules against misinformation and allowed far-right extremists to return to the platform. Meta is offering users the option to opt out of the company’s fact-checking program, allowing debunked posts to gain more traction in news feeds. YouTube has reversed a policy banning videos falsely promoting the idea that the 2020 election was stolen from former president Donald Trump, while Meta started allowing such claims in political advertisements.
Meanwhile, mass layoffs at X, formerly Twitter, and other major tech companies have gutted teams dedicated to promoting accurate information online. And an aggressive conservative legal movement has led the federal government to stop warning tech companies about foreign disinformation campaigns on their social networks.
If tech companies don’t step up, dangerous propaganda on social media could lead to extremism or political violence, the activists argued.
“It’s not beyond the realm of possibility that we’re going to see even more persuasive misinformation in the form of deepfakes,” said Meta whistleblower Frances Haugen, whose group, Beyond the Screen, signed on to the letter. “Even if you are not willing to believe that violence can happen in the United States at scale … countries with far more fragile democracies … are just as vulnerable to all of these manipulations.”
(The Washington Post)
Only the headline was modified by Shafaq News Agency.