
EU Pressures Big Tech to Step Up Content Moderation

The European Union has recently urged major social media platforms such as Meta and TikTok to take stronger action against harmful and illegal content, especially in light of the recent violence in Israel and Palestine. The EU’s Digital Services Act (DSA), whose rules for the largest platforms took effect in August 2023, requires services with more than 45 million monthly active users in the EU to adopt stricter content monitoring or face fines of up to 6% of their global revenue.

Meta’s Response to the EU’s Warning

Meta, formerly known as Facebook, announced the creation of operations centers staffed with Arabic and Hebrew experts to monitor content related to the Israel-Hamas conflict in real time. The company says it has removed over 795,000 violating posts and has increased takedowns of content supporting dangerous organizations. Specific actions include prioritizing the removal of content that incites violence or endangers kidnap victims, restricting problematic hashtags, temporarily removing strikes against accounts, and cooperating with families’ requests to memorialize deceased users.

Meta also said it has deployed artificial intelligence tools to detect and remove hate speech, misinformation, and graphic violence. The company claims that its AI systems can proactively identify 97% of hate speech before users report it. However, Meta also acknowledged the limitations of its technology and the need for human review and oversight.


TikTok’s Response to the EU’s Warning

TikTok, the popular short-video app, said in a blog post that it has mobilized significant resources and personnel in response to recent events in Israel and Palestine. This includes establishing a dedicated command center to monitor emerging threats and rapidly take action against violative content. TikTok also detailed the rollout of automated detection systems, additional content moderators, and restrictions around livestreaming and hashtags. The company claims over 500,000 videos have been taken down for policy violations amid the recent violence.

TikTok also emphasized its commitment to transparency and accountability, saying that it will publish regular reports on its content moderation efforts and cooperate with independent audits. The company said it welcomes feedback from users, experts, and regulators on how to improve its policies and practices.

Other Big Tech Companies Under Scrutiny

The EU’s warning was not directed only at Meta and TikTok, but also at other big tech companies, including Alphabet’s YouTube, X (formerly Twitter), and Telegram. European Commissioner Thierry Breton sent letters to the CEOs of these companies, expressing his concerns about the surge of illegal content and disinformation circulating on their platforms following the terrorist attacks carried out by Hamas against Israel and Israel’s subsequent military response.

Breton urged these companies to take immediate and decisive action to curb the spread of violent and terror-related content as well as election disinformation. He also reminded them of their responsibilities under the DSA to swiftly moderate illegal material when notified. Breton said he expects the companies to report back to him on the measures they have taken by the end of October 2023.
