News

AI: Voice cloning tech used to spread fake news in Sudan

What is voice cloning?

Voice cloning is a technology that allows anyone to create a synthetic copy of someone else’s voice. It uses artificial intelligence (AI) to analyze the vocal characteristics of a person and generate a digital replica that can be used to say anything.

Voice cloning has many potential applications, such as creating personalized voice assistants, dubbing movies, restoring speech for people who have lost their voice, and enhancing accessibility for people with disabilities.

However, voice cloning also poses serious ethical and security risks, as it can be used to impersonate, deceive, or manipulate others. For example, voice cloning can be used to create fake audio evidence, spread misinformation, or scam people.

How is voice cloning used in Sudan?

In Sudan, a country that has been suffering from civil war and political instability for decades, voice cloning has been used to create fake recordings of Omar al-Bashir, the former leader who was ousted by the military in 2019.

An anonymous account on TikTok, called The Voice of Sudan, has been posting what it claims are “leaked recordings” of Bashir since late August 2023. The channel has posted dozens of clips, but the voice is fake.

Bashir, who has been accused of orchestrating war crimes and genocide in Darfur, has not been seen in public for a year and is believed to be seriously ill. He denies the war crimes allegations.

The fake recordings appear to be a mixture of old clips from press conferences during coup attempts, news reports, and several “leaked recordings” attributed to Bashir. The posts often pretend to be taken from a meeting or phone conversation, and sound grainy as you might expect from a bad telephone line.

The content of the recordings varies from criticizing the current military leaders, such as General Abdel Fattah Burhan and General Mohamed Hamdan Dagalo (also known as Hemeti), to praising the Rapid Support Forces (RSF), a paramilitary group that has been accused of human rights violations and atrocities.

The recordings also express support for the Sudanese Professionals Association (SPA), a civil society group that led the protests against Bashir in 2019 and has been calling for a civilian-led transition to democracy.

The recordings have received hundreds of thousands of views on TikTok, adding online confusion to a country torn apart by violence and chaos.

How was the voice cloning exposed?

The voice cloning was exposed by a user on X, formerly Twitter, who recognized the very first of the Bashir recordings, posted in August 2023, which apparently features the former leader criticizing General Burhan.

The Bashir recording matched a Facebook Live broadcast aired two days earlier by a popular Sudanese political commentator, known as Al Insirafi. He is believed to live in the United States but has never shown his face on camera.

Al Insirafi often uses voice cloning software to create satirical videos mocking Sudanese politicians and celebrities. He has admitted that he used his own voice to create the fake Bashir recording, and said he was surprised that someone else copied it and posted it on TikTok as genuine.

He also said he had no idea who was behind The Voice of Sudan account or what their motives were. He speculated that they could be either supporters or opponents of Bashir, trying to influence public opinion or sow discord.

What are the implications of voice cloning in Sudan?

The use of voice cloning in Sudan raises serious concerns about the spread of fake news and misinformation in a country that is already facing multiple crises.

Sudan has been struggling with a fragile political transition since Bashir was overthrown by the military in April 2019, following months of mass protests. A power-sharing deal was signed between the military and civilian forces in August 2019, but it has been marred by delays, disputes, and violence.

In April 2023, fighting broke out between the military and the RSF militia group, which had been allied with the military during the coup against Bashir. The RSF is led by Hemeti, who is also the deputy head of the transitional council. The clashes have killed hundreds of people and displaced millions more.

The conflict has also worsened the humanitarian situation in Sudan, a country already facing food insecurity, inflation, the Covid-19 pandemic, floods, and locust swarms. More than 13 million people are in need of humanitarian assistance, according to the UN.

The use of voice cloning to impersonate Bashir could further undermine the trust and confidence of the Sudanese people in their leaders and institutions. It could also fuel tensions and violence among different factions and groups.

Moreover, voice cloning could pose a threat to the accountability and justice process for Bashir and his associates, who are facing trial for corruption and human rights violations in Sudan. Bashir is also wanted by the International Criminal Court (ICC) for genocide, war crimes, and crimes against humanity in Darfur.

Voice cloning could be used to create fake evidence or testimonies, or to interfere with the investigations and prosecutions. It could also be used to influence the public perception and opinion of Bashir and his legacy.

How can voice cloning be detected and prevented?

Voice cloning is not a new technology, but it has become more accessible and realistic in recent years, thanks to the advances in AI and machine learning. Several online platforms and apps offer voice cloning services, some for free or for a low cost.

However, voice cloning is not perfect, and there are ways to detect and prevent it. Some of the methods include:

  • Analyzing the audio quality and consistency of the voice. Voice cloning may produce glitches, distortions, or unnatural pauses or intonations that can reveal its artificial nature.
  • Comparing the voice with other recordings of the same person. Voice cloning may not capture all the nuances and variations of a person’s voice, such as their accent, tone, or emotion.
  • Verifying the source and context of the voice. Voice cloning may not match the time, place, or situation of the voice, or may contradict other information or evidence available.
  • Using digital forensics and watermarking tools. These tools can help identify and authenticate the origin and integrity of the voice, or embed hidden signals or marks that can indicate its authenticity.
  • Educating and raising awareness among the public. This can help people to be more critical and cautious when consuming or sharing audio content online, and to check for multiple and reliable sources before believing or spreading it.
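The first of these checks, analyzing the consistency of the audio itself, can be approximated at the signal level. As a hypothetical illustration (the function names and thresholds here are assumptions, not part of any real detection product; practical detectors rely on trained models rather than a single statistic), the sketch below measures how much the short-time energy of a signal varies: natural speech alternates between louder syllables and pauses, so an unnaturally flat energy profile can be one warning sign.

```python
# Naive signal-level sketch: flag suspiciously uniform short-time energy.
# Illustrative only; real deepfake-audio detectors use trained models.
import numpy as np

def short_time_energy(signal, frame_len=1024):
    """RMS energy of each non-overlapping frame of the signal."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def energy_variation(signal, frame_len=1024):
    """Coefficient of variation of frame energy. Natural speech varies
    a lot from frame to frame; a very flat profile is a warning sign."""
    e = short_time_energy(signal, frame_len)
    return e.std() / (e.mean() + 1e-12)

# Toy demo: a steady tone (unnaturally uniform) vs. the same tone
# with a pause inserted, as real speech would have.
t = np.linspace(0, 1, 16000, endpoint=False)
steady = np.sin(2 * np.pi * 220 * t)
gated = steady.copy()
gated[4000:6000] = 0.0  # insert a silent gap

print(energy_variation(steady) < energy_variation(gated))  # True
```

The steady tone yields an almost constant energy per frame, while the gated signal shows large frame-to-frame swings, so its coefficient of variation is much higher. A fuller pipeline would combine such low-level cues with the contextual checks listed above.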
