Fri. Oct 25th, 2024

Africa: the deepfake is a powerful weapon in the war in Sudan

Although still rudimentary – voice cloning models cannot yet convey convincing Sudanese dialects – deepfakes are now used routinely on Sudan’s violent, albeit bloodless, alternative battlefield: social media.

In April 2024, an image of a building going up in flames went viral on Facebook in Sudan. The image was shared widely with captions claiming that the building was part of Al-Jazeera University in the town of Wad Madani and that the Sudanese army had bombed it. Many political leaders and public figures were among those who fell prey to it and shared it.

However, this was not an isolated case. As the conflict between the country’s military and the paramilitary Rapid Support Forces (RSF) continues, social media platforms have become an alternative battlefield where AI-generated deepfakes are used liberally to spread fake news about the opposing side and win sympathizers. This trend poses a serious threat in the Northeast African country, which is in dire need of a healthy information ecosystem.

AI was used to generate fake videos from the very early days of the ongoing war. In August 2023, the Daily Mail identified a fabricated video in which the US ambassador to Sudan appeared to say that America had a plan to reduce the influence of Islam in the country.

In October 2023, a BBC investigation uncovered a campaign using AI to impersonate Sudan’s former leader Omar al-Bashir, which had received hundreds of thousands of views on TikTok.

Furthermore, in September 2023, a number of tech-savvy Sudanese without clear political affiliations began using deepfake technologies to create satirical content. For example, a song originally published in support of the Sudanese armed forces was reworked into a deepfake showing RSF leader Mohamed Hamdan Dagalo (alias Hemedti) singing it alongside one of his senior officers. It was viewed thousands of times. While viewers did not miss the humorous intent of the modified song, in other cases such content turned into disinformation.

In March 2024, an AI-made recording purported to capture a secret meeting between leaders of the RSF militia and some leaders of the Freedom and Change coalition in which a plan for a military coup was discussed. Although the recording was not authentic, it was viewed by 230,000 people and shared the same month by hundreds of users, including well-known journalists and even national TV, before it was removed. Obai Alsadig, the creator of the recording, told me that the recording was “sarcastic with weak dialogue and fake” and that he “wanted to show the audience that making such fake recordings is not difficult.”

Supporters of the Sudanese Armed Forces, as part of the psychological warfare, launched a campaign to cast doubt on authentic recordings of Hemedti, falsely claiming that they were all created by AI and that he was dead, even though independent analysis had concluded with high confidence that the recordings were genuine.

Several attempts have been made to combat deepfake disinformation. Khartoum-based Beam Reports, the only Sudanese fact-checking organization verified by the International Fact-Checking Network, has been fact-checking content in Sudan since 2023. The organization has monitored deepfakes in the country and published analyses of them.

“Although deepfake technology is being used on social media for deceptive purposes, we cannot say that its use has increased significantly over the past six months, especially in the context of Sudan. However, it is notable that misleading audio content has also been generated,” Beam Reports explained to me via email. Marsad Beam, a division within Beam Reports charged with monitoring and fact-checking viral fake news, has been working on a report in which it verified the authenticity of this content.

During an online seminar organized by UNESCO in May, Beam Reports highlighted the challenges that the use of AI has brought in recent months.

“After a year of fighting online disinformation, Beam Reports highlighted that the lack of on-the-ground reporting is leading to an increase in mis/disinformation,” UNESCO said in a statement afterwards. “This is further amplified and complicated by the increasing use of generative artificial intelligence in the production and spread of disinformation and hate speech.”

People with advanced technical skills have also helped to fact-check viral content, Mohanad Elbalal, a Britain-based Sudanese activist who voluntarily checks content on social media, explained to me. “If I think a clip is a deepfake, I will reverse-search its frames to try to find a match, as these AI deepfakes are usually created from a template, so similar images are likely to appear. I also try to find anything recognizable in the clip, such as a fake news channel logo used in the deepfake.”
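Elbalal’s frame-matching approach works because deepfakes built from a shared template produce near-duplicate frames, and near-duplicates can be found by comparing perceptual hashes. The sketch below is a minimal, hypothetical illustration (not any tool mentioned in this article): it computes an average hash (aHash) over toy 8x8 grayscale “frames”. A real pipeline would first extract and downscale actual video frames, for example with OpenCV, before hashing them.

```python
def average_hash(pixels):
    """Compute a 64-bit aHash from an 8x8 grayscale grid (values 0-255):
    each bit records whether a pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Toy data: a "template" frame, a near-duplicate of it (slightly brightened,
# as re-encoding or re-posting might do), and an unrelated frame.
template = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
near_dup = [[min(255, v + 10) for v in row] for row in template]
unrelated = [[255 - (r * 8 + c) * 3 for c in range(8)] for r in range(8)]

d_dup = hamming(average_hash(template), average_hash(near_dup))
d_other = hamming(average_hash(template), average_hash(unrelated))
```

A small Hamming distance between hashes flags a likely match against the template, while unrelated imagery lands far apart; aHash deliberately ignores small brightness and compression changes, which is why re-used template footage still matches.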

“The biggest limitation in deepfake detection is the lack of access to reliable tools,” Shirin Anlen, a media technologist at Witness, told me via email. “While there is inspiring research taking place in the field, it is often out of reach of the general public and requires a high level of technical expertise. The publicly available detection tools can be difficult to understand due to a lack of transparency and clarity in the results, which leaves users in the dark – especially when these tools produce false positives, which they do quite often,” she explained.

“From a technical perspective, these tools are still highly dependent on the quality and diversity of the training data. This dependency poses challenges, especially when it comes to biases against specific types of manipulation, personas, or file quality. In our work we have noticed that file compression and resolution play a major role in detection accuracy,” she added.

The problem of AI-generated fake news in Sudan could further increase as the technology becomes more advanced.

“So far, many of the AI-generated deepfakes circulating in the country can be easily identified as fake due to their poor quality, which can be attributed to the lack of training data in Sudanese dialects,” Mohamed Sabry, a Sudanese AI researcher at Dublin City University, told me. But this will change in the future if malicious actors decide to invest more time and money in using advanced AI technology to produce their content, he added.

“In low-resource languages such as Sudanese dialects, voice cloning models are less effective and easier to identify. The robotic tone is quite clear even to inexperienced listeners,” Mohamed said. However, “many efforts are being made to address the challenges of low-resource language datasets. Furthermore, the impressive generalizability of deep neural networks when trained on near-Arabic dialects and domains is notable.”