Deepfakes are changing what we believe, turning what we see into something we can no longer trust.
For 23-year-old Debra Nashipae, a Kenyan student and aspiring musician, one manipulated image changed everything. Her face was used in deepfake pornography, a disturbing form of digital abuse spreading at alarming speed. Advocates for digital safety stress that such violations are not merely technological misuse but acts of gender-based violence that strip victims of dignity and control over their identities. UN Women reports that 90–95% of all deepfakes online are non-consensual pornographic images, with around 90% targeting women, and that half of the world’s women and girls lack legal protection against this kind of digital violence.
A single clip can destroy a reputation, a sense of safety, even a life. Deepfake technology uses artificial intelligence to fabricate videos and audio so convincingly that truth and fiction blur, turning what once seemed like futuristic fiction into a daily threat to individuals and public trust alike.

Deepfakes and the Weaponization of Trust
Deepfakes use advanced AI models to replicate human expressions, movements, and speech with alarming accuracy. With just a few images or audio samples, anyone can be made to say or do anything. While some applications are harmless or creative, the rapid spread of malicious deepfakes has exposed a darker side of this technology: one where misinformation travels faster than verification, and damage is often done before the truth catches up.
Beyond the technology, the real story lies in the human cost. Victims of deepfakes have faced reputational ruin, financial loss, emotional distress, and public humiliation. From fake investment endorsements that scam ordinary people, to fabricated videos targeting politicians, journalists, and private citizens, deepfakes weaponize trust. Once a manipulated clip circulates online, disproving it becomes an uphill battle, especially in a digital environment driven by speed, outrage, and virality.

Deepfakes and the Erosion of Journalistic Trust
For journalism, the implications are profound. Media has long operated on the assumption that visual evidence strengthens credibility. Today, that assumption is under siege. Deepfakes threaten to erode confidence not only in content found online, but also in legitimate reporting.
When audiences begin to doubt everything they see and hear, truth itself becomes a casualty. This creates fertile ground for denial, propaganda, and what experts call the “liar’s dividend,” in which real wrongdoing is dismissed as fake.
The challenge is not only ethical but deeply systemic, particularly in the Global South. Across much of Africa, including Kenya, legal and regulatory frameworks have struggled to keep pace with the rapid evolution of AI technologies. While existing laws such as Kenya’s Data Protection Act (2019) and cybercrime statutes touch on privacy and digital harm, they offer limited protection against the creation and spread of synthetic media, leaving significant gaps in enforcement and accountability. Platforms are racing to develop detection systems, yet creators of deepfakes continuously refine their methods to bypass safeguards.
Meanwhile, journalists and media houses, often operating with limited resources, are being forced to rethink verification processes, source protection, and digital literacy in already fragile information ecosystems.
Addressing the threat of synthetic media requires a collective and coordinated approach:
- Governments must strengthen legal frameworks that criminalize the malicious creation and use of synthetic media, while updating cybercrime and electoral laws to reflect emerging digital threats.
- Technology companies should invest in robust detection systems and transparent content-labeling tools to help users distinguish between authentic and AI-generated content.
- Media organizations must double down on verification, transparency, and public education by strengthening fact-checking processes and training journalists to identify manipulated media.
- Audiences must adopt a more critical approach to consuming digital content: pausing before sharing, questioning sources, and verifying information before believing it.
Stories like Nashipae’s serve as a stark reminder that deepfakes are not merely a technological challenge but a human one. What is at stake is not only innovation or regulation, but the very foundations of democracy, journalism, and personal safety. In an age where reality itself can be manipulated, trust, dignity, and the credibility of information hang in the balance.
As artificial intelligence continues to evolve, society must demand accountability alongside innovation, choosing to protect the truth rather than allowing it to be quietly edited out of reality.