Radio and Artificial Intelligence (AI)
Introduction
AI has helped enhance the capabilities of African journalists, including radio broadcasters (9). A 2023 survey in Zambia found that 60% of journalists used AI tools, with chatbots*, image and video analysis, and automated content generation* being the most popular. But 73% of Zambian journalists using such tools had no formal training in AI. In contrast, a 2025 study of radio and other media practitioners in four African countries found that “the use of AI in Africa’s journalism landscape is still emerging, with many practitioners, particularly in radio, not yet utilising it to a significant extent.” (13)
AI-based tools present both opportunities and challenges. They can enhance audience engagement and reduce travel and payroll expenses, but adoption in Africa is limited by the fact that many languages are not present online, by poor digital literacy, and by basic issues such as lack of access to the Internet and poor access to electricity (7).
In Africa, as around the world, views on AI are divided. On the one hand, utopian visions claim that AI will revolutionize development and “end poverty forever.” On the other hand, there are warnings that AI will deepen inequality and exploitation.
Added to this uncertainty and polarized opinion is the fact that AI is evolving so rapidly that even experts struggle to predict its capacities six months ahead, while many of its far-reaching and hoped-for societal impacts remain speculative (10).
According to some African commentators (10), the most immediate risk of AI in Africa is its capacity to worsen the damages already associated with digital technologies, such as disinformation and surveillance. Social media has undermined political discourse, created “echo chambers,” and enabled text, audio, and video manipulation. AI tools make such manipulation faster, cheaper, better targeted, more persuasive, and more difficult to trace, a huge challenge to civic society and human rights. This is especially important because studies suggest that, by 2026, as much as 90% of online content may be synthetically generated (8).
A 2025 study (10) found that Western authors dominate African media coverage of AI, and that such coverage largely focuses on AI’s technical and economic aspects rather than its societal implications. The study’s authors advise that, to mitigate this Western-centric bias and AI “hype,” local journalism is needed, with a wide variety of local voices reporting on and analyzing AI. They also recommend that AI literacy be included in journalism training and professional development, equipping journalists to ask deeper questions about the technologies they use and report on.
It’s important to remember that the AI world is not just about competing tools from web giants (15). Many free, ethical, open source, and community-based AI tools are being developed. These are often more transparent, modular, and more respectful of privacy. Modular systems are easier to understand, maintain, and customize, and facilitate collaboration because they are easily broken down into their component parts.
Details
What is AI?
AI can be defined as a set of technologies and techniques used to complement traditional human attributes such as intelligence, ability to analyze, and other capabilities, and as a set of technologies broadly defined as “self-learning, adaptive systems.” AI processes vast amounts of data to identify patterns and generate content. Common examples of AI include chatbots like ChatGPT, virtual assistants such as Siri and Alexa, and self-driving cars.
There are various types of AI, including (1):
- Traditional AI, which mimics human intelligence through machine learning*, deep learning*, and natural language processing.
- Generative AI uses large language models (2) to generate new content such as images, text, and code.
- Narrow AI, also called specialized AI, performs specific tasks, for example, the AI used in self-driving cars.
- General AI performs tasks it was not specifically trained for; ChatGPT generating novel text outputs is sometimes cited as an example.
- Predictive AI forecasts future events based on historical trends.
African radio and AI
AI is changing African radio broadcasting, from experimental use to stations fully powered by AI.
This influx of AI tools is enhancing content creation, automating production, and enabling better engagement with local and multilingual communities.
Recent developments
- Fully AI-powered stations: Africa's first fully AI-powered radio station, Tingo AI Radio 102.5 FM, is operating in Lagos, Nigeria without human broadcasters, employing AI for DJing, news, and production.
- AI personalities: Cape Town, South Africa's Heart FM launched "Jay I," an AI-generated presenter, in order to show the potential of AI in media.
- Multilingualism and local content: AI tools are being used to improve transcription and translation for African languages such as Swahili, Hausa, Luganda, and Bambara, enabling better engagement with rural communities.
- AI for social impact and safety: The Burkina Faso government is collaborating with the national broadcaster to use AI to improve service delivery. Other projects in the country are using AI tools (for example, Dubawa Audio, an AI-powered, web-based platform), to combat misinformation.
- Production efficiency: Broadcasters are using AI to automate video editing and sound levelling, write news scripts and weather forecasts, and manage playlists, significantly reducing production times.
- Voice cloning: Broadcasters are using AI tools to conduct interviews and read scripts.
- Audience research: AI-powered analytical tools are helping broadcasters better understand their listeners.
Regulation of AI: The potential harms of AI have raised debate about how to regulate AI in Africa. The majority of African nations appear to be advocating for strong AI guidelines. Also, there are a growing number of accredited fact-checking organizations. In South Africa, the Real411 platform allows voters to report concerns about online political content, including the possible use of AI (6).
How can AI improve radio? (4, 17)
- AI can help with program flow, managing regular tasks like scheduling, voice tracking, weather and sports updates, and administrative tasks.
- AI can help broadcasters understand their audience more deeply, leading to expanded interactivity, more personalized content, and, by analyzing data, helping to uncover hidden stories.
- AI can empower listeners, offering real-time interaction and personalized experiences, and helping all listeners feel included. Tools such as AI-powered live transcription for the hearing impaired, voice synthesis for the visually impaired, and automatic translation for minority languages are making radio more accessible to everyone. Tools for those with sensory disabilities include Microsoft’s Seeing AI Talking Camera for the Blind, Google Lookout, and Samsung’s Galaxy Buds2 earphones (22).
- AI can help improve the quality of content by assisting with fact-checking and verifying sources. It can also reveal and mine the richness of archives, transforming often-dormant memory into a rich and active resource.
- AI can power local storytelling by tailoring messages for specific communities and translating content into indigenous languages.
- AI can greatly expand your ability to create sound for programming. AI tools can compose a jingle, generate music to set a mood, or mix voices with background sound.
Challenges
While AI has big potential as a force for good, there are considerable challenges:
AI tools can amplify disinformation and hate speech (12). By personalizing content, AI-driven algorithms can create "echo chambers," deepening polarization and ensuring that hateful or divisive content reaches susceptible audiences more likely to act on it. AI enables the rapid, low-cost creation, sophisticated manipulation, targeting, and personalization of disinformation, including deepfakes and other extremely convincing false content. Deepfakes disproportionately target women and girls, and cloned voices have been used to spread fake news, for example, related to armed conflicts in the Democratic Republic of Congo, during elections, and during armed government takeovers. So, while AI can help fight disinformation by making it quicker to verify data, cross-check sources, and warn of dubious information (15), it can also generate deepfakes and made-up stories.
AI can “invent” facts about historical events (20). For example, AI models have produced misleading or outright false narratives about the Holocaust. Without AI and media literacy, users may not know how to recognize unreliable data or verify AI-produced content.
Trustworthiness and transparency of models (21): It is often unclear how AI tools arrive at their conclusions; the models may not be transparent. There is also the issue of factual mistakes. A 2025 study (5) of public media networks in 18 countries (none in Africa), coordinated by the European Broadcasting Union (EBU), found that 45% of answers generated by four popular AI assistants (ChatGPT, Copilot, Gemini, and Perplexity) had at least one significant issue; that 31% of responses showed serious sourcing problems, such as missing, misleading, or incorrect attributions; and that 20% contained major accuracy issues, including hallucinated* details and outdated information.
Bias: AIs learn from the data they are trained on. If data is biased, for example, by gender, race, disability, or culture, AI results will be biased, and can reproduce and amplify stereotypes. Thus, human editorial input is essential. Recent research (25) found biases in GPT-2 and ChatGPT, as well as in Meta’s Llama 2. All AI models studied produced cultural stereotypes and sexist, misogynistic content.
Data availability and ownership: It’s important to clearly stipulate under which circumstances data can be made available and to whom. It’s also important to respect ownership and provide clear promises of confidentiality for certain types of data, and to obtain explicit permission before using personal data for AI-driven personalization. Cyber-attacks can cause security breaches with horrific consequences, but techniques such as federated learning * can reduce risks by enabling AI models to be trained on multiple, local, decentralized devices (like smartphones, servers, or hospitals) without sending the actual data to a central server.
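To illustrate the federated learning idea described above, here is a minimal toy sketch (a FedAvg-style loop, invented for illustration and not tied to any real framework): three simulated "devices" each hold private data and share only model weights, never the data itself.

```python
import random

def local_update(weight, data, lr=0.1):
    """One pass of gradient descent for a simple model y = w*x on a
    device's local data. The data never leaves the device; only the
    updated weight is shared with the coordinating server."""
    w = weight
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(device_datasets, rounds=20):
    """The server broadcasts a global weight, collects locally updated
    weights from each device, and averages them. Only weights travel."""
    global_w = 0.0
    for _ in range(rounds):
        local_weights = [local_update(global_w, data) for data in device_datasets]
        global_w = sum(local_weights) / len(local_weights)
    return global_w

# Three "devices", each with private samples of the relation y = 3x plus noise
random.seed(0)
devices = [[(x, 3 * x + random.uniform(-0.1, 0.1)) for x in [1, 2, 3]]
           for _ in range(3)]
print(federated_average(devices))  # learns a weight close to 3.0
```

The key point for data privacy is visible in the code: `federated_average` never touches `devices` directly except to hand each dataset to its own `local_update`; only the numeric weights are aggregated centrally.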
Limited know-how: AI can be engaged for many types of problems, but there are relatively few experts who know how to apply AI ethically. There have been calls to involve sociologists and policy-makers in discussions, rather than leaving decisions about deployment to “technologists”—computer engineers and data scientists.
Equitable use of AI: AI research requires a massive amount of computing power, data processing, and energy. Unequal access to these resources deepens the divide between the few companies and universities that do have resources, and the rest of the world that does not.
Environmental impact: AI training and operation consumes a massive amount of electricity, causing a large carbon footprint. Cooling data centres depletes local water supplies, and pollution includes e-waste from hardware (cobalt and rare earths). Solutions include improving data centre energy efficiency, using renewable energy, expanding e-waste recycling, responsible material sourcing, and greater transparency in reporting AI's footprint (7).
Infrastructure and cost: Many media organizations in Africa face financial difficulties and lack the infrastructure to fully adopt AI.
Accountability and responsibility for using AI models: This refers to individuals’ and organizations’ ethical and legal obligation to take responsibility for the outcomes, behaviours, and impacts of AI systems. This means that human oversight remains central, that AI decisions must be explainable and transparent, and that responsibility for harms is clearly assigned to a particular party.
Policy and regulatory frameworks for AI remain at a formative stage in Africa. Without good governance, digital journalism could be more dangerous than offline journalism, especially with the current heavy investment in media surveillance in Africa (9). One reason it’s important to regulate AI is to prevent infringements of societal freedoms and violations of journalists’ rights.
Ensuring that AI works for Africans: For AI to work well in Africa, it must incorporate indigenous knowledge systems (9), be tailored to people’s needs and values, and involve local tech innovators in product development. Otherwise, it will be an alien, western concept.
Impact on culture and creativity (25): AI can enrich cultural and creative work, but can also concentrate cultural content, data, markets, and income in the hands of a few, reducing the diversity and pluralism of languages, media, cultural expression, participation, and equality. A recent UNESCO report on AI and culture maintains that AI must prioritize cultural rights, preserve diversity, and uphold human creative agency against the risks of “algorithmic homogenization and platform dominance.”
Because AI impacts society as a whole, radio’s coverage of AI must report on issues such as the power dynamics between companies, governments, and citizens, and over computer chips, data, and algorithms (25). And while AI can serve the public interest, journalists also need the insight and expertise to alert their audiences to unequal benefits and human rights violations.
Ethical use of AI in radio broadcasting
The ethical use of AI in radio broadcasting is centred on preserving the unique human connection and public trust that radio has with its listeners. Broadcasting bodies such as the European Broadcasting Union and the Canadian Broadcasting Corporation have created frameworks to ensure that AI serves as an efficient tool rather than a substitute for human judgment.
Keys to ethical use of AI
Transparency and disclosure (19)
Ethical standards require stations to:
- Label AI content: Tell audiences clearly when they are listening to an AI-generated voice.
- Disclose AI assistance: Tell audiences when AI was significantly used in scripting or newsgathering.
- Maintain "no surprises": Ensure audiences are never misled about whether content is human- or machine-produced (3).
Greater transparency contributes to more peaceful, just, democratic, and inclusive societies (27). It enables public scrutiny that checks corruption and discrimination, and helps identify and prevent human rights abuses.
Human accountability and "human-in-the-loop"
AI should supplement, not replace, human actions.
- Fact-checking: Broadcasters must verify all AI outputs for accuracy to prevent spreading misinformation or “hallucinations.”
- Editorial oversight and responsibility: All AI-produced scripts and story outlines are subject to final human review before airing. The human editor or station is accountable for content.
Protection of personality and intellectual property
"Voice cloning" creates ethical challenges (15). Can a cloned voice be used without someone’s consent? Should a station pay for it? Who owns an AI voice?
- Image and likeness: Protect media figures from deepfakes used for fraud or misinformation.
- Voice ownership: Write contracts for using and compensating a broadcaster’s cloned voice.
- Respect copyright: Don’t use AI tools that were trained on stolen content. Make sure that broadcasters are compensated when AI uses their original reporting.
- Beneficial uses of voice cloning include making audiobooks, personalized virtual assistants, voiceovers in various languages, and restoring speech in medical conditions.
- Malicious uses of voice cloning include impersonating relatives in distress or CEOs to approve fraudulent transactions, or creating fake urgency to deceive victims into sending money or sensitive data.
Mitigating bias
AI systems can inherit biases from their training data, affecting music rotation, news reporting, ad targeting, etc.
- Regular audits: Conduct systematic reviews to ensure AI tools do not discriminate based on, among other things, race, gender, or location.
- Diverse datasets: Use data that reflects your audience to ensure AI tools work for the communities served by the station.
Data privacy and security
Where does data go? Is it stored? Who has access to confidential content? Ethical use requires rigorous control of data flows (15).
- Note the UN General Assembly’s 2022 resolution on “The Right to Privacy in the Digital Age.” The resolution addresses privacy and security not only for those whose data might be compromised, but for journalists:
“… in the digital age, encryption and anonymity tools have become vital for many journalists and media workers to freely exercise their work and their enjoyment of human rights, in particular their rights to freedom of expression and to privacy, including to secure their communications and to protect the confidentiality of their sources, and calls upon States not to interfere with the use by journalists and media workers of such technologies and to ensure that any restrictions thereon comply with the obligations of States under international human rights law …”
Combatting hate speech and disinformation
If you encounter online content that you suspect to be generated by AI, use lateral reading * skills to fact-check information (22). Also:
- Check for inconsistencies: Note any unusual details or inconsistencies in images, text, and video.
- Analyze writing: Pay attention to writing style. AI content might lack “a human touch,” and be overly formal, repetitive, or lacking nuanced expression.
- Examine image quality: Look for unnatural lighting or shadow, artifacts (unwanted, unintended, or accidental visual distortions, anomalies, or errors) or blurriness, particularly around edges.
- Examine metadata: If possible, inspect the metadata of digital content. It may provide clues about its origin and whether it has been artificially created.
- Stay informed: Keep up to date with advancements in AI technology and customary characteristics of AI-generated content to more easily recognize it.
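The metadata check above can be tried on audio files using nothing more than Python's standard library. This sketch (filenames and values are illustrative) writes a small WAV file and reads back its technical parameters; mismatches between a clip's claimed provenance and parameters like sample rate or bit depth can hint that audio was re-encoded or generated.

```python
import wave

def describe_wav(path):
    """Read basic technical metadata from a WAV file."""
    with wave.open(path, "rb") as w:
        frames = w.getnframes()
        rate = w.getframerate()
        return {
            "channels": w.getnchannels(),
            "sample_rate_hz": rate,
            "bit_depth": w.getsampwidth() * 8,   # bytes per sample -> bits
            "duration_s": round(frames / rate, 2),
        }

# Write a one-second silent mono clip so the example is self-contained
with wave.open("clip.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 44100)

print(describe_wav("clip.wav"))
# {'channels': 1, 'sample_rate_hz': 44100, 'bit_depth': 16, 'duration_s': 1.0}
```

Dedicated tools go much further (embedded timestamps, device tags, edit history), but even these basic fields are a starting point for verification.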
Remember that AI is great for getting a deeper understanding of a topic. But don’t use it as your only source of information. Remember also that it’s common for AI tools to invent the results you ask for, often adding links, facts, or biographical information that is not true (22).
There are various solutions to hate speech and disinformation:
- Hybrid moderation models *
- Algorithmic transparency *
- Media and information literacy *
- Using AI tools that were trained on data that reflect the linguistic and cultural diversity of Africa, including local languages, cultural contexts and diverse perspectives (12)
- Strong legal frameworks
- Counter-narratives (that challenge disinformation and/or hate speech)
- AI-powered fact-checking tools like the Dubawa WhatsApp chatbot and MyAIFactChecker
- Audio transcription and analysis tools like Dubawa Audio
- Localized AI detection systems for African languages *: Localized AI detection systems are mainly developed through open-source initiatives like Masakhane and NTeALan, and focus on overcoming data scarcity for languages such as isiZulu, Hausa, Yorùbá, and Amharic. They create models that understand cultural nuances, local phrases and proverbs (4).
- Collaborative Open Source Intelligence (OSINT) projects are collective, community initiatives that collect, analyze, and verify publicly available information to uncover insights, investigate events, or monitor trends. They include Bellingcat, Forensic Architecture, BBC Africa Eye, Oryx, and Global Fishing Watch. Africa-based projects include The African Network of Centres for Investigative Reporting, The African Academy for Open Source Investigation, The African Digital Democracy Observatory, and PesaCheck.
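The first solution in the list above, hybrid moderation, can be sketched in a few lines. Everything here is invented for illustration (the word list, thresholds, and labels); real systems use trained classifiers rather than keyword matching, but the routing logic is the same: clear-cut cases are handled automatically, ambiguous ones go to human moderators.

```python
# Illustrative flag list only; a real deployment would use a trained model
FLAG_PHRASES = {"scam", "fake cure"}

def auto_score(message):
    """Crude automated risk score: fraction of flagged phrases present."""
    hits = sum(1 for phrase in FLAG_PHRASES if phrase in message.lower())
    return hits / len(FLAG_PHRASES)

def route(message, block_at=0.9, review_at=0.4):
    """Hybrid routing: automation decides the obvious cases,
    humans review the borderline ones."""
    score = auto_score(message)
    if score >= block_at:
        return "blocked"        # high confidence: handled automatically
    if score >= review_at:
        return "human review"   # ambiguous: escalate to a moderator
    return "published"

print(route("Join our community gardening show"))  # published
print(route("This fake cure is not a scam!"))      # blocked
print(route("Beware of this scam"))                # human review
```

The design point is the middle band: rather than forcing the algorithm to decide everything, uncertain content is deliberately routed to people, combining AI's speed with human judgment.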
AI Training (23)
Adopting AI is about understanding what AI can do and can't do, and how to integrate it intelligently and ethically. Radio stations can offer awareness-raising sessions and practical training courses. The better a radio team understands AI tools, the more effectively they will use them.
Here are some recommendations for training:
- Adopt a tiered training approach: Structure AI capacity-building into levels such as awareness, operational use, and governance.
- Integrate AI literacy with Media and Information Literacy (MIL): MIL is a foundation for AI literacy, and is shifting to include it, educating AI users on how to critically assess, use, and understand the ethical implications of AI content.
- Human agency and autonomy: AI challenges human control over information and decision-making. MIL acts as a defence to maintain user empowerment and agency.
- Addressing societal risks: Training must address risks such as disinformation, bias, data privacy breaches, and threats to electoral fairness.
- Ethical use and reliability of sources: Users need training to identify AI-generated content, question the truth of AI-produced evidence, and address the "black box" (non-transparent) nature of many AI systems.
- Lifelong learning opportunities: While highlighting risks, it’s important to recognize AI’s potential for creativity, education, and enhanced access to information.
AI is a tool, not a voice
- Broadcasters can use AI as a tool to locate informational blind spots in places or contexts where reporting is impossible because of censorship, conflict, or lack of access. But decisions about the angle, tone, narrative, and reporting should remain human. AI suggests, but never decides. The journalist is the author. AI is a tool, not a voice.
Practical checklist: Before publishing AI-assisted content
Before airing or publishing AI-assisted content, broadcasters should ask:
- Was the information independently fact-checked? AI outputs must always be verified to prevent misinformation or “hallucinations.”
- Was AI use clearly disclosed? Audiences should be informed when AI-generated voices, scripts, or other tools were significantly used.
- If voice cloning was used, was prior written consent obtained and compensation agreed? Farm Radio International recommends that no voice cloning be used without written consent and clear compensation agreements.
- Was the content reviewed for bias or discrimination? AI systems can inherit biases from their training data and must be carefully reviewed.
- Is a human editor clearly accountable for the final version? AI should support editorial work, but responsibility always remains with the broadcaster.
AI is a tool to assist human creativity and efficiency — it must never replace human judgment or accountability.
Consider local, low-tech AI (15)
AI accessible to all radio stations
Not every station can access powerful servers, high-speed connections, or premium subscriptions to large AI platforms. But there are local AI solutions that run on more modest machines or use open-source tools: localized, lightweight AI adapted to the realities of each region.
Several of these kinds of tools are emerging to support radio broadcasting and automation in Africa, focusing on low-cost infrastructure, local language support, and automation.
Key tools include:
- N-ATLAS (Nigeria): An open-source, multilingual, and multimodal Large Language Model that supports Nigerian-accented English, Yoruba, Hausa, and Igbo. N-ATLAS can transcribe, summarize, and generate content for radio.
- Sunbird AI (Uganda): A grassroots initiative that focuses on voice-enabled AI and speech datasets to improve speech recognition.
- Masakhane: See above.
- Pyrate (OpenBroadcaster): A lightweight radio automation tool for Raspberry Pi, Pyrate can manage playlists and schedules, and automate audio content, and is well-suited for community radio stations with limited resources.
- Adthos: Provides AI production tools to create ads, news, and voiceovers. A mix of open-source and proprietary models for rapid, low-cost creation of content.
- OpenAirInterface: An open-source platform that can be used to build Radio Access Networks for broadcasting.
- Mozilla Common Voice: A collaborative project for collecting and building open-source speech datasets for underrepresented languages, including Luganda and other African languages.
- NaijaVoices: A dataset for Igbo, Hausa, and Yoruba, designed to improve speech recognition in local languages.
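To illustrate the kind of task lightweight automation tools like Pyrate handle, here is a minimal, hypothetical playlist scheduler. The track names and the function are invented for illustration and are not any real tool's API; the core job is simply accumulating durations into start times.

```python
import datetime

# Hypothetical morning line-up: (title, duration in seconds)
PLAYLIST = [
    ("Morning greetings", 120),
    ("Market prices segment", 300),
    ("Local music block", 600),
    ("Weather update", 90),
]

def build_schedule(start, tracks):
    """Assign each track a start time by accumulating durations --
    the core bookkeeping a radio automation tool performs."""
    schedule = []
    current = start
    for title, seconds in tracks:
        schedule.append((current.strftime("%H:%M:%S"), title))
        current += datetime.timedelta(seconds=seconds)
    return schedule

for time_str, title in build_schedule(datetime.datetime(2025, 1, 1, 6, 0, 0), PLAYLIST):
    print(time_str, title)
# 06:00:00 Morning greetings
# 06:02:00 Market prices segment
# 06:07:00 Local music block
# 06:17:00 Weather update
```

Real automation systems layer much more on top (crossfades, live-break triggers, logging), but even a sketch like this runs comfortably on a Raspberry Pi-class machine, which is the point of low-tech AI and automation for community stations.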
Broader considerations (19)
- Concentration in the AI sector will have profound implications, and global competition authorities are weighing whether new regulations are needed.
- Some journalism organizations have signed licensing deals with large AI firms, and are strongly emphasizing the need for compensation and creating a market for licensing content.
- AI tools that “scrape” content from the Internet threaten the economic viability of all media, necessitating frameworks for fair compensation of publishers and journalists.
For a deeper understanding of AI and to explore its capacities, risks, and some suggested solutions for these risks at your own station, try working through UNESCO’s 13 Ideas for Celebrating 13 February, which offers summaries of important issues and capacities, further in-depth references, and questions and suggestions for radio teams to explore. You can also work through UNESCO’s Journey through the MILtiverse: media and information literacy toolkit for youth organizations, which is definitely not just for youth!
Ethical AI should follow these principles:
- Transparency and explainability: Developers and users should understand how AI systems work and why they make particular decisions.
- Accountability: There must be clear assignment of responsibility for AI's actions, including procedures for monitoring and compensating harm.
- Safety and security: AI systems must be secure from attacks and designed to prevent unintended harm.
- Protection and privacy: Intellectual property protection, clear ownership of data, and transparency.
- Caution: Thoughtful use of AI-generated audio (music, voice cloning*, and deepfake audio).
*Farm Radio International recommends that no voice cloning be used without prior written consent and clear compensation agreements with the individual concerned.
- Education: Broadcasters should have training and ongoing professional development related to AI, including the ethical applications of AI.
- Assessment of compliance: Developers and users of AI tools must assess how the tools comply with national and international law in order to prevent legal liability and protect the integrity of content, as well as to protect radio station infrastructure and content from malicious attack or inadvertent AI malfunction.
Developing and using AI ethically involves being guided by principles like fairness, transparency, accountability, privacy, and safety. These will ensure that AI benefits humanity while mitigating risks like bias, job loss, and manipulation. For more on the ethical use of AI, read the United Nations Educational, Scientific, and Cultural Organization’s Recommendation on the Ethics of Artificial Intelligence, which states that everyone involved with AI—developers, users, and policymakers—share ethical responsibility to ensure that AI enhances rather than harms society.
Definitions
Algorithmic transparency: The principle that the features influencing decisions made by algorithms should be visible, or transparent, to everyone who uses, regulates, or is affected by the systems that use the algorithms.
Artificial intelligence: A wide variety of technologies and techniques to complement human characteristics such as intelligence and ability to analyze, technologies which can be defined as “self-learning, adaptive systems.” AI encompasses vision, perception, speech and dialogue, decisions and planning, problem-solving, robotics, and other applications that enable self-learning.
Automatic content generation: The use of AI, machine learning, and software to produce written, visual, or audio content with minimal human input, allowing for high-volume, consistent, and fast content creation.
Biases in AI: Systematic errors that lead to unfair outcomes for certain individuals or groups, for example, women and girls, Indigenous people, or poorer people. Biases can arise from various stages of AI development, including design, collection of training data, and usage.
Chatbot: Computer programs designed to simulate human conversation through text or voice interaction. Used extensively in customer service, virtual assistance, and information retrieval.
Collaborative Open Source Intelligence initiatives (OSINT): Projects where researchers, investigators, and volunteers work together to gather, analyze, and verify publicly available information to achieve a shared objective. Unlike traditional intelligence gathering, OSINT relies on crowdsourcing and community expertise to examine complex issues, from missing persons to human rights abuse.
Deep learning: A type of machine learning that teaches computers to process information in a way inspired by the human brain.
Deepfakes: AI-generated or manipulated video, audio, or images used to mislead or deceive by creating realistic but false representations of people or events.
Disinformation: False information that is intended to mislead; deliberate misinformation.
Federated learning: A method for training AI models on multiple decentralized devices (such as smartphones, servers, or hospitals) without needing to send the actual data to a central server, so no-one sees your data.
Gender-based harmful content: Any form of text, image, video, or audio that promotes, perpetuates, or encourages violence, discrimination, harassment, or hatred against individuals based on their gender, gender identity, or sexual orientation.
Generative AI: A type of AI that creates new, original content (for example, text, images, code, music, video) by learning patterns from massive datasets. GAI understands prompts and generates original outputs that imitate human creativity, enabling applications from chatbots to art generation.
Hallucination: In the context of AI, a hallucination is a confident and plausible-sounding response from an AI model that is actually incorrect, fabricated, or nonsensical, often due to limitations in training, data gaps, or a tendency to prioritize coherence over accuracy.
Hybrid moderation models: Content management systems that combine AI with human oversight to review user-generated content, live chat, and audio-visual material.
Lateral reading: Investigating who's behind an unfamiliar online source by leaving the webpage and opening a new tab to see what trusted websites say about the unknown source.
Machine learning: AI that enables computers to learn from data and improve at tasks without being programmed at every step. Instead of obeying strict rules, computers analyze data patterns to generate predictions, decisions, or recommendations, becoming more accurate as they continue to process information.
Media and information literacy (MIL): Knowledge, skills, and attitudes that enable individuals to effectively access, analyze, evaluate, create, and share information across media platforms. MIL combines information literacy (managing information) and media literacy (understanding media content) to cultivate critical thinking, responsive use of technology in online communities, and ethical engagement.
Misinformation: False or misleading content that is created or shared without the intent to deceive. Misinformation often originates as disinformation that is then passed on unknowingly.
References for further reading
- Blue Prism, 2025. Blog: Debunking AI Myths: Common Misconceptions About AI. https://www.blueprism.com/resources/blog/ai-myths-misconceptions/
- Blue Prism, undated. Large Language Models: What Are Large Language Models (LLMs)? https://www.blueprism.com/guides/ai/large-language-models-llms/
- CBC (Canadian Broadcasting Corporation), 2025. How CBC News will use AI responsibly to benefit our journalism — and keep your trust. https://www.cbc.ca/news/editorsblog/cbc-news-artificial-intelligence-guidelines-9.6990760
- Centre for Journalism Innovation and Development, 2025. AI and Journalism in Africa: A New Era or a Looming Challenge? Webinar (video), April 15, 2025. https://www.youtube.com/watch?v=vf7c-dIZ3FY
- European Broadcasting Union (EBU), 2025. Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory. https://www.ebu.ch/news/2025/10/ai-s-systemic-distortion-of-news-is-consistent-across-languages-and-territories-international-study-by-public-service-broadcaste
- Gerhäusser, T., and Schwikowski, M., 2025. AI disinformation could threaten Africa's elections. Digital World. https://www.dw.com/en/ai-disinformation-could-threaten-africas-elections/a-71698840
- Girolimon, M., 2026. Understanding the Environmental Impact of Artificial Intelligence. Southern New Hampshire University. https://www.snhu.edu/about-us/newsroom/stem/ai-environmental-impact
- Jacobson, N. 2024. Deepfakes and their impact on society. CPI OpenFox. https://www.openfox.com/deepfakes-and-their-impact-on-society/
- Matsengarwodzi, D., 2024. Artificial Intelligence's Potentials and Challenges in the African Media Landscape. Al Jazeera Journalism Review. https://institute.aljazeera.net/en/ajr/article/2547
- Mutiso, R. M., 2025. Beyond the binary: Navigating AI’s uncertain future in Africa. Science: Vol. 388, No. 6742, April 3, 2025. https://www.science.org/doi/10.1126/science.adw9439
- Nkoala, S., et al., 2025. AI Hype Through an African Lens: A Critical Analysis of Language as Symbolic Action in Online News Publications. Digital Journalism, pp. 1–20. https://www.tandfonline.com/doi/full/10.1080/21670811.2025.2528052
- RUSI (Royal United Services Institute for Defence and Security Studies), 2025. Digital Divides: Online Hate Speech, Disinformation and AI in Africa. Webinar (video), May 20, 2025. https://www.youtube.com/watch?v=09-j-GDmOXQ
- Umejei, E., et al., 2025. Artificial Intelligence and Journalism in Four African Countries: Optimists, Pessimists, and Pragmatists. Journalism Practice, Vol. 19, pp. 2249–2265. https://www.tandfonline.com/doi/full/10.1080/17512786.2025.2489590#abstract
- UN General Assembly, 2022. The right to privacy in the digital age: resolution / adopted by the General Assembly. https://digitallibrary.un.org/record/3999709?v=pdf&ln=en
- UNESCO, undated. 13 Ideas for Celebrating 13 February. https://www.unesco.org/en/days/world-radio/13ideas
- UNESCO, undated. Data Governance in the Digital Age. https://www.unesco.org/en/data-governance-digital-age?hub=66636
- UNESCO, 2025. Radio and AI. https://www.unesco.org/en/days/world-radio/radio-artificial-intelligence
- UNESCO, 2025. Report of the Independent Expert Group on Artificial Intelligence and Culture. https://www.unesco.org/sites/default/files/medias/fichiers/2025/09/CULTAI_Report%20of%20the%20Independent%20Expert%20Group%20on%20Artificial%20Intelligence%20and%20Culture%20%28final%20online%20version%29%201.pdf?hub=171169
- UNESCO, 2024. AI and the future of journalism: an issue brief for stakeholders. https://unesdoc.unesco.org/ark:/48223/pf0000391214
- UNESCO, 2024. AI and the Holocaust: rewriting history? The impact of artificial intelligence on understanding the Holocaust. https://unesdoc.unesco.org/ark:/48223/pf0000390211
- UNESCO, 2024. Disability equality in the media: representation, accessibility, management; practical manual. https://unesdoc.unesco.org/ark:/48223/pf0000391032
- UNESCO, 2024. Journey Through the Miltiverse: Media and Information Literacy Toolkit for Youth Organizations. https://unesdoc.unesco.org/ark:/48223/pf0000392035?posInSet=1&queryId=d88cc6fd-aabd-46f3-be9e-d2daca05dbc8
- UNESCO, 2024. User empowerment through media and information literacy responses to the evolution of generative artificial intelligence (GAI). https://unesdoc.unesco.org/ark:/48223/pf0000388547
- UNESCO, IRCAI, 2024. Challenging systematic prejudices: an Investigation into Gender Bias in Large Language Models. https://unesdoc.unesco.org/ark:/48223/pf0000388971
- UNESCO, 2023. Reporting on artificial intelligence: a handbook for journalism educators. https://unesdoc.unesco.org/ark:/48223/pf0000384551
- UNESCO, 2022. Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
- World Intellectual Property Organization (WIPO), 2025. Twelfth Session of the WIPO Conversation: Intellectual Property and Synthetic Media, October 28-29, 2025. https://www.wipo.int/meetings/en/details.jsp?meeting_id=89408
Other useful websites:
- AI Tools Radar. https://radar.ircai.org/en/tools/ A list of AI tools focused on the media.
- Centre for Journalism Innovation and Development (CJID) website: https://thecjid.org/creative-homepage/
- Penplusbytes website: https://penplusbytes.org/
- The Poynter Institute, 2024. Artificial Intelligence, Ethics and Journalism. https://www.poynter.org/ai-ethics-journalism/
- The Poynter Institute, 2024. Your newsroom needs an AI ethics policy. Start here. https://www.poynter.org/ethics-trust/2024/how-to-create-newsroom-artificial-intelligence-ethics-policy/
Acknowledgements
Contributed by: Vijay Cuddeford, former Managing Editor, Farm Radio International (FRI)
Reviewed by: Nathaniel Ofori, Digital Innovation Manager, Farm Radio International (FRI)
Project: Resource supported by IRESAP
