Seeing is (dis)believing

Posted on June 05, 2025

Generative artificial intelligence (GenAI) is changing how we work, play and relax. Whether you use ChatGPT to write a brief, Midjourney to generate visuals or MuseNet to create unique soundtracks, these technologies have opened up opportunities for richer content. Want to replicate the sonorous voice of David Attenborough or the facial elasticity of Jack Nicholson? GenAI can mimic their style.

More specifically, GenAI has had a major impact on the broadcasting industry, says Professor Nelishia Pillay, who holds the National Research Foundation-Department of Science and Innovation SARChI Chair in Artificial Intelligence for Sustainable Development and is the MultiChoice Joint Chair in Machine Learning at the University of Pretoria (UP).

“GenAI creates new content based on what it learns from online data,” she explains. “While it doesn’t come up with truly original ideas – that creativity is still reserved for humans – it does help reshape existing ones. One of the early contributions of GenAI is image generation, which has been very useful for the broadcasting industry. Compelling infographics that better explain a news story, created with simple voice prompts, are just one example. The ability to create music and generate video content has also been invaluable, as has language translation for subtitling, where GenAI is used to translate languages. Voice synthesis, which uses GenAI techniques to convert text to speech, has likewise reduced the workload in the industry.”

Because AI systems can rapidly analyse data to understand viewer preferences, they can also help broadcasting and entertainment companies generate personalised content.

“Being able to determine what new content to make or how to tailor their offerings to specific audiences makes GenAI a valuable marketing aid,” Prof Pillay says.

However, this personalisation raises ethical questions about the possibility of echo chambers, especially in highly polarised societies. Addressing biases in AI models is therefore crucial to prevent generated content from perpetuating stereotypes or unfair representation.

“A challenge that comes with GenAI is how to ensure the ethical use of these tools,” Prof Pillay cautions. “Deepfakes – digitally forged images or videos – can be used to produce fake news and to mount harmful cybersecurity attacks on businesses.”

According to the International News Media Association, AI has already shown success in detecting the unethical use of GenAI, with machine learning being used to detect fake news. One example is Checkmate – a collaboration between Germany’s Deutsche Presse-Agentur, News UK, Mexico’s DataCritica and the BBC – which serves as a real-time fact checker for broadcasts. It produces a transcript of a video, highlights in yellow any claims being made, then searches for sources to verify those claims and provides links to them. It was built as a tool to help journalists figure out what is accurate and what is not.

“Similarly, tools like Turnitin, which have previously been used to detect plagiarism, can now also detect whether a submission has been generated by artificial intelligence,” Prof Pillay says. “Such tools need to be embedded in GenAI systems in the broadcasting industry to detect the unethical use of GenAI.”

Other ethical implications involve intellectual property rights. As GenAI blurs the lines between human and artificial creativity, broadcasters and media companies need clear guidelines on crediting AI-generated content.

“Many AI tools rely on large datasets to train their algorithms,” Prof Pillay says. “As these are sourced from personal information such as social media posts, broadcasters need strict guidelines to respect the privacy rights of individuals when creating images or video.”

This story was originally featured in Re.Search magazine. Check out Issue 11 here.

- Author Prof Nelishia Pillay

Copyright © University of Pretoria 2025. All rights reserved.
