By Ayathandwa Tsili
When artificial intelligence (AI) recognises a face, reads a medical scan or makes decisions, it often does so in ways that even its creators say they cannot fully explain. But for Rhodes University MSc Applied Mathematics graduate Georgina Fiorentinos, that wasn’t good enough – so she set out to find out for herself.
And she wanted more than a mathematical answer. She wanted to understand how machines interpret the world around them and how reliable those interpretations really are.
Her MSc research explored the inner workings of deep learning models, the type of AI behind technologies such as image recognition systems and automated decision-making tools. While these systems are widely used, the processes inside them are often difficult to interpret.
“I’ve always been interested in what is happening inside these models,” Fiorentinos explains. “We usually see the input and the output, but the actual process in between can feel like a black box. I wanted to understand how the model decides what information is important and what it ignores.”
Her interest began during her Honours research, where she studied how compression techniques could reduce noise in deep convolutional neural networks, models commonly used to analyse images. That work led her to investigate how information changes as it moves through a neural network and whether important details are preserved along the way.
Working within the Rhodes University Artificial Intelligence Research Group, and supervised by Professor Atemkeng, she focused on the relationship between ‘signal’ and ‘noise’ inside these systems. In simple terms, signal refers to useful information a model needs to make accurate predictions, while noise refers to irrelevant details that can interfere with those decisions.
“An image might seem obvious to us, but to a computer it’s just numbers,” she says. “The model has to learn which patterns matter and which don’t, and that’s not always as straightforward as it sounds.”
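To make the "just numbers" point concrete, here is a minimal sketch (not drawn from the thesis; the toy image and noise level are illustrative) showing how a computer sees an image as a grid of values, and how added noise eats into the measurable signal:

```python
import numpy as np

# An "image" to a computer: a grid of pixel intensities (here 8x8, values 0-1).
rng = np.random.default_rng(0)
clean = np.zeros((8, 8))
clean[2:6, 2:6] = 1.0  # a bright square: the useful pattern, i.e. the signal

# Corrupt it with Gaussian noise: the irrelevant detail a model must learn to ignore.
noise = rng.normal(0.0, 0.3, size=clean.shape)
noisy = clean + noise

# Signal-to-noise ratio: power of the useful pattern relative to the corruption.
snr_db = 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))
print(f"SNR: {snr_db:.1f} dB")  # a lower SNR means the pattern is harder to recover
```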
Rethinking complexity in AI
One of the most unexpected moments in her research came when her results challenged a widely held assumption in AI.
Like many others working in the field, Fiorentinos expected newer and more complex models to perform better. Instead, she found that this was not always the case.
In situations where the data contained high levels of noise, simpler neural network architectures sometimes behaved more consistently and preserved important information more reliably than more advanced models. Rather than losing useful signal as data moved through the network, these models often retained it more stably.
“That really surprised me,” she says. “You expect the more modern, more complicated models to do better, but that isn’t always true. It showed me that how a model is designed can matter more than how complex it is.”
This insight shifted the direction of her work. Instead of simply comparing models, she began examining how different architectures manage information when real-world data is imperfect.
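One way such a comparison can be set up is sketched below. This is a hypothetical illustration, not Fiorentinos' actual models or metrics: two untrained toy networks of different depth are shown the same image with increasing noise, and cosine similarity measures how far each network's response drifts from its response to the clean input.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two illustrative (untrained) architectures: one shallow, one deeper.
shallow = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
deep = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))

image = torch.rand(1, 1, 28, 28)  # stand-in input

for name, model in [("shallow", shallow), ("deep", deep)]:
    model.eval()
    with torch.no_grad():
        clean_out = model(image)
        sims = []
        for sigma in (0.1, 0.3, 0.5):  # increasing noise levels
            noisy_out = model(image + sigma * torch.randn_like(image))
            # Cosine similarity: how much the network's response drifts under noise.
            sims.append(torch.cosine_similarity(clean_out, noisy_out).item())
    print(name, [f"{s:.3f}" for s in sims])
```

The design point is the protocol rather than the numbers: holding the input fixed and varying only the noise isolates how each architecture, on its own, transforms a degraded signal.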
Understanding this process matters beyond theory. In fields such as healthcare, where AI helps interpret medical images, reliability is essential. Real-world scans are rarely perfect, and models must still identify the right patterns under difficult conditions. Research like this contributes to building systems that can be trusted in practice.
Seeing how machines interpret the world
Another part of Fiorentinos’ research explored how artificial intelligence models “see” images and how this differs from human perception.
When people look at a picture, they usually recognise what it shows almost immediately. A neural network works more gradually. In the early stages, it detects simple features such as edges or contrasts between light and dark. These are then combined step by step into more complex patterns.
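The edge detection described above can be shown with a classic hand-designed filter. The Sobel kernel below is an assumption for illustration: trained networks learn their own filters from data, but the ones in early layers often end up resembling it.

```python
import torch
import torch.nn.functional as F

# A vertical-edge detector, similar to filters early CNN layers often learn.
sobel = torch.tensor([[-1., 0., 1.],
                      [-2., 0., 2.],
                      [-1., 0., 1.]]).reshape(1, 1, 3, 3)

# A simple image: dark left half, bright right half (one clear vertical edge).
image = torch.zeros(1, 1, 6, 6)
image[..., 3:] = 1.0

edges = F.conv2d(image, sobel, padding=1)
print(edges[0, 0])  # strong responses only along the boundary between the halves
```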
Her research showed that useful signal and irrelevant noise move through the system together.
“Signal and noise don’t come neatly separated,” she explains. “Sometimes the model keeps information that isn’t actually important, and that can affect the final decision.”
Resilience in part-time study
Completing the MSc was not only an academic achievement but also a demanding personal commitment. Fiorentinos completed her degree part-time while working full-time as a data scientist.
“It definitely wasn’t the easiest route,” she says. “You have to be very disciplined, and you need to stay consistent even when you’re tired or when progress feels slow.”
One of the most challenging aspects of her research was finding ways to measure processes that cannot be directly observed.
“You can’t just look inside the model and see what it’s thinking,” she says. “You have to find ways to study it indirectly, and that takes patience.”
Despite these challenges, she describes the experience as deeply rewarding.
“You need to know why you’re doing it,” she reflects. “There will be difficult periods, and having a clear purpose makes it easier to keep going.”
Working closely with AI has also shaped how she understands its role in society.
“AI is a tool,” she says. “It can help us work faster and solve problems, but it depends on how people choose to use it.”
In her role as a data scientist, she continues applying the same principles that shaped her research at Rhodes University: understanding how information moves through systems, and how those systems can be designed to support better decisions.
By identifying which types of neural networks separate signal from noise more effectively, her work contributes to the development of artificial intelligence systems that are more dependable in high-stakes environments.
This kind of research reflects Rhodes University’s broader commitment to producing knowledge that is locally responsive and globally relevant. Understanding how AI systems behave in imperfect conditions strengthens their potential to support healthcare, planning and decision-making across sectors where accuracy matters most.
Georgina Fiorentinos’ thesis is titled Signal to Noise Dynamics and Feature Robustness in Convolutional Neural Networks.
