This is a story that involves Arnold Schwarzenegger's Terminator.
It mainly involves University of Newcastle Professor Pablo Moscato. But to grasp what Pablo says, we need to think of Terminator.
Pablo, you see, is using a mathematical theory devised in 1748 to help create a new method for machine learning.
When we think of machine learning, we think of artificial intelligence [AI]. When we think of AI, we think of robots. And when we think of robots, we think of Arnie. And Sarah Connor.
Pablo started studying AI in 1985. This makes complete sense because Terminator came out that year in Argentina, where he's from.
At the time, there were only three people in Argentina actively researching neural networks. Pablo said a guy called Miguel Angel Virasoro, then Argentina's most famous physicist, gave a seminar on artificial neural networks in September 1985 that "moved me to this area".
Pablo recalled seeing Terminator later that year "in a small cinema in my home city".
"I do remember that the Terminator said something like 'my brain is a neural network' or something like that. I thought, 'Cool'."
Pablo is a super-smart dude. [Can we call a professor a dude? We're going with yes, because Pablo is also a super cool guy.]
Pablo talks in mathematical terms we have no idea about. He mentions things like "memetic algorithms" and "symbolic regression" and "representational bias".
But he also mentioned that his methods could be used to help companies sell things like wine and chocolate. That we understand.
He also mentioned that his methods could be used to predict how certain types of cancer respond to drugs. That got our attention, too.
"With the cancer problem, you want to give a particular drug to the people who will have the most benefit," he said.
In 2017, Pablo had a kind of "Edison with the light bulb" moment.
"That is when I thought that some of the best representations for some of these problems - wine, chocolate consumption in supermarkets, drug response in cancer - are functions of many variables with peaks," he said.
"These problems are everywhere. It is very natural to have a peak. Companies make money on the peak of consumption. For example, you need to deliver cancer drugs for those who have the peak of sensitivity in response. Well, many AI systems simply ignore the peaks."
It's at this point that our attention starts to drift. It's now that we simply smile and think of Terminator.
But Pablo grabs our attention again when he says: "I am using mathematics that go all the way back to Euler, who proved a theorem in 1748".
"It is central to what I am now proposing," he said.
He found it amazing that basic research in mathematics could "pave the way to a new method for machine learning".
We googled Euler's theorem, but couldn't comprehend it. We even watched a YouTube video called "Euler's theorem made easy".
Let's just say it wasn't easy.
Anyhow, Pablo is creating AI systems using a form of mathematics called "continued fractions".
"That is where Euler's theorem of 1748 enters into play. He proved a theorem that gave me a good basis to suggest that perhaps I should try to represent any unknown function [the thing that an AI system is trying to discover] using continued fractions."
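To give a flavour of the idea, here is our own illustrative sketch (not code from Pablo's actual system): a truncated continued fraction whose levels are simple linear terms, with made-up placeholder coefficients standing in for the values a search algorithm would normally tune to fit data.

```python
def cf_model(x, coeffs):
    """Evaluate a0 + b0*x + 1/(a1 + b1*x + 1/( ... (an + bn*x))).

    coeffs is a list of (a_i, b_i) pairs. In an approach like Pablo's,
    an algorithm would search for coefficients that fit the data; the
    pairs here are placeholders we invented for illustration.
    """
    a, b = coeffs[-1]
    value = a + b * x              # innermost term first
    for a, b in reversed(coeffs[:-1]):
        value = a + b * x + 1.0 / value
    return value

# Two constant levels: 1 + 1/2
print(cf_model(0.0, [(1.0, 0.0), (2.0, 0.0)]))  # 1.5
```

Evaluating from the innermost level outward avoids building the nested expression explicitly; adding more levels deepens the fraction.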
Ahh fractions. Now that we understand.
"I brought two students from Caltech [in California] to work with me for 10 weeks in Newcastle in 2018. With one of them we built from scratch a memetic algorithm for an AI system that uses the representation of mathematical functions based on continued fractions," he said.
"This is fascinating because continued fractions go before Euler as well."
He said the first systematic approach to the study of continued fractions dated back to the end of the 17th century.
He added that the history of continued fractions could be traced back to the "algorithm of Euclid" in ancient Greece.
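That link is concrete: the quotients produced by Euclid's GCD algorithm are exactly the terms of a number's continued fraction. A minimal sketch (our own example, not from the article):

```python
def euclid_cf(p, q):
    """Continued-fraction terms of p/q via Euclid's GCD algorithm.

    Each division step's quotient becomes one term of the fraction.
    """
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

# 43/19 = 2 + 1/(3 + 1/(1 + 1/4))
print(euclid_cf(43, 19))  # [2, 3, 1, 4]
```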
"The first continued fraction was used in 1572 by Bombelli to approximate the square root of 13," he said.
"In that sense some people like myself find it fascinating that, using some representation of functions already known in the 16th century, I can create a new type of AI system perhaps more powerful and with less bias [than what exists now]."
Pablo was recently awarded a $518,000 grant for his research on memetic algorithms and computing.
His work aims to discover mathematical models that could be used for "nearly everything".
"It could address a core need of many areas of science and technology. It has no boundaries."
That, too, we understand.