Invariance as heard in the portrait video. ©Daniel Worrall

According to you, what is the biggest advantage that you could gain from geometric algebra within machine learning? 

For me, the biggest advantage so far is that I really understand what is needed to scale these models up. Looking to the future, I think the natural representation is an advantage over many other methods. It has the potential to scale: to support huge networks and to exploit much larger datasets better and better. That is, I think, the most important thing in the deep learning era: to scale. 

AI has been growing very rapidly recently. Would you say GA would improve that and make its growth even faster? 

That is a tough question. Most of the explosive growth we are experiencing at the moment is driven by large language models, in areas like translation and text editing. I would place GA more on the molecular and scientific side. There will definitely be breakthroughs there, very big breakthroughs even, but mainly on the science side. Frameworks like GA – which give a better representation and which have the potential to scale across data and compute – will help.

You talked about having to rewrite GPU kernels to improve performance. What impact would it have if they were rewritten?

The geometric product removes some of the disadvantages of a standard kernel implementation. Normally you have to move the data around a lot; if everything is built on geometric product operations, you have a denser operation space. That way you can load two components and then do many more operations on the same input. In principle that should be faster, because everything happens on the GPU; you don’t have to move data away. I see the geometric product of GA as a way to circumvent the problem of always needing to move data to the compute cores where you do all your operations. With the geometric product at hand, you don’t have to move data around so much.
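The “denser operation space” point can be seen in a minimal sketch of the geometric product itself: multiplying two loaded vectors yields both a dot product and a wedge product in one fused operation. This is my own illustration, not code from the interview; the function name and layout are invented for clarity.

```python
def geometric_product_vectors(a, b):
    """Geometric product of two 3D vectors: scalar (dot) plus bivector (wedge).

    One load of a and b produces both grades at once -- a toy version of the
    "load two components, do many operations" idea. Real GA kernels fuse many
    such products on the GPU to cut data movement.
    """
    # grade-0 part: a . b
    scalar = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    # grade-2 part: components of a ^ b on the e1^e2, e1^e3, e2^e3 planes
    bivector = (
        a[0] * b[1] - a[1] * b[0],  # e1^e2
        a[0] * b[2] - a[2] * b[0],  # e1^e3
        a[1] * b[2] - a[2] * b[1],  # e2^e3
    )
    return scalar, bivector

# Orthogonal vectors give a zero scalar part and a pure bivector:
s, B = geometric_product_vectors((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

On a GPU, the gain is that all six outputs are computed from the same two loaded operands, so arithmetic intensity goes up and memory traffic goes down.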

Before, you said that the recent growth we saw in AI is mostly thanks to transformers. Is it correct that it’s mostly due to the huge datasets that we put into them? Do you think AI will hit a limit soon?

That is the question, isn’t it? People say that we have been hitting a limit for quite some time now. I think that the biggest thing about transformers is their ability – although not 100% understood – to scale, both by making bigger models and by adding more data. The scaling seems almost linear: the more data you have, the better it gets. That is what makes them such beasts in terms of performance. 
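The “almost linear” scaling usually refers to a power law that looks like a straight line on log-log axes. The sketch below illustrates that functional form; the constants are made up for illustration and are not measured values from any model.

```python
def power_law_loss(n_tokens, a=1000.0, alpha=0.095, floor=1.69):
    """Toy scaling law: loss falls as a power law in dataset size.

    Loss(N) = floor + a * N**(-alpha). On log-log axes the second term is a
    straight line, which is the sense in which scaling looks "almost linear".
    All constants here are illustrative, not fitted values.
    """
    return floor + a * n_tokens ** (-alpha)

# Each doubling of data buys a smaller absolute gain, but by a fixed ratio:
l1 = power_law_loss(1e9)
l2 = power_law_loss(2e9)
l3 = power_law_loss(4e9)
# (l1 - l2) / (l2 - l3) equals 2**alpha for every doubling
```

This also hints at the follow-up question: the absolute improvement per doubling keeps shrinking, which is why people debate whether more data alone eventually hits a practical limit.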

If it is linear, would that also mean that we have to keep producing more data, and that this will reach a limit at a certain point? Do you think AI will eventually be able to feed itself data?

That is something that will definitely happen in the scientific world. For example, generating new molecules for experiments. If you want new ideas about potential candidates to cure a certain disease, you do not have a lot of data to train the AI on; otherwise, the problem would already have been solved. This area of generative AI is growing rapidly, so you will also have a feedback loop: as soon as a generated candidate is validated, as soon as you know it is correct, you can feed it back into the network. That will do some big things in the scientific world, for sure. 

A lot of people are talking about AI now, and about its dangers. Do you think AI can be dangerous, by taking jobs or getting too intelligent? Or is it, in the end, just a computer? 

I stand somewhere in the middle. AI is a new tool and a revolutionary technology. Anyone who says that AI is not influencing their daily life is wrong. I have to admit it has become really big and that I have to keep up with it myself. I have my daily routine at work, but then suddenly I should use Copilot or ChatGPT or this and that … It takes a while to get acquainted with that. It was like when the first smartphones came out. It took me a while to use them and to use the different apps. AI is sort of the same. It is definitely a change in the world; it has a big impact and is growing at a speed where we have to talk about regulations.  

Personally, I’m not afraid of a terminator, but there do need to be some laws, some regulations. The researchers and programmers who work on AI should play a part in that. If I make a phone call, I want to know if I’m talking to an AI or a real person. Without laws, I might eventually not know who is on the phone. 

It is quite scary to know that at a certain point, you might not be able to know if you’re talking to an AI or not. Do you think that will have a big impact on our world? 

Yes. I think that will change society. Just like when you are sitting on the bus nowadays; everyone looks at their phone. Smartphones changed how people interact and how we go through life. Such a drastic change will definitely happen over the next years as well. It is on us to make sure the change is good. 

I use AI myself as a tool and it is scary to realize that it is only going to get better. Will it replace me? Will it be able to do the job of a programmer? 

Programming is a very nice example. You can see AI as a new layer on top of a programming language. Instead of writing C++ code that takes some 400 lines, it suddenly takes four. But still, someone has to know how to work with it, right? It will never be flawless. It still depends on the humans who program it, who set up these large language models. It will shift the field, and it will change the way things work and what is possible. There will always be enough jobs, but the jobs will shift, and you will have to shift with them.

ChatGPT is already everywhere. Is it good that more and more people are using these tools?

All these AI models are limited, and they always will be. But as people use them, they make them better. The biggest evolution of ChatGPT happened when millions of people started to use it. Now millions of people are becoming experts at using it. That is very good: it triggers discussions, it triggers awareness, and it exposes the flaws these models possess. That generates exponential growth in how good AI is or can be, because the more you interact with it, the better it gets.