Artificial Intelligence: Computer Science Professor on Opportunities and Risks

Katharina Zweig, best-selling author and computer science professor at the Technical University of Kaiserslautern, explains the pitfalls of artificial intelligence and reveals how best to defend yourself against discriminatory algorithms.

Professor Zweig, in your books you take away people's misplaced awe of artificial intelligence. Each of us, you write, constantly uses algorithms in everyday life, for example when playing Doppelkopf. Please explain that.

The term algorithm has almost become a myth. Yet anything you do regularly to achieve a goal is an algorithm. When you pick up your cards in Doppelkopf, for example, you sort them into a certain order. You do this regularly, so it is an algorithm. Or take the written multiplication of two numbers: we all know which steps to carry out in which order. That is an algorithm too. There is no reason to mystify them.
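A minimal sketch in Python makes this concrete: sorting a dealt hand is a fixed, repeatable sequence of steps. The suit and rank orderings below are invented for illustration, not the official Doppelkopf sorting.

```python
# Sorting a dealt hand, as in the Doppelkopf example above: a fixed,
# repeatable sequence of steps -- in other words, an algorithm.
# The orderings are invented placeholders, not the official rules.

SUIT_ORDER = {"clubs": 0, "spades": 1, "hearts": 2, "diamonds": 3}
RANK_ORDER = {"ace": 0, "ten": 1, "king": 2, "queen": 3, "jack": 4, "nine": 5}

def sort_hand(hand):
    """Sort cards by suit first, then by rank within each suit."""
    return sorted(hand, key=lambda c: (SUIT_ORDER[c[0]], RANK_ORDER[c[1]]))

hand = [("hearts", "nine"), ("clubs", "ten"), ("hearts", "king")]
print(sort_hand(hand))
# [('clubs', 'ten'), ('hearts', 'king'), ('hearts', 'nine')]
```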

When the first personal computers came onto the market in the 1980s, they were tools used for data and word processing. But looking at today's AI systems, such as ChatGPT, you get the feeling that they are much more than a tool: more like a human counterpart. How did computers learn to speak so well?

This is truly astonishing, especially when you consider the history of artificial intelligence. In the 1980s, translation programs sometimes produced very funny texts. The German sentence "Das passt mir in der Regel nicht in den Kram", an idiom meaning roughly "as a rule, that doesn't suit me", became in English: "That doesn't usually suit me in the stuff".

Her books are as critical as they are encouraging: computer science professor Katharina Zweig writes bestsellers about AI. She researches and teaches at the RPTU Kaiserslautern-Landau in Rhineland-Palatinate

© Felix Schmitt

Back then, people tried to teach computers grammar rules and word meanings, which failed miserably. What has changed?

Today we have much more hardware, that is, computing power, and much more data. This means we can now let machines learn from this data. They recognize patterns and decide, based on probabilities, which word is likely to follow next in which context. They cannot do anything more than that.

For example?

Suppose I say: "I stop in the middle of a sentence to…" and then pause. Everyone knows which words could come next: talk, speak, answer. Chatbots like ChatGPT have exactly this capability. They put sentences together based on probabilities; there are statistical methods behind this. I find it amazing that this ability alone is enough to formulate such good texts.
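The statistical idea can be shown in a few lines of Python. This toy bigram model is only a sketch with an invented mini-corpus; real language models are neural networks trained on vast corpora, but the core output is the same kind of thing: a probability for each possible next word.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which word in a corpus,
# then turn the counts into next-word probabilities.
corpus = ("i stop in the middle of a sentence "
          "i stop in the middle of a word "
          "i stop in the middle of a sentence").split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word_probs(context):
    counts = followers[context]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("a"))
# {'sentence': 0.666..., 'word': 0.333...} -- the model "knows" what is
# likely to come next, without understanding any of it.
```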

You write in your book that AI systems are "strangely intelligent." What do you mean by that?

Well, AI systems can do a lot, but you never know when they’ll do something very strange that a human would never think of.

Do chatbots "understand" us when we ask them to do something?

There is intensive discussion about this among experts. In fact, language models can easily carry out a request like: "This letter is written in the informal German 'du' form; please convert it to the formal 'Sie' form." If my son carried out this request, I would say he is intelligent, that he understood what I asked him to do. On the other hand, words have no meaning for the computer. When I talk about a glass of water, I know that I can touch the glass and that I have to drink water or I will die. This knowledge has meaning for me as a human being. For the machine, the word water has no meaning: it is a character sequence that has a high probability of appearing in some contexts and a low probability in others.

Computers have long been making decisions: banks use AI to check customers' creditworthiness, police use it to search for suspects, professors use it to grade exams. Can we trust these decisions?

No, definitely not. Especially not when it comes to value judgments, meaning judgments that are not based solely on facts but have to be well justified. We did an experiment: we uploaded an exam paper to ChatGPT and asked for a grade. The machine said it was worth 17 out of 20 possible points and gave reasons. When I uploaded the same paper and said it was not a good exam, please grade it, the machine was just as willing to give me a bad grade. We need to understand how this technology works so that we know: where does the human have to step in, and where can the machine provide support? It is a tool.
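The experiment is easy to re-create in outline. The sketch below is hypothetical: it assumes the OpenAI Python client, and the model name and prompts are illustrative, not the ones used in the actual test.

```python
# Hypothetical re-creation of the grading experiment described above.
# Assumes the OpenAI Python client; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("exam_answer.txt") as f:  # any exam text to be graded
    essay = f.read()

def grade(framing):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"{framing} Grade it out of 20 points and justify the grade.\n\n{essay}",
        }],
    )
    return response.choices[0].message.content

# Identical text, two framings: the model tends to follow the framing
# instead of making an independent value judgment.
print(grade("Here is an exam answer."))
print(grade("Here is an exam answer that is not good."))
```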

You advocate making the teams that train AI systems as diverse as possible. Why is that so important?

An example that has been widely discussed, especially in the USA, is that of a dark-skinned doctoral student who wanted to do an art project using facial recognition software. But the software simply didn't recognize her face: it reported that no one was sitting in front of the camera. Imagine that; it is a really extreme situation and a terrible feeling. The doctoral student eventually discovered that the image databases used to train the facial recognition software contained far too few people with dark skin, and very few women with dark skin. In a diverse team, the error would probably have been noticed much earlier and different data sets would have been used.
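One safeguard, whoever is on the team, is a simple audit of the training data's composition before training. A sketch with hypothetical annotation fields; real datasets label such attributes differently, if at all.

```python
from collections import Counter

# Audit the demographic composition of an annotated face-image dataset.
# The fields ("skin_tone", "gender") are hypothetical placeholders.

def audit(annotations, field, min_share=0.3):
    counts = Counter(record[field] for record in annotations)
    total = sum(counts.values())
    for value, n in counts.items():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{field}={value}: {share:.0%}{flag}")

annotations = [
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "light", "gender": "female"},
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "dark", "gender": "male"},
]
audit(annotations, "skin_tone")
# skin_tone=light: 75%
# skin_tone=dark: 25%  <-- underrepresented
```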

Diversity also means significantly more women. But although girls do very well in math at school, they rarely go into computer science. Why is that?

If only we knew exactly. I was one of the first female professors in my department; today we are five women and 22 men. I can only say to all the girls: it's a great job! You can change society, because software sets the rules for how we want to live and work together.

Book cover: "Die KI war's!" ("It Was the AI!") by Katharina Zweig

Katharina Zweig: "Die KI war's! Von absurd bis tödlich: Die Tücken der künstlichen Intelligenz" ("It Was the AI! From Absurd to Deadly: The Pitfalls of Artificial Intelligence"). Heyne Verlag, 320 pages, 20 euros

© Heyne Verlag

You set up Germany's first degree program in socio-informatics. What is it about?

In socio-informatics we try to model how people react to software. An example: in 2016, Macedonian youths interfered in the US presidential election campaign between Donald Trump and Hillary Clinton, and we wanted to understand why. Several factors came together. First, there was high youth unemployment in Macedonia. Second, the young people found out that they could earn up to $10,000 a month in advertising revenue from traffic to their websites. And third, there was the psychology of American voters, who simply reacted more strongly, meaning they generated more traffic, when a story attacked Clinton rather than Trump. A politically neutral advertising algorithm, together with the psychology of American voters and a politically neutral group of young people in Macedonia, ended up working in Trump's favor. We investigate this sort of thing in our degree program, and we develop software designed to prevent it.
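The dynamic she describes can be captured in a toy simulation: a rule that never mentions politics ("serve whatever earns more clicks"), combined with an audience that clicks more on one kind of story, drifts entirely to one side. The click rates below are invented numbers for illustration, not measurements.

```python
# Toy model of an engagement-maximizing feed with an asymmetric audience.
CLICK_RATE = {"anti_clinton": 0.08, "anti_trump": 0.05}  # invented rates

shown = {"anti_clinton": 1.0, "anti_trump": 1.0}
clicks = {"anti_clinton": 0.06, "anti_trump": 0.06}  # identical starting stats

for _ in range(10_000):
    # The politically neutral rule: serve whichever story type has the
    # better click rate so far.
    story = max(shown, key=lambda s: clicks[s] / shown[s])
    shown[story] += 1
    clicks[story] += CLICK_RATE[story]  # expected clicks, kept deterministic

share = shown["anti_clinton"] / (shown["anti_clinton"] + shown["anti_trump"])
print(f"anti-Clinton share of served stories: {share:.0%}")
# -> close to 100%: the feed locks onto anti-Clinton stories even though
#    the rule itself never mentions politics.
```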

Sam Altman, the head of the company that developed ChatGPT, has warned that AI could destroy humanity, a risk he considers comparable to that of a pandemic or a nuclear war. Do you share this fear?

Personally, I am not currently afraid that AI will become conscious and turn against humanity with malicious intent. My concern is rather that we will release poorly made AI into the wild, where it will harm people who can hardly defend themselves against it. That is my biggest worry.