Q&A: ‘We should not want AI to make life or death decisions’

Spain’s former Vice Minister of Foreign Affairs says countries like Brazil, India and South Africa must have a say in devising AI regulations.

TRT World caught up with Manuel Muniz on the sidelines of the Antalya Diplomacy Forum to talk about the implications of AI on warfare, government regulations and if too much oversight threatens to stifle creativity. / Photo: AA

Governments around the world are waking up to the realisation that artificial intelligence (AI) and its many applications need oversight and broad rules.

Tech firms, based mostly in the United States and Europe, have developed AI technology without much scrutiny from government officials, who are concerned about its misuse, as is evident from the power of some applications to manipulate images and videos.

Manuel Muniz is Provost of IE University in Madrid and Spain’s former Vice Minister of Foreign Affairs. He was part of a panel on AI and Diplomacy at the Antalya Diplomacy Forum (ADF), a three-day international event that concluded on Sunday.

TRT World caught up with him on the sidelines of the forum to talk about the implications of AI for warfare, government regulations and whether too much oversight threatens to stifle creativity. The interview has been edited for clarity.

How concerned should we be with the use of AI in warfare?

Manuel Muniz: This is a big and important debate on how AI is going to change the face of warfare.

It is already being used, particularly in cyberspace through the creation and dissemination of false information and false profiles.

One could very easily see very personalised detailed campaigns produced through AI, through text generation and video generation including very sophisticated audio generation. So at least in the hybrid space, we're going to see its use.

Now what’s more worrying is if we start seeing it integrated into lethal weapon systems. And here I think there is a very strong case to be made to regulate and to ban the use of artificial intelligence in fully automated weapon systems.

I don't think we should want an AI to make decisions on targeting, on conducting operations, or on questions of life or death.

So AI is already having an impact. It could be much greater, and I think this is a space for diplomats and for policymakers to look at how to properly regulate.

Israel has admitted using AI in its war on Gaza. What could be the wider implications of this?

MM: Technology has always been a fundamental element of gaining an edge in warfare. What's extraordinary today is that in both Gaza and in Ukraine - the other major conflict, in Europe - we're seeing a combination of very traditional uses of force and new layers of technology.

We shouldn't lose sight of that, and we must make sure that whatever force is being used is understood, assessed and judged properly when it is used improperly.

But if you look at the Russia-Ukraine scenario, a lot of tanks, lots of traditional artillery, traditional air force and traditional missile capability is being used.

On top of that, we are building new layers of very sophisticated weapons. Of course, AI is being embedded into tactics, into target location, into weaponry itself, and into cyber activity that disrupts telecommunications and connectivity infrastructure.

Twenty years ago we didn’t have any AI regulations or government control and that led to an explosion of AI applications. Do you think too much oversight will stifle growth?

MM: This is a long-standing debate - this tension between regulation and innovation, and how much of each we should have.

On the whole what's happened is regulation has always been trying to catch up with innovation. I think this is radically the case today. In fact, when I look at the world today, I see an abundance of innovation.

We live in exponentially changing societies. In two weeks of March last year, we had the deployment of very advanced AI tools. So March 2023 can be labelled as the beginning of the AI era.

In the space of two weeks we had GPT-4, Midjourney Version 5 - which is image generation software - AI tools for Gamba, Copilot and Bard.

I mean, all of these tools arrived 10 to 15 years early - most assessments were that they would take another 15 to 20 years to come. So I do not see in the world a stifling of innovation or a lack of innovation; I see an acceleration of innovation and of deployment across the board.

In fact, we're meeting at the diplomacy forum now, and a week or so ago OpenAI deployed Sora, which is video generation software - extraordinarily sophisticated and advanced.

Folks like myself who have been in government and in academia are trying to make sure that governments are up to date and that societies have a say in the direction the technology takes.

I see immense spaces that are scantily regulated where fundamental rights and fundamental interests of our societies are being decided. And they're not being decided by our parliaments or by our governments. They are being decided by very small groups and corporations around the world. I think our citizens need to have a say in this process.

Why is most of the debate around AI regulations taking place in the US and Europe? What about the voices from the Global South?

MM: Let me begin by saying that the Global South matters immensely, and it will matter much more markedly in the coming years.

At a time when the China-US relationship is really deteriorating, what South Africa, Brazil, India and large parts of Asia think is fundamental to world affairs.

So I would expect that in the coming years this part of the world will play a fundamental role in defining policy, as we will need to develop diplomatic strategies of engagement with the Global South. This includes the tech sphere.

How the Global South thinks about privacy, thinks about the exploitation of data, thinks about neurological rights, and the regulation of AI will be very important.

It's the place where the majority of the world's population lives. We should involve the Global South, across the board, on diplomacy and very particularly on tech governance.

We heard officials here at ADF talking about how AI and international diplomacy go hand-in-hand. Can you talk a bit about that?

MM: First of all, we need to understand that, indeed, technology and AI are a part of diplomacy. But here the field splits.

So you have the need to regulate, govern, and understand technology and AI. Then you have digital diplomacy, which is how you use technological tools.

This means having our embassies use social media, having them use AI for service delivery, whether it's for consular matters or emergency management or emergency anticipation.

I would hope that in the near future all of our diplomatic services understand these two spaces and have policies along the two lines - that is tech diplomacy strategy and then a strong digital diplomacy as well.
