It is time to regulate AI

Was it ever too early to install seatbelts in cars, or to build guardrails on roads? These inventions saved countless lives, and I don’t think anyone can object to them with a clear conscience. Furthermore, governments around the world are constantly taking more measures to increase the safety of vehicles and motorways by regulating driving speed, road conditions, and so on. These policies undoubtedly restrict our freedom and can cause us various inconveniences, but their goal of increasing public safety eventually makes us appreciate them. Thanks to that wide recognition, we seldom hear of anyone protesting seatbelt requirements or demanding that streetlights be dismantled. The same needs to happen with Artificial Intelligence (AI).

Since the public launch of ChatGPT in November 2022, it has been clear to all that we are in the “AI moment”: in the midst of a technological revolution that will change our lives forever. Before the ink dries on today’s sensational AI story, good or bad, a new one already breaks, and you can’t go a day without hearing about a new feature, breakthrough, or warning. Before the launch of ChatGPT, AI was a concept understood by few and interesting to even fewer; today everyone is an expert, and AI has become one of the most popular dinner-party topics. However, many still don’t understand the need to regulate AI.

Over decades, AI gradually matured into a prominent technology, and in recent years it has become an inseparable part of our lives through countless platforms. Our phones started to recognize our faces and voices, streaming services tailored their recommendations far better, and social networks became more addictive. AI models got to know us better than anyone else and learned to predict our actions far better than we would find comfortable, and the companies behind them monetized that knowledge. Platforms we think of as free earn mind-boggling profits by selling our data to the highest bidder, and in the process they put us in danger. The risks AI poses go far beyond breaches of our privacy; documented examples include influencing political processes, inciting genocide, and threatening democracy around the world.

One of the first alarm bells about these risks rang when Cambridge Analytica, a British political consultancy working for the Trump and Brexit campaigns, was exposed. Whistle-blowers and journalists found that the personal data of millions of people had been harvested from Facebook, analysed, and used to radicalise people and change their behaviour in order to sway the 2016 American presidential election and the British Brexit referendum. This was done by feeding AI algorithms immense amounts of personal data until they could accurately predict people’s behaviour, and then using social media’s AI-based algorithms to brainwash millions.

A second, far worse, example came in 2018. Throughout 2017–2018, Myanmar’s military carried out a large-scale, carefully planned genocide against the Rohingya, a Muslim minority in the country. The army, joined by radical militias, committed mass killings, rape, village burnings, and expulsions, driving over 700,000 Rohingya to flee, while those who remained lived at constant risk of a similar fate. Subsequently, the UN fact-finding mission, as well as independent human rights organizations, found that in the years leading up to the genocide, Facebook played a crucial role in spreading the hate that sowed the seeds for it. Facebook’s AI algorithms, designed to increase engagement, did so by radicalising users and spreading fake news at breakneck speed, and in Myanmar the situation became so bad that it led to genocide. To put it plainly, this is an example of AI algorithms paving the way and setting the scene for war crimes and crimes against humanity.

As these examples make clear, along with its great promise, AI poses a grave danger to us at both the personal and the collective level. AI-powered algorithms effectively brainwashed millions of people around the world at the very least, were used to weaken democracies in various ways, and played a role in causing some of the worst atrocities of the 21st century. This year, however, AI is even more dangerous than before. 2024 is the biggest election year in history, with billions of people in dozens of countries heading to the polls; imagine what many Cambridge Analyticas operating at once would mean. Even worse, powerful generative AI tools are now available to anyone, so you no longer need a team of specialized data scientists to do what Cambridge Analytica did.

That is why it is time to regulate AI. This technology is so powerful that it is causing dramatic changes in society, and it is the government’s job to ensure this change is for the better. Any technology that puts so many people at risk should be regulated to some extent; in fact, governments regulate and set standards for nearly everything on the market, and AI should be no different.

Regulation should be crafted wisely, weighing current and future risks against the need to preserve AI’s great benefits and the ability to continue to innovate. It will undoubtedly be complex and made up of many parts, to match the complexity of the issues AI presents. And that is the government’s job: to ensure the safety and prosperity of its citizens.

More and more countries are drafting and adopting AI policies and regulations. Recently, the European Union (EU) became the first to approve a comprehensive AI law, one with a strategy to mitigate the risks while nurturing innovation. Still, there are many ideas out there about the right way to approach this issue, and I am not here to advocate for a specific one. At the end of the day, the essence of regulation, whatever its form, is to protect the public as far as possible from the risks as they are perceived, while balancing that protection against other interests. No current model of regulation seems perfect, but there is no time to waste and we have to start somewhere. The risks and harms are here and now: governments, protect us.
