Cutting corners with artificial intelligence could have ‘catastrophic consequences’ – academic


The escalating war to be pioneers in artificial intelligence (AI) could have catastrophic consequences if companies do not put safety before bragging rights.

That is the view of a Teesside University academic who is warning that safety and security could be compromised in the AI race as companies strive to get ahead of their competitors – playing straight into the hands of terrorists and cyber hackers.

Dr The Anh Han, Senior Lecturer in Computer Science at Teesside University, says that there is a current bidding war in AI with many countries potentially putting lives at risk by compromising safety in order to get ahead.

Alongside international researchers Professor Luis Moniz Pereira, New University of Lisbon, and Professor Tom Lenaerts, Université Libre de Bruxelles and Vrije Universiteit Brussel, Dr Han has been awarded a prestigious research grant to investigate the issue further.

The grant from the Future of Life Institute will enable the researchers to study the escalating bidding war for AI excellence, as well as the mechanisms that could be put in place to avoid catastrophic outcomes.

Dr Han explained: “There is a temptation to cut corners on safety compliance in order to move more quickly than competitors. But if AI is not developed in a safe way, it could have catastrophic consequences.

“Movies have been made about potential AI catastrophes. Problems with driverless car technology, for example, could put lives at risk. In an extreme, worst-case scenario, robots could wreak havoc on the world if the companies creating them don’t take safety and security seriously.

“Many countries and organisations are battling to be the first to develop powerful AI. Google and Facebook, for example, have created big research labs for AI development.

“Terrorists want to develop powerful AI which is of course a huge danger to society. Cyber hackers can also take advantage if technology is not developed in a safe way.”

Dr Han says that the biggest players in AI at the moment include the United States of America and China, but countries and companies from all over the world are trying to participate in the technology race.

“Several European countries are catching up and the UK in particular is heavily investing in AI technology. Lots of money and effort is being put into becoming the first to develop powerful AI,” said Dr Han.

“Through our research, we want to understand what sort of behaviours emerge and how we can use different, efficient incentives to drive the race in a more beneficial direction.”

The research grant from the Future of Life Institute means that Teesside University is able to employ a post-doctorate level researcher to undertake mathematical modelling over the next two years.

The Future of Life Institute is a volunteer-run research and outreach organisation that works to mitigate existential risks facing humanity and it is currently focused on keeping artificial intelligence beneficial.

Its founders include Massachusetts Institute of Technology physicist and cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisers has included entrepreneur Elon Musk and the late cosmologist Stephen Hawking.

Dr Han added: “We are delighted to have been awarded this grant by the Future of Life Institute to progress our research. Our ambition is to understand the dynamics of safety compliant behaviours within the ongoing AI research and development race.

“We hope to provide advice on how to regulate the present wave of developments in a timely manner, and to provide recommendations to policy makers and involved participants on preventing undesirable race escalation.”
