
Principled Discussion on Artificial Intelligence Touches on Arms Race and War | Future of Life Institute

Two years ago, after an exciting conference in Puerto Rico that included many of the top minds in AI, the Future of Life Institute produced two open letters, one on beneficial AI and one on autonomous weapons, which were signed and supported by tens of thousands of people. But that was just one step along the path to creating artificial intelligence that will benefit us all.

This month, the institute brought together even more AI researchers, entrepreneurs, and thought leaders for its second Beneficial AI Conference, held in Asilomar, Calif. Speakers and panelists discussed the future of AI, economic impacts, legal issues, ethics, and more. And during breakout sessions, groups gathered to discuss what basic principles everyone could agree on to help shape a future of beneficial AI.

The organizers said they found it extraordinarily inspiring to be part of BAI 2017, the Future of Life Institute's second conference on the future of artificial intelligence.

Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

This sense among the attendees echoes a wider societal engagement with AI that has heated up dramatically over the past few years. Due to this rising awareness of AI, dozens of major reports have emerged from academia (e.g. the Stanford 100 year report), government (e.g. two major reports from the White House), industry (e.g. materials from the Partnership on AI), and the nonprofit sector (e.g. a major IEEE report).


Two instances where conferees discussed weapons and war are noted below. Follow the links for more of their stories and other interviews that came out of the recent BAI 2017 conference.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. It’s actually stupid AI that they’re going to be fielding in this arms race to begin with and that’s actually quite worrying – that it’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today. You have to see the recent segment on 60 Minutes to see the terrifying swarms of robot UAVs that the American military is now experimenting with.”

-Toby Walsh, Guest Professor at Technical University of Berlin, Professor of Artificial Intelligence at the University of New South Wales, and leader of the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research
Read his complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I’m not a fan of wars, and I think it could be extremely dangerous. Obviously I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

-Stefano Ermon, Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory
Read his complete interview here.

For much more information and links to additional interviews, please visit: A Principled AI Discussion in Asilomar – Future of Life Institute
