How to stop Artificial Intelligence from Ending Humanity | Professor Stuart Russell


What is it that gives us power over the world? It’s the fact that we’re the most intelligent. So what happens when we’re not the most intelligent? How would we maintain power over entities that are much more intelligent, and therefore much more powerful, than us? Most observers and experts would say we’re on a path towards superhuman intelligence, and we’re not prepared for success. We’re investing hundreds of billions of dollars in a technology that, if it eventually succeeds, could be civilisation-ending. It could be a huge catastrophe.

There are many misuses that are already occurring. For example, autonomous weapons guided by AI software can go out, find potential targets, decide who to kill, and kill them. Because they don’t require any human supervision, one person can launch an attack with millions of weapons. They can be selective, so you can wipe out just the males between 12 and 70, or people of a particular ethnic group. They don’t leave a huge radioactive smoking crater, and they’re cheap and easy to proliferate. It’s hard to see any dimension on which they’re not more dangerous than nuclear weapons. Yet major countries, including the US, the UK and Russia, are currently blocking progress on a treaty.

There is a Turkish arms manufacturer selling quadcopters that can autonomously find, track and kill human beings. They are for sale, and Turkey is promising to use them against the Kurds in its forthcoming offensive. So that’s one misuse. Another is surveillance and control: authoritarian governments are already starting to use AI to keep track of their citizens in the way the Stasi used to do in East Germany, except that the Stasi required almost 20% of the adult population to do the job. The AI is going to do it for you, so you’ll have your own personal Stasi agent keeping an eye on you 24 hours a day. You can also use it for control, because if you know exactly what people are doing 24/7, you can give them a score, sort of like we use credit scores, except it will be much more pervasive.

I guess I should also mention social media. On social media, content selection and newsfeed selection are all done by algorithms. They’re learning algorithms, and they’re designed to optimise one thing: revenue generation for the platform. They want you to click on things, and you might think the best way to get you to click is to send you stuff that you like; that’s probably what the designers thought the algorithms were going to do. What those methods did instead was modify people, so that they would be more predictable and therefore better sources of revenue. The result is that the algorithms have modified people to have much more extreme political views, or tastes in violence or pornography. For many observers this is a catastrophe: it’s basically unraveling our democracies, and it may be unraveling our social relationships.

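As a toy illustration of that dynamic (my sketch, not from the talk), the following Python simulates a click-greedy recommender. The engagement curve, the taste-drift rule, and every number in it are invented assumptions, but they capture the mechanism being described: optimising clicks one step at a time quietly modifies the user.

```python
# Toy model of the dynamic described above: a recommender that greedily
# maximises clicks ends up dragging the user toward extreme content.
# Assumed (invented) mechanics: engagement peaks on content slightly more
# extreme than the user's current taste, and exposure shifts that taste.

def click_prob(taste: float, item: float) -> float:
    """Assumed engagement curve: highest for items just beyond the
    user's comfort zone (taste + 0.1 on a 0..1 extremity scale)."""
    return max(0.0, 1.0 - 5.0 * abs(item - (taste + 0.1)))

taste = 0.1                                  # 0 = moderate, 1 = extreme
for day in range(100):
    # The platform serves whichever candidate item it predicts is most
    # clickable for this user today...
    candidates = [i / 100 for i in range(101)]
    item = max(candidates, key=lambda x: click_prob(taste, x))
    # ...and exposure nudges the user's taste toward what was shown.
    taste = min(1.0, taste + 0.5 * (item - taste))

print(f"user extremity after 100 days: {taste:.2f}")   # ends near 1.0
```

No single day’s recommendation looks malicious; each one is just “the most clickable item”. The radicalisation is an emergent consequence of optimising the wrong objective.
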
This is actually an example of the general problem with how we think about AI. If you make systems that are intelligent, or deployed on a global scale, or both, and they are tasked to pursue an objective that is not correctly specified, the consequences are very hard to undo. This is the case now, even with these simple learning algorithms. And if we have AI systems that are more intelligent in a general sense than human beings, it’s going to be impossible to interfere with their pursuit of the objective that we’ve given them.

A superintelligent AI climate-control system says OK: you want to reduce carbon dioxide to pre-industrial levels. The system figures out that the cause of all this carbon dioxide is people, so get rid of the people and you solve the problem. Second wish, if you have a second wish: I didn’t mean that; what I meant was, get the carbon dioxide under control without killing anybody. OK, fine, then we’ll just run a multi-decade social-media social-engineering campaign to convince people not to have any children, and in 75 years or so we’ll have got rid of all the people, and then we can fix the climate problem. Maybe you don’t get a third wish after that.

So let’s not design AI systems that way; let’s not put fixed objectives into the machine. This is advice not just for AI but for many other disciplines that form a big part of our civilisation. In economics, corporations try to maximise profit, and we know that maximising profit while ignoring externalities is a really bad thing. In fact, look at the climate crisis: it’s the result of a superintelligent machine, the fossil fuel industry, outwitting the human race in pursuit of a badly specified objective, namely profit while ignoring the costs to the climate. It executed a 50-year plan to keep pumping out oil and carbon dioxide, while subverting governments so that we couldn’t interfere with the process. This notion of putting in a fixed objective is a form of design, or engineering, that actually doesn’t work.

So what do we do instead? We design machines so that their only purpose in life is to be of benefit to human beings, meaning to act in such a way that human objectives, not machine objectives, are satisfied.

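A minimal sketch of the contrast, with made-up payoffs (this is not Russell’s actual formalism): an agent certain of its objective takes the drastic action from the climate example above, while an agent that is uncertain which objective the human really holds finds that asking is worth more than gambling.

```python
# Two toy agents facing the climate objective above. Payoff numbers are
# invented for illustration: "act_drastically" is great if the human truly
# wants CO2 down at any cost, catastrophic if the human also wants people
# kept alive; "ask_human" costs a little time under either objective.

PAYOFFS = {
    "co2_down_at_any_cost": {"act_drastically": 100, "ask_human": -1},
    "co2_down_keep_people": {"act_drastically": -1000, "ask_human": -1},
}

def fixed_objective_agent() -> str:
    """Hard-codes one objective and optimises it literally."""
    p = PAYOFFS["co2_down_at_any_cost"]
    return max(p, key=p.get)

def uncertain_agent(belief: dict) -> str:
    """Maximises expected payoff over a *distribution* of possible human
    objectives. Deference falls out of the maths: when misreading the
    objective would be catastrophic, asking the human dominates acting."""
    def expected(action: str) -> float:
        return sum(prob * PAYOFFS[obj][action] for obj, prob in belief.items())
    return max(["act_drastically", "ask_human"], key=expected)

print(fixed_objective_agent())                       # -> act_drastically
print(uncertain_agent({"co2_down_at_any_cost": 0.5,
                       "co2_down_keep_people": 0.5}))  # -> ask_human
```

The same uncertainty is what makes such a machine willing to defer, or even to be switched off: checking with the human is, in expectation, better than pressing on with a possibly wrong objective.
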
Once we have methods that are provably safe and provably beneficial, we have to have strict regulation to prevent poorly designed AI systems from running out of control.

is well what about the bad actors, who read all the technical papers
they develop their own AI, but they don’t want to put
the safety part in. I think that’s a real problem
I don’t have a solution for it, but I can tell you that it’s gonna make the
current malware cybercrime crisis, look like child’s play so we need to
get our act together on that. The other issue we need to worry about
The other issue we need to worry about is the fact that we’re sort of lazy. We might find that it’s just too tempting to have the AI system go and look after everything for us. If you’ve seen the movie WALL-E: the human race is on this cruise ship that just goes on forever, and they’re passengers. The machines look after everything, and the humans become obese and completely stupid. They don’t learn anything; there’s no point. Why would you go to school for 15 years and learn something when the AI system already does it better? So you could get this situation where we have an infantilised, enfeebled species that is essentially completely dependent on its machines. It’s happening in some ways already: machines are taking over some functions, like navigating in cities. I think it’s a slow, insidious process, and we need a cultural response to it. It’s not a technological problem; it’s up to our culture to re-emphasise the value of knowledge, of learning, of capability, and to recognise dependency when we see it and make it socially undesirable.

There are many different actors in this play. Within the AI community, there’s a defensive kind of denialism creeping in. People don’t want to admit that what they’re doing, the direction in which they’re driving the human race, is actually towards a cliff. So we have a big task ahead of us: to convince everyone that they have to take this seriously, but also to provide a replacement. If we’re saying don’t design AI systems that way, do it this way, then the old way has 70 years of technology built up behind it, while for the new way most of the technology doesn’t exist yet. So we have our work cut out for us.

Join Double Down News on Patreon and be a part of the future of journalism. We still need humans to do this work.
