Philosopher Nick Bostrom talks about the existential risks faced by humanity


Thanks for making it up
on a Sunday morning. It actually means we will reach
the future slightly ahead of everybody else. So we are ahead of the game. There’s a big trapdoor here. If you see me disappearing,
I will just have fallen down one step. I thought we wanted to take a
step back and try to take in the really big picture here. So we’re going to zoom out for
the next 15 minutes or so, look at the grand sweep
of human history. So this was back then. That used to be us. We’ve made a huge amount of
technological progress since then, come a long way. And we now have this thing that
we think of as the human condition, which we think of as
the normal way for things to be, the idea that we have
more food than we need, that we go to work in offices, that
we read and write and do all of these things that constitute
modern life. This seems like normalcy
to us. And any hypothesis that suggests
that things could be radically different seems to carry a huge burden of proof, because it seems abnormal. But if you take a step back and
consider this normal human condition from almost any point
of view, it seems like a huge anomaly. If we
look at it from a historical point of view, this modern human condition is
a very brief slice of all of history, not to mention
ecological or geological time scales. Again, if we look at it from a
spatial perspective, almost everything that there
is, is just vacuum. And we live on this surface of
this little crumb floating around in an almost
infinite void. So it’s very strange, this
kind of thing that we take for normal. And if we consider things from a
more abstract point of view, it seems that this human
condition constitutes a narrow band of the space of all
possible ways that things could be. If we plot here on the y-axis
some kind of capability, say, levels of technology or general
economic production capability, we inhabit
this zone. If our level of capability
fell beneath a certain threshold, we would dwindle
to extinction. We couldn’t sustain ourselves. There is a concept
known as minimum viable population size. And if a species goes below
that, there are not enough members left of the species,
and it goes extinct. I think there is another
threshold, which I call the self-sustainability
threshold, such that if we exceed that, then there’s
another attractor state, which is basically colonising all of
the accessible universe. Like once you develop the
technology to create self-replicating probes that
can be sent out into space and make more copies of themselves. And in that scenario, we are
maybe pretty much guaranteed to survive for billions of years
and to spread through the universe. And these might be two sorts
of stable states. Once you are extinct,
you’re extinct. Once you start spreading,
you continue to spread. But we are in this intermediate
zone where things could go either way. I’d like to put out for your
consideration this hypothesis, which I call the Technological
Completion Conjecture, which says that if science and
technology continue to develop, then eventually, all
important basic capabilities that could be obtained through
some possible technology will be obtained. So this is almost a tautology,
but not quite. You could imagine that there
were several different technology trees. And if you started climbing
one, you could continue to make progress forever
on that tree. You get more and more
advanced on that. But there would be some other
set of technologies that you might never get
around to developing. I don’t think that’s
exactly how it is. I think it’s more like maybe,
to use a metaphor, like a box of sand. And by funding research or,
like, collecting sometimes [INAUDIBLE], you pour some sand into this box of discovery. And where you pour it
helps determine where the sand builds up. But if you keep pouring sand,
eventually the whole box will fill up. The sand kind of spreads out. I think that science and
technology is more like that. There are spillover effects. And it doesn’t so much matter
where you start; if you just keep on doing it, you’ll
eventually realise all the technologies that are physically
possible; unless, of course, we go extinct,
which is a real possibility as well. So if we look back at historical
events, I mean there might really be only two
things that have made a fundamental change to
the human condition. We have the Agricultural
Revolution first, which changed the growth rate
of the human economy. With agriculture, more people
could live on the same plot of land. You have higher densities, more
people, and more people who can come up
with new ideas. And so the rate of idea
generation picks up dramatically. We also get
writing, because you want to be able to administer empires. You have states emerging that
want to extract the surplus. And they need to keep track
of who owes what taxes. And you get all of these
side effects as well. You get social stratification. You get extreme inequality,
which you didn’t have before agriculture. You get slavery and
many other things. And another big watershed
transition in human history is the Industrial Revolution where,
for the first time, you get this phenomenon where the
rate of economic growth starts to outpace the rate of
population growth. So before that, the world
economy was growing, but the population grew at
the same rate. Like basically, we were at the
Malthusian limit, plus or minus fluctuations. But with the Industrial
Revolution, the economy starts to grow so rapidly that
population can’t keep up. And that means that average
income starts to rise, which leads us to this modern
human condition.
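As a rough sketch of the arithmetic, writing Y for total output, P for population, and y = Y/P for income per person (these symbols are just shorthand introduced here):

$$\frac{\dot{y}}{y} \;=\; \frac{\dot{Y}}{Y} \;-\; \frac{\dot{P}}{P},$$

so average income rises exactly when the economy’s growth rate exceeds the population’s, and stays flat at the Malthusian limit, where the two rates are roughly equal.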
We also get other things. We get different forms of
warfare and world wars and nuclear weapons and
all the other accoutrements of modernity. So if we ask what the next big
transition might be, my best guess would be that if we’re
going to break through to this post-human condition, it
will be through something that creates greater than
human intelligence. There are two possible
paths towards this. One would be some sort of
biological enhancement; the other, machine intelligence. Ultimately, I think that machine
intelligence has a far greater potential and will
surpass biological intelligence. And obviously, that’s a whole
different topic on its own, but I just want to introduce
that here. If we look at things that might
bring us beneath the threshold and cause
extinction, then again, machine intelligence, I think, would
rank among those technologies with that potentially
transformative impact. There are some others here
that might also pose existential risks, and
I’ve listed some unknowns there. Because think about it: if we had
asked this question even just 100 years ago, which is not
really that long in this context, what the biggest
threats to human survival were, to the survival of our species,
that is, what would people have said? Well, they certainly wouldn’t
have proposed synthetic biology as the great threat. I mean, it didn’t exist,
neither did molecular nanotechnology nor
geoengineering nor artificial intelligence. So all the things that now, if
we look forward, look like really big threats weren’t
even in the conceptual inventory of people
100 years ago. So presumably, if we’re still
around in 100 years, then there might be new things that
we haven’t even thought could be dangerous that will have
been added to this list. So if you notice, by the way,
from this list, they are all related to human activities and,
more specifically, to technological inventions. There are risks from nature as
well– asteroids, volcanic eruptions, and all kinds
of things like that. But we have survived all of
those for 100,000 years. So it’s unlikely that they
will do us in within the next 100 years. Whereas in this century, we will
introduce radically new factors into the world
that we have no track record of surviving. If there were going to be big
existential risks, they’re going to be from these
new things. So think of it as a big urn full
of possible ideas that we can discover– new technologies,
new scientific breakthroughs. And by doing research and by
just trying out different things around the world, we are
pulling balls from this big urn, one by one. And these balls come in
different colours. The white balls, they
are the good ones– purely benign discoveries. And then there are a lot of
grey balls that we have discovered as well, like how
to split the atom, which gives you nuclear power plants, but also
nuclear weapons. So far we haven’t picked out
the black ball, one that spells doom for humanity. However, if we keep pulling
balls from this urn, and if there is a black ball in there,
then eventually, it looks like we will
discover it.
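One minimal way to see why, assuming purely for illustration that each ball drawn has some small, fixed, independent probability p > 0 of being black:

$$\Pr(\text{no black ball in the first } N \text{ draws}) \;=\; (1-p)^{N} \;\to\; 0 \quad \text{as } N \to \infty,$$

so however small p is, the chance of avoiding the black ball forever shrinks towards zero as long as we keep drawing.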
Suppose, for example, that it had turned out that nuclear weapons, instead of requiring
rare raw materials to construct– like, highly enriched uranium
or plutonium that’s very difficult to get and requires industrial-sized plants to create– suppose it had turned out that
it was something you could do in your microwave oven
by baking sand or something like that. That doesn’t work physically,
but before we made that discovery in physics, how could
we have known whether such a very
destructive technology would turn out to be very easy to make? If that had turned out to be
the case, what would have happened to human
civilisation? It might well have been the end
of it at that point if the destructive power of a nuclear
weapon could be instantiated as easily by baking sand
in your microwave oven. So the risk is that we’ll pull
up a black ball, and we don’t have the ability to
put it back again. We can’t uninvent things. We don’t have the ability as a
species, really, to undiscover important things. And as long as we remain
fractured in the way we are, then we just have to hope that
every ball we pull out will be white or grey, or at least not
this kind of unsurvivable black type. Now, on an individual basis,
what should we do in response to this kind of set
of possibilities? On the one hand, the possibility
of transcension into some kind of post-human
state. On the other hand, the
possibility of extinction or some other form of existential
catastrophe. So an existential risk is either
an extinction risk or some other way that we could
permanently and drastically destroy our future. Well, from an individual point
of view, if you only care about yourself, you might argue,
as this blog commenter “washbash” did, that “I
instinctively think go faster. Not because I think this is
better for the world. Why should I care about the
world when I’m dead and gone? I want it to go fast, damn it! This increases the chances I
have of experiencing a more technologically advanced
future.” Because the default is that you
all die; this is kind of what’s happened to most people
who have lived, not all, but about 90% or so. It’s surprising, actually:
maybe 5 to 10% of everybody who has ever lived is
alive now, just because of the population growth.
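As a rough check on that figure, using common demographic estimates, which put the number of people ever born at somewhere around 100 to 120 billion and the number alive today at roughly 7 to 8 billion:

$$\frac{\sim 8 \times 10^{9} \text{ alive}}{\sim 110 \times 10^{9} \text{ ever born}} \;\approx\; 7\%,$$

which sits comfortably inside that 5 to 10% range.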
But still, the odds are stacked against us. So unless we die
prematurely, we all just grow old, and then decay and die. Whereas, if there were some
radical technological transition, if machine
superintelligence came on to the playing field, then that
would shake things up. Maybe there would be a chance
then of living for cosmological time scales
rather than for decades. So superintelligence would sort
of telescope the entire future, because with
superintelligence doing the research, you get research on
digital time scales instead of biological time scales. So you might have 1,000 years
or a million years of discovery in one year, including, maybe,
uploading into computers. So from an individual point
of view, you might want to roll
to die anyway, so let’s speed this up a little bit,
maybe get this happening before we are all dead. And we’d have a chance there. From an altruistic or impersonal
perspective, it’s much less clear. If what you want to do is
maximise the expected value in the world, it might still be the
case that, ultimately, we want to realise all these
technological capabilities that are physically possible. We need these in order to
harvest humanity’s cosmic endowment, all these resources
out there in the universe that are completely inaccessible
to us now. But in the fullness of time, our
remote descendants might go on to use all these solar
systems and gas clouds to build fantastic civilisations
with a quintillion or so people living wonderful lives beyond
our imagination. So it might be that eventually
we do want to reach up here, to some high level on this
technology axis to fully realise the possible values
that we can create. But there are also some other
dimensions here– the amount of insight or wisdom
that we have and the degree to which we can solve
our global coordination problems and work
together rather than oppose one another. It might be that, ultimately, to
realise the best future, to get to utopia, we have to really
max out on all of these dimensions. We need to abolish war. We need to have great wisdom to
use the technology wisely. And we need the technology,
actually, to sort of conquer the material world. Now, that still leaves open
which order we want to develop these in. It might be the case that even
though we ultimately want as much technology as possible,
we first want to make more progress on the
coordination axis or on the wisdom axis so that once we do
develop these very dangerous technological capabilities, we
will then not immediately use them for warfare or just
foolishly in some way that blows up in our face. So I would propose this
concept of dynamic sustainability. Rather than thinking of
our goal as achieving some static state of sustainability,
where you extract resources from the natural environment
only at the same rate at which they are replenished, we
should think more in terms of this dynamic sustainability,
which is trying to get onto a trajectory that we can continue
on indefinitely and that will take us in
a good direction. So think of, like, a
rocket in midair that is burning a lot of fuel. And you could say, to make this
more sustainable, we should decrease the rate at which it
is burning fuel so that it just hovers in the air. Then it can last longer than
if you burn it faster. On the other hand, you might
just say that we should try to reach for escape velocity, maybe
even burning more fuel temporarily to escape
the gravitational field of the Earth. So here, the dynamic concept of
sustainability would come apart from the static one. I’m not suggesting we take this
metaphor too literally, saying that we should just burn
up more fossil fuels. But you get the general
idea there. And in terms of technology
development there, one could think of some kind of principle
of differential technology development. So even if we have an underlying
technological determinism of the kind I hinted
at before (that in the long run, if we just continue
science and technology, we might discover all technologies
that are generally useful), it doesn’t mean that it’s
irrelevant what we do. That is, the overall rate of technology
discovery can make a difference, insofar as we might
get more wisdom or more coordination first; but also,
the sequence in which individual technologies are
developed can be important. You want to get to the vaccine
before you get the biological warfare agents. You want to figure out how to
make AI safe before you figure out how to make AI, and so on. So one can get a more
fine-grained picture of what it is that we really should be
doing, from a moral point of view, in technology. Not just asking, can we
stop the technology or not? Well, it’s going to
happen anyway, so let’s at least make some
money by being the first to introduce it. But rather thinking in
terms of what we can do on the margin, whether we can
accelerate something relative to some other thing, and how
that then will affect the overall prospects for humanity
in the long run. Thank you very much. [APPLAUSE]
