Existential Risk Prevention as Global Priority

Nick Bostrom
University of Oxford

Global Policy, Volume 4, Issue 1, February 2013. DOI: 10.1111/1758-5899.12002
Abstract
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively
small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding
human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential
risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in
axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding
principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about
the ideal of sustainability.
Policy Implications
• Existential risk is a concept that can focus long-term global efforts and sustainability concerns.
• The biggest existential risks are anthropogenic and related to potential future technologies.
• A moral case can be made that existential risk reduction is strictly more important than any other global public good.
• Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state.
• Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios), but it is more important to build capacity to improve humanity's ability to deal with the larger existential risks that will arise later in this century. This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks.
• Perhaps the most cost-effective way to reduce existential risks today is to fund analysis of a wide range of existential risks and potential mitigation strategies, with a long-term perspective.
1. The maxipok rule
Existential risk and uncertainty
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002). Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10–20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment. The most reasonable estimate might be substantially higher or lower. But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake. Even a small probability of existential catastrophe could be highly practically significant (Bostrom, 2003; Matheny, 2007; Posner, 2004; Weitzman, 2009).
Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred. This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so.
In contrast, our species is introducing entirely new kinds of existential risk—threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks—that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible.
Probability can be understood in different senses. Most relevant here is the epistemic sense, in which probability is construed as (something like) the credence that an ideally reasonable observer should assign to the risk's materialising, based on currently available evidence. If something cannot presently be known to be objectively safe, it is risky at least in the subjective sense relevant to decision making. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion. It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit.
The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, high-consequence risks—especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). Furthermore, the conditional probability of X given that A is crucially flawed, P(X | ¬A), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (Figure 1) (Ord, Hillerbrand and Sandberg, 2010).
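To make the decomposition concrete, write ¬A for the event that the analysis is crucially flawed (as above) and A for the event that it is sound. The law of total probability gives

$P(X) = P(A)\,P(X \mid A) + P(\neg A)\,P(X \mid \neg A)$.

With purely illustrative numbers (mine, not the article's): if a sound analysis would put the catastrophe probability at $P(X \mid A) = 10^{-9}$, the chance of a crucial flaw is $P(\neg A) = 10^{-3}$, and a flawed analysis would leave the probability at $P(X \mid \neg A) = 10^{-2}$, then $P(X) \approx 10^{-3} \times 10^{-2} = 10^{-5}$. The all-things-considered probability is four orders of magnitude above the first-order estimate, and nearly all of it flows through the possibility that the analysis is flawed; this is the pattern Figure 1 depicts.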
Qualitative risk categories
Since a risk is a prospect that is negatively evaluated, the seriousness of a risk—indeed, what is to be regarded as risky at all—depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured. There are several types of such evaluation standard. For example, one could use a utility function that represents some particular agent's preferences over various outcomes. This might be appropriate when one's duty is to give decision support to a particular decision maker. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. This type of evaluation is more relevant when we are inquiring into what our society's (or our own individual) risk-mitigation priorities ought to be.
There are conflicting theories in moral philosophy about which normative evaluations are correct. I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories. Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative (in the sense that the value of two similar lives is twice that of one life). Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices. These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case.
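A minimal formalisation of these two assumptions (the notation is mine; the article states them only informally): let an outcome $w$ contain lives $l_1, \ldots, l_n$, and let $v(l_i)$ be the value of life $l_i$, depending only on its quality and duration. Then

$V(w) = \sum_{i=1}^{n} v(l_i)$,

with $v$ indifferent to when a life occurs and to whether it belongs to an existing or a merely possible future person. On this reading, destroying the potential for a large number of worthwhile future lives subtracts their full summed value, which is what gives existential catastrophe its enormous expected disvalue.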
Within this framework, then, we can roughly characterise a risk's seriousness using three variables: scope (the size of the population at risk), severity (how badly this population would be affected), and probability (how likely the disaster is to occur, according to the most reasonable judgment, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Figure 2).
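Although the text keeps this framework qualitative, one natural quantitative reading (an extrapolation from the aggregative assumptions above, not a formula stated in this passage) treats a risk's expected seriousness as the product of the three variables:

$E[\text{loss}] = \text{probability} \times \text{scope} \times \text{severity}$.

For example, on this reading a disaster with probability 0.01 that would affect $10^9$ people at full severity carries the same expected loss as a certain disaster affecting $10^7$ people.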
Figure 1. Meta-level uncertainty. Source: Ord et al., 2010. Factoring in the fallibility of our first-order risk assessments can amplify the probability of risks assessed to be extremely small. An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the grey area (right side). Most of the all-things-considered risk may lie in the grey area rather than in the black stripe.