In philosophy and mathematics, Newcomb's paradox, also referred to as Newcomb's problem, is a thought experiment involving a game between two players, one of whom is able to predict the future.
Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969, and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games." Today it is a much debated problem in the philosophical branch of decision theory.
The problem
There
is an infallible predictor, a player, and two boxes designated A and B.
The player is given a choice between taking only box B, or taking both
boxes A and B. The player knows the following:[4]
- Box A is clear, and always contains a visible $1,000.
- Box B is opaque, and its content has already been set by the predictor:
- If the predictor has predicted the player will take both boxes A and B, then box B contains nothing.
- If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
The player does not know what the predictor predicted or what box B contains while making their choice.
Game theory strategies
| Predicted choice | Actual choice | Payout |
| --- | --- | --- |
| A + B | A + B | $1,000 |
| A + B | B | $0 |
| B | A + B | $1,001,000 |
| B | B | $1,000,000 |
In his 1969 article, Nozick noted that "To almost everyone, it is
perfectly clear and obvious what should be done. The difficulty is that
these people seem to divide almost evenly on the problem, with large
numbers thinking that the opposing half is just being silly." The problem continues to divide philosophers today.
Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is called a paradox
because two analyses that both sound intuitively logical give
conflicting answers to the question of what choice maximizes the
player's payout.
- Under the expected utility principle, when the predictor is almost certainly or certainly correct, the player should take only box B. This choice statistically maximizes the player's winnings, at about $1,000,000 per game.
- Under the dominance principle, the player should choose the strategy that is always better; taking both boxes A and B always yields $1,000 more than taking only B. However, the expected utility of "always $1,000 more than B" depends on the statistical payout of the game; when the predictor is almost certainly or certainly correct, taking both boxes sets the player's winnings at about $1,000 per game (see the calculation sketched after this list).
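The conflict can be made concrete with a short calculation. The following minimal sketch (in Python; the dollar figures come from the payout table above, and the accuracy values are purely illustrative) computes the expected payout of each strategy as a function of the probability p that the prediction is correct.

```python
def expected_payout(strategy: str, p: float) -> float:
    """Expected payout, where p is the probability that the predictor is correct."""
    if strategy == "one-box":
        # Correct prediction: box B holds $1,000,000; incorrect: box B is empty.
        return p * 1_000_000 + (1 - p) * 0
    if strategy == "two-box":
        # Correct prediction: box B is empty, leaving only the $1,000 in box A;
        # incorrect prediction: $1,000,000 in box B plus the $1,000 in box A.
        return p * 1_000 + (1 - p) * 1_001_000
    raise ValueError(f"unknown strategy: {strategy}")

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box={expected_payout('one-box', p):>12,.0f}  "
          f"two-box={expected_payout('two-box', p):>12,.0f}")
```

For any accuracy above roughly p = 0.5005, the expected value of taking only box B exceeds that of taking both; yet for any fixed, already determined content of box B, taking both boxes pays exactly $1,000 more, which is the dominance argument.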
David Wolpert and Gregory Benford
point out that paradoxes arise when not all relevant details of a
problem are specified, and there is more than one "intuitively obvious"
way to fill in those missing details. They suggest that in the case of
Newcomb's paradox, the conflict over which of the two strategies is
"obviously correct" reflects the fact that filling in the details in
Newcomb's problem can result in two different noncooperative games, and
each of the strategies is reasonable for one game but not the other.
They then derive the optimal strategies for both of the games, which
turn out to be independent of the predictor's infallibility, questions
of causality, determinism, and free will.
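Wolpert and Benford's analysis is game-theoretic; the sketch below is not their formalism but an illustrative reconstruction of the ambiguity they describe, assuming the two fill-ins differ only in whether the prediction is fixed independently of the actual choice or always tracks it.

```python
# Payouts (in dollars) from the table above, keyed by (predicted, actual) choice.
PAYOFF = {
    ("two-box", "two-box"): 1_000,
    ("two-box", "one-box"): 0,
    ("one-box", "two-box"): 1_001_000,
    ("one-box", "one-box"): 1_000_000,
}

def fixed_prediction_game(actual: str, predicted: str) -> int:
    # Fill-in 1: the prediction is fixed independently of the actual choice,
    # so for either value of `predicted`, two-boxing pays $1,000 more (dominance).
    return PAYOFF[(predicted, actual)]

def reactive_prediction_game(actual: str) -> int:
    # Fill-in 2: the prediction always matches the actual choice, so the player
    # effectively picks a diagonal cell of the table and one-boxing pays more.
    return PAYOFF[(actual, actual)]

# In the fixed-prediction game, two-boxing beats one-boxing for either prediction:
for predicted in ("one-box", "two-box"):
    assert (fixed_prediction_game("two-box", predicted)
            == fixed_prediction_game("one-box", predicted) + 1_000)

# In the reactive-prediction game, one-boxing is optimal:
assert reactive_prediction_game("one-box") > reactive_prediction_game("two-box")
```

Dominance reasoning is sound in the first game and expected-utility reasoning in the second, mirroring the claim that each strategy is reasonable for one game but not the other.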
Causality and free will
| Predicted choice | Actual choice | Payout |
| --- | --- | --- |
| A + B | A + B | $1,000 |
| B | B | $1,000,000 |
Causality issues arise when the predictor is posited as infallible and incapable of error; Nozick sidesteps these issues by positing that the predictor's predictions are "almost certainly" correct. Nozick also stipulates that if the predictor predicts that the player will choose randomly, then box B will contain nothing. This assumes that inherently random or unpredictable events, such as free will or quantum mind processes, would not come into play during the process of making the choice.
However, these issues can still be explored in the case of an
infallible predictor. Under this condition, it seems that taking only B
is the correct option. This analysis argues that we can ignore the
possibilities that return $0 and $1,001,000, as they both require that
the predictor has made an incorrect prediction, and the problem states
that the predictor is never wrong. Thus, the choice becomes whether to
take both boxes with $1,000 or to take only box B with $1,000,000—so
taking only box B is always better.
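A minimal sketch of this restriction, assuming, as the argument does, that outcomes in which the prediction is incorrect simply cannot occur:

```python
# Payouts keyed by (predicted, actual) choice, as in the tables above.
PAYOFF = {
    ("two-box", "two-box"): 1_000,
    ("two-box", "one-box"): 0,
    ("one-box", "two-box"): 1_001_000,
    ("one-box", "one-box"): 1_000_000,
}

# With an infallible predictor, only outcomes where the prediction
# matches the actual choice remain feasible.
feasible = {actual: payout
            for (predicted, actual), payout in PAYOFF.items()
            if predicted == actual}

print(feasible)                          # {'two-box': 1000, 'one-box': 1000000}
print(max(feasible, key=feasible.get))   # 'one-box'
```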
William Lane Craig has suggested that, in a world with perfect predictors (or time machines, because a time machine could be used as a mechanism for making a prediction), retrocausality can occur.
If a person truly knows the future, and that knowledge affects their
actions, then events in the future will be causing effects in the past.
The chooser's choice will have already caused the predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will
and choosers will do whatever they're fated to do. Taken together, the
paradox is a restatement of the old contention that free will and determinism
are incompatible, since determinism enables the existence of perfect
predictors. Put another way, this paradox can be equivalent to the grandfather paradox;
the paradox presupposes a perfect predictor, implying the "chooser" is
not free to choose, yet simultaneously presumes a choice can be debated
and decided. This suggests to some that the paradox is an artifact of
these contradictory assumptions.
Gary Drescher argues in his book Good and Real
that the correct decision is to take only box B, by appealing to a
situation he argues is analogous—a rational agent in a deterministic
universe deciding whether or not to cross a potentially busy street.
Andrew Irvine argues that the problem is structurally isomorphic to Braess' paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.
Simon Burgess has argued that the problem can be divided into two
stages: the stage before the predictor has gained all the information
on which the prediction will be based, and the stage after it. While the
player is still in the first stage, they are presumably able to
influence the predictor's prediction, for example by committing to
taking only one box. Burgess argues that after the first stage is done,
the player can decide to take both boxes A and B without influencing the
predictor, thus reaching the maximum payout.
This assumes that the predictor cannot predict the player's thought
process in the second stage, and that the player can change their mind
at the second stage without influencing the predictor's prediction.
Burgess says that given his analysis, Newcomb's problem is akin to the toxin puzzle.
This is because both problems highlight the fact that one can have a
reason to intend to do something without having a reason to actually do
it.
Consciousness
Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person.
Suppose we take the predictor to be a machine that arrives at its
prediction by simulating the brain of the chooser when confronted with
the problem of which box to choose. If that simulation generates the
consciousness of the chooser, then the chooser cannot tell whether they
are standing in front of the boxes in the real world or in the virtual
world generated by the simulation in the past. The "virtual" chooser
would thus tell the predictor which choice the "real" chooser is going
to make.
Fatalism
Newcomb's paradox is related to logical fatalism
in that they both suppose absolute certainty of the future. In logical
fatalism, this assumption of certainty creates circular reasoning ("a
future event is certain to happen, therefore it is certain to happen"),
while Newcomb's paradox considers whether the participants of its game
are able to affect a predestined outcome.
Extensions to Newcomb's problem
Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature. For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed.
The meta-Newcomb problem
Another related problem is the meta-Newcomb problem.
The setup of this problem is similar to the original Newcomb problem.
However, the twist here is that the predictor may elect to decide
whether to fill box B after the player has made a choice, and the player
does not know whether box B has already been filled. There is also
another predictor: a "meta-predictor" who has reliably predicted both
the players and the predictor in the past, and who predicts the
following: "Either you will choose both boxes, and the predictor will
make its decision after you, or you will choose only box B, and the
predictor will already have made its decision."
In this situation, a proponent of choosing both boxes is faced
with the following dilemma: if the player chooses both boxes, the
predictor will not yet have made its decision, and therefore a more
rational choice would be for the player to choose box B only. But if the
player so chooses, the predictor will already have made its decision,
making it impossible for the player's decision to affect the predictor's
decision.