Moral uncertainty

From Lesswrongwiki
Revision as of 05:28, 29 June 2012 by Steven0461 (talk | contribs)

Moral uncertainty (or normative uncertainty) is uncertainty about what you should do — not merely uncertainty about external facts, like what consequences will follow a given course of action, but uncertainty about the moral implications of those facts. You might know you're in a position to save three strangers at the cost of your own life, and still not know what to do, because you're not sure whether to apply utilitarian or egoist morality.

Expected utility maximization is well established as the right way of dealing with ordinary uncertainty. However, it is not straightforward to apply expected utility to moral uncertainty. Many moral theories don't assign utilities at all. Theories that do assign utilities face the problem that they can multiply all their utilities by any positive constant and still prefer the same outcomes and actions. This raises the question of what constant to pick in each case: in other words, how to calibrate the value of a util across theories.
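The calibration problem can be illustrated with a toy example (the theory names, numbers, and action labels below are invented for illustration, not drawn from any particular author's proposal). Naively maximizing credence-weighted utility across theories is sensitive to each theory's arbitrary utility scale:

```python
# Toy illustration of intertheoretic calibration: naively maximizing
# credence-weighted utility depends on each theory's arbitrary scale.

def best_action(credences, utilities):
    """Pick the action maximizing credence-weighted utility.

    credences: {theory: probability of that theory being correct}
    utilities: {theory: {action: utility the theory assigns}}
    """
    actions = next(iter(utilities.values())).keys()
    return max(actions,
               key=lambda a: sum(credences[t] * utilities[t][a]
                                 for t in credences))

credences = {"utilitarian": 0.6, "egoist": 0.4}
utilities = {
    "utilitarian": {"sacrifice": 3.0, "refuse": 0.0},
    "egoist":      {"sacrifice": -1.0, "refuse": 1.0},
}
print(best_action(credences, utilities))  # sacrifice (1.4 vs 0.4)

# Multiply the egoist theory's utilities by 10: its own preference
# ordering is unchanged, but the aggregate verdict flips.
utilities["egoist"] = {a: 10 * u for a, u in utilities["egoist"].items()}
print(best_action(credences, utilities))  # refuse (4.0 vs -2.2)
```

Since each theory's utilities are only defined up to scale, the answer depends on an arbitrary choice of constant, which is exactly the calibration problem described above.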

Another approach is to follow only the most probable theory. This has its own problems. For example, what if the most probable theory points only weakly in one way, and other theories point strongly the other way?

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord's proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
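The final step of the model can be sketched in a few lines. This is a deliberately minimal simplification: it omits the bargaining stage entirely (delegates simply vote for their theory's favored action), and the seat count, theory names, and actions are invented for illustration.

```python
# Minimal sketch of the parliament's final vote (bargaining omitted):
# each theory receives seats in proportion to its probability, and the
# action with the most votes is chosen outright.

from collections import Counter

def parliament_vote(credences, favorites, seats=100):
    """credences: {theory: probability}; favorites: {theory: preferred action}."""
    votes = Counter()
    for theory, p in credences.items():
        votes[favorites[theory]] += round(seats * p)  # delegates for this theory
    return votes.most_common(1)[0][0]  # plurality winner takes the decision

credences = {"utilitarian": 0.6, "egoist": 0.4}
favorites = {"utilitarian": "sacrifice", "egoist": "refuse"}
print(parliament_vote(credences, favorites))  # sacrifice (60 votes to 40)
```

In the full proposal, the interesting behavior comes from the omitted bargaining stage: across many decisions, a low-probability theory can trade away its votes on issues it cares little about in exchange for support on the few issues it considers crucial.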
