Moral uncertainty

Moral uncertainty (or normative uncertainty) is uncertainty about what you should do — not merely uncertainty about external facts, like what consequences will follow a given course of action, but uncertainty about the moral implications of those facts. You might know you're in a position to save three strangers at the cost of your own life, and still not know what to do, because you're not sure whether to apply utilitarian or egoist morality.

Expected utility maximization is well established as the right way of dealing with ordinary uncertainty about facts. However, it is not straightforward to apply expected utility to moral uncertainty. Many moral theories don't assign utilities at all. And a theory that does assign utilities can scale them all by a positive constant without changing which outcomes or actions it prefers, so there is no privileged scale on which to compare utilities across theories.
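To make the difficulty concrete, here is a minimal sketch in which naively maximizing credence-weighted utility across theories is sensitive to an arbitrary rescaling of one theory's utilities. The theory names, actions, and all numbers are invented for illustration:

```python
# Toy sketch of naive "expected moral value"; theories, actions, and
# all numbers are invented for illustration.

credences = {"utilitarianism": 0.6, "egoism": 0.4}

# Utilities each theory assigns to two actions.
utilities = {
    "utilitarianism": {"sacrifice": 3.0, "abstain": 1.0},
    "egoism":         {"sacrifice": 0.0, "abstain": 5.0},
}

def best_action(utilities):
    """Pick the action with the highest credence-weighted utility."""
    def ev(action):
        return sum(credences[t] * utilities[t][action] for t in credences)
    return max(["sacrifice", "abstain"], key=ev)

print(best_action(utilities))  # abstain: 0.6*1 + 0.4*5 = 2.6 beats 1.8

# Rescale egoism's utilities by 0.1. Egoism's own ranking of the actions
# is unchanged, yet the aggregate verdict flips.
rescaled = dict(utilities)
rescaled["egoism"] = {a: 0.1 * u for a, u in utilities["egoism"].items()}
print(best_action(rescaled))   # sacrifice: 1.8 beats 0.6*1 + 0.4*0.5 = 0.8
```

Since egoism's ranking of the actions survives the rescaling, nothing internal to the theories privileges one aggregate verdict over the other.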

Another approach is to follow only the most probable theory. This has its own problems: for example, the most probable theory might point only weakly in one direction while the other theories point strongly in the opposite direction.
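A continuation of the same toy setup (theory names and numbers again invented) shows this failure mode:

```python
# The "follow the most probable theory" rule, with invented numbers.
credences = {"A": 0.4, "B": 0.3, "C": 0.3}

# Theory A barely prefers act1; theories B and C strongly prefer act2.
utilities = {
    "A": {"act1": 1.01, "act2": 1.00},
    "B": {"act1": 0.00, "act2": 10.0},
    "C": {"act1": 0.00, "act2": 10.0},
}

favorite = max(credences, key=credences.get)                    # theory A
choice = max(utilities[favorite], key=utilities[favorite].get)
print(choice)  # act1, even though theories totaling 0.6 credence
               # consider act1 far worse
```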

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The delegates then bargain for support as if the probability of each action being taken were proportional to its votes, though the actual output is always the action with the most votes. This proposal lets probable theories determine most actions, while still giving less probable theories influence on issues they consider unusually important.
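Bostrom and Ord do not fully specify the bargaining mechanics. The sketch below is a deliberately simplified stand-in, not their actual procedure: each theory is assumed to spread its credence-proportional delegates across issues according to the stakes it perceives, with the theories preferring opposite actions on each issue. All names and numbers are invented.

```python
# Simplified stand-in for the parliamentary model; not Bostrom and Ord's
# actual procedure. Theories, issues, and stake numbers are invented.

credences = {"A": 0.6, "B": 0.4}

# stakes[theory][issue]: how much the theory cares about getting its way.
stakes = {
    "A": {"issue1": 1.0, "issue2": 1.0},   # A cares mildly about both
    "B": {"issue1": 0.1, "issue2": 9.9},   # B cares almost only about issue2
}

N_DELEGATES = 100

def votes_on(issue, theory):
    """Delegates the theory commits to this issue: delegation is
    credence-proportional, allocation is stake-proportional."""
    share = stakes[theory][issue] / sum(stakes[theory].values())
    return N_DELEGATES * credences[theory] * share

for issue in ("issue1", "issue2"):
    tally = {t: votes_on(issue, t) for t in credences}
    print(issue, "won by", max(tally, key=tally.get), tally)

# issue1 won by A (30.0 vs 0.4): the probable theory prevails by default.
# issue2 won by B (39.6 vs 30.0): the improbable theory wins the issue
# it considers unusually important.
```

This reproduces the qualitative behavior described above: the majority theory wins routine issues, while the minority theory can dominate the one issue it stakes everything on.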
