'''Moral uncertainty''' (or '''normative uncertainty''') is uncertainty about what you should do: not merely uncertainty about external facts, like what consequences will follow a given course of action, but uncertainty about the ''moral implications'' of those facts. You might know you're in a position to save three strangers at the cost of your own life, and still not know what to do, because you're not sure whether to apply utilitarian or egoist morality.

This kind of uncertainty sits a level above the more usual uncertainty of [[Decision theory|what to do given incomplete information]], since it concerns which moral theory is right; even with complete information about the world, it would still remain.<ref>Crouch, William (2010). "Moral Uncertainty and Intertheoretic Comparisons of Value". BPhil thesis, University of Oxford, p. 6. Available at: http://oxford.academia.edu/WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value</ref> At the lower level, one can be in doubt about how to act because not all the relevant empirical information is available, for example when choosing whether or not to deploy a new technology (e.g. [[AGI]], [[Biological Cognitive Enhancement]], [[Mind Uploading]]) without fully knowing its nature and consequences. But even if we ideally came to know every consequence of a new technology, we would still need to know which ethical perspective is right for evaluating those consequences. Suppose, for example, that we knew for certain that a new [[Wikipedia:Terraforming|terraforming]] technology would enable more humans to live on another planet, each with slightly less welfare than people have on Earth. An average [[utilitarianism|utilitarian]] would judge these consequences as bad, while a total utilitarian would endorse the technology. If we are uncertain about which of these two theories is right, what should we do?
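
A toy calculation, with numbers invented purely for illustration, makes the disagreement concrete. Suppose Earth holds 10 billion people at welfare 10 each, and the new technology adds 5 billion colonists at welfare 8 each:

<math>\text{Total welfare: } 10^{10} \times 10 = 1.0 \times 10^{11} \quad\rightarrow\quad 1.0 \times 10^{11} + (5 \times 10^{9}) \times 8 = 1.4 \times 10^{11}</math>

<math>\text{Average welfare: } 10 \quad\rightarrow\quad \frac{1.4 \times 10^{11}}{1.5 \times 10^{10}} \approx 9.33</math>

Total welfare rises while average welfare falls, so the two theories give opposite verdicts on the same fully known facts.
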
One approach is to follow only the most probable theory. This has its own problems: for example, what if the most probable theory points only weakly in one direction, while other theories point strongly in the opposite one? A better approach is to "perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results."<ref>Sepielli, Andrew (2008). "Moral Uncertainty and the Principle of Equity among Moral Theories". ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://escholarship.org/uc/item/7h5852rr.pdf</ref> However, we would still need a method of comparing value across theories: a [[utilon]] in one theory may not be worth the same as a utilon in another. And outside [[consequentialism]] there are many ethical theories that don't use utilons, or any quantifiable values at all. This is still an open problem.
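
Here is a minimal sketch of the expected-moral-value calculation. It assumes, unrealistically, that every theory's values have already been put on a common scale; the theories and numbers are hypothetical, chosen only so that the most probable theory points weakly one way while a less probable theory points strongly the other:

<source lang="python">
# Expected moral value of an action: weight each theory's valuation of the
# action by the probability that the theory is true, then sum the results.
# Assumes all values are already on a common intertheoretic scale.

def expected_moral_value(action, theories):
    """theories: list of (probability, value_function) pairs."""
    return sum(p * value(action) for p, value in theories)

def best_action(actions, theories):
    return max(actions, key=lambda a: expected_moral_value(a, theories))

# Hypothetical example: 60% credence in a theory that mildly prefers
# refraining, 40% credence in a theory that strongly prefers acting.
theories = [
    (0.6, lambda a: {"act": 0.0, "refrain": 1.0}[a]),
    (0.4, lambda a: {"act": 3.0, "refrain": 1.0}[a]),
]
print(best_action(["act", "refrain"], theories))  # -> act (1.2 vs 1.0)
</source>

Following only the most probable theory would pick "refrain" here; weighting by both probability and strength of preference picks "act".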
  
[[Expected utility]] is well established as the right way of dealing with ordinary uncertainty, but it is not straightforward to apply it to moral uncertainty. Many moral theories don't assign utilities at all. Theories that do assign utilities face the problem that multiplying all their utilities by a positive constant leaves their own preferences over outcomes and actions unchanged. This raises the question of what constant to pick for each theory: in other words, how to calibrate the value of a [[Utils|util]] across theories.
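
A small sketch, with invented numbers, of why calibration matters: rescaling one theory's utilities changes nothing inside that theory, yet it can flip which action maximizes expected moral value overall:

<source lang="python">
# Rescaling theory B's utilities by a positive constant leaves B's own
# ranking of actions unchanged, but can flip the overall expected-value
# verdict between actions x and y.

probs = {"A": 0.6, "B": 0.4}
values = {
    "A": {"x": 1.0, "y": 0.0},  # theory A prefers x
    "B": {"x": 0.0, "y": 2.0},  # theory B prefers y
}

def best(scale_b):
    ev = {a: probs["A"] * values["A"][a] + probs["B"] * scale_b * values["B"][a]
          for a in ("x", "y")}
    return max(ev, key=ev.get)

print(best(1.0))  # -> y  (expected values: x = 0.6, y = 0.8)
print(best(0.5))  # -> x  (expected values: x = 0.6, y = 0.4)
</source>
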
[[Nick Bostrom]] and [[Toby Ord]] have proposed a [http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html parliamentary model]. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord's proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
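
The sketch below is a drastic simplification, with hypothetical theories and numbers: delegates are allotted in proportion to probability and each theory simply casts all its delegates for its top-ranked action, so the bargaining and vote-trading that do the real work in Bostrom and Ord's proposal are not modeled at all:

<source lang="python">
# A bare-bones parliamentary vote: seats in proportion to probability,
# each theory votes as a bloc for its favorite action, plurality wins.
# Bargaining between delegations, the heart of the proposal, is omitted.

from collections import Counter

def parliament(actions, theories, seats=100):
    """theories: list of (probability, value_function) pairs."""
    votes = Counter()
    for prob, value in theories:
        delegates = round(prob * seats)
        favorite = max(actions, key=value)
        votes[favorite] += delegates
    return votes.most_common(1)[0][0]  # action with the most votes wins

# Hypothetical example: under bloc voting the larger delegation simply
# wins, even though both theories rank the compromise action "y" second.
theories = [
    (0.55, lambda a: {"x": 2, "y": 1, "z": 0}[a]),
    (0.45, lambda a: {"x": 0, "y": 1, "z": 2}[a]),
]
print(parliament(["x", "y", "z"], theories))  # -> x (55 votes to 45)
</source>

Under bloc voting the 55-seat delegation always prevails; the point of the bargaining stage is precisely to let the 45-seat delegation trade away votes on issues it cares less about for influence on issues it considers unusually important.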
  
==Blog posts==
 
* [http://blog.practicalethics.ox.ac.uk/2012/01/practical-ethics-given-moral-uncertainty/ Practical ethics given moral uncertainty]

==External links==

* [http://mss3.libraries.rutgers.edu/dlr/showfed.php?pid=rutgers-lib:26567 Practical rationality and normative uncertainty] (Andrew Sepielli's dissertation on the subject)

==References==
{{Reflist|2}}
 
==See also==
 
* [[Expected utility]]
* [[Value learning]]
* [[Metaethics]]

[[Category:Concepts]]
