Fun theory

From Lesswrongwiki
 
'''Fun theory''' addresses the following problem: If we can arrange to live for a very long time with greatly increased intelligence and physical abilities, how can we continue to have any fun?

==Keep curiosity and challenge alive==
Fun theory seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied.
  
==The risk of boredom==
If we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. The open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight and challenge ever more powerful minds. Alternatively, we could change our mental architecture so that even the most trivial entertainments keep us amused forever. This, too, is a future that we don't want.

Likewise, when superhumanly intelligent machines take care of our every need--then what fun will remain?
  
 
These scenarios could be seen as an argument against key hopes of [[transhumanism]] for the improvement of the human condition, including lifespan extension, human intelligence enhancement, physical enhancement, and Friendly AI.

==A desirable Utopia==
 
Transhumanists work towards a much better human future--a Utopia--but, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly described it], Utopians of all stripes, Socialist, Enlightenment, or Christian, have generally been unable to imagine futures where anyone would actually ''want'' to live.
  
<blockquote>It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be.</blockquote>
  
 
==Fun Theory and complex values==
 
A key insight of Fun Theory, in its current embryonic form, is that eudaimonia is [[Complexity of value|complicated]]--there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: Aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.
  
 
It is a common mistake in discussions of future AI to extract one element of human preferences and advocate that it alone be maximized. This would lead to the neglect of all the other values. For example, if we simply optimize for pleasure or happiness, we will "wirehead"--stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant.
  
 
Enhanced humans will start with the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system that we have, rather than inventing new values.
 
==Boredom==
 
Intelligence enhancement is among the most important of the improvements to the human condition that we can work towards. Because of this, Fun Theory puts a particular emphasis on the values opposed to boredom: curiosity, learning, and intellectual exploration. The open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight and challenge ever more powerful minds.
 
 
==Relation to Friendly AI==
 
An artificial general intelligence which is created to help humanity ([[Friendly AI]]), and which grows to be much more powerful than us, must have as its goal the promotion of the human value system.
 
 
  
 
==External links==
 

Revision as of 02:02, 3 September 2012
