
From Lesswrongwiki

Revision as of 00:17, 11 October 2012

'''Fun theory''' is the field of knowledge concerned with studying the concept of fun (as the opposite of boredom). It tries to answer questions such as how we should quantify fun, how desirable it is, and how it relates to the human experience of living. It has been one of the major interests of Eliezer Yudkowsky while writing for Less Wrong.

==The argument against Enlightenment==

In discussions of [[transhumanism]] and related fields such as cryonics and lifespan extension, conservatives have argued that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us; on this view, we should avoid that path of development.

The open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight and challenge ever more powerful minds. Likewise, when superhumanly intelligent machines take care of our every need, what fun and challenges will remain?

==Utopia==

Transhumanists work towards a much better human future--a Utopia--but, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly described], Utopians of all stripes, Socialist, Enlightenment, or Christian, have generally been unable to imagine futures where anyone would actually ''want'' to live.

:It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be.

==Fun Theory and complex values==

A key insight of Fun Theory, in its current embryonic form, is that eudaimonia is complicated--there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: Aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.

It is a common mistake in discussions of future AI to extract one element of human preferences and advocate that it alone be maximized. This would neglect all the other values. For example, if we simply optimize for pleasure or happiness, we will "wirehead"--stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant.

Enhanced humans will initially have the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system that we have, rather than inventing new values.

==Keep curiosity and challenge alive==

Fun theory seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied.

==External links==

==See also==