'''Fun theory''' is the field of knowledge concerned with the study of fun (understood as the opposite of boredom). It tries to answer questions such as how fun should be quantified, how desirable it is, and how it relates to the human experience of living. It has been one of the major interests of [[Eliezer Yudkowsky]] in his writing for Less Wrong.
  
==The argument against Enlightenment==
In discussions of [[transhumanism]] and related projects such as cryonics and lifespan extension, fun theory has been raised by conservatives as a counterargument: such enhancements, they claim, would bring boredom and the end of fun as we know it. More specifically, if we improve human minds to extreme levels of intelligence, every challenge known today may come to bore us. Likewise, if superhumanly intelligent machines take care of our every need, it seems that no challenge, and no fun, will remain. We therefore have to find other options.
  
The implicit open question is whether the universe will offer, or whether we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.
==The concept of Utopia==
Transhumanists are usually seen as working towards a better human future. This future is sometimes conceptualized, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly describes it], as a Utopia:
  
 
<blockquote>
It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be.
</blockquote>
Imagining this perfect future, where every problem is solved and there is constant peace and rest - as seen above, a close parallel to several religious Heavens - rapidly leads to the conclusion that no one would actually want to live there.
  
==Complex values and fun theory's solution==
A key insight of fun theory, in its current embryonic form, is that ''eudaimonia'' - the classical ideal of flourishing, in which happiness is the ultimate human goal - is [[Complexity of value|complicated]]. That is, there are many properties that contribute to a life worth living. We humans require many things to experience a fulfilled life: aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.
  
It is a common mistake in discussions of future AI to extract a single element of human preferences and advocate that it alone be maximized, neglecting all other human values. For example, if we simply optimize for pleasure or happiness - if we [[wireheading|"wirehead"]] - we will stimulate the relevant parts of our brains and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant.
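
The point can be made concrete with a toy model. The sketch below is purely illustrative: the value dimensions, the weights, and the multiplicative scoring rule are invented for this example, not taken from fun theory itself. It shows how an optimizer that maximizes one value dimension can score zero on a value function that requires all of them.

<pre>
# Toy model of complexity of value (Python; all numbers invented).
# A "life" is scored on several value dimensions at once.
VALUES = ["pleasure", "love", "learning", "challenge", "aesthetics"]

def life_score(life):
    # Multiplicative scoring: if any one dimension is near zero,
    # the whole life scores near zero, however high the rest are.
    score = 1.0
    for v in VALUES:
        score *= life.get(v, 0.0)
    return score

balanced = {v: 0.8 for v in VALUES}   # decent on every dimension
wirehead = {"pleasure": 1.0}          # maximal bliss, nothing else

print(life_score(balanced))  # 0.32768
print(life_score(wirehead))  # 0.0 - perfect pleasure, worthless life
</pre>

The multiplicative rule is only one possible formalization; the substantive claim is simply that human value has many components, and losing almost any of them is catastrophic.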
  
Enhanced humans are generally assumed to begin with the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, such as bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, fun theory must seek to maximize the value system that we have, rather than invent new values.
Fun theory thus seeks to keep our curiosity and love of learning intact, while preventing the extremes of boredom that become possible in a transhuman future once strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to let humanity enjoy life even when all needs are easily satisfied, and to avoid the fall into a classical Utopian future.
 
==External links==
 
*George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]
*David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] ([[Abolitionism]]), which offer a more nuanced alternative to wireheading
  
 
==See also==
 
*[[The Fun Theory Sequence]]
*[[Complexity of value]]
*[[Joy in the merely real]]
*[[Transhumanism]]
*[[Friendly AI]]
*[[Metaethics sequence]]
*[[Abolitionism]]
  
 
[[Category:Theses]]
 