Fun theory

From Lesswrongwiki

'''Fun theory''' is the field of knowledge concerned with studying the concept of fun (as the opposite of boredom). It tries to answer questions such as how we should quantify fun, how desirable fun is, and how fun relates to the human experience of living. It has been one of the major interests of [[Eliezer Yudkowsky]] while writing for Less Wrong.
  
==The argument against Enlightenment==
In discussions of [[transhumanism]] and related topics such as cryonics or lifespan extension, conservatives have raised a counterargument drawing on fun theory's central worry: such enhancements, they claim, would bring boredom and the end of fun as we know it. More specifically, if we improve human minds to extreme levels of intelligence, all challenges known today may come to bore us. Likewise, if superhumanly intelligent machines took care of our every need, it seems that no challenges or fun would remain. As such, we have to find other options.
  
The implicit open question is whether the universe will offer, or whether we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.
 
  
==The concept of Utopia==
Transhumanists are usually seen as working towards a better human future. This future is sometimes conceptualized, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly describes it], as a Utopia:
  
 
<blockquote>
"It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be."
</blockquote>
  
Imagining this perfect future, where every problem is solved and there is constant peace and rest - as seen, a close parallel to several religious Heavens - rapidly leads to the conclusion that no one would actually want to live there.
 
  
==Complex values and fun theory's solution==
A key insight of fun theory, in its current embryonic form, is that ''eudaimonia'' - the classical conception of happiness as the highest human good - is [[Complexity of value|complicated]]. That is, there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.
  
It is a common mistake in discussions of future AI to extract only one element of human preferences and advocate that it alone be maximized, neglecting all other human values. For example, if we simply optimized for pleasure or happiness - that is, if we [[wireheading|"wireheaded"]] - we would stimulate the relevant parts of our brains and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant.
  
Enhanced humans are assumed to start with the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, fun theory must seek to maximize the value system that we have, rather than invent new values.
  
Fun theory thus seeks to keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future in which our strongly boosted intellects have exhausted all challenges. More broadly, it seeks to allow humanity to enjoy life when all needs are easily satisfied, and to avoid the fall into a classical Utopian future.
 
==External links==
*George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]
*David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] ([[Abolitionism]]) provide a more nuanced alternative to wireheading.
 
  
 
==See also==
*[[The Fun Theory Sequence]]
*[[Complexity of value]]
*[[Metaethics sequence]]
*[[Abolitionism]]
  
 
[[Category:Theses]]

Latest revision as of 03:47, 28 June 2017
