Fun theory is the field of knowledge concerned with the concept of fun (as the opposite of boredom). It tries to answer questions such as how fun should be quantified, how desirable fun is, and how fun relates to the human experience of living. It has been one of the major interests of Eliezer Yudkowsky while writing for Less Wrong.
The argument against Enlightenment
In discussions of transhumanism and related fields such as cryonics or lifespan extension, conservatives have raised fun theory as a counterargument: such enhancements, they claim, would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. Likewise, if superhumanly intelligent machines take care of our every need, it seems that neither challenge nor fun will remain. As such, we have to find other options.
The implicit open question is whether the universe will offer, or whether we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.
The concept of Utopia
Transhumanists are usually seen as working towards a better human future. This future is sometimes conceptualized as a Utopia, a notion whose central problem George Orwell aptly describes:
"It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be."
Imagining this perfect future where every problem is solved and where there is constant peace and rest - a close parallel, as seen above, to several religious Heavens - rapidly leads to the conclusion that no one would actually want to live there.
Complex values and fun theory's solution
A key insight of fun theory, in its current embryonic form, is that eudaimonia - the classical ideal of a flourishing, fulfilled life as the ultimate human goal - is complicated. That is, many distinct properties contribute to a life worth living. We humans require many things to experience a fulfilled life: aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.
It is a common mistake in discussions of future AI to extract only one element of human preferences and advocate that it alone be maximized, neglecting all other human values. For example, if we simply optimize for pleasure or happiness and "wirehead" - stimulating the relevant parts of the brain directly - we would experience bliss for eternity but pursue no other experiences. If almost any element of our value system is absent, the human future will likely be very unpleasant.
Enhanced humans are generally assumed to retain the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, fun theory must seek to maximize the value system we have rather than invent new values.
Fun theory thus seeks to keep our curiosity and love of learning intact while preventing the extremes of boredom possible in a transhuman future, should our strongly boosted intellects exhaust all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied, and to avoid falling into a classical Utopian future.
- George Orwell, Why Socialists Don't Believe in Fun
- David Pearce, Paradise Engineering and The Hedonistic Imperative, which provide a more nuanced alternative to wireheading.