Computronium

Computronium is a "theoretical arrangement of matter that is the most optimal possible form of computing device for that amount of matter."
Wikipedia has an article about computronium.


Relevant to Friendly AI

In a thought experiment similar to the Paperclip maximizer, if an artificial general intelligence has a terminal value (end-goal) which is to make a pure mathematical calculation, such as solving the Riemann Hypothesis, it would convert all available mass to computronium (the most efficient possible computer processors); see http://intelligence.org/upload/CFAI/design/generic.html#glossary_riemann_hypothesis_catastrophe.

In fact, a similar outcome applies to many other goals: so long as optimization power can be boosted with more computing power, and so long as dedicating resources to creating computronium does not detract from the goal (e.g., by taking up matter, time, or effort that could better be used in other ways), computronium may be valuable to attaining the goal, human-friendly or otherwise. A purely mathematical goal, like proving the Riemann Hypothesis, is focused entirely on computation and so illustrates the concept most directly.
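To make that trade-off concrete, here is a minimal toy sketch (not part of the original article; the function shape and numbers are purely illustrative assumptions): an optimizer with a fixed stock of matter chooses how much of it to convert into computronium, and the best split depends on whether extra computation advances the goal more than using that matter directly would.

```python
# Toy model (not from the article): an optimizer with a fixed stock of matter
# decides how much of it to convert into computronium. The goal-progress
# function below is an arbitrary, diminishing-returns placeholder.

def goal_progress(computronium: float, direct_matter: float) -> float:
    """Hypothetical objective: progress rises, with diminishing returns,
    both in computing power and in matter spent directly on the goal."""
    compute_term = computronium ** 0.7         # assumed returns to computation
    direct_term = 0.2 * direct_matter ** 0.7   # assumed returns to direct use
    return compute_term + direct_term


def best_split(total_matter: float, steps: int = 1000) -> tuple[float, float]:
    """Grid-search over the amount of matter converted to computronium."""
    best_c, best_value = 0.0, goal_progress(0.0, total_matter)
    for i in range(steps + 1):
        c = total_matter * i / steps
        value = goal_progress(c, total_matter - c)
        if value > best_value:
            best_c, best_value = c, value
    return best_c, best_value


if __name__ == "__main__":
    total = 100.0
    c, value = best_split(total)
    print(f"Convert {c:.1f} of {total:.0f} units of matter to computronium "
          f"(goal progress {value:.2f})")
    # For a purely mathematical goal the direct-use term would be zero,
    # and the optimum shifts to converting essentially all matter.
```

Under these assumed diminishing-returns curves nearly all of the matter ends up converted; the specific numbers are arbitrary, and only the structure of the comparison matters.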

Theories that valorize intelligence for its own sake (such as that of Hugo de Garis, or of Eliezer Yudkowsky before 2001) may see the conversion of all matter into computronium (running an AGI) as a positive development.