Superexponential conceptspace

In order to do inference, we constantly need to make use of categories and concepts: it is neither possible nor desirable to deal with every unique arrangement of quarks and leptons on an individual basis. Fortunately, we can talk about repeatable higher-level regularities in the world instead: we can distinguish particular configurations of matter as instantiations of object concepts like ''chair'' or ''human'', and say that these objects have particular properties, like ''red'' or ''alive''.
 
The sheer number of [[configuration space|distinct configurations]] in which matter could be arranged is [[Scope insensitivity|unimaginably]] vast, but the '''superexponential conceptspace''', the space of different ways to ''categorize'' these possible objects, is even vaster.
 
For example, given an object that can either have or not have each of ''n'' properties, there are 2^''n'' different descriptions corresponding to the possible objects of that kind (a number exponential in ''n''). The number of possible concepts, each of which either includes a given description or doesn't, is one exponential higher: 2^(2^''n'').
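
To make the sizes concrete, here is a worked instance of the counting above (an illustration added here; the numbers are simple arithmetic, not taken from the referenced posts):

<math>n = 3:\qquad 2^3 = 8 \text{ possible objects}, \qquad 2^{2^3} = 2^8 = 256 \text{ possible concepts.}</math>

Already at ''n'' = 10 there are 2^10 = 1024 possible objects, but 2^1024 ≈ 1.8 × 10^308 possible concepts, far more than could ever be examined one by one.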
 
Without an [[inductive bias]] restricting attention to only a small portion of the possible concepts, it is not possible to navigate conceptspace: to learn a concept, a "fully general" learner would need to see every individual example that defines it. Using [[probability]] to mark the degree to which each possibility belongs to a concept is another way of expressing [[prior|prior information]] and letting it control the process of learning.
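
The same counting shows why the fully general learner fails. The sketch below (illustrative Python written for this article, with hypothetical example data, not code from the referenced posts) counts how many concepts remain consistent after a few labeled examples:

<syntaxhighlight lang="python">
# Illustrative sketch: count the concepts a "fully general" learner must
# still consider. A concept over n binary properties is any subset of the
# 2**n possible objects, so there are 2**(2**n) concepts in total.
from itertools import product

n = 3
objects = list(product([0, 1], repeat=n))   # all 2**3 = 8 possible objects
total_concepts = 2 ** len(objects)          # 2**8 = 256 possible concepts

# Two labeled examples (hypothetical data): each one fixes the membership
# of exactly one object, halving the count of consistent concepts, while
# the unseen objects remain completely undetermined.
labeled = {objects[0]: True, objects[1]: False}
consistent = 2 ** (len(objects) - len(labeled))   # 2**6 = 64 remain

print(total_concepts, consistent)   # -> 256 64
</syntaxhighlight>

Each labeled example settles the membership of exactly one object and so halves the count of consistent concepts; pinning down an arbitrary concept therefore takes all 2^''n'' labels, and an inductive bias is what lets a learner generalize from fewer.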
  
==Blog posts==
*[http://lesswrong.com/lw/o2/mutual_information_and_density_in_thingspace/ Mutual Information, and Density in Thingspace]
*[http://lesswrong.com/lw/o3/superexponential_conceptspace_and_simple_words/ Superexponential Conceptspace, and Simple Words]
==See also==
*[[Locate the hypothesis]]
*[[Inductive bias]]
*[[Configuration space]]

[[Category:Machine learning]]
 