User:Alex Altair

From Lesswrongwiki
Revision as of 05:09, 14 June 2012

I joined Less Wrong on 21 July 2010 after reading EY's Harry Potter and the Methods of Rationality. My career goal is to create FAI (Friendly Artificial Intelligence). The specific topics I am researching are Solomonoff induction and neural networks. I am a theorist and don't plan to do any "coding" for a very, very long time.

Suspicions

These are some tentative conclusions I've come to regarding AGI; they aren't strong enough to be called beliefs. I encourage messages about these ideas.

  • AGI should be achieved by finding the right processes by which the machine learns on its own. All attempts to program in knowledge or concepts directly are missing the point.
  • The same fundamental algorithm can be used to understand any data and to learn any subject. Solomonoff induction is the idealized formulation of this.
  • Although both are Turing complete, neural networks will be much faster at computing approximations of Solomonoff induction than standard serial computers.
  • It might therefore be reasonable to build fundamentally different hardware for AGI.

Philosophy

I am an Objectivist. This means I believe in objective reality, rational egoism, and libertarianism. I'm primarily interested in discussing the validity of these ideas, but I'll also talk about Ayn Rand and the history of Objectivism for the sake of interesting conversation.
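
The suspicions above name Solomonoff induction as the idealized learner, so here is a minimal sketch of the idea it rests on. True Solomonoff induction is uncomputable; this toy restricts "programs" to repeating bit patterns run by a made-up interpreter (the names `toy_run` and `predict_next` are mine, purely for illustration), but it keeps the essential move: weight every program by 2^-length and predict by posterior mass.

```python
from itertools import product

def toy_run(program, n):
    """Toy 'machine': the program's bits simply repeat forever.
    Return the first n output bits."""
    return [program[i % len(program)] for i in range(n)]

def predict_next(observed, max_len=10):
    """Solomonoff-style prediction over a restricted program class.
    Each program of length L gets prior weight 2**-L; programs whose
    output starts with `observed` contribute that weight to whichever
    bit they emit next. Returns the posterior probability of a 1."""
    mass = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            out = toy_run(program, len(observed) + 1)
            if out[:len(observed)] == list(observed):
                mass[out[len(observed)]] += 2.0 ** -length
    total = mass[0] + mass[1]
    return mass[1] / total if total else 0.5

# The shortest program consistent with 0,1,0,1,0,1 is (0, 1), and short
# programs dominate the prior, so the predictor leans heavily toward 0 next.
print(predict_next([0, 1, 0, 1, 0, 1]))
```

Because the prior halves with every extra bit of program length, the prediction is dominated by the shortest consistent programs; that preference for short explanations is the sense in which Solomonoff induction formalizes Occam's razor.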