User:Alex Altair
From Lesswrongwiki
Revision as of 04:09, 14 June 2012
I joined Less Wrong on 21 July 2010 after reading Eliezer Yudkowsky's Harry Potter and the Methods of Rationality. My career goal is to create FAI (Friendly Artificial Intelligence). The specific topics I am researching are Solomonoff Induction and Neural Networks. I am a theorist and don't plan to do any "coding" for a very, very long time.
Suspicions
These are some tentative conclusions I've come to regarding AGI. They are not strong enough to be called beliefs. I encourage messages about these ideas.
- AGI should be achieved by finding the right processes by which the machine learns on its own. All attempts to program in knowledge or concepts directly are missing the point.
- The same fundamental algorithm can be used to understand any data, to learn any subject. Solomonoff Induction is the idealized formulation of this.
- Although both are Turing complete, Neural Networks will be much faster at computing approximations of Solomonoff Induction than standard serial computers.
- It might therefore only be reasonable to build fundamentally different hardware for AGI.
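To make the inductive idea above concrete, here is a toy Python sketch of Solomonoff-style prediction. It is only an illustration under strong assumptions: real Solomonoff Induction enumerates all programs for a universal Turing machine and is incomputable, so this sketch substitutes a tiny hand-picked hypothesis class with made-up description lengths, weighting each hypothesis by 2^-length and conditioning on the observed bits.

```python
from fractions import Fraction

# Toy stand-in for Solomonoff induction. Each hypothesis is a pair
# (description_length_in_bits, generator mapping index -> bit).
# The prior weight of a hypothesis is 2^-length, mimicking the
# universal prior's preference for shorter programs. The hypotheses
# and lengths here are invented for illustration only.
HYPOTHESES = [
    (2, lambda i: 0),            # "all zeros"
    (2, lambda i: 1),            # "all ones"
    (3, lambda i: i % 2),        # "alternating 0,1,0,1,..."
    (5, lambda i: (i * i) % 2),  # "parity of i^2" (same output, longer program)
]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    total = Fraction(0)
    mass_on_one = Fraction(0)
    for length, gen in HYPOTHESES:
        # Keep only hypotheses that reproduce the data exactly
        # (deterministic hypotheses have likelihood 0 or 1).
        if all(gen(i) == b for i, b in enumerate(observed)):
            weight = Fraction(1, 2 ** length)
            total += weight
            if gen(len(observed)) == 1:
                mass_on_one += weight
    return mass_on_one / total

print(predict_next([0, 1, 0, 1]))  # only the alternating hypotheses survive
```

After seeing 0,1,0,1 both surviving hypotheses predict 0 next, so the returned probability of a 1 is exactly 0; the shorter "alternating" program carries most of the posterior weight, which is the sense in which the same weighting scheme can in principle learn any computable pattern.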