I joined Less Wrong on 21 July 2010 after reading Eliezer Yudkowsky's Harry Potter and the Methods of Rationality. My career goal is to create FAI (Friendly Artificial Intelligence). The specific topics I am researching are Solomonoff Induction and neural networks. I am a theorist and don't plan to do any "coding" for a very, very long time.
These are some tentative conclusions I've come to regarding AGI, not strong enough to be called beliefs. I encourage messages about these ideas.
- AGI should be achieved by finding the right processes by which the machine learns on its own. All attempts to program in knowledge or concepts are missing the point.
- The same fundamental algorithm can be used to understand any data, to learn any subject. Solomonoff Induction is the idealized formulation of this.
- Although both are Turing complete, neural networks will compute approximations of Solomonoff Induction much faster than standard serial computers.
- It may therefore only be practical to build AGI on fundamentally different hardware.
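To make the second point concrete, here is a minimal toy sketch of the idea behind Solomonoff Induction: weight every hypothesis by 2 to the minus its description length, keep the hypotheses consistent with the data, and let them vote on the next observation. The hypothesis class here (repeating bit patterns) and all function names are my own illustrative choices, not anything from actual Solomonoff Induction, which mixes over all computable programs and is incomputable.

```python
from itertools import product

def pattern_hypotheses(max_len):
    # Toy hypothesis class: sequences generated by repeating a bit pattern.
    # Each pattern of length L gets prior weight 2**-L, echoing the
    # 2**-(program length) universal prior of Solomonoff Induction.
    for L in range(1, max_len + 1):
        for bits in product("01", repeat=L):
            yield "".join(bits), 2.0 ** -L

def predict_next_bit(observed, max_len=8):
    # Posterior-weighted vote over hypotheses consistent with the data so far.
    weight = {"0": 0.0, "1": 0.0}
    for pattern, prior in pattern_hypotheses(max_len):
        # Repeat the pattern until it is strictly longer than the observation.
        generated = pattern * (len(observed) // len(pattern) + 2)
        if generated.startswith(observed):
            weight[generated[len(observed)]] += prior
    total = weight["0"] + weight["1"]
    if total == 0:
        return None, 0.5  # no consistent hypothesis in this toy class
    p_one = weight["1"] / total
    return ("1" if p_one >= 0.5 else "0"), p_one
```

For example, `predict_next_bit("010101")` favors "0", because the shortest consistent pattern ("01") dominates the prior; this preference for short explanations is the whole idea.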