In May 2012, Holden Karnofsky posted "Thoughts on the Singularity Institute (SI)", which became the most-upvoted article ever on Less Wrong. It offered a detailed critique of the organization now known as the Machine Intelligence Research Institute and spawned a great deal of discussion.
MIRI staff posted two replies:
- Eliezer Yudkowsky, Reply to Holden on 'Tool AI'
- Luke Muehlhauser, Reply to Holden on The Singularity Institute
Paul Crowley ("ciphergoth") posted a discussion article for each point raised:
- Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
- Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.
- Objection 3: SI's envisioned scenario is far more specific and conjunctive than it appears at first glance, and I believe this scenario to be highly unlikely.
- Is SI the kind of organization we want to bet on?
- Other objections to SI's views
- Phil Goetz, Holden's Objection 1: Friendliness is dangerous