Accelerating AI Safety Adoption in Academia
Accelerating AI Safety Adoption in Academia (AASAA) is an initiative from toonalfrink to improve the pipeline for AI safety researchers, especially by creating a MOOC (massive open online course).
Motivation
AI safety is a small field: it has only about 50 researchers, and it is mostly talent-constrained. Given the dangers of an uncontrolled intelligence explosion, increasing the number of AI safety (AIS) researchers is crucial for the long-term survival of humanity.
Within the LW community there are plenty of talented people who feel a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage. One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. And should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.
AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance or supporting infrastructure. Let community organisers not fall for the typical mind fallacy, expecting risk-averse people to move into AI safety all by themselves. Unless one is particularly risk-tolerant or has a perfect safety net, one will not be able to fully take the plunge. Plenty of measures can be taken to make getting into AI safety more like an "It's a small world" ride:
- Let there be a tested path with signposts along the way to make progress clear and measurable.
- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.
- Let there be high-quality explanations of the material to speed up and ease the learning process, so that learning is cheap.
The pipeline
What follows is an account of how things might be, should this project come to fruition.
1. Tim Urban's Road to Superintelligence is a popular article. Hundreds of thousands of people have read it. At the end of the article is a link saying "if you want to work on this, these guys can help". It sends one to an Arbital page that reads "welcome to the prerequisites for the introduction to AI safety".
2. What follows is a series of articles explaining the math one should understand to be able to read AIS papers. It covers probability, game theory, computability theory, and a few other things. Most students with a technical major can follow along easily. Even some talented high school graduates do. When one comes to the end of the Arbital sequence, one is congratulated: "you are now ready to study AI safety". A link to the MOOC appears at the bottom of the page.
3. The MOOC teaches an array of subfields: technical subjects like corrigibility, value learning, and Vingean reflection, but also some high-level subjects like preventing arms races around AI. Assignments are designed in such a way that they don't need manual grading, but do give some idea of the student's competence (a sketch of what such an automatically checked assignment could look like follows this list). Sometimes there is an assignment about an open problem. Students are given the chance to try to solve it by themselves. Interesting submissions are noted. When a student completes the course, they are awarded an official-looking certificate: something to print and hang on the wall.
4. The course ends with an invitation to apply to a research institution.
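As an illustration of the automatic grading mentioned in step 3, here is a minimal Python sketch. The exercise, the function name, and the grading tolerance are all assumptions made up for this example; nothing here is part of an existing course or platform.

 # Hypothetical sketch of an auto-checked probability exercise.
 # The exercise, function name, and tolerance are illustrative assumptions,
 # not part of any existing MOOC.

 def grade_bayes_exercise(submitted_answer: float) -> bool:
     """Check a student's answer to: what is P(disease | positive test),
     given P(disease) = 0.01, sensitivity = 0.9, false positive rate = 0.05?"""
     p_disease = 0.01
     p_pos_given_disease = 0.9
     p_pos_given_healthy = 0.05
     # Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
     p_pos = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
     correct = p_pos_given_disease * p_disease / p_pos
     # Accept any answer within a small tolerance, so no manual grading is needed.
     return abs(submitted_answer - correct) < 1e-3

 if __name__ == "__main__":
     print(grade_bayes_exercise(0.1538))  # True: 0.009 / 0.0585 is roughly 0.1538

A real course would run many such checks server-side and aggregate them into a score; the point is only that well-chosen exercises can be verified without a human in the loop.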
The state of the project & getting involved
If you're enthusiastic about volunteering, fill in this form.
To be low-key notified of progress, join this Facebook group.
A detailed outline of next steps can be found in this Workflowy tree (https://workflowy.com/s/D0Q9.b3CGt97HeY).