'''Accelerating AI Safety Adoption in Academia''' (AASAA) is an initiative from [http://lesswrong.com/user/toonalfrink/ toonalfrink] to improve the pipeline for AI safety researchers, especially by creating a MOOC.
This project has been renamed [https://wiki.lesswrong.com/wiki/Road_to_AI_Safety_Excellence RAISE].
 
 
== Motivation ==
 
 
 
AI safety is a small field, with only about 50 researchers, and it is mostly [https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/#ai-safety-research talent-constrained]. Given the dangers of an uncontrolled intelligence explosion, increasing the number of AI safety researchers is crucial for the long-term survival of humanity.
 
 
 
Within the LW community there are plenty of talented people who feel a sense of urgency about AI. They are willing to switch careers to do research, but they are unable to get there. This is understandable: the path to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.
 
One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are [http://distill.pub/2017/research-debt/ undistilled]. Unless one is lucky, there is no one to provide guidance and answer questions. And even for those who come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.
 
 
 
AI safety is in an [https://en.wikipedia.org/wiki/Diffusion_of_innovations innovator phase]. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive in an environment with little guidance or supporting infrastructure. Let community organisers not fall for the typical mind fallacy, expecting risk-averse people to move into AI safety all by themselves.
 
Unless one is particularly risk-tolerant or has a perfect safety net, one will not be able to fully take the plunge.
 
Plenty of measures can be taken to make getting into AI safety more like an [https://www.youtube.com/watch?v=28123GsMzU8 "It's a small world"-ride]:
 
 
 
- Let there be a tested path with signposts along the way to make progress clear and measurable.
 
 
 
- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.
 
 
 
- Let there be high-quality explanations of the material, to speed up and ease the learning process and make it cheap.
 
 
 
== The pipeline ==
 
 
 
What follows is an account of how things might be, should this project come to fruition.
 
 
 
1. Tim Urban's [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html Road to Superintelligence] is a popular article. Hundreds of thousands of people have read it. At the end of the article is a link, saying "if you want to work on this, these guys can help". It sends one to an [[Arbital]] page, reading "welcome to 'prerequisites for "introduction to AI Safety"'".
 
 
 
2. What follows is a series of articles explaining the math one should understand to be able to read AI safety papers. It covers probability, game theory, computability theory, and a few other things. Most students with a technical major can follow along easily; even some talented high school graduates do. When one comes to the end of the Arbital sequence, one is congratulated: "you are now ready to study AI safety". A link to the MOOC appears at the bottom of the page.
 
 
 
3. The MOOC teaches an array of subfields: technical subjects like corrigibility, value learning, and Vingean reflection, but also some high-level subjects like preventing arms races around AI. Assignments are designed so that they don't need manual grading, but still give some idea of the student's competence. Sometimes an assignment covers an open problem, and students are given the chance to try to solve it by themselves; interesting submissions are noted. When a student completes the course, they are awarded an official-looking certificate, something to print and hang on the wall.
 
 
 
4. The course ends with an invitation to apply to a research institution.
 
 
 
== The state of the project & getting involved ==
 
 
 
If you're enthusiastic about volunteering, fill in [https://goo.gl/forms/m38tKbmDBFMgSyMz1 this form].



To be low-key notified of progress, join [https://www.facebook.com/groups/1421511671230776/ this Facebook group].



A detailed outline of next steps can be found in [https://workflowy.com/s/D0Q9.b3CGt97HeY this Workflowy tree].
 
 
 
== External links ==
 
* [http://lesswrong.com/r/discussion/lw/p5e/announcing_aasaa_accelerating_ai_safety_adoption/ Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere)]
 
* [https://docs.google.com/document/d/1m2sc4lP_wRrBC-HQStayhsSxq4X6ZLWg1PiFDw17rN4/edit Launch Google Doc]
 
* [[Slack]] - https://aisafetygroup.slack.com - by invitation
 
* [https://workflowy.com/s/D0Q9.ePY2sZX55d curriculum & course structure tree]
 
[[Category:Rationalistsphere]]
 
[[Category:AI safety]]
 
[[Category:Chat]]
 
