AI safety is a small field. It has only about 50 researchers, and it’s mostly talent-constrained. I believe this number should be drastically higher.
A: the missing step from zero to hero
I have spoken to many intelligent, self-motivated people who feel a sense of urgency about AI. They are willing to switch careers to AI safety research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.
One has to study concepts from the papers in which they first appeared. This is not easy: such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. And even for those who come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.
Unless someone is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge. I believe plenty of measures can be taken to make getting into AI safety more like an "It's a small world" ride:
- Let there be a tested path with signposts along the way to make progress clear and measurable.
- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.
- Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.
B: the giant unrelenting research machine that we don’t use
- The majority of researchers today build their careers through academia. The typical story is for an academic to become acquainted with various topics during their studies, pick one they find particularly interesting, and work on it for the rest of their career.
I have learned through personal experience that AI safety can be very interesting, and the reason it isn't popular yet comes down to a lack of exposure. If students were acquainted with the field early on, I believe a sizable number of them would end up working in it (though that assumption still needs to be checked).
- AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive in an environment with little guidance or supporting infrastructure. Let us not fall for the typical mind fallacy, expecting risk-averse people to move into AI safety all by themselves. Academia can provide the supporting infrastructure that they need.
AASAA is the project to address both of these issues. It has two phases:
A: Distill the field of AI safety into a high-quality MOOC: “Introduction to AI safety”
B: Use the MOOC as a proof of concept to convince universities to teach the field
read more... We are currently bottlenecked on volunteers and ideas. Would you like to make a contribution? Sign up using this form and I will invite you to the Slack and get you involved.