AI safety is a small field. It has only about 50 researchers, and it’s mostly talent-constrained. I believe this number should be drastically higher.

A: the missing step from zero to hero

I have spoken to many intelligent, self-motivated people who feel a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.

One has to study concepts from the papers in which they first appeared. This is not easy: such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. And even if one makes it through, there is no guarantee that the quality of one's work will be good enough to earn a paycheck or make a useful contribution.

Unless people are particularly risk-tolerant or have a perfect safety net, they will not be able to fully take the plunge.

I believe plenty of measures can be taken to make getting into AI safety more like an "It's a Small World" ride:

  • Let there be a tested path with signposts along the way to make progress clear and measurable.

  • Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.

  • Let there be high-quality explanations of the material to speed up and ease the learning process, so that getting up to speed is cheap.


B: the giant unrelenting research machine that we don’t use

The majority of researchers nowadays build their careers through academia. The typical story is for an academic to become acquainted with various topics during their studies, pick one that is particularly interesting, and work on it for the rest of their career.

I have learned from personal experience that AI safety can be very interesting, and that the main reason it isn't more popular yet is lack of exposure. If students were acquainted with the field early on, I believe a sizable fraction of them would end up working in it (though this is an assumption that should be checked).

AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance, polish, or supporting infrastructure. Let us not fall for the typical mind fallacy, expecting less risk-tolerant people to move into AI safety all by themselves. Academia can provide the supporting infrastructure they need.


AASAA addresses both of these issues. It has two phases:

A: Distill the field of AI safety into a high-quality MOOC: “Introduction to AI safety”

B: Use the MOOC as a proof of concept to convince universities to teach the field

 

read more...

 

We are bottlenecked on volunteers and ideas. If you'd like to help out, even if just by sharing your perspective, fill in this form and I will invite you to the Slack and get you involved.

19 comments

In addition to generally liking this initiative, I specifically appreciate the article on research debt.

I came to a similar conclusion years ago, but when I tried to communicate it, the typical reaction was "just admit that you suck at research". Full disclosure: I do suck at research. But that perhaps makes it even easier to notice that some of the complexity is essential - the amount, complexity, and relatedness of the ideas - but a lot of it is accidental.

For example, it’s normal to give very mediocre explanations of research, and people perceive that to be the ceiling of explanation quality. On the rare occasions that truly excellent explanations come along, people see them as one-off miracles rather than a sign that we could systematically be doing better.

People who excel at doing research are usually not the ones who excel at explaining stuff, so this is what happens by default.

The problem of research debt is indeed huge. But the best explanations I know were written by researchers doing explanation part-time, not dedicated explainers. I think getting researchers involved in teaching is a big part of why universities succeed. Students aren't vessels to be filled, they are torches to be lit, and you can only light a torch from another torch. (I was lucky to attend a high school where math was taught partly by mathematicians, and it pretty much set me for life.) Maybe MIRI should make researchers spend half of their time writing explanations and rate their popularity.

I think getting researchers involved in teaching is a big part of why universities succeed.

This is also how universities sometimes end up with teachers who hate teaching, and who can be very unpleasant to learn from.

Students aren't vessels to be filled, they are torches to be lit

Sounds like a false dilemma. Ceteris paribus, wouldn't acquiring knowledge more easily be better? The less time and energy you spend on learning X, the more time and energy you can spend on learning or researching Y. Also, having a topic more clearly explained can make it accessible to students at a younger age.

I specifically appreciate the article on research debt.

Since I was confused when I first read this, I want to clarify: as far as I can tell, the article was not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

[anonymous]

Am not associated. Just found the article in the MIRI newsletter.

Well, I am grateful both to the person who wrote the article and to the person who brought it to my attention. I didn't originally realize they might not be the same person or organization.

Any chance it could be called AGI Safety instead of AI safety? I think that consistently using that terminology would help people know that we are worried about something greater than current deep learning systems and other narrow AI (although investigating safety in these systems is a good stepping stone to the AGI work).

I'll help out how I can. I think these sorts of meta approaches are a great idea!

No-doom-AGI

I really don't think you should try to convince mid-career professionals to switch careers to AI safety research. Instead, you should focus on recruiting talented young people, ideally people who are still in university or at most a few years out.

[anonymous]

I agree.

I must admit that the "convince academics" part of the plan is still a bit vague. It's unclear to me how new fields become fashionable in academia. How does one even figure that out? I'd love to know.

The project focuses on the "create a MOOC" part right now, which provides plenty of value in itself.

How about having a list of possible AGI-safety-related topics that could provide material for a bachelor's or master's thesis?

[anonymous]

What about the research agendas that have already been published?

It's hard to know from the outside which problems are tractable enough to write a bachelor's thesis on.

It has only about 50 researchers, and it’s mostly talent-constrained.

What's the evidence that it's mostly talent-constrained?

[anonymous]

As stated here:

FHI and CSER recently raised large academic grants to fund safety research, and may not be able to fill all their positions with talented researchers. Elon Musk recently donated $10m through the Future of Life Institute, and Open Phil donated a further $1m, which was their assessment of how much was needed to fund the remaining high-quality proposals. I’m aware of other major funders, including billionaires, who would like to fund safety researchers, but don’t think there’s enough talent in the pool. The problem is that it takes many years to gain the relevant skill set and few people are interested in the research, so even raising salaries won’t help significantly. Other funders are concerned that the research isn’t actually tractable, so the main priority is having someone demonstrate that progress can be made. Previous efforts to demonstrate progress have yielded large increases in funding.

But to be fair, that's from November 2015, so let me know if I should update.

I don't have any special insight.

I imagine that top-level talent is hard to get, but the number of PhD students who have enough skill to do PhD research in the area might be higher. As far as I understand, the open PhD positions are very competitive, but I base my impression on a single conversation.

This looks solid.

Can you go into a bit of detail on the level and spectrum of difficulty of the courses you're aiming for, and the background knowledge that will be expected? I suspect you don't want to discourage people, but realistically speaking, the bar can hardly be low enough to allow everyone who's interested to participate meaningfully.

[anonymous]

Thank you!

Difficulty/prerequisites is one of the uncertainties that will have to be addressed. Some AI safety work only requires algebra, while other work needs logic/ML/RL/category theory/other, and then there is work that isn't formalized at all.

But there are other applied mathematics fields with this problem, and I expect that we can steal a solution by having a look there.