Top line: If you think you could write a substantial pull request for a major machine learning library, then major AI safety labs want to interview you today.
I work for Anthropic, an industrial AI research lab focussed on safety. We are bottlenecked on aligned engineering talent — specifically engineering talent. While we'd always like more ops folks and more researchers, our safety work is limited by a shortage of great engineers.
I've spoken to several other AI safety research organisations who feel the same.
Why engineers?
In May last year, OpenAI released GPT-3, a system that did surprisingly well at a surprisingly broad range of tasks. While limited in many important ways, a lot of AI safety...
(a)
Look, we already have superhuman intelligences. We call them corporations and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell corporations 'hey do what human shareholders want' and the monkey's paw curls and this is what we get.
Anyway yeah that but a thousand times faster, that's what I'm nervous about.
(b)
Look, we already have superhuman intelligences. We call them governments and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell governments 'hey do what human voters want' and the monkey's paw curls and this is what we get.
Anyway yeah that but a thousand times faster, that's what I'm nervous about.