I've been thinking for a bit about what I would like the AGI risk community to look like, and I'm curious what your thoughts are.

I'll be posting my own ideas, but I encourage other people to post theirs.


Periodically, there are job openings for positions of power that seem like they might matter quite a bit - executive director of the Partnership on AI, person-in-charge-of-giving-out-government-grants-on-AI-stuff, etc. - and it seems like we have very few people who are remotely qualified for those positions and willing to move to Washington, DC, or work in a very corporate, signaling-heavy environment.

I think we need people who are willing to do that, and to acquire the skills and background that would make them marketable for those roles.

Bits of government are changing. A palatable strategy would be to try to get in and make contacts through the USDS (United States Digital Service). I suspect they will be trying to do some machine learning at some point (fraud/intrusion detection, if nothing else); if someone could position themselves as a subject-matter expert or technical architect on this (with conference talks on AI in government), they might be in a good position.

Members helping each other reach positions of power and influence, so we can slightly reduce the probability of our light-cone being destroyed by a paperclip maximizer.

I think humans would naturally help each other in this way, if they trust each other. Having this as a cultural norm seems like it would attract people who pay lip service to the AGI risk stuff but are just in it for the boost. Perhaps this would work if you had to invest a lot up front.

I have a natural aversion to this. I'm curious what form you think it would take.

It would mostly work on a one-on-one basis, where we use connections and personal knowledge to give advice and help.

An organization that is solution-agnostic (not a research institute; there are conflicts of interest there) and is dedicated to discovering the current state of AGI research and informing its members. It would not just search under the lampposts of current AI work but also organize people to look at emerging work. It would put out summaries of new developments and might even try to maintain something like the Doomsday Clock, but for AGI.

More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that it should have nothing to do with it, but if an arms race unfortunately occurs, maybe having a government regulatory framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.

The guest list at the Asilomar Conference should give you a pretty good idea of what the AGI risk community already looks like.

My question is: what do you find inadequate about the current state of the AGI risk community?

I'd like more talks like:

"How can we slow the start of the agi arms race?"

"How can we make sure the AGI risk community has an accurate view of what is going on in AGI relevant research (be it neuroscience or AI)?"

FHI has done a fair amount of work trying to determine the implications of different policies and strategies regarding AGI development, and the long-term effect of those strategies on risk. One of those issues is openness in AI development, which is especially relevant given the existence of OpenAI, and how different approaches to openness may or may not increase the likelihood of an AI arms race under different scenarios.

I think at this point the AGI risk community and the machine learning/neuroscience communities are pretty well connected and aware of each other's overall progress. You'll notice that Demis Hassabis, Ilya Sutskever, Yoshua Bengio, and Yann LeCun, to name just a few, are all experts in machine learning and were attendees of the Asilomar Conference.

Neuroscience != neural networks, and machine learning probably isn't the only AI work going on that is relevant to AGI. This is the gap I would like to see covered.

I'm also interested in who is currently trying to execute on those policies and strategies for minimising an arms race, not just writing research papers.

An organisation that regularly surveys AI and AGI researchers/students on safety topics and publishes research into different ways of engaging with them.

Let's start instead with what it should NOT look like...

e.g.

  • no government (some would add the word "criminals")
  • no evil companies (especially those who try to deceive their victims with "no evil" propaganda)
  • no ideological mindfcukers (imagine mugs from hardcore religious circles shaping the field - it does not matter whether it's a traditional Stone Age or Dark Age cult or a modern socialist religion)

no ideological mindfcukers

That rules out paying too much attention to the rest of your comment.