
Dagon comments on Earning money with/for work in AI safety - Less Wrong Discussion

Post author: rmoehn 18 July 2016 05:37AM (7 points)




Comment author: rmoehn 19 July 2016 01:17:15AM 2 points

In the likely case that your marginal contribution to x-risk doesn't save the world

So you think that other people could contribute much more to x-risk mitigation, and that I should therefore go into areas where I can have a lot of impact? Otherwise, if everyone said »I'll only have a small impact on x-risk. I'll do something else.«, nobody would work on x-risk. Are you trying to get a better justification for working on x-risk out of me? At the moment I only have this: x-risk is pretty important, because we don't want to go extinct (I don't want humanity to go extinct or to end up in some state worse than today's). Not many people are working on x-risk. Therefore I work on x-risk, so that more people are working on it. Now you will tell me that I should start using numbers.
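A minimal sketch of what such "using numbers" could look like, where every input (baseline extinction probability, per-worker risk reduction, donation figures) is a made-up placeholder rather than anything claimed in this thread:

```python
# Toy expected-value comparison of a marginal contribution to x-risk
# mitigation versus near-term work. Every number below is a
# hypothetical placeholder, not an estimate from this thread.

WORLD_POPULATION = 8e9            # people alive, roughly
P_EXTINCTION = 0.01               # assumed extinction probability this century
RISK_REDUCTION_PER_WORKER = 1e-6  # assumed fractional risk cut per extra worker

# Expected lives saved by one additional full-time x-risk worker:
xrisk_lives = WORLD_POPULATION * P_EXTINCTION * RISK_REDUCTION_PER_WORKER

# Versus a 40-year near-term career donating $10,000/year to a charity
# that (by assumption) saves one life per $5,000:
near_term_lives = 40 * 10_000 / 5_000

print(f"x-risk path:    ~{xrisk_lives:.0f} expected lives saved")      # ~80
print(f"near-term path: ~{near_term_lives:.0f} expected lives saved")  # ~80
```

With these particular placeholders the two paths come out even; the point of writing the calculation down is that the conclusion is driven entirely by the assumed inputs, which the numbers force into the open.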

the fact that you won't consider leaving Kagoshima is an indication that you aren't as fully committed as you claim

What did I claim about my degree of commitment? And yes, I know that I would be more effective at improving the state of humanity if I didn't have certain preferences about family and such.

Anyway, thanks for pushing me towards quantitative reasoning.

Comment author: Dagon 19 July 2016 01:52:53PM 0 points

So you think that other people could contribute much more to x-risk mitigation

"marginal" in that sentence was meant literally - the additional contribution to the cause that you're considering. Actually, I think there's not much room for anybody to contribute large amounts to x-risk mitigation. Most people (and since I know nothing of you, I put you in that class) will do more good for humanity by working at something that improves near-term situations than by working on theoretical and unlikely problems.

Comment author: rmoehn 20 July 2016 06:58:06AM 0 points

So you think there's not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?

Oh, and why do you consider AI safety a "theoretical [or] unlikely" problem?

Comment author: Dagon 20 July 2016 04:26:03PM 0 points

I think that there's not much more that most individuals can do about x-risk as a full-time pursuit than they can as aware and interested civilians.

I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely to filter us (and I lean toward the higher end of that range) than a single AI entity, or a small number of them, becoming powerful enough to do so.
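A rough way to read that 20-1000x figure in numbers, treating it (purely as an illustration, not something stated in the thread) as the relative odds of what causes a hypothetical filter event:

```python
# Reading "20-1000x more likely" as conditional odds: if a filter
# event happens, and human misuse is k times likelier than AI to be
# the cause, then the AI-caused share is 1 / (1 + k).

for k in (20, 1000):
    print(f"k = {k:>4}: P(AI-caused | filter) = {1 / (1 + k):.3%}")
# k =   20: P(AI-caused | filter) = 4.762%
# k = 1000: P(AI-caused | filter) = 0.100%
```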

Comment author: rmoehn 21 July 2016 04:53:52AM 0 points

So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?

Also, AI safety research benefits AI research in general, and AI research in general benefits humanity. Again, only marginal contributions?

Comment author: Dagon 21 July 2016 02:52:40PM 1 point

Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.

Giving some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn't be the primary drive.