rmoehn comments on Earning money with/for work in AI safety

7 points · Post author: rmoehn · 18 July 2016 05:37AM




Comment author: rmoehn · 20 July 2016 06:58:06AM · 0 points

So you think there's not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?

Oh, and why do you consider AI safety a "theoretical [or] unlikely" problem?

Comment author: Dagon · 20 July 2016 04:26:03PM · 0 points

I think that there's not much more that most individuals can do about x-risk as a full-time pursuit than they can as aware and interested civilians.

I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I generally lean toward the higher end of that range) to filter us than a single AI entity, or a small number of them, becoming powerful enough to do so.

Comment author: rmoehn · 21 July 2016 04:53:52AM · 0 points

So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?

Also, AI safety research benefits AI research in general, and AI research in general benefits humanity. Again only marginal contributions?

Comment author: Dagon · 21 July 2016 02:52:40PM · 1 point

Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.

Giving some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn't be the primary drive.