Manfred comments on To contribute to AI safety, consider doing AI research - Less Wrong

Post author: Vika 16 January 2016 08:42PM


Comment author: LessWrong 17 January 2016 05:35:19PM 1 point

Upvoted for encouraging people to get hands-on. Learning is good. Trying to reach a higher level of understanding in whatever you do is a core rationality skill.

Sadly, you stopped there. For the sake of discussion, I've heard Artificial Intelligence: A Modern Approach is a good book on the subject. Hopefully a discussion can start here; perhaps there's something flawed about it, or perhaps the book is outdated. If anyone here can provide more directions for the potentially aspiring AI researchers lurking around (and I'm looking at you, the AI, AGI, FAI, IDK, and other acronym-users whom I can't keep up with), it would be much appreciated.

Comment author: Manfred 18 January 2016 03:41:14AM 3 points

Assuming you have some exposure to linear algebra, calculus, and a little programming, I recommend Andrew Ng's machine learning course on YouTube. AI: A Modern Approach is still a good textbook, but I think machine learning specifically is where the interesting stuff is happening right now.

Comment author: Dr_Manhattan 20 January 2016 06:08:59PM 0 points

There is also an argument for doing stuff that's less in vogue right now.

Comment author: Manfred 20 January 2016 09:34:02PM 0 points

Sure... but machine learning is very important for AGI; it's not going to suddenly be replaced with hand-designed agents. This advice might apply better to subfields, like deep neural networks vs. hierarchical Bayesian models.