Manfred comments on To contribute to AI safety, consider doing AI research - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (39)
Upvoted for encouraging people to get hands-on. Learning is good. Trying for a higher level of understanding in whatever you do is a core rationality skill.
Sadly, you stopped there. For the sake of discussion: I've heard Artificial Intelligence: A Modern Approach is a good book on the subject. Hopefully a discussion can start here; perhaps there's something flawed about it, or perhaps the book is outdated. If anyone here (and I'm looking at you, the AI, AGI, FAI, IDK, and other acronym-users I can't keep up with) can provide more directions for the potentially aspiring AI researchers lurking around, it would be much appreciated.
Assuming you have some exposure to linear algebra, calculus, and a little programming, I recommend Andrew Ng's machine learning course on youtube. AI: A Modern Approach is still a good textbook, but I think machine learning specifically is where interesting stuff is happening right now.
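To make the prerequisites concrete, here is a minimal sketch of the kind of exercise an intro machine learning course covers: fitting a line to noisy data by gradient descent on squared error. The specific numbers and variable names are illustrative, not taken from any particular course.

```python
import numpy as np

# Generate toy data from a known line y = 3x + 0.5 plus noise
# (illustrative values, chosen only for this example).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

# Fit y ≈ w*x + b by batch gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(1000):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)  # gradient of MSE with respect to w
    b -= lr * 2 * np.mean(err)      # gradient of MSE with respect to b
```

After training, `w` and `b` should land close to the true values 3.0 and 0.5; this is the calculus (gradients) plus linear algebra (vectorized operations) plus programming combination the course assumes.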
There is also an argument for doing stuff that's less in vogue right now.
Sure... but machine learning is very important for AGI; it's not going to be suddenly replaced by hand-designed agents. This advice might apply better to subfields, like deep neural networks vs. hierarchical Bayesian models.