jessicat comments on Rodney Brooks talks about Evil AI and mentions MIRI [LINK] - Less Wrong Discussion
Well, he's right that intentionally evil AI is highly unlikely to be created: getting an AI to reliably pursue any particular set of values is hard, which happens to be the exact reason why Friendly AI is difficult. He doesn't directly address AIs that simply don't care about humans, like paperclip maximizers, but some of his arguments can be applied to them.
He's totally right that AGI with intentionality is an extremely difficult problem. We haven't created anything that comes close to practically approximating Solomonoff induction across a variety of situations, and Solomonoff induction alone is insufficient for the kind of intentionality you would need to build something that cares about universe states while modeling the universe in a flexible manner. But you can throw more computation power at a lot of problems to get better solutions, and I expect approximate Solomonoff induction to become practical in limited ways as computation power increases and moderate algorithmic improvements are made, partly because greater computation power lets you search for better algorithms.
I do agree with him that human-level AGI within the next few decades is unlikely and that significantly slowing down AI research is probably not a good idea right now.