Less Wrong is a community blog devoted to refining the art of human rationality.
No, I was referring to the AI reflection problem in the grandparent.
I don't know whether that would make AGI much easier. Even with a good reflective decision theory, you'd still need a more efficient framework for inference than an AIXI-style brute-force search. On the other hand, if you could do inference well, you might still build a working AI without solving reflection, but its goal system would be harder to understand, making it less likely to be Friendly. The lack of reflectivity could be an obstacle, but I think it more likely that, given a powerful inference algorithm and no concern for the dangers of AI, it wouldn't be that hard to make something dangerous.
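To make the "brute force" point concrete, here's a minimal sketch (all names hypothetical, not from any real AIXI implementation) of why AIXI-style inference is intractable: it weights every candidate program by a 2^-length Solomonoff-style prior, and the candidate pool doubles with each extra bit of program length, so even enumerating the hypotheses is exponential before you simulate a single one.

```python
# Toy illustration of the cost of AIXI-style brute-force inference.
# We treat each bitstring as a candidate "program" (a gross simplification)
# and just count how many candidates exist up to a given length.

def count_candidates(max_len):
    """Number of bitstring programs of length 1..max_len: sum of 2^n."""
    return sum(2 ** n for n in range(1, max_len + 1))

def solomonoff_weight(program):
    """Prior weight 2^-length assigned to a candidate program (bitstring)."""
    return 2.0 ** -len(program)

# The hypothesis pool doubles with every extra bit of program length:
for max_len in (8, 16, 24):
    print(max_len, count_candidates(max_len))

# Longer programs get exponentially smaller prior weight:
print(solomonoff_weight("0101"))
```

Any practical inference framework has to avoid this enumeration entirely, which is why "do inference well" is a separate problem from having the right decision theory.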