I strongly suspect that a lot of this confusion comes from looking at a very deep rabbit hole from five yards away and trying to guess what's in it.
There is no doubt that Eliezer's team has made some progress in understanding the FAI problem, which is the second of the Four Steps:
1. Identify the problem.
2. Understand the problem.
3. Solve the problem.
4. Waterslide down rainbows for eternity.
I sympathize entirely with the wish to summarize all the progress made so far into a sentence-long definition. But I don't think that's entirely reasonable. As timtyler suggests, I think the thing to do at this point is use the pornography rule: "I know it when I see it."
It's entirely unclear that we know it when we see it either. Humans don't qualify as a human-friendly intelligence, for example (or our creating an uFAI wouldn't be a danger). We might know something's not it when we see it, but that is not the same thing as knowing something is it when we see it.
I agree. But it's the best we've got when we're not domain experts, or so it seems.