This is a monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.
- Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
- Do not quote yourself.
- Do not post quotes that are NOT comments/posts on LW/OB; there is a separate thread for those.
- No more than 5 quotes per person per thread, please.
Perhaps the key point is that for any potential "problem", there is some level of intelligence according to which the "problem" is utterly transparent in the sense that it is either obviously answerable, obviously unanswerable in principle, or obviously unanswerable in practice given a finite universe or other constraints. If there is a level of intelligence that effortlessly sees which of these is the case and solves it if it is solvable, then I don't think it makes sense to say the problem is intrinsically hard.
There are mathematical problems that are non-obviously unanswerable: in any finite universe, it is itself unanswerable whether the problem is answerable (as opposed to merely being unanswerable in some fixed universe).
You could assume the existence of infinite intelligences in infinite universes, but then you may run into infinite problems that are necessarily unsolvable.
So I would agree with Dan that the quote is quite wrong, and at best hollow (hollow, that is, if we assume finite problems and infinite intelligences).
I am not surprised Eliezer marked that article "as wrong, obsolete, deprecated by an improved version, or just plain old".