Scott_Aaronson2 comments on Complexity and Intelligence - Less Wrong

Post author: Eliezer_Yudkowsky, 03 November 2008 08:27PM


Comment author: Scott_Aaronson2, 04 November 2008 05:41:06PM, 2 points

"Otherwise, of course a larger environment can outsmart you mathematically."

No, not of course. For example, suppose P were equal to PSPACE. Then even though a larger environment could fundamentally outsmart you mathematically (say, by solving the halting problem), it couldn't prove to you that it was doing so: any feat it could convince a polynomial-time verifier of would, by assumption, already be within your own power. In other words, the situation with polynomial-time computation would be more or less the same as it is with unlimited computation: superintelligent machines could only prove their superintelligence to other superintelligent machines.

That the situation with efficient computation appears to be different (i.e., that it appears superintelligent machines can indeed prove their superintelligence to fundamentally dumber machines) is, if true, a profound fact about the world that seems worth calling attention to. Sure, of course you can nullify it by assuming away all complexity considerations, but why? :-)
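A minimal sketch (not from the comment itself) of the asymmetry being pointed to: with complexity considerations in play, a computationally stronger party can demonstrate its power to a weaker one, because checking a solution can be far cheaper than finding it. Here the hypothetical "prover" factors a semiprime by brute force, while the "verifier" confirms the claim with a single multiplication. The function names and the particular number are illustrative, not anything from the original discussion.

```python
def slow_prover_factor(n):
    """The computationally stronger party: finds a nontrivial factor
    of n by brute-force trial division (expensive for large n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime; no nontrivial factorization exists")

def fast_verifier_check(n, p, q):
    """The weaker party: verifies the claimed factorization with one
    multiplication and two comparisons, regardless of how hard the
    factors were to find."""
    return 1 < p < n and 1 < q < n and p * q == n

# The prover does the costly search; the verifier only checks.
n = 1000003 * 999983
p, q = slow_prover_factor(n)
assert fast_verifier_check(n, p, q)
```

Factoring is just one convenient example of a verifiable hard problem; the general phenomenon (a polynomial-time verifier being convinced of statements it could not settle alone) is what results like IP = PSPACE formalize.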