Mitchell_Porter comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's a valid way to arrive at a state-machine model of something. It just won't tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.
I do know that there's lots of work to be done. But this is what Eliezer's sequence will be about.
I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking them using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. that a program's IQ depends on the domain being tested), but on a practical level, there's no question that the efficiency of the algorithms and the quality of the heuristics available to an AI are at least semi-independent of what the AI's goals are. Otherwise all chess programs would be equally good.
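The separability of capability from goals can be illustrated with a toy sketch (hypothetical, not from the comment itself): a single depth-limited search routine whose strength comes from its lookahead depth, while the goal is supplied as an interchangeable evaluation function. The same search machinery serves either goal, just as the same chess engine architecture can play for a win or for a draw.

```python
from typing import Callable, List

def moves(state: int) -> List[int]:
    """Toy game: from an integer state you may add 1 or double."""
    return [state + 1, state * 2]

def search(state: int, depth: int, evaluate: Callable[[int], float]) -> float:
    """Depth-limited lookahead. 'depth' sets the capability level;
    'evaluate' sets the goal. The search code itself is goal-agnostic."""
    if depth == 0:
        return evaluate(state)
    return max(search(s, depth - 1, evaluate) for s in moves(state))

def best_move(state: int, depth: int, evaluate: Callable[[int], float]) -> int:
    """Pick the successor state with the best lookahead value."""
    return max(moves(state), key=lambda s: search(s, depth - 1, evaluate))

# Two different "goals", identical search machinery:
maximize = lambda s: s             # goal: make the number as large as possible
near_ten = lambda s: -abs(s - 10)  # goal: land as close to 10 as possible

print(best_move(3, 3, maximize))   # doubling is best for this goal
print(best_move(3, 3, near_ten))   # adding 1 is best for this goal
```

Raising `depth` makes the same program better at *whichever* goal it is given, which is the sense in which capability is semi-independent of goals.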