Kawoomba comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
OK.
In all seriousness, there's a lot you're saying that seems contradictory at first glance. A few snippets:
If computational epistemology is not the full story, if true epistemology for a conscious being is "something more", then you are saying that it is so incomplete as to be invalid. (Doesn't Searle hold similar beliefs, along the lines of "consciousness is something that brain matter does"? No uploading for you two!)
I'm not sure you appreciate the distance still to go "just" with regard to a provable friendliness theory, let alone a workable foundation for strong AI, a large scientific field in its own right.
The question of which "a priori beliefs" are supposed to be programmed or not programmed into the AI is so far off as to be irrelevant.
Also note that if those beliefs turn out not to be invariant with respect to friendliness (and why should they be?), they are going to be updated until they converge toward more accurate beliefs anyway.
"Ontology + morals" corresponds to "model of the current state of the world + actions to change it", and the efficiency of those actions equals "intelligence". An agent's intelligence is thus an emergent property of being able to manipulate the world in accordance with its morals, i.e. it is not an additional property but is inherent in your so-called "true ontology".
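This decomposition can be sketched in a few lines of toy code. Everything here is a hypothetical illustration, not anyone's actual AI design: the "world" is a single integer, the "morals" are a fixed utility function, and two agents sharing that utility differ only in how effectively their action selection moves the world toward higher utility, which is the sense in which intelligence falls out of the other pieces.

```python
# Toy decomposition (hypothetical): world model + utility ("morals") +
# action selection. "Intelligence" lives entirely in the action selection.

TARGET = 10

def utility(state):
    # "Morals": prefer world states close to TARGET.
    return -abs(state - TARGET)

def greedy_policy(state):
    # More intelligent: one-step lookahead over the two available actions.
    return max((state + 1, state - 1), key=utility)

def stubborn_policy(state):
    # Less intelligent: same utility, but ignores it when acting.
    return state - 1

def run(policy, state=0, steps=10):
    # Apply the policy repeatedly and score the resulting world state.
    for _ in range(steps):
        state = policy(state)
    return utility(state)
```

With the same "morals", `run(greedy_policy)` ends at the target (utility 0) while `run(stubborn_policy)` walks away from it (utility -20); the difference between the two agents is exactly the quality of their world manipulation.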
Still upvoted.
It's a valid way to arrive at a state-machine model of something. It just won't tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.
I do know that there's lots of work to be done. But this is what Eliezer's sequence will be about.
I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking them using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. that a program's IQ depends on the domain being tested), but on a practical level, there's no question that the efficiency of algorithms and the quality of heuristics available to an AI are at least semi-independent of what the AI's goals are. Otherwise all chess programs would be equally good.
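The chess-program point can be made concrete with a minimal sketch (my own toy example, not Legg and Hutter's formalism): two agents share exactly the same goal, maximizing the payoff at the leaves of a small decision tree, but differ in search depth. The deeper searcher does strictly better, so capability varies while the goal is held fixed.

```python
# Hypothetical illustration: same goal, different search depth.
# A tree is either a numeric leaf (payoff) or a list of child subtrees.

def best_value(node, depth):
    # Payoff the agent can secure with a given lookahead depth.
    if isinstance(node, (int, float)):
        return node  # leaf: actual payoff
    if depth == 0:
        return 0     # horizon reached: neutral static estimate
    return max(best_value(child, depth - 1) for child in node)

# Branch A is an immediate payoff of 5; branch B hides a payoff of 10
# one level deeper, invisible to a depth-1 searcher.
tree = [5, [10, 1]]

shallow = best_value(tree, depth=1)  # settles for 5
deep = best_value(tree, depth=2)     # finds the 10
```

Both agents optimize the identical utility, yet `deep > shallow`; that gap is the goal-independent component of intelligence the comment is pointing at.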