roko3 comments on Optimization - Less Wrong

Post author: Eliezer_Yudkowsky 13 September 2008 04:00PM


Comment author: roko3 13 September 2008 07:22:57PM 0 points

Eli: I think that your analysis here, and the longer analysis presented in "Knowability of FAI", misses a very important point. The singularity is a fundamentally different process from playing chess or building a saloon car. The important distinction is that in building a car, the car-maker's ontology is perfectly capable of representing all of the high-level properties of the desired state, but the instigators of the singularity are, by definition, lacking a sufficiently complex representation system to represent any of the important properties of the desired state: post-singularity Earth. You have had the insight required to see this: you posted about "dreams of XML in a universe of quantum mechanics" a couple of posts back. I posted about this on my blog too, in "ontologies, approximations and fundamentalists".

It suffices to say that an optimization process which takes place with respect to a fixed background ontology or set of states is fundamentally different from a process which I might call vari-optimization, where optimization and ontology change happen at the same time. The singularity (whether an AI singularity or a non-AI one) will be of the latter type.
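The comment names "vari-optimization" but does not define it formally. As a loose illustration of the intended contrast (and nothing more), here is a minimal hill-climbing sketch in Python: the first search keeps a fixed state representation throughout, while the second periodically enriches its own representation mid-run, so later candidates live in a space the initial ontology could not express. All names here (`fixed_optimize`, `vari_optimize`, `extend`) are my own, not from the comment.

```python
import random

random.seed(0)

def score(state):
    """Objective over whatever the current representation is: maximise -(sum of squares)."""
    return -sum(x * x for x in state)

def neighbours(state):
    """Perturb one coordinate; the move set is fixed by the representation."""
    i = random.randrange(len(state))
    return state[:i] + (state[i] + random.uniform(-0.5, 0.5),) + state[i + 1:]

def fixed_optimize(state, steps=300):
    """Optimization against a fixed background ontology: states never change shape."""
    for _ in range(steps):
        cand = neighbours(state)
        if score(cand) > score(state):
            state = cand
    return state

def extend(state):
    """Ontology change: states acquire a coordinate the old representation lacked."""
    return state + (random.uniform(-1.0, 1.0),)

def vari_optimize(state, steps=300, extend_every=100):
    """Optimization and ontology change interleaved in the same process."""
    for t in range(1, steps + 1):
        if t % extend_every == 0:
            state = extend(state)  # the objective is now evaluated over richer states
        cand = neighbours(state)
        if score(cand) > score(state):
            state = cand
    return state

start = (2.0, -1.5)
print(len(fixed_optimize(start)))  # 2 -- representation unchanged by the search
print(len(vari_optimize(start)))   # 5 -- representation grew during the search
```

The point the toy makes concrete: in `fixed_optimize` the desired end state is fully expressible before the search starts, whereas in `vari_optimize` the final state has coordinates that simply did not exist in the starting ontology, which is the comment's claim about a post-singularity Earth.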