Armok_GoB comments on Meditation, insight, and rationality. (Part 1 of 3) - Less Wrong

35 Post author: DavidM 28 April 2011 08:26PM




Comment author: Armok_GoB 13 May 2011 09:34:29PM 1 point [-]

I am unable to remove the Friendly AI concept without destroying the concepts of "good", "bad", "value", "worthwhile", "preferable", "person", "conscious", "subjective experience", "humanity", "reality", "meaning", etc.; the list just goes on and on. They're all directly or indirectly defined in terms of it. Further, without those concepts there is no reason for truth to be preferable to falsehood, so with this removed any model of my mind won't try to optimize for truth and just turns to gibberish.

Comment author: AdeleneDawner 13 May 2011 09:50:25PM 3 points [-]

Ouch. Okay, the above advice is probably too late to be useful at all, then.

If those are all defined in terms of CEV (subjective experience? really? I'm not sure I want to know how you managed that one, nor humanity or reality), then what's left for CEV to be defined in terms of?

Comment author: Armok_GoB 14 May 2011 07:02:55PM 0 points [-]

Math?

Ok, granted, I used a kind of odd definition of "definition" in the above post, but the end result is the same: the model I use to reason about all LW-type things (and most other things as well) consists of exactly two parts, mathematical structures and the utility function according to which the math matters. The latter is synonymous with CEV, as closely as I can determine. Every concept that can't be directly reduced to 100% pure, well-defined math must be caused by the utility function, and thus removing that removes all those concepts. (Obviously this is a simplification, but that's the general structure of it.)