
DaFranker comments on The curse of identity - Less Wrong

125 points · Post author: Kaj_Sotala · 17 November 2011 07:28PM




Comment author: DaFranker 26 July 2012 09:03:38PM 2 points

The shorter your encoded message, the longer the compression/decoding algorithm has to be, until eventually the algorithm itself contains the full raw unencoded message and the encoded message is a single null-valued signal that, when received, "decodes" into the full message, since the message is already contained within the algorithm.

"look at the optimization targets of the processes that created the process that is me"

...isn't nearly as short or simple as it sounds. This becomes obvious once you try to replace those words with their associated meaning.
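To make the tradeoff concrete, here is a toy Python sketch (the two schemes and the choice of that phrase as payload are just illustrative placeholders, not anything canonical): the same string can be recovered either from a generic decoder plus a content-bearing message, or from a decoder that already contains the string plus a single null signal.

```python
import zlib

TEXT = b"look at the optimization targets of the processes that created the process that is me"

# Scheme A: generic decoder; the message itself carries the information.
message_a = zlib.compress(TEXT)

def decode_a(message):
    # Knows nothing about TEXT in particular.
    return zlib.decompress(message)

# Scheme B: degenerate decoder; the "message" is a single null-valued signal.
message_b = b"\x00"

def decode_b(message):
    # All of the content lives in the decoder, none in the message.
    return TEXT

assert decode_a(message_a) == decode_b(message_b) == TEXT
print(len(message_a), len(message_b))  # compare how much each message has to carry
```

Scheme B's "short message" only looks short because the entire content has migrated into the decoder, which is exactly why the quoted phrase is not a short specification once you unpack what its words have to do.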

Comment author: Will_Newsome 07 August 2012 03:16:06AM -2 points

My point was that it's easier to program ("simpler") than "maximize paperclips", not that it's as simple as it sounds. (Nothing is as simple as it sounds, duh.)

Comment author: DaFranker 07 August 2012 03:32:38AM 1 point

I fail to see how coding a meta-algorithm that selects optimal extrapolation and/or simulation algorithms, so that those chosen algorithms can determine the probable optimization target (which is even harder if you want a full PA proof), is even remotely of the same order of complexity as a machine learner that applies natural selection to algorithms that increase paperclip count, which is one of the simplest paperclip maximizers I can think of.
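For contrast, here is roughly the kind of machine learner I mean, as a toy Python sketch (the fitness function is a made-up stand-in for "run the candidate and count its paperclips"; a real system would score candidates against an actual environment):

```python
import random

def paperclip_count(genome):
    # Stand-in fitness: a real learner would execute the candidate and count
    # paperclips; here we just reward genomes near some arbitrary "good" settings.
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

# Selection loop: keep the best paperclip-producers, mutate them, repeat.
population = [[random.random() for _ in range(5)] for _ in range(50)]
for _ in range(200):
    survivors = sorted(population, key=paperclip_count, reverse=True)[:10]
    population = [mutate(random.choice(survivors)) for _ in range(50)]

best = max(population, key=paperclip_count)
```

The entire goal specification is the one-line fitness function; the meta-algorithm described above has no comparably short specification.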

Comment author: Will_Newsome 07 August 2012 03:40:19AM -2 points

It might not be possible to make such a machine learner into an AGI, which is what I had in mind—narrow AIs only have "goals" and "values" and so forth in an analogical sense. Cf. derived intentionality. If it is that easy to create such an AGI, then I think I'm wrong, e.g. maybe I'm thinking about the symbol grounding problem incorrectly. I still think that in the limit of intelligence/rationality, though, specifying goals like "maximize paperclips" becomes impossible, and this wouldn't be falsified if a zealous paperclip company were able to engineer a superintelligent paperclip maximizer that actually maximized paperclips in some plausibly commonsense fashion. In fact I can't actually think of a way to falsify my theory in practice—I guess you'd have to somehow physically show that the axioms of algorithmic information theory and maybe updateless-like decision theories are egregiously incoherent... or something.

(Also your meta-algorithm isn't quite what I had in mind—what I had in mind is a lot more theoretically elegant and doesn't involve weird vague things like "extrapolation"—but I don't think that's the primary source of our disagreement.)