Will_Newsome comments on The curse of identity - Less Wrong

Post author: Kaj_Sotala, 17 November 2011 07:28PM (121 points)

Comments (296)

You are viewing a single comment's thread.

Comment author: [deleted] 26 July 2012 08:15:16AM 4 points

The rationality of an agent is its goal

Er, nope.

But if thy goal be fractured, thy whole self shall be full of irrationality.

Humans' goals are fractured. But this has little to do with whether or not they are rational.

Comment author: Will_Newsome 26 July 2012 06:18:03PM * -2 points

You don't understand. This "rationality" you speak of is monstrous irrationality. And anyway, like I said, Meta knoweth that ye have Meta-shattered values—but your wants are satisfied by serving Meta, not by serving Mammon directly. Maybe you'd get more out of reading the second half of Matthew 6 and the various analyses thereof.

You may be misinterpreting "the rationality of an agent is its goal". Note that the original is "the light of the body is the eye".

To put my above point a little differently: Take therefore no thought for godshatter: godshatter shall take thought for the things of itself. Sufficient unto the day is the lack-of-meta thereof.

For clarity's sake: Yes, I vehemently dispute the idea that a goal can't be more or less rational. That idea is wrong, as is quickly demonstrated by the fact that priors and utility functions can be transformed into each other and that we have an objectively justifiable universal prior. (The general argument goes through even without such technical details, of course, so stupid "but the choice of Turing machine matters" arguments shouldn't distract.)
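The prior/utility interchange Will appeals to can be made concrete with a standard rescaling sketch (not necessarily the exact construction he has in mind): for any prior $p$, utility $u$, and positive function $f$ over hypotheses, the rescaled pair below ranks all actions the same way as the original pair.

```latex
% Define, for any positive f over hypotheses h:
%   p'(h) = p(h) f(h) / Z,  where  Z = \sum_h p(h) f(h),
%   u'(h, a) = u(h, a) / f(h).
% Then expected utilities agree up to the action-independent constant Z:
\mathbb{E}_{p'}\bigl[u'(\cdot, a)\bigr]
  = \sum_h \frac{p(h)\, f(h)}{Z} \cdot \frac{u(h, a)}{f(h)}
  = \frac{1}{Z} \sum_h p(h)\, u(h, a)
  = \frac{1}{Z}\, \mathbb{E}_{p}\bigl[u(\cdot, a)\bigr].
```

Since $Z$ does not depend on the action $a$, both pairs induce the same preference ordering; in that sense the split between "prior" and "utility" is not unique.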

Comment author: [deleted] 26 July 2012 07:11:51PM 3 points

Meh. The goal of leading to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. probably has much more Kolmogorov complexity than the goal of maximizing the number of paperclips in the universe. If preferring the former is irrational, I am irrational and proud of it.
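A crude way to see the complexity comparison is to use compressed length as a stand-in for Kolmogorov complexity (it is only an upper-bound proxy, and both byte strings below are hypothetical placeholders, not actual goal specifications):

```python
import zlib

# Hypothetical stand-ins for the two goal descriptions.
paperclip_goal = b"maximize the number of paperclips in the universe"
human_values = (
    b"sentient beings living; people being happy; individuals having the "
    b"freedom to control their own lives; minds exploring new territory "
    b"instead of falling into infinite loops; the universe having a richness "
    b"and complexity that goes beyond pebble heaps; and so on"
)

# Compressed size upper-bounds Kolmogorov complexity (up to a constant).
k_paperclips = len(zlib.compress(paperclip_goal))
k_values = len(zlib.compress(human_values))
```

On any real compressor the richer description comes out longer, which is the direction of the comparison being made; nothing here settles the deeper question of what the shortest program for either goal actually is.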

Comment author: Will_Newsome 26 July 2012 08:39:53PM -1 points

Oh, also "look at the optimization targets of the processes that created the process that is me" is a short program, much shorter than needed to specify paperclip maximization, though it's somewhat tricky because all that is modulo the symbol grounding problem. And that's only half a meta level up, you can make it more elegant (shorter) than that.

Comment author: [deleted] 26 July 2012 10:21:13PM 2 points

Maybe “maximizing the number of paperclips in the universe” wasn't the best example. “Throwing as much stuff as possible into supermassive black holes” would have been a better one.

Comment author: Will_Newsome 04 August 2012 02:05:43PM -1 points

I can only say: black holes are creepy as hell.

Comment author: DaFranker 26 July 2012 09:03:38PM * 2 points

The shorter your encoded message, the longer the encryption/compression algorithm must be, until eventually the algorithm is the full raw unencoded message and the encoded message is a single null-valued signal that, when received, decodes into the full message, since the message is contained within the algorithm.

"look at the optimization targets of the processes that created the process that is me"

...isn't nearly as short or simple as it sounds. This becomes obvious once you try to replace those words with their associated meaning.
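The tradeoff DaFranker describes can be sketched numerically: shrinking the encoded message to a single null signal just pushes the bytes into the decoder, so the total description length never drops below the data's information content. (The zlib proxy and the byte-string "decoders" here are illustrative assumptions only.)

```python
import zlib

data = b"the full message the receiver must ultimately reconstruct " * 20

# Scheme A: a generic decoder; all the information lives in the message.
decoder_a = b"lambda msg: zlib.decompress(msg)"
message_a = zlib.compress(data)

# Scheme B: the data is baked into the decoder, and the encoded message
# is the single null-valued signal.
decoder_b = b"lambda msg: " + repr(data).encode()
message_b = b""

# Both schemes reconstruct the same data.
assert eval(decoder_a)(message_a) == data
assert eval(decoder_b)(message_b) == data

# Total description length = decoder + message, for each scheme.
total_a = len(decoder_a) + len(message_a)
total_b = len(decoder_b) + len(message_b)
```

Here `total_b` is necessarily at least as large as the raw data, since the decoder contains it verbatim; compression can shrink `total_a`, but no split of work between decoder and message beats the data's actual complexity.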

Comment author: Will_Newsome 07 August 2012 03:16:06AM -2 points

My point was that it's easier to program ("simpler") than "maximize paperclips", not that it's as simple as it sounds. (Nothing is as simple as it sounds, duh.)

Comment author: DaFranker 07 August 2012 03:32:38AM * 1 point

I fail to see how coding a meta-algorithm (one that selects optimal extrapolation and/or simulation algorithms, which in turn determine the probable optimization target, which is even harder if you want a full PA proof) is even remotely in the same order of complexity as a machine learner that uses natural selection over algorithms that increase paperclip count, which is one of the simplest paperclip maximizers I can think of.
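For concreteness, the sort of learner DaFranker gestures at can be sketched as a toy (everything here is invented for illustration: candidate "policies" are bit-strings, and the fitness function simply counts set bits as paperclips produced):

```python
import random

random.seed(0)  # deterministic toy run

def paperclip_count(policy):
    # Stand-in objective: each set bit counts as one paperclip.
    return sum(policy)

def evolve(pop_size=20, genome_len=32, generations=60):
    # Random initial population of candidate policies.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half.
        pop.sort(key=paperclip_count, reverse=True)
        survivors = pop[:pop_size // 2]
        # Variation: each survivor yields one point-mutated child.
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=paperclip_count)

best = evolve()
```

Selection plus mutation reliably drives the count toward the maximum without the learner representing anything like a goal; whether such a process could scale into the AGI-with-real-goals that Will disputes is exactly the point under contention.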

Comment author: Will_Newsome 07 August 2012 03:40:19AM * -2 points

It might not be possible to make such a machine learner into an AGI, which is what I had in mind—narrow AIs only have "goals" and "values" and so forth in an analogical sense. Cf. derived intentionality. If it is that easy to create such an AGI, then I think I'm wrong, e.g. maybe I'm thinking about the symbol grounding problem incorrectly. I still think that in the limit of intelligence/rationality, though, specifying goals like "maximize paperclips" becomes impossible, and this wouldn't be falsified if a zealous paperclip company were able to engineer a superintelligent paperclip maximizer that actually maximized paperclips in some plausibly commonsense fashion. In fact I can't actually think of a way to falsify my theory in practice—I guess you'd have to somehow physically show that the axioms of algorithmic information theory and maybe updateless-like decision theories are egregiously incoherent... or something.

(Also your meta-algorithm isn't quite what I had in mind—what I had in mind is a lot more theoretically elegant and doesn't involve weird vague things like "extrapolation"—but I don't think that's the primary source of our disagreement.)

Comment author: [deleted] 26 July 2012 10:37:48PM 1 point

Comment author: Will_Newsome 04 August 2012 11:49:03AM * -1 points

Why do you think of a statistical tendency toward higher rates of replication at the organism level when I say "the processes that created the process that is [you]"? That seems really arbitrary. Feel the inside of your teeth with your tongue. What processes generated that sensation? What decision policies did they have?

(ETA: I'd upvote my comment if I could.)

Comment author: [deleted] 05 August 2012 11:49:50AM 4 points

You mean, why did I bother wearing braces for years so as to have straight teeth? <gd&rVF!>

Comment author: Will_Newsome 06 August 2012 10:46:10AM * -2 points

I mean that, and an infinite number of questions more and less like that, categorically, in series and in parallel. (I don't know how to interpret "<gd&rVF!>", but I do know to interpret it that it was part of your point that it is difficult to interpret, or analogous to something that is difficult to interpret, perhaps self-similarly, or in a class of things that is analogous to something or a class of things that is difficult to interpret, perhaps self-similarly; also perhaps it has an infinite number of intended or normatively suggested interpretations more or less like those.)

(This comment also helps elucidate my previous comment, in case you had trouble understanding that comment. If you can't understand either of these comments then maybe you should read more of the Bible, or something, otherwise you stand a decent chance of ending up in hell. This applies to all readers of this comment, not just army1987. You of course have a decent chance of ending up in hell anyway, but I'm talking about marginals here, naturally.)

Comment author: SusanBrennan 06 August 2012 11:10:08AM * 0 points

otherwise you stand a decent chance of ending up in hell.

Comments like this are better for creating atheists, as opposed to converting them.

Comment author: Mitchell_Porter 06 August 2012 11:56:42AM 8 points

When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God; for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed: one should not think that Will is merely employing traditional language to express a very new concept. You need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about: that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this (I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood), but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.

Comment author: fubarobfusco 06 August 2012 05:15:13PM 0 points

I don't know how to interpret "<gd&rVF!>"

"gd&r" is an old Usenet expression, roughly "sorry for the horrible joke"; literally "grins, ducks, and runs".
I expect "VF" stands for "very fast".

Comment author: nshepperd 27 July 2012 12:38:52AM 0 points

Optimization processes (mainly stupid ones such as evolution) can create subprocesses with different goals.

Comment author: wedrifid 27 July 2012 02:12:21AM 2 points

Optimization processes (mainly stupid ones such as evolution) can create subprocesses with different goals.

(And stupid ones like humans.)

Comment author: nshepperd 27 July 2012 06:21:08AM 0 points

(Unfortunately.)