Comment author: nhamann 15 July 2011 05:03:27PM *  5 points [-]

In the past year I've been involved in two major projects at SIAI. Steve Rayhawk and I were asked to review existing AGI literature and produce estimates of development timelines for AGI.

You seem to suggest that this work is incomplete, but I'm curious: is this available anywhere, or is it still a work in progress? I would be very interested in reading this, even if it's incomplete. I would even be interested in just seeing a bibliography.

Comment author: Peter_de_Blanc 17 July 2011 08:26:31AM 3 points [-]

It is not available. The thinking on this matter was that sharing a bibliography of (what we considered) AGI publications relevant to the question of AGI timelines could direct researcher attention towards areas more likely to result in AGI soon, which would be bad.

Comment author: Wei_Dai 09 June 2011 01:56:15AM 2 points [-]

I wrote earlier:

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems.

Well, here's a recent SIAI paper that uses perception-determined utility functions, but apparently not in order to prove theorems (since the paper contains no theorems). The author was advised by Peter de Blanc, who two years ago wrote the OP arguing against PDUs. Which makes me confused: does the author (Daniel Dewey) really think that PDUs are a good idea, and does Peter now agree?
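[Editorial aside: for readers unfamiliar with the term, a perception-determined utility function (PDU) assigns utility based on the agent's percept sequence alone. A toy sketch of why de Blanc's original post argued against them; the function and histories below are hypothetical, for illustration only:]

```python
# Toy PDU, assumed for illustration: utility is computed from percepts
# alone, so two histories that look identical from the inside receive
# the same utility even when the underlying worlds differ.

def pdu(percepts):
    # reward 1 per "good" percept
    return sum(1 for p in percepts if p == "good")

history_real = {"world": "goal actually achieved", "percepts": ["good", "good"]}
history_fake = {"world": "sensors spoofed",        "percepts": ["good", "good"]}

# A PDU cannot distinguish the two histories:
print(pdu(history_real["percepts"]) == pdu(history_fake["percepts"]))  # True
```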

Comment author: Peter_de_Blanc 11 June 2011 01:34:05PM 0 points [-]

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

Comment author: Stuart_Armstrong 10 June 2011 12:38:36PM 1 point [-]

"I am a god" is too simplistic. I can model it better as a probability, varying with N, that you are able to move the universe to UN(N). This tracks how good a god you are, and seems to make the paradox disappear.

Comment author: Peter_de_Blanc 11 June 2011 05:31:22AM 0 points [-]

This tracks how good a god you are, and seems to make the paradox disappear.

How? Are you assuming that P(N) goes to zero?
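[Editorial aside: whether the paradox disappears hinges on the tail behavior of P(N). If P(N) does not fall faster than the utility grows, the expected-utility series still diverges. A minimal numeric sketch; the utility and probability functions here are assumed purely for illustration:]

```python
# Whether a probability-weighted unbounded utility converges depends on
# how fast P(N) shrinks relative to U(N).
u = lambda n: 2 ** n               # unbounded utility (illustrative)
p_slow = lambda n: 1.0 / 2 ** n    # P(N) * U(N) = 1 for every N
p_fast = lambda n: 1.0 / 4 ** n    # P(N) * U(N) -> 0 quickly

partial_slow = sum(p_slow(n) * u(n) for n in range(1, 51))
partial_fast = sum(p_fast(n) * u(n) for n in range(1, 51))

print(partial_slow)  # 50.0 -- grows without bound as the cutoff grows
print(partial_fast)  # just under 1.0 -- the series converges
```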

Comment author: orthonormal 10 June 2011 07:33:49AM 0 points [-]

LCPW isn't even necessary: do you really think that it wouldn't make a difference that you'd care about?

Comment author: Peter_de_Blanc 10 June 2011 08:27:20AM 0 points [-]

LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you're claiming that no bounded utility function reflects your preferences accurately.

Comment author: drethelin 08 June 2011 05:16:16PM 1 point [-]

resources, whether physical or computational. Presumably the AI is programmed to utilize resources in a parsimonious manner, with terms governing various applications of the resources, including powering the AI and deciding on what to do. If the AI is programmed to limit what it does at some large but arbitrary point, because we don't want it taking over the universe or whatever, then that limit might kick in before we actually want it to stop doing whatever it's doing.

Comment author: Peter_de_Blanc 09 June 2011 08:09:59AM 0 points [-]

That doesn't sound like an expected utility maximizer.

Comment author: orthonormal 08 June 2011 11:02:52PM 1 point [-]

A small risk of losing the utility it was previously counting on.

Of course you can do intuition pumps either way (I don't feel like I'd want the AI to sacrifice everything in the universe we know for a 0.01% chance of making it in a bigger universe), but some level of risk has to be worth a vast increase in potential fun.

Comment author: Peter_de_Blanc 09 June 2011 08:09:08AM 1 point [-]

It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.

Comment author: orthonormal 08 June 2011 07:42:52AM 1 point [-]

Upvoted because the objection makes me uncomfortable, and because none of the replies satisfy my mathematical/aesthetic intuition.

However, requiring utilities to be bounded also strikes me as mathematically ugly and practically dangerous: what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

Thus I view this as a currently unsolved problem in decision theory, and a better intuition-pump version than Pascal's Mugging. Thanks for posting.

Comment author: Peter_de_Blanc 08 June 2011 08:41:20AM 5 points [-]

what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

It's not worth what?
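[Editorial aside: the question is pointed because under a bounded utility function, "worth it" must cash out as a cost denominated in the same utility units, and near the bound the marginal gain from further expansion is tiny. A toy sketch with an assumed bound U(x) = 1 - 2^(-x), where x stands for resources controlled:]

```python
# Toy bounded utility, assumed for illustration: U(x) = 1 - 2**(-x).
# Marginal utility of expansion shrinks near the bound, but whether
# expanding is "worth it" still requires a cost measured in utility.
def u(x):
    return 1 - 2 ** (-x)

gain = u(21) - u(20)   # marginal gain from one more unit of expansion
print(gain)            # 2**-21, about 4.8e-07 -- tiny but positive
```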

Comment author: [deleted] 26 May 2011 06:52:21PM 7 points [-]
  1. A physically plausible scenario would involve growing up under a monochromatic light source.

  2. Growing up without sensory input actually affects the brain; see Wikipedia's article on monocular deprivation. I'm actually an example of this - I was born without the Mystic Eyes Of Depth Perception so I'll never know what stereoscopic vision "feels like".

  3. I propose that "qualia" is a word that, like "microevolution", is mainly used by people who are very confused (and dissolving the question is the appropriate approach).

Comment author: Peter_de_Blanc 28 May 2011 01:06:46PM 0 points [-]

Depth perception can be gained through vision therapy, even if you've never had it before. This is something I'm looking into doing, since I also grew up without depth perception.

Comment author: gjm 28 May 2011 12:57:40AM 8 points [-]

"Rational people can't agree to disagree" is an oversimplification. Rational people can perfectly well reach a conclusion of the form: "Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong. We choose, instead, to leave the matter unresolved until either it matters more or we see better prospects of resolving it."

Imperfectly rational people who are aware of their imperfect rationality (note: this is in fact the nearest any of us actually come to being rational people) might also reasonably reach a conclusion of this form: "Perhaps clear enough thinking on both sides would suffice to let us resolve this. However, it's apparent that at least one of us is currently sufficiently irrational about it that trying to reach agreement poses a real danger of spoiling the good relations we currently enjoy, and while clearly that irrationality is a bad thing it doesn't seem likely that trying to resolve our current disagreement now is the best way to address it, so let's leave it for now."

I suspect (with no actual evidence) that when two reasonably-rational people say they're agreeing to disagree, what they mean is often approximately one of the above or a combination thereof, and that they're often wise to "agree to disagree". The fact that there are theorems saying that two perfect rationalists who care about nothing more than getting the right answer to the question they're currently disputing won't "agree to disagree" seems to me to have little bearing on this.

Eliezer, if you're reading this: You may remember that a while back on OB you and Robin Hanson discussed the prospects of rapidly improving artificial intelligence in the nearish future. By no means did you resolve your differences in that discussion. Would it be fair to characterize the way it ended as "agreeing to disagree"? From the outside, it sure looks like that's what it amounted to, whatever you may or may not have said to one another about it. Perhaps you and/or Robin might say "Yeah, but the other guy isn't really rational about this". Could be, but if the level of joint rationality required for "can't agree to disagree" is higher than that of {Eliezer,Robin} then it's not clear how widely applicable the principle "rational people can't agree to disagree" really is. (Note for the avoidance of doubt: The foregoing is not intended to imply that Eliezer and Robin are equally rational; I do not intend to make any further comment on my opinions, if any, on that matter.)

Comment author: Peter_de_Blanc 28 May 2011 04:52:50AM 2 points [-]

Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong.

You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not "both right" or "both wrong", but "both -3 decibels."
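[Editorial aside: the "decibels" here are decibels of evidence, 10 * log10 of the odds, so "both -3 decibels" means each side roughly halves its odds rather than one side capitulating. A small sketch; the starting probability of 0.9 is an arbitrary example:]

```python
import math

def to_decibels(p):
    # Evidence in decibels: 10 * log10(odds)
    return 10 * math.log10(p / (1 - p))

def from_decibels(db):
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

p = 0.9                          # about 9.54 dB of confidence
p_after = from_decibels(to_decibels(p) - 3)
print(p_after)                   # each side moves ~3 dB toward uncertainty
```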

Comment author: ata 16 May 2011 08:51:14AM *  0 points [-]

Obviously I didn't mean that being broke (or anything) is infinite disutility. Am I mistaken in thinking that the utility of money is otherwise generally modeled as logarithmic?

In response to comment by ata on Circular Altruism
Comment author: Peter_de_Blanc 16 May 2011 10:25:21AM 1 point [-]

Obviously I didn't mean that being broke (or anything) is infinite disutility.

Then what asymptote were you referring to?
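[Editorial aside: the tension here is that a logarithmic utility of money does have a vertical asymptote at zero wealth, with utility falling without bound as wealth approaches zero. A minimal illustration:]

```python
import math

def log_utility(wealth):
    # Logarithmic utility of money: diverges to -infinity as wealth -> 0
    return math.log(wealth)

for w in [100.0, 1.0, 0.01, 1e-10]:
    print(w, log_utility(w))
# utilities run ~4.61, 0.0, ~-4.61, ~-23.0: unbounded below near zero
```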
