Eliezer_Yudkowsky comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong

8 Post author: ciphergoth 11 May 2012 07:16AM




Comment author: Eliezer_Yudkowsky 15 May 2012 07:51:50PM 12 points

To reply to Wei Dai's incoming link:

Most math kills you quietly, neatly, and cleanly, unless the apparent obstacles to distant timeless trade are overcome in practice and we get a certain kind of "luck" on how a vast net of mostly-inhuman timeless trades sum out, in which case we get an unknown fixed selection from some subjective probability distribution over "fate much worse than death" to "death" to "fate much better than death but still much worse than FAI". I don't spend much time talking about this on LW because timeless trade speculation eats people's brains and doesn't produce any useful outputs from the consumption; only decision theorists whose work is plugging into FAI theory need to think about timeless trade, and I wish everyone else would shut up about the subject on grounds of sheer cognitive unproductivity, not to mention the horrid way it sounds from the perspective of traditional skeptics (and not wholly unjustifiably so). (I have expressed this opinion in the past whenever I hear LWers talking about timeless trade; it is not limited to Newsome, though IIRC he has an unusual case of undue optimism about outcomes of timeless trade, owing to theological influences that I understand timeless trade speculations helped exacerbate his vulnerability to.)

Comment author: Wei_Dai 15 May 2012 08:43:52PM *  3 points

Most math kills you quietly, neatly, and cleanly, unless the apparent obstacles to distant timeless trade are overcome in practice

Will mentioned a couple of other possible ways in which UFAI fails to kill off humanity, besides distant timeless trade. (BTW I think the current standard term for this is "acausal trade", which incorporates the idea of trading across possible worlds as well as across time.) Although perhaps "hidden AGIs" is unlikely, and you consider "potential simulators" to be covered under "distant timeless trade".

I don't spend much time talking about this on LW because timeless trade speculation eats people's brains and doesn't produce any useful outputs from the consumption; only decision theorists whose work is plugging into FAI theory need to think about timeless trade

The idea is relevant not just for actually building FAI, but also for deciding strategy (ETA: for example, how much chance of creating UFAI we should accept in order to build FAI). See here for an example of such discussion (between people who perhaps you think are saner than Will Newsome).

not to mention the horrid way it sounds from the perspective of traditional skeptics

I agreed with this, but it's not clear what we should do about it (e.g., whether we should stop talking about it), given the strategic relevance.

Comment author: Will_Newsome 15 May 2012 09:01:51PM 3 points

The idea is relevant not just for actually building FAI, but also for deciding strategy

And also relevant, I hasten to point out, for solving moral philosophy. I want to be morally justified whether or not I'm involved with an FAI team and whether or not I'm in a world where the Singularity is more than just a plot device. Acausal influence elucidates decision theory, and decision theory elucidates morality.

Comment author: Armok_GoB 15 May 2012 11:55:04PM 0 points

To clarify what I assume to be Eliezer's point: "here there be basilisks; take it somewhere less public"

Comment author: faul_sname 17 May 2012 12:49:37AM 1 point

There are only basilisks if you don't accept SSA or assume that utility scales superlinearly with computations performed.

Comment author: Armok_GoB 17 May 2012 01:03:40AM 0 points

There's more than one kind. For obvious reasons I won't elaborate.

Comment author: Will_Newsome 15 May 2012 10:49:44PM *  0 points

Will mentioned a couple of other possible ways in which UFAI fails to kill off humanity, besides distant timeless trade. [...] Although perhaps "hidden AGIs" is unlikely and you consider "potential simulators" to be covered under "distant timeless trade".

This is considered unlikely 'round these parts, but one should also consider God, Who is alleged by some to be omnipotent and Who might prefer to keep humans around. Insofar as such a God is metaphysically necessary, this is mechanistically but not phenomenologically distinct from plain "hidden AGI".

Comment author: wedrifid 26 May 2012 03:39:32AM *  5 points

I don't spend much time talking about this on LW because timeless trade speculation eats people's brains and doesn't produce any useful outputs from the consumption; only decision theorists whose work is plugging into FAI theory need to think about timeless trade, and I wish everyone else would shut up about the subject on grounds of sheer cognitive unproductivity

I don't trust any group that wishes to create, or make efforts towards influencing the creation of, a superintelligence when they try to suppress discussion of the very decision theory that the superintelligence will implement. How such an agent interacts with the concept of acausal trade completely and fundamentally alters the way it can be expected to behave. That is the kind of thing that needs to be disseminated among an academic community, digested, and understood in depth. It is not something to trust to an isolated team, with all the vulnerability to groupthink that entails.

If someone were to announce credibly "We're creating a GAI. Nobody else but us is allowed to even think about what it is going to do. Just trust us, it's Friendly." then the appropriate response is to shout "Watch out! It's a dangerous crackpot! Stop him before he takes over the world and potentially destroys us all!" And make no mistake, if this kind of attempt at suppression were taken by anyone remotely near developing an FAI theory that is what it would entail. Fortunately at this point it is still at the "Mostly Harmless" stage.

and doesn't produce any useful outputs from the consumption

I don't believe you. At least, it produces outputs as useful and interesting as those of any other discussion of decision theory. There are plenty of curious avenues to explore on the subject and fascinating implications and strategies that are at least worth considering.

Sure, the subject may deserve a warning "Do not consider this topic if you are psychologically unstable or have reason to believe that you are particularly vulnerable to distress or fundamental epistemic damage by the consideration of abstract concepts."

not to mention the horrid way it sounds from the perspective of traditional skeptics (and not wholly unjustifiably so).

If this were the real reason for Eliezer's objection I would not be troubled by his attitude. I would still disagree - the correct approach is not to try to suppress all discussion by other people of the subject but rather to apply basic political caution and not comment on it oneself (or allow anyone within one's organisation to do so.)

Comment author: timtyler 28 May 2012 05:48:05PM 0 points

If someone were to announce credibly "We're creating a GAI. Nobody else but us is allowed to even think about what it is going to do. Just trust us, it's Friendly." then the appropriate response is to shout "Watch out! It's a dangerous crackpot! Stop him before he takes over the world and potentially destroys us all!" And make no mistake, if this kind of attempt at suppression were taken by anyone remotely near developing an FAI theory that is what it would entail. Fortunately at this point it is still at the "Mostly Harmless" stage.

I don't see how anyone could credibly announce that. The announcement radiates crackpottery.

Comment author: Will_Newsome 15 May 2012 08:23:25PM *  3 points

For the LW public:

(IIRC he has an unusual case of undue optimism about outcomes of timeless trade, owing to theological influences that I understand timeless trade speculations helped exacerbate his vulnerability to.)

The theology and the acausal trade stuff are completely unrelated; they both have to do with decision theory, but that's it. I also don't think my thoughts about acausal trade differ in any substantial way from those of Wei Dai or Vladimir Nesov. So even assuming that I'm totally wrong for granting theism-like-ideas non-negligible probability, the discussion of acausal influence doesn't seem to have directly contributed to my brain getting eaten. That said, I agree with Eliezer that it's generally not worth speculating about, except possibly in the context of decision theory or, to a very limited extent, singularity strategy.

Comment author: gRR 15 May 2012 10:26:54PM 1 point

only decision theorists whose work is plugging into FAI theory need to think about timeless trade

But it's fun! Why should only a select group of people be allowed to have it?

Comment author: Armok_GoB 15 May 2012 11:56:42PM -1 points

Because it's dangerous.

Comment author: gRR 16 May 2012 12:47:32AM 2 points

So are mountain skiing, starting new companies, learning chemistry, and entering into relationships.

Comment author: Armok_GoB 16 May 2012 02:23:11AM -2 points

Mountain skiing, maybe, depending on the mountain in question; chemistry, only if you're doing it very wrong; the others, no.

Comment author: gRR 16 May 2012 02:34:05AM *  2 points

Oh yes they are. One can leave you penniless and the other scarred for life. If you're doing them very wrong, of course. Same with thinking about acausal trade.