jimrandomh comments on Reflections on a Personal Public Relations Failure: A Lesson in Communication - Less Wrong

37 Post author: multifoliaterose 01 October 2010 12:29AM


Comment author: jimrandomh 01 October 2010 01:58:00AM 5 points

Would you provide an updated probability estimate for the statement "Eliezer's attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people will succeed"? You've seen a lot of new evidence since you made the last estimate. (Original estimate, new wording from the last paragraph of #6)

Comment author: multifoliaterose 01 October 2010 02:42:43AM * 7 points

In line with my remarks under "Mistake #6," I plan on gradually developing the background behind my thinking in a sequence of postings. This will give me the chance to provide others with appropriate context and refine my internal model according to the feedback that I get so that I can arrive at a more informed view.

Returning to an earlier comment that I've made, I think that there's an issue of human inability to assign numerical probabilities which is especially pronounced when one is talking about small and unstable probabilities. So I'm not sure how valuable it would be for me to attempt to give a numerical estimate. But I'll think about answering your question after making some more postings.

Comment author: jimrandomh 01 October 2010 04:36:14AM 4 points

I feel like you still haven't understood the main criticism of your posts. You have acknowledged every mistake except for having an incorrect conclusion. All the thousands of words you've written avoid confronting the main point, which is whether people should donate to SIAI. To answer this, we need a few numbers:

  1. The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction
  2. The utilities of friendly AI and of human extinction
  3. The utility of the marginal next-best use of money.

We don't need exact numbers, but we emphatically do need orders of magnitude. If you get the order of magnitude of any one of 1-3 wrong, then your conclusion is shot. The problem is, estimating orders of magnitude is a hard skill; it can be learned, but it is not widespread. And if you don't have that skill, you cannot reason correctly about the topic.
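The comparison being asked for can be written down mechanically. The sketch below is illustrative only; every number in it is a made-up placeholder, not an estimate given anywhere in this thread. It shows both the structure of the calculation and why the orders of magnitude matter: shifting one input by two orders of magnitude flips the conclusion.

```python
# Illustrative only: all numbers below are hypothetical placeholders,
# not estimates endorsed by anyone in this discussion.

def marginal_utility_per_dollar(dp_good, u_good, dp_bad, u_bad):
    """Expected utility change from one donated dollar, given the
    marginal shifts it causes in two outcome probabilities."""
    return dp_good * u_good + dp_bad * u_bad

# 1. Marginal effect of one dollar on outcome probabilities (made up):
dp_fai = 1e-12          # shift in P(friendly AI is developed)
dp_extinction = -1e-12  # shift in P(human extinction)

# 2. Utilities of the two outcomes (made up, arbitrary units):
u_fai = 1e10
u_extinction = -1e10

# 3. Utility per dollar of the next-best use of money (made up):
u_alternative = 1e-3

donation_value = marginal_utility_per_dollar(
    dp_fai, u_fai, dp_extinction, u_extinction)

# With these placeholders, donating beats the alternative:
print(donation_value > u_alternative)  # True

# Shrink the probability shifts by two orders of magnitude and the
# verdict flips, which is the point about needing the right magnitudes:
print(marginal_utility_per_dollar(
    dp_fai / 100, u_fai, dp_extinction / 100, u_extinction)
    > u_alternative)  # False
```

The function names and all inputs here are invented for illustration; the only thing taken from the comment above is the three-part structure of the calculation.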

So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.

Comment author: orthonormal 01 October 2010 02:13:37PM 11 points

When someone says they're rethinking an estimate and don't want to give a number right now, I think that's respectable in the same way as someone who's considering a problem and refuses to propose solutions too soon. There's an anchoring effect that kicks in when you put down a number.

From my private communications with multifoliaterose, I believe he's acting in good faith by refusing to assign a number, for essentially that reason.

Comment author: jimrandomh 01 October 2010 02:28:23PM 5 points

The link to Human inability to assign numerical probabilities, and the distance into the future to which he deferred the request, gave me the impression that it was a matter of not wanting to assign a number at all, not merely deferring it until later. Thank you for pointing out the more charitable interpretation; you seem to have some evidence that I don't.

Comment author: multifoliaterose 01 October 2010 02:49:08PM * 2 points

Orthonormal correctly understands where I'm coming from. I feel that I have very poor information on the matter at hand and want to collect a lot more information before evaluating the cost-effectiveness of donating to SIAI relative to other charities. I fully appreciate your point that in the end it's necessary to make quantitative comparisons and plan on doing so after learning more.

I'll also say that I agree with rwallace's comment that rather than giving an estimate of the probability at hand, it's both easier and sufficient to give an estimate of

The relative magnitudes of the marginal effects of spending a dollar on X vs Y.

Comment author: multifoliaterose 01 October 2010 02:58:14PM 1 point

Thanks for articulating my thinking so accurately and concisely.

Comment author: rwallace 01 October 2010 01:25:15PM 6 points

Setting aside for the moment the other questions surrounding this topic, and addressing just your main point in this comment:

The fact of the matter is that we do not have the data to meaningfully estimate numbers like this, not even to an order of magnitude, not even to ten orders of magnitude, and it is best to admit this.

Fortunately, we don't need an order of magnitude to make meaningful decisions. What we really need to know, or at least try to guess with better than random accuracy, is:

  1. The sign (as opposed to magnitude) of the marginal effect of spending a dollar on X.

  2. The relative magnitudes of the marginal effects of spending a dollar on X vs Y.

Both of these are at least easier to reason about coherently than absolute magnitudes.
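The point about relative magnitudes can be made concrete: when comparing a marginal dollar spent on X against one spent on Y, any common scale factor in the (unknown) utility units cancels out of the comparison. The numbers below are hypothetical placeholders chosen only to illustrate that cancellation:

```python
# Hypothetical illustration: the X-vs-Y decision depends only on the
# sign and ratio of the marginal effects, not on their absolute scale.

def prefer_x(effect_x, effect_y):
    """True if a marginal dollar on X beats a marginal dollar on Y."""
    return effect_x > effect_y

# Marginal effects in some unknown common unit (made-up numbers):
effect_x, effect_y = 3.0, 1.0

# Rescaling both by the same unknown factor leaves the decision
# unchanged, so the absolute magnitudes never need to be estimated:
for scale in (1e-9, 1.0, 1e9):
    print(prefer_x(effect_x * scale, effect_y * scale))  # True each time
```

This is only a sketch of the cancellation argument, not a claim about what the actual marginal effects are.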

Comment author: CarlShulman 01 October 2010 07:57:52AM 14 points

The tone of the last paragraph seems uncalled for. I doubt that a unitary "order of magnitude estimation skill" is the key variable here. To put a predictive spin on this, I doubt that you'd find a very strong correlation between results in a Fermi calculation contest and estimates of the above probabilities among elite hard sciences PhD students.

Comment author: multifoliaterose 01 October 2010 05:13:35AM 12 points

All the thousands of words you've written avoid confronting the main point, which is whether people should donate to SIAI.

I agree that my most recent post does not address the question of whether people should donate to SIAI.

So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.

There are many ways in which I could respond here, but I'm not sure how to respond because I'm not sure what your intent is. Is your goal to learn more from me, to teach me something new, to discredit me in the eyes of others, or something else?

Comment author: jimrandomh 01 October 2010 02:06:17PM 1 point

Actually, my goal was to get you to give some numbers with which to test whether you've really updated in response to criticism, or are just signalling that you have. I had threshold values in mind, with associated interpretations. Unfortunately, it doesn't seem to have worked (I put you on the defensive instead), so the test is inconclusive.

Comment author: timtyler 01 October 2010 05:47:26PM * -1 points

The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction.

P(eventual human extinction) looks enormous - since the future will be engineered. It depends on exactly what you mean, though. For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Also, what is a "friendly AI"? Say a future machine intelligence looks back on history - and tries to decide whether what happened was "friendly". Is there some decision process it could use to determine this? If so, what is it?

At any rate, the whole analysis here seems misconceived. The "extinction of all humans" could be awful - or wonderful - depending on the circumstances and on the perspective of the observer. Values are not really objective facts that can be estimated and agreed upon.

Comment author: NancyLebovitz 01 October 2010 06:30:17PM 2 points

For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Or if all humans have voluntarily [1] changed into things we can't imagine?

[1] I sloppily assume that choice hasn't changed too much.