Comment author: Jiro 26 November 2014 03:51:32AM *  0 points

If "built" refers to building the AI itself rather than the AI building a torture simulator, then refusing to be blackmailed doesn't prevent the AI from being built. The building of the AI, and the AI's deduction that it should precommit to torture, are two separate events. It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

Comment author: wedrifid 26 November 2014 06:00:41AM -2 points

> It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

In this case "be blackmailed" means "contribute to creating the damn AI". That's the entire point. If enough people do contribute to creating it, then those who did not contribute get punished. The (hypothetical) AI is acausally creating itself by punishing those who don't contribute to creating it. If nobody contributes, then nobody gets punished.
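
As a toy illustration of the collective-action structure described here, consider the following minimal sketch. The population size, contribution threshold, and the `outcome` helper are all invented for illustration and appear nowhere in the thread:

```python
# Toy model: the AI gets built only if enough people give in to the
# blackmail and contribute. If built, it punishes the non-contributors.
# All numbers here are made up for illustration.

def outcome(contributors, population=100, threshold=50):
    """Return (ai_built, number_punished) for a given number of contributors."""
    ai_built = contributors >= threshold
    punished = population - contributors if ai_built else 0
    return ai_built, punished

print(outcome(60))  # (True, 40): enough people gave in, so the holdouts get punished
print(outcome(0))   # (False, 0): nobody gives in, no AI, nobody punished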

Comment author: ike 26 November 2014 03:17:05AM 0 points

I'll be sure to ask you the next time I need to write an imaginary comment.

It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Comment author: wedrifid 26 November 2014 05:32:08AM 1 point

> I'll be sure to ask you the next time I need to write an imaginary comment.

I wasn't the pedant. I was the tangential-pedantry analyzer. Ask Lumifer.

> It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Your comment was fine. It would be true of most people; I'm not sure whether Faul is one of the exceptions.

Comment author: ike 26 November 2014 02:59:28AM 1 point

Realistically speaking?

Comment author: wedrifid 26 November 2014 03:06:12AM *  3 points

> Realistically speaking?

Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim; our language doesn't handle it well. It's a realistic claim about the counterfactual response of a real brain to an unrealistic stimulus.

Comment author: dxu 25 November 2014 05:21:40AM 1 point

This seems weird to me. While I acknowledge that there are widespread social stigmas associated with broadcasting your own intelligence, it hardly seems productive to actively downplay your intelligence either. XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact. So it seems odd that he would choose to "repeatedly [claim] that he is not a smart person". Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

Comment author: wedrifid 26 November 2014 03:01:57AM *  3 points

> This seems weird to me.

It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.

> XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact.

I concur.

> Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth, that sincerity did influence my evaluation of his behaviour and personality. XiXiDu doesn't seem like a troll, even when he does things that trolls would also do. My impression is that I would like him if I knew him in person.

Comment author: Lumifer 26 November 2014 02:00:10AM 0 points

> Factually speaking

I don't think it's literally factually :-D

Comment author: wedrifid 26 November 2014 02:53:33AM *  3 points

> I don't think it's literally factually :-D

I think you're right. It's closer to, say... "serious counterfactually speaking".

Comment author: Viliam_Bur 25 November 2014 09:23:40AM *  9 points

False humility? Countersignalling? Depression? I don't want to attempt an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

Comment author: wedrifid 26 November 2014 02:52:04AM 2 points

> False humility? Countersignalling? Depression? I don't want to attempt an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

From the context I ruled out countersignalling, and for what it is worth, my impression was that the humility was real, not false. Given that I err on the side of cynicism regarding hypocrisy, and had found some of XiXiDu's comments disruptive, I give my positive evaluation of Xi's sincerity some weight.

I agree that the hypothesis of low intelligence is implausible despite the testimony. Additional possible factors I considered:

  • A specific weakness (e.g. ADHD, dyslexia, or something less common) that produced low self-esteem about intelligence despite an overall respectable g.
  • Perfectionistic or obsessive tendencies which would lead to harsh self-judgements relative to an unrealistic ideal. (Potentially similar to the kind of tendencies which would cause the idealism failure mode described in the opening post.)
  • Not realising just how stupid 'average' is. (This is a common error. This wasn't the first time I've called 'bullshit' on a claim of below-average IQ. Associating with highly educated nerds really biases the sample.)

> (Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

That would have been more accurate, but no, the context ruled that out.

I'm curious whether XiXiDu's confidence/objective self-evaluation has changed over the intervening years. I hope it has.

Comment author: Stuart_Armstrong 25 November 2014 10:06:28PM 11 points

I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, the other on x-risks and AI.

Comment author: wedrifid 26 November 2014 02:34:28AM 1 point

> I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, the other on x-risks and AI.

I'm impressed. (And will look them up when I get a chance.)

Comment author: bogus 26 November 2014 01:17:33AM 1 point

> I don't think you understand acausal trade.

For what it's worth, I don't think anybody understands acausal trade. And I don't claim to understand it either.

Comment author: wedrifid 26 November 2014 02:33:16AM -1 points

> For what it's worth, I don't think anybody understands acausal trade.

It does get a tad tricky when combined with things like logical uncertainty and potentially multiple universes.

Comment author: Jiro 25 November 2014 10:16:43AM *  1 point

Precommitment isn't meaningless here just because we're talking about acausal trade. What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was. As long as it is irreversibly in the state "AI that will simulate and torture people who don't give in to blackmail" while your decision whether to give in to blackmail is still inside a box that it has not yet opened, that serves as a precommitment.

(If you are thinking "the AI is already in or not in the world where the human refuses to submit to blackmail, so the AI's precommitment cannot affect the measure of such worlds": it can "affect" that measure acausally, the same way that deciding whether to one-box or two-box in Newcomb's problem can "affect" the contents of the boxes.)

If you could precommit to not giving in to blackmail before you analyze what the AI's precommitment would be, you could escape this doom, but as a mere human, you are probably not capable of binding your future post-analysis self this way. (Your human fallibility can, of course, precommit you by making you into an imperfect thinker who never gives in to acausal blackmail because he can't or won't analyze the Basilisk to its logical conclusion.)
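
A toy payoff model may make the structure of this argument clearer. Everything below is a hypothetical sketch: the payoff numbers and the two human dispositions ("exploitable" and "refuser") are invented for illustration and appear nowhere in the thread.

```python
# Toy model of the blackmail game: the AI fixes its disposition
# (precommit to torture refusers, or not) before learning the human's
# disposition. Payoff numbers are made up for illustration.

TORTURE_COST = 1     # cost to the AI of carrying out a pointless threat
BLACKMAIL_GAIN = 10  # what the AI gains if the human gives in

def gives_in(ai_precommits, disposition):
    """'exploitable' gives in iff faced with a credible precommitment;
    'refuser' never gives in, no matter what the AI has done."""
    return disposition == "exploitable" and ai_precommits

def ai_payoff(ai_precommits, disposition):
    if gives_in(ai_precommits, disposition):
        return BLACKMAIL_GAIN
    # Human refused: a precommitted AI must carry out the threat at a loss.
    return -TORTURE_COST if ai_precommits else 0

for disposition in ("exploitable", "refuser"):
    for ai_precommits in (True, False):
        print(disposition, ai_precommits, ai_payoff(ai_precommits, disposition))

# Output:
#   exploitable True 10    (precommitting pays against someone who would give in)
#   exploitable False 0
#   refuser True -1        (against a fixed refuser the threat is pure loss)
#   refuser False 0
```

On these assumed numbers, already being the kind of agent that refuses regardless is exactly what makes the precommitment unprofitable, which is the escape route described above.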

Comment author: wedrifid 26 November 2014 01:03:13AM -1 points

> Precommitment isn't meaningless here just because we're talking about acausal trade.

Except in special cases which do not apply here, yes, it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)

> What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.

The timing of this kind of decision is irrelevant.

Comment author: Jiro 25 November 2014 03:26:18PM *  0 points

> we can also reasonably know that since we refuse, it doesn't get built in the first place.

The key is that the AI precommits to building it whether we refuse or not.

If we actually do refuse, this precommitment ends up being bad for the AI, since it builds the simulator without any gain. However, the precommitment, by preventing us from saying "if we refuse, it doesn't get built", also decreases the measure of worlds where the AI builds the simulator without gaining anything.

Comment author: wedrifid 26 November 2014 12:59:19AM 0 points

> The key is that the AI precommits to building it whether we refuse or not.

The 'it' that bogus is referring to is the torture-AI itself. You cannot precommit to things until you exist, no matter your acausal reasoning powers.
