Comment author: Brillyant 26 November 2014 09:32:38PM -2 points

I see.

by threatening or hypnotising a human

This is the gist of the AI Box experiment, no?

Comment author: wedrifid 26 November 2014 09:51:48PM 0 points

This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

Comment author: Jiro 26 November 2014 03:18:39PM 0 points

In this case "be blackmailed" means "contribute to creating the damn AI".

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI that decided the good from bringing an FAI into the world a few days earlier (saving ~150,000 lives per day earlier it gets here)". The AI acausally blackmails people into building it sooner, not into building it at all. So failing to give in to the blackmail results in the AI still being built, just later, and it is still capable of punishing people.

Comment author: wedrifid 26 November 2014 09:46:08PM -2 points

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI

I don't know who you are quoting, but they are someone who considers AIs that will torture me to be friendly. They are confused in a way that is dangerous.

The AI acausally blackmails people into building it sooner, not into building it at all.

It applies to both: causing itself to exist at a different point in time, and causing itself to exist at all. I've explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when there are other humans who can defect and create the torture-AI anyhow.

You asked "How could it?". You got an answer. Your rhetorical device fails.

Comment author: MrMind 26 November 2014 11:02:36AM 0 points

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Comment author: wedrifid 26 November 2014 12:34:07PM 2 points

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
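
For concreteness, here is a minimal sketch in Python of why talk alone doesn't move a CDT agent (the payoff numbers and the cdt_choice helper are invented for illustration; they just encode the standard prisoner's dilemma ordering). A CDT agent treats the opponent's move as causally fixed at decision time, so anything said beforehand never enters its calculation and the dominant action wins:

    # Toy one-shot prisoner's dilemma; payoff numbers invented for illustration.
    # PAYOFF[(mine, theirs)] is my payoff; "C" = cooperate, "D" = defect.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def cdt_choice(transcript):
        """A CDT agent: the opponent's move is taken as causally fixed, so the
        pre-play conversation is ignored and the dominant action is played."""
        _ = transcript  # talk is causally inert for a CDT agent; that's the point
        for theirs in ("C", "D"):
            assert PAYOFF[("D", theirs)] > PAYOFF[("C", theirs)]  # "D" dominates
        return "D"

    # Let two CDT agents "talk" as much as they like before choosing.
    transcript = ["I promise to cooperate.", "Me too, honestly."]
    a, b = cdt_choice(transcript), cdt_choice(transcript)
    print(a, b, PAYOFF[(a, b)], PAYOFF[(b, a)])  # D D 1 1: mutual defection

A TDT-style agent differs not by having extra channels of communication but by conditioning on the fact that both players instantiate the same algorithm, so choosing to cooperate settles its counterpart's choice as well.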

Comment author: ThisSpaceAvailable 26 November 2014 09:09:59AM 0 points

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Comment author: wedrifid 26 November 2014 11:46:57AM -1 points

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).

Comment author: Jiro 26 November 2014 03:51:32AM * 0 points

If "built" refers to building the AI itself rather than the AI building a torture simulator, then refusing to be blackmailed doesn't prevent the AI from being built. The building of the AI, and the AI's deduction that it should precommit to torture, are two separate events. It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

Comment author: wedrifid 26 November 2014 06:00:41AM -2 points

It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

In this case "be blackmailed" means "contribute to creating the damn AI". That's the entire point. If enough people do contribute to creating it then those that did not contribute get punished. The (hypothetical) AI is acausally creating itself by punishing those that don't contribute to creating it. If nobody does then nobody gets punished.

Comment author: ike 26 November 2014 03:17:05AM 0 points

I'll be sure to ask you the next time I need to write an imaginary comment.

It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Comment author: wedrifid 26 November 2014 05:32:08AM 1 point

I'll be sure to ask you the next time I need to write an imaginary comment.

I wasn't the pedant. I was the tangential-pedantry analyzer. Ask Lumifer.

It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Your comment was fine. It would be true of most people; I'm not sure if Faul is one of the exceptions.

Comment author: ike 26 November 2014 02:59:28AM 1 point

Realistically speaking?

Comment author: wedrifid 26 November 2014 03:06:12AM * 3 points

Realistically speaking?

Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim; our language doesn't handle it well. It's a realistic claim about the counterfactual response of a real brain to an unrealistic stimulus.

Comment author: dxu 25 November 2014 05:21:40AM 1 point

This seems weird to me. While I acknowledge that there are widespread social stigmas associated with broadcasting your own intelligence, it hardly seems productive to actively downplay your intelligence either. XiXiDu does not strike me as someone who is of average or below-average intelligence. Quite the opposite, in fact. So it seems odd that he would choose to "repeatedly [claim] that he is not a smart person". Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

Comment author: wedrifid 26 November 2014 03:01:57AM * 3 points

This seems weird to me.

It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.

XiXiDu does not strike me as someone who is of average or below-average intelligence. Quite the opposite, in fact.

I concur.

Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth, that sincerity did influence my evaluation of his behaviour and personality. XiXiDu doesn't seem like a troll, even when he does things that trolls also would do. My impression is that I would like him if I knew him in person.

Comment author: Lumifer 26 November 2014 02:00:10AM 0 points

Factually speaking

I don't think it's literally factually :-D

Comment author: wedrifid 26 November 2014 02:53:33AM * 3 points

I don't think it's literally factually :-D

I think you're right. It's closer to, say... "serious counterfactually speaking".

Comment author: Viliam_Bur 25 November 2014 09:23:40AM * 9 points

False humility? Countersignalling? Depression? I don't want to attempt an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

Comment author: wedrifid 26 November 2014 02:52:04AM 2 points

False humility? Countersignalling? Depression? I don't want to attempt an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

From the context I ruled out countersignalling, and for what it is worth my impression was that the humility was real, not false. Given that I err on the side of cynicism regarding hypocrisy and had found some of XiXiDu's comments disruptive, I give my positive evaluation of Xi's sincerity some weight.

I agree that the hypothesis of low intelligence is implausible despite the testimony. Additional possible factors I considered:

  • Specific weakness in intelligence (e.g. ADHD, dyslexia or something less common) that produced low self-esteem about intelligence despite overall respectable g.
  • Perfectionistic or obsessive tendencies which would lead to harsh self-judgements relative to an unrealistic ideal. (Potentially similar to the kind of tendencies which would cause the idealism failure mode described in the opening post.)
  • Not realising just how stupid 'average' is. (This is a common error. This wasn't the first time I've called 'bullshit' on claims of below-average IQ. Associating with highly educated nerds really biases the sample.)

(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

That would have been more accurate, but no, the context ruled that out.

I'm curious whether XiXiDu's confidence/objective self-evaluation has changed over the intervening years. I hope it has.
