Luke_A_Somers

Comments

Why Bayesians should two-box in a one-shot
Luke_A_Somers · 8y

If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe our brains have easily enough determinism for Omega to make predictions, much as quantum mechanics in some sense ought to prevent predicting where a cannonball will fly but in practice does not. Perhaps it's a hypothetical where we're AIs to begin with, so deterministic behavior is just to be expected.

Why Bayesians should two-box in a one-shot
Luke_A_Somers · 8y

I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if it's basically random.
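
For concreteness, here is a minimal sketch (mine, not from the thread) of the evidential expected-value comparison, assuming the standard Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box iff Omega predicted one-boxing:

```python
# Expected value of each choice as a function of predictor accuracy p,
# under the standard Newcomb payoffs (an assumption of this sketch).

def one_box_ev(p):
    # Omega correctly predicts one-boxing with probability p,
    # so the opaque box is full with probability p.
    return p * 1_000_000

def two_box_ev(p):
    # Omega correctly predicts two-boxing with probability p,
    # so the opaque box is full only when the prediction misses.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.6, 0.9):
    print(f"p={p}: one-box ${one_box_ev(p):,.0f}, two-box ${two_box_ev(p):,.0f}")
```

At p = 0.5 the prediction carries no information and two-boxing is simply $1,000 better; the evidential break-even sits at p = 0.5005.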

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
Luke_A_Somers · 8y

… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
Luke_A_Somers · 8y

I suspect that an AI will have a bullshit detector. We want to avoid setting it off.

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
Luke_A_Somers · 8y

I read up to 3.1. The arguments in 3.1 are weak. It seems dubious that any AI would be unaware of the risks of disobedience. Persuading it to be corrigible seems to come too late: either this would already work because its goals were made sufficiently indirect that the question is obvious and pressing, or it doesn't care to have 'correct' goals in the first place; I really don't see how persuasion would help. The arguments for allowing itself to be turned off are especially weak, doubly so the one from MWI.

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
Luke_A_Somers · 8y

See: my first post on this site.

Fables grow around missed natural experiments
Luke_A_Somers · 8y

What do you mean by 'natural experiment' here? And what was the moral, anyway?

Toy model of the AI control problem: animated version
Luke_A_Somers · 8y

I remember poking at that demo to try to actually get it to behave deceptively - with the rules as he laid them out, the optimal move was to do exactly what the humans wanted it to do!

The Reality of Emergence
Luke_A_Somers · 8y

> I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

The Reality of Emergence
Luke_A_Somers · 8y

I would really want a cite on that claim. It doesn't sound right.

Posts

3 · The Backup Plan · 14y · 35 comments