ArisKatsaris comments on You're in Newcomb's Box - Less Wrong
Comments (172)
I don't get it. Is this supposed to be some weird form of evidential or maybe timeless decision theory? It hardly matters: whatever decision theory you're using, you already know you exist, so conditioning on the possibility that you don't is nonsensical. Hell, even if you're an AI using UDT, you gain nothing from not assuming you exist. You were built not to update in the normal sense because whoever built you cared about all the possible worlds you might end up in; but regardless, if you're standing there making the decision, you exist (i.e. this can be assumed at the start and taken into account).
Edit: Just for the purpose of explicitness, I should probably state that the conclusion here is that you should two-box in this case.
And so as to demonstrate that the first part of the post is controversial enough to be interesting: Sniffnoy is wrong; you are better off one-boxing.
Rationalists should win.
In this scenario two-boxers get $200 and exist, while one-boxers get $100 and exist.
Two-boxers will be numerically fewer, because Prometheus is biased in favour of irrationality, but it will nonetheless be the two-boxers who win. That's the opposite of the two-boxers' fate in the standard Newcomb problem.
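The payoff comparison above can be sketched in a few lines; a toy illustration, using the hypothetical $100/$200 figures from this thread and the commenter's premise that anyone actually making the decision already exists, so existence carries no weight at decision time:

```python
# Payoffs for the Prometheus variant as described in the comment above.
# Existence is identical across both options for the decider, so only
# the dollar amounts differ.
payoffs = {"one-box": 100, "two-box": 200}  # dollars (hypothetical values)

# With existence off the table, the decision reduces to picking the
# higher payoff.
best = max(payoffs, key=payoffs.get)
print(best)  # prints "two-box"
```

This is just the commenter's argument made explicit: once you condition on already existing, the comparison is plain dominance, unlike standard Newcomb where the predictor's accuracy links your choice to the box contents.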
Nice icon, though my reasoning is the exact opposite of the Quantum Suicide reasoning. I have no shared identity with the people who would one-box here, so I don't need to one-box to increase their chances of having existed -- if anything, such an action would increase the stupidity levels in the multiverse.
Even a one-boxer would have to be particularly weird to want to increase the amplitude of his universe's configuration, as if that would affect his own life at all.
Quantum Suicide, on the other hand, assumes a shared identity between the people who'll die, the people who'll suffer permanent brain damage with a bullet lodged in their brain, and the people who'll have their consciousness magically copied by magical aliens before they kill themselves. I don't assume shared identity, and that's why I two-box here; quantum suiciders do assume it, and that's why they fail.
Tangential Question: Would it be good or bad for the world if 4chan picked this up as a meme?
Which meme, MS Clippy jokes or quantum suicide?
I'm fine with 4channers picking up quantum suicide, especially since to me it will almost always look like regular suicide.
The Friendly AI must be kept away from 4chan at all costs.
FAIs don't run away from hard problems.
I should have been more specific.
I'm not wondering whether interacting with 4chan would poison the mind of a specific software construct. I'm wondering whether the long-term political consequences would be good or bad if the 4chan community picked up the generic technique of adding photoshopped text to MS Clippy images as a joke-generating engine that repurposes LW's themes and content (probably sometimes in troll-like or deprecating ways).
Would it raise interesting emotional critiques of moral arguments? Would it poison the discourse with jokes and confusion? Would it bring new people here with worthwhile insights? Would it reduce/increase the seriousness with which the wider world took AGI research... and which of those outcomes is even preferred?
I still don't really have a good theory of what kinds of mass opinion on the subject of FAI are possible or desirable, and when I see something novel like the Clippy image it sometimes makes me try to re-calculate the public-relations angle of singularity stuff.
That is brilliant. Did you create it manually?
Thanks, I did. I'm sure there are generators for it, though.