Is Omega even necessary to this problem?
I would consider transferring control to staply if and only if I were sure that staply would make the same decision were our positions reversed (in this way it's reminiscent of the prisoner's dilemma). If I were so convinced, then shouldn't I consider staply's argument even in a situation without Omega?
If staply is in fact using the same decision algorithms I am, then he shouldn't even have to voice the offer. I should arrive at the conclusion that he should control the universe as soon as I learn that the universe can produce more staples than paperclips, whether that comes as a revelation from Omega or as a result of cosmological research.
My intuition rebels at this conclusion, but I think it's being misled by heuristics. A human could not convince me of this proposal, but that's because I can't know we share decision algorithms (i.e. that s/he would definitely do the same in my place).
This looks to me like a prisoner's dilemma problem where expected utility depends on a logical uncertainty. I think I would cooperate with prisoners who have different utility functions as long as they share my decision theory.
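To make the symmetry concrete, here's a minimal toy sketch (my own illustration, not anything from the original problem): an agent cedes control if and only if it can verify the counterparty runs the identical decision procedure and the universe yields more of the counterparty's resource than its own. The function name, parameters, and the example numbers are all hypothetical.

```python
# Toy model: two agents with different utility functions but the same
# decision procedure. Each cedes control iff it can verify the counterparty
# runs this exact procedure, so the choice is symmetric by construction.

def decide(my_resource_yield, their_resource_yield, same_decision_algorithm):
    """Return 'cede' if the other agent should control the universe.

    my_resource_yield / their_resource_yield: how much of each agent's
    favored resource the universe can produce (e.g. paperclips vs. staples).
    same_decision_algorithm: True iff I can verify the counterparty would
    run this same procedure with our positions reversed.
    """
    if not same_decision_algorithm:
        return "keep"  # no guarantee of reciprocity, so keep control
    # Symmetric rule: whoever's resource the universe produces more of wins.
    return "cede" if their_resource_yield > my_resource_yield else "keep"

# A clippy-style agent facing a staply-style agent (illustrative numbers):
print(decide(my_resource_yield=1e15,        # paperclips the universe could make
             their_resource_yield=2e15,     # staples the universe could make
             same_decision_algorithm=True))  # -> "cede"
```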
(Disclaimers: I have read most of the relevant LW posts on these topics, but have never jumped into discussion on them and claim no expertise. I would appreciate corrections if I misunderstand anything.)
This is something that I think overlaps a number of your suggestions, but doesn't exactly match any of them:
If you're making a generalization, check it for scope. How much knowledge of what you're generalizing about do you actually have? Could conditions have changed? How representative are the examples you're drawing your conclusions from?
I agree. I've noticed an especially strong tendency toward premature generalization (including in myself) in response to people asking for advice. Tell people what your experiences were, not (just) the general conclusions you drew from them.