Andrew Jacob Sauer
Andrew Jacob Sauer has not written any posts yet.

In this case the only reason the money pumping doesn't work is that Omega is unable to choose its policy based on its prediction of your second decision: if it could, you would want to switch back to b, because if you chose a, Omega would know that and you'd get 0 payoff. This makes the situation after the coinflip different from the original problem, where Omega is able to see your decision and make its decision based on that.
In the Allais problem as stated, there's no particular reason why the situation where you get to choose between $24,000, or $27,000 with 33/34 chance, should differ depending on whether someone simply offered it to you, or offered it only after you rolled 34 or less on a d100.
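To see the equivalence concretely: a 34% chance of reaching the choice, times a 33/34 chance of winning, is exactly a 33% chance overall. Here is a minimal sketch in Python using the payoffs from the problem as stated (the simulation code itself is just my illustration):

```python
import random

def conditional_offer(choose_b, trials=100_000):
    """Average payoff when the choice is only offered after
    rolling <= 34 on a d100."""
    total = 0
    for _ in range(trials):
        if random.randint(1, 100) <= 34:           # the offer happens at all
            if choose_b:
                if random.randint(1, 34) <= 33:    # 33/34 chance of $27,000
                    total += 27_000
            else:
                total += 24_000                    # certain $24,000
    return total / trials

# Compare with the direct gambles: 34% of $24,000 vs 33% of $27,000.
print(conditional_offer(False), 0.34 * 24_000)  # both ~8160
print(conditional_offer(True),  0.33 * 27_000)  # both ~8910
```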
My worry with automation isn't that it will destroy the intrinsic value of human endeavors, but rather that it will destroy the economic value of the average person's endeavors. I agree that human art is still valuable even if AI can make better art. My concern is that under the current system of production, where people must contribute to society in a competitive way in order to secure an income and a living for themselves, full automation will be materially harmful to everyone who doesn't own the automated systems.
Is everybody's code going to be in Python?
What are the rules about program runtime?
A common concern around here seems to be that, without massive and delicate breakthroughs in our understanding of human values, any superintelligence will destroy all value by becoming some sort of paperclip optimizer. This is what Eliezer claims in Value is Fragile. Therefore, any vision of the future that manages to do better than this without requiring huge philosophical breakthroughs (in particular, a future in which we don't figure out how to implement CEV before the Singularity happens) is encouraging to me as a proof of concept for how the future might be more likely to go well.
In a future where uploading minds into virtual worlds becomes possible before an AI takeover, there might well...
Thanks for the link, I will check it out!
As for cannibalism, it seems to me that its role in Eliezer's story is to trigger a purely illogical revulsion in the humans who anthropomorphise the aliens.
I dunno about you, but my problem with the aliens isn't the cannibalism; it's that the vast majority of them die slow and horribly painful deaths.
No cannibalism takes place, but the same amount of death and suffering is present as in Eliezer's scenario. Should we be less or more revolted at this?
The same.
Which scenario has the greater moral weight?
Neither. They are both horrible.
Should we say the two-species configuration is morally superior because they've developed a peaceful, stable society with two intelligent species coexisting instead of warring and hunting each other?
Not really, because most of them still die slow and horribly painful deaths.
Sorry to necro this here, but I find this topic extremely interesting and I keep coming back to this page to stare at it and tie my brain in knots. Thanks for your notes on how it works in the logically uncertain case. I found a different objection based on the assumption of logical omniscience:
Regarding this, you say:
Perhaps you think that the problem with the above version is that I assumed logical omniscience. It is unrealistic to suppose that agents have beliefs which perfectly respect logic. (Un)Fortunately, the argument doesn't really depend on this; it only requires that the agent respects proofs which it can see, and...
That's what I was thinking. Garbage in, garbage out.
That's beside the point. In the first case you'd take 1A in the first game and 2A in the second (a 34% chance of living is better than 33%). In the second case, if you bothered to play at all, you'd probably take 1B/2B. What doesn't make sense is taking 1A and 2B. That policy is inconsistent no matter how you value different amounts of money (unless you don't care about money at all, in which case do whatever; the paradox is better illustrated with something you do care about), so things like risk, capital cost, diminishing returns etc. are beside the point.
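To spell out the inconsistency: gamble 2X is just "a 34% chance of playing gamble 1X, nothing otherwise", so for any utility function u, EU(2A) - EU(2B) = 0.34 * (EU(1A) - EU(1B)), and the two comparisons must point the same way. Here is a minimal sketch in Python checking this; the particular utility functions are just illustrative assumptions:

```python
import math

def expected_utilities(u):
    """Expected utilities of the four Allais gambles under utility u."""
    eu_1a = u(24_000)
    eu_1b = (33 / 34) * u(27_000) + (1 / 34) * u(0)
    eu_2a = 0.34 * u(24_000) + 0.66 * u(0)
    eu_2b = 0.33 * u(27_000) + 0.67 * u(0)
    return eu_1a, eu_1b, eu_2a, eu_2b

# Illustrative utility functions: strongly risk-averse, mildly
# risk-averse, risk-neutral, risk-seeking.
for u in (math.log1p, math.sqrt, lambda x: x, lambda x: x ** 2):
    eu_1a, eu_1b, eu_2a, eu_2b = expected_utilities(u)
    # eu_2a - eu_2b always equals 0.34 * (eu_1a - eu_1b), so the two
    # comparisons agree and no u can make the 1A/2B policy optimal.
    print(eu_1a > eu_1b, eu_2a > eu_2b)
```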