In this case the only reason the money pumping doesn't work is that Omega is unable to choose its policy based on its prediction of your second decision: if it could, you would want to switch back to b, because if you chose a, Omega would know that and you'd get 0 payoff. This makes the situation after the coinflip different from the original problem, where Omega is able to see your decision and make its decision based on that.
In the Allais problem as stated, there's no particular reason why the situation where you get to choose between $24,000, or $27,000 with 33/34 chance, differs depending on whether someone just offered it to you, or offered it to you only after you got <=34 on a d100.
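To put numbers on why the two framings ought to be interchangeable, here's a minimal sketch (treating the d100 roll as an independent 34% gate is my reading of the setup, and the dollar figures are the ones from the original problem):

```python
# Seen from *after* a roll of <=34, the choice is exactly the original one:
p_24k_after_roll = 1.0        # take the sure $24,000
p_27k_after_roll = 33 / 34    # or the 33/34 shot at $27,000

# Seen from *before* the roll, multiply through by the 34% chance of
# getting to choose at all -- which reproduces the 34%-vs-33% version:
p_roll = 34 / 100
print(p_roll * p_24k_after_roll)  # 0.34 chance of $24,000
print(p_roll * p_27k_after_roll)  # 0.33 chance of $27,000
```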
My worry with automation isn't that it will destroy the intrinsic value of human endeavors, but rather that it will destroy the economic value of the average person's endeavors. I agree that human art is still valuable even if AI can make better art. My concern is that under the current system of production, where people must contribute to society in a competitive way in order to secure an income and a living for themselves, full automation will be materially harmful to everyone who doesn't own the automated systems.
A common concern around here seems to be that, without massive and delicate breakthroughs in our understanding of human values, any superintelligence will destroy all value by becoming some sort of paperclip optimizer. This is what Eliezer claims in Value is Fragile. Therefore, any vision of the future that manages to do better than this without requiring huge philosophical breakthroughs (in particular, a future in which we still don't know how to implement CEV when the Singularity happens) is encouraging to me as a proof of concept for how the future might be more...
As for cannibalism, it seems to me that its role in Eliezer's story is to trigger a purely illogical revulsion in the humans who anthropomorphise the aliens.
I dunno about you, but my problem with the aliens isn't the cannibalism; it's that the vast majority of them die slow and horribly painful deaths.
No cannibalism takes place, but the same amount of death and suffering is present as in Eliezer's scenario. Should we be less or more revolted at this?
The same.
Which scenario has the greater moral weight?
Neither. They are both horr...
Sorry to necro this here, but I find this topic extremely interesting and I keep coming back to this page to stare at it and tie my brain in knots. Thanks for your notes on how it works in the logically uncertain case. I found a different objection based on the assumption of logical omniscience:
Regarding this you say:
Perhaps you think that the problem with the above version is that I assumed logical omniscience. It is unrealistic to suppose that agents have beliefs which perfectly respect logic. (Un)Fortunately, the argument doesn't really depend...
Sorry to necro this here, but I find this topic extremely interesting and I keep coming back to this page to stare at it and tie my brain in knots.
(As for this, I think a major goal of LessWrong -- and the alignment forum -- is to facilitate sustained intellectual progress; a subgoal of that is that discussions can be sustained over long periods of time, rather than flitting about as would be the case if we only had attention for discussing the most recent posts!!)
From an omniscient point of view, yes. From my point of view, probably not, but there are still problems that arise relating to this, that can cause logic-based agents to get very confused.
Let A be an agent, considering options X and not-X. Suppose A |- Action=not-X -> Utility=0. The naive approach to this would be to say: if A |- Action=X -> Utility<0, A will do not-X, and if A |- Action=X -> Utility>0, A will do X. Suppose further that A knows its source code, so it knows this is the case.
Consider the statement G=(A |- G) -> (Action=X -...
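As a toy rendering of the naive rule described above, with a hard-coded stub standing in for an actual proof search (the stub and all names are my own illustration, not part of the original argument):

```python
def proves(statement):
    """Stub for 'A |- statement'. A real agent would search for proofs in
    its formal system; here one belief is simply hard-coded for the demo."""
    return statement == "Action=X -> Utility>0"

def naive_agent():
    # The naive rule: act on whichever utility comparison about X is provable.
    if proves("Action=X -> Utility>0"):
        return "X"
    if proves("Action=X -> Utility<0"):
        return "not-X"
    return "not-X"  # fall back to the option whose utility (0) is known

print(naive_agent())  # "X" under the stub above
```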
Suppose you learn about physics and find that you are a robot. You learn that your source code is "A". You also believe that you have free will; in particular, you may decide to take either action X or action Y.
My motivation for talking about logical counterfactuals has little to do with free will, even if the philosophical analysis of logical counterfactuals does.
The reason I want to talk about logical counterfactuals is as follows: suppose as above that I learn that I am a robot, and that my source code is "A" (which is presumed to...
Seems to me that before a philosophical problem is solved, it becomes a problem in some other field of study. Atomism used to be a philosophical theory. Now that we know how to objectively confirm it, it (or rather, something similar but more accurate) is a scientific theory.
It seems that philosophy (at least, the parts of philosophy that are actively trying to progress) is about trying to take concepts that we have intuitive notions of, and figure out what if anything those concepts actually refer to, until we succeed at this well enough that to study the...
When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.
I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe," if you modify the statement a little to say "anywhere else existent" in order to acknowledge that the operation of tho...
Perhaps in many cases, if "X wants Y" then that means X will do or bring about Y unless it is prevented by something external. In some cases X is an unconscious optimization procedure, which therefore "wants" the thing that it is optimizing; in other cases X is the output of some optimization procedure, as in the case of a program that "wants" to complete its task or a microorganism that "wants" to reproduce; but optimization is not always involved, as illustrated by "high-pressure gas wants to expand".
I think an important consideration is the degree of catastrophe. Even the asteroid strike, which is catastrophic to many agents on many metrics, is not catastrophic on every metric, not even every metric humans actually care about. An easy example of this is prevention of torture, which the asteroid impact accomplishes quite smoothly, along with almost every other negative goal. The asteroid strike is still very bad for most agents affected, but it could be much, much worse, as with the "evil" utility function you alluded to, which is very bad f...
But, over the lifetime of civilization, our accumulated experience led us to update this prior, and single out the complexity measure suggested by math.
I may be picking nits here, but what exactly does it mean to "update a prior"?
And as a mathematical consideration, is it in general possible to switch your probabilities from one (limit computable) universal prior to another with a finite amount of evidence?
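For reference, the mechanical operation I'd normally attach to the phrase is just Bayes' rule, with the posterior serving as the prior for the next observation; here's a toy discrete sketch (the coin-bias hypotheses and numbers are made up, and this of course doesn't touch the harder question about universal priors):

```python
# Two hypotheses about a coin, with prior weights.
prior = {"fair": 0.5, "heads_biased": 0.5}
likelihood = {"fair": {"H": 0.5, "T": 0.5},
              "heads_biased": {"H": 0.9, "T": 0.1}}

def update(belief, obs):
    """Multiply each hypothesis by its likelihood for obs, then renormalize."""
    unnorm = {h: p * likelihood[h][obs] for h, p in belief.items()}
    total = sum(unnorm.values())
    return {h: w / total for h, w in unnorm.items()}

belief = prior
for flip in "HHTH":   # the 'updated prior' after each flip feeds the next
    belief = update(belief, flip)
print(belief)
```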
Uh, if you're worried about UFAI I'd be more concerned about your digital footprint. The concern with UFAI is that it might decide to torture a clone of you (who isn't the same as you unless the UFAI has a ton of other information about you, which is a separate thing) instead of somebody else. It doesn't seem that much worse from a selfless or selfish point of view.
This is one of those things that seems obvious but it did cause some things to click for me that I hadn't thought of before. Previously my idea of AGI becoming uncontrollable was basically that somebody would make a superintelligent AGI in a box, and we would be able to unplug it anytime we wanted, and the real danger would be the AGI tricking us into not unplugging it and letting it out of the box instead. What changed this view was this line: "Try to unplug Bitcoin." Once you think of it that way it does seem pretty obvious that the most p...
I think that fully specifying human values may not be the best approach to an AI utopia. Rather, I think it would be easier and safer to tell the AI to upload humans and run an Archipelago-esque simulated society in which humans are free to construct and search for the society they want, free from many practical problems in the world today such as resource scarcity.
We're talking about the impact of an event though. The very question is only asking about worlds where the event actually happens.
If I don't know whether an event is going to happen and I want to know the impact it will have on me, I compare futures where the event happens to my current idea of the future, based on observation (which also includes some probability mass for the event in question, but not certainty).
In summary, I'm not updating to "X happened with certainty"; rather, I am estimating the utility in that counterfactual case.
Rot13:
Gur vzcnpg bs na rirag ba lbh vf gur qvssrerapr orgjrra gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba tvira pregnvagl gung gur rirag jvyy unccra, naq gur pheerag rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba.
Zber sbeznyyl, jr fnl gung gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba vf gur fhz, bire nyy cbffvoyr jbeyqfgngrf K, bs C(K)*H(K), juvyr gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba tvira pregnvagl gung n fgngrzrag R nobhg gur jbeyq vf gehr vf gur fhz bire nyy cbffvoyr jbeyqfgngrf K bs C(K|R)*H(K). Gur vzcnpg bs R orvat gehr, gura, vf gur nofbyhgr inyhr bs gur qvssrerapr bs gubfr gjb dhnagvgvrf.
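A minimal numeric sketch of that comparison (the worldstates, probabilities, and utilities are toy values made up for illustration):

```python
# Toy worldstates with prior probabilities P(X) and utilities U(X).
P = {"w1": 0.6, "w2": 0.3, "w3": 0.1}
U = {"w1": 10.0, "w2": 0.0, "w3": -5.0}
# Which worldstates are consistent with the statement E being true.
consistent_with_E = {"w1": False, "w2": True, "w3": True}

expected_u = sum(P[x] * U[x] for x in P)

# Condition on E: drop inconsistent worlds and renormalize.
pE = sum(P[x] for x in P if consistent_with_E[x])
expected_u_given_E = sum(P[x] / pE * U[x] for x in P if consistent_with_E[x])

impact_of_E = abs(expected_u_given_E - expected_u)
print(expected_u, expected_u_given_E, impact_of_E)
```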
The proof doesn't work on a logically uncertain agent. The logic fails here:
Examining the source code of the agent, because we're assuming the agent crosses, either PA proved that crossing implies U=+10, or it proved that crossing implies U=0.
A logically uncertain agent does not need a proof of either of those things in order to cross; it simply needs a positive expectation of utility, for example a heuristic which says that there's a 99% chance that crossing implies U=+10 (see the sketch below).
Though you did say there's a version which still works for logical ...
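Here's the sketch of that expected-utility decision; the -10 for the bad outcome and the 0 for not crossing are my assumptions, taken from the usual Troll Bridge payoffs rather than stated above:

```python
# A logically uncertain agent deciding whether to cross.
p_good = 0.99            # heuristic credence that crossing yields U = +10
u_good, u_bad = 10, -10  # assumed payoffs (standard Troll Bridge numbers)
u_not_cross = 0

eu_cross = p_good * u_good + (1 - p_good) * u_bad   # = 9.8
print("cross" if eu_cross > u_not_cross else "don't cross")
```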
The Riemann argument seems to differ from the Great Filter argument in this way: the Riemann argument depends only on the sheer number of observers, i.e. the only thing you're taking into account is the fact that you exist. Whereas in the great filter argument you're updating based on what kind of observer you are, i.e. you're intelligent but not a space-travelling, uploaded posthuman.
The first kind of argument doesn't work because somebody exists either way: if the RH or whatever is false then you are one of a small number, if it'...
Seems to me that if an agent with a reasonable heuristic for logical uncertainty came upon this problem, and was confident but not certain of its consistency, it would simply cross because expected utility would be above zero, which is a reason that doesn't betray an inconsistency. (Besides, if it survived it would have good 3rd party validation of its own consistency, which would probably be pretty useful.)
I agree that "it seems that it should". I'll try and eventually edit the post to show why this is (at least) more difficult to achieve than it appears. The short version is that a proof is still a proof for a logically uncertain agent; so, if the Löbian proof did still work, then the agent would update to 100% believing it, eliminating its uncertainty; therefore, the proof still works (via its Löbian nature).
Non-Archimedean utility functions seem kind of useless to me. Since no action is going to avoid moving the probability of any outcome by more than 1/3^^^3, absolutely any action is important only insomuch as it impacts the highest lexical level of utility. So you might as well just call that your utility function.
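Concretely, if you write a non-Archimedean (lexicographic) utility as a tuple of levels, any difference at the top level swamps everything below it, which is the sense in which the lower levels might as well not exist (a toy two-level illustration of my own):

```python
# Expected utility per lexical level, highest level first.  Python's tuple
# comparison is lexicographic, which is exactly the non-Archimedean ordering.
action_a = (0.500001, -1_000_000)  # microscopic edge on top, terrible below
action_b = (0.500000, +1_000_000)  # microscopic deficit on top, great below

print(max(action_a, action_b))  # action_a wins: only the top level decides
```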
That's beside the point. In the first case you'd take 1A in the first game, and 2A in the 2nd game (a 34% chance of living is better than 33%). In the 2nd case, if you bothered to play at all, you'd probably take 1B/2B. What doesn't make sense is taking 1A and 2B. That policy is inconsistent no matter how you value different amounts of money (unless you don't care about money at all, in which case do whatever; the paradox is better illustrated with something you do care about), so things like risk, capital cost, diminishing returns, etc. are beside the point.
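To back up the "no matter how you value different amounts of money" claim, here's a quick sketch that checks the two preferences always agree under expected utility (the random search over utility assignments is just for illustration):

```python
import random

# Allais gambles: 1A = $24k for sure; 1B = 33/34 chance of $27k, else $0;
# 2A = 34% chance of $24k; 2B = 33% chance of $27k.
def prefers_1A(u24, u27, u0):
    return u24 > (33 / 34) * u27 + (1 / 34) * u0

def prefers_2A(u24, u27, u0):
    return 0.34 * u24 + 0.66 * u0 > 0.33 * u27 + 0.67 * u0

# Both conditions reduce to 0.34*u24 > 0.33*u27 + 0.01*u0, so for any
# utility assignment the preferences match -- taking 1A together with 2B
# can't both be expected-utility maximizing.
for _ in range(100_000):
    u0, u24 = 0.0, random.uniform(0, 100)
    u27 = random.uniform(u24, 200)
    assert prefers_1A(u24, u27, u0) == prefers_2A(u24, u27, u0)
print("preferences always agree")
```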