Would it be possible to make those clearer in the post?
I had thought, from the way you phrased it, that the assumption was that for any game, I would be equally likely to encounter a game with the choices and power levels of the original game reversed. This struck me as plausible, or at least a good point to start from.
What you in fact seem to need is that I am equally likely to encounter a game with the outcome under this scheme reversed, but the power levels kept the same. This continues to strike me as a very substantive and almost certainly false assertion about the games I am likely to face.
I don't therefore see strong evidence I should reject my informal proof at this point.
I think you and I have very different understandings of the word 'proof'.
In the real world, agents' marginals vary a lot, and the gains from trade are huge, so this isn't likely to come up.
I doubt this claim, particularly the second part.
True, many interactions have gains from trade, but I suspect the weight of these interactions is overstated in most people's minds by the fact that they are the sort of thing that spring to mind when you talk about making deals.
Probably the most common form of interaction I have with people is when we walk past each-other in the street and neither of us hands the other the contents of their...
You're right, I made a false statement because I was in a rush. What I meant to say was that as long as Bob's utility was linear, whatever utility function Alice has, there is no way to get all the money.
Are you enforcing that choice? Because it's not a natural one.
It simplifies the scenario, and suggests...
Linear utility is not the most obviously correct utility function: diminishing marginal returns, for instance.
Why is diminishing marginal returns any more obvious than accelerating marginal returns? The former happens to be the human attitude to th...
It does not. See this post ( http://lesswrong.com/lw/i20/even_with_default_points_systems_remain/ ): any player can lie about their utility to force their preferred outcome to be chosen (as long as it's admissible). The weaker player can thus lie to get the maximum possible out of the stronger player. This means that there are weaker players with utility functions that would naturally give them the maximum possible. We can't assume either the weaker player or the stronger one will come out ahead in a trade, without knowing more.
Alice has $1000. Bob has ...
If situation A is one where I am more powerful, then I will always face it at high-normalisation, and always face its complement at low normalisation. Since this system generally gives almost everything to the more powerful player, if I make the elementary error of adding the differently normalised utilities I will come up with an overly rosy view of my future prospects.
Your x+y > 2h proof is flawed, since my utility may be normalised differently in different scenarios, but this does not mean I will personally weight scenarios where it is normalised to a large number more highly than those where it is normalised to a small number. I would give an example if I had more time.
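For concreteness, here is one hypothetical way the flaw could show up (made-up numbers, not necessarily the example the author had in mind). Suppose in scenario A the scheme gives me x = 0.9 on that scenario's normalisation and in its complement y = 0.2 on its normalisation, with midpoint h = 0.5 on each, so

$$x + y = 1.1 > 1.0 = 2h.$$

But if one unit of scenario A's normalisation happens to be worth only a tenth of one unit of the complement's to me, the comparison that actually matters is

$$0.1 \times 0.9 + 0.2 = 0.29 < 0.55 = 0.1 \times 0.5 + 0.5,$$

so the naive sum paints a rosy picture of a situation where I am actually worse off than the midpoint.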
I didn't interpret the quote as implying that it would actually work, but rather as implying that (the Author thinks) Hanson's 'people don't actually care' arguments are often quite superficial.
consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote
Why?
It seems like a mess of tautologies and thought experiments
My own view is that this is precisely correct and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don't suggests there is still work to be done.
I don't know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).
Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though the expectancy of the bet is zero. I would characterise risk-aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you're begging on ...
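A minimal sketch of that point, with made-up numbers: an agent whose utility is logarithmic (and so non-linear, concave) in wealth turns down a fair $1000 coin flip even though the bet's expected monetary value is zero.

```python
import math

wealth = 10_000            # hypothetical starting wealth
u = math.log               # utility concave (non-linear) in wealth

eu_decline = u(wealth)
eu_accept = 0.5 * u(wealth + 1000) + 0.5 * u(wealth - 1000)

print(eu_accept < eu_decline)   # True: the fair bet is declined
```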
They aren't isomorphic problems, however it is the case that CDT two-boxes and defects while TDT one boxes and co-operates (against some opponents).
But at some point your character is going to think about something for more than an instant (if they don't, then I strongly contest that they are very intelligent). In the best-case scenario, it will take you a very long time to write this story, but I think there's some extent to which being more intelligent widens the range of thoughts you can ever think of.
That's clearly the first level meaning. He's wondering whether there's a second meaning, which is a subtle hint that he has already done exactly that, maybe hoping that Harry will pick up on it and not saying it directly in case Dumbledore or someone else is listening, maybe just a private joke.
I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively; I really don't see why a 'happiness function' would be even slightly interesting to decision theorists.
I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused.
However, I'm not particularly concerned about my statements being tautological, they were meant to be, since they are arguing against statements that are tautologically false.
In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smited by divine clippy", and that maximising paperclips is an instrumental subgoal.
In many other cases, I would be happy to say that the person involved was simply not utilitarian, if their actions did not seem to maximise anything at all.
(1/36)(1+34p0) is bounded by 1/36; I think a classical statistician would be happy to say that the evidence has a p-value of 1/36 here. The same goes for any test where H_0 is a composite hypothesis: you just take the supremum.
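For reference, the standard recipe being appealed to here: with a composite null $H_0: \theta \in \Theta_0$ and observed test statistic $t_{\mathrm{obs}}$, the p-value is taken as

$$p = \sup_{\theta \in \Theta_0} P_\theta\left(T \ge t_{\mathrm{obs}}\right).$$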
A bigger problem with your argument is that it is a fully general counter-argument against frequentists ever concluding anything. All data has to be acquired before it can be analysed statistically, all methods of acquiring data have some probability of error (in the real world), and the probability of error is always 'unknowable', at least in ...
So, I wrote a similar program to Phil's and got similar averages; here's a sample of 5 taken while I write this comment:
8.2 6.9 7.7 8.0 7.1
These look pretty similar to the numbers he's getting. Like Phil, I also get occasional results that deviate far from the mean, much more than you'd expect to happen with an approximately normally distributed variable.
I also wrote a program to test your hypothesis about the sequences being too long, running the same number of trials and seeing what the longest string of heads is. The results are:
19 22 18 25 23
Do these seem abnormal enough to explain the deviation, or is there a problem with your calculations?
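For what it's worth, a minimal sketch of the longest-run check I mean; the flip count of one million per run is my assumption, since the comments don't say how many trials the original program used.

```python
import random

def longest_run_of_heads(n_flips=1_000_000):
    longest = current = 0
    for _ in range(n_flips):
        if random.random() < 0.5:   # heads
            current += 1
            longest = max(longest, current)
        else:                       # tails
            current = 0
    return longest

# longest run from each of five independent runs
print([longest_run_of_heads() for _ in range(5)])
```

For a fair coin, the longest run in n flips is typically around log2(n), i.e. roughly 19-20 for a million flips, so numbers in the high teens and low twenties are about what such a run length would lead you to expect.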
You can double the real numbers representing them, but the results of this won't be preserved under affine transformations. So you can have two people whose utility functions are the same, tell them both "double your utility assigned to X" and get different results.
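A minimal sketch with made-up numbers: U2 below represents exactly the same preferences as U1 (it is U1 shifted by a constant), yet "double your utility assigned to X" pulls the two representations apart.

```python
U1 = {"X": 1.0, "Y": 0.0, "Z": 1.2}
U2 = {o: u + 1.0 for o, u in U1.items()}   # positive affine transform of U1

def double_X(U):
    V = dict(U)
    V["X"] = 2 * V["X"]                    # "double your utility assigned to X"
    return V

def prefers_lottery(U):
    # 50/50 lottery between X and Y, versus Z for certain
    return 0.5 * U["X"] + 0.5 * U["Y"] > U["Z"]

print(prefers_lottery(double_X(U1)))   # False: still prefers Z for sure
print(prefers_lottery(double_X(U2)))   # True:  now prefers the lottery
```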
A green sky will be green
This is true
A pink invisible unicorn is pink
This is a meaningless sequence of squiggles on my computer screen, not a tautology
A moral system would be moral
I'm unsure what this one means
I'm not sure what 'should' means if it doesn't somehow cash out as preference.
I could not abide someone doing that to me or a loved one, throwing us from relative safety into absolute disaster. So I would not do it to another. It is not my sacrifice to make.
I could not abide myself or a loved one being killed on the track. What makes their lives so much less important?
How does this work with Clippy (the only paperclipper in known existence) being tempted with 3^^^^3 paperclips?
First thought, I'm not at all sure that it does. Pascal's mugging may still be a problem. This doesn't seem to contradict what I said about the leverage penalty being the only correct approach, rather than a 'fix' of some kind, in the first case. Worryingly, if you are correct it may also not be a 'fix' in the sense of not actually fixing anything.
I notice I'm currently confused about whether the 'causal nodes' patch is justified by the same argument. I will think about it and hopefully find an answer.
Random thoughts here, not highly confident in their correctness.
Why is the leverage penalty seen as something that needs to be added? Isn't it just the obviously correct way to do probability?
Suppose I want to calculate the probability that a race of aliens will descend from the skies and randomly declare me Overlord of Earth some time in the next year. To do this, I naturally go to Delphi to talk to the Oracle of Perfect Priors, and she tells me that the chance of aliens descending from the skies and declaring an Overlord of Earth in the next year is 0.00...
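The comment is cut off here, but the shape of the calculation I have in mind is roughly the following (the population figure is an assumption on my part): whatever chance the Oracle quotes for some Overlord being declared, the chance that the Overlord is me picks up a factor of one over the number of candidates, which is just the leverage penalty appearing of its own accord.

$$P(\text{I am declared Overlord}) = P(\text{an Overlord is declared}) \cdot \frac{1}{N}, \qquad N \approx 7 \times 10^{9}.$$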
I would agree that it is to some extent political. I don't think it's very dark artsy though, because it seems to be a case of getting rid of an anti-FAI misunderstanding rather than creating a pro-FAI misunderstanding.
But yeah, "diyer" is too close to "die" to be easily distinguishable. Maybe "rubemond"?
I could see the argument for that, provided we also had saphmonds, emmonds etc... Otherwise you run the risk of claiming a special connection that doesn't exist.
Chemistry would not be improved by providing completely different names to chlorate and perchlorate (e.g. chlorate and sneblobs).
Okay, that's actually a good example. This caused me to re-think my position. After thinking, I'm still not sure that the analogy is actually valid though.
In chemistry, we have a systematic naming scheme. Systematic naming schemes are good, because they let us guess word meanings without having to learn them. In a difficult field which most people learn only as adults, if at all, this is a very good thing. I'm no chemist, but if I h...
Do you really think this!? I admit to being extremely surprised to find anyone saying this.
If rubies were called diyermands, it seems to me that people wouldn't guess what it was when they heard it; they would simply guess that they had misheard 'diamond', especially since it would almost certainly be a context where that was plausible. Most people would probably still have to have the word explained to them.
Furthermore, once we had the definition, we would be endlessly mixing them up, given that they come up in exactly the same context. Words are used many...
Even if not, they should at least be called something that acknowledges the similarity, like "Pascal-like muggings".
Any similarities are arguments for giving them a maximally different name to avoid confusion, not a similar one. Would the English language really be better if rubies were called diyermands?
Why on earth would you expect the downstream utilities to exactly cancel the mugging utility?
The first is contradictory, you've just told me something, then told me I don't know it, which is obviously false.
Are you sure this is right? After all, the implication is also true in the case of A being false; the conjunction certainly is not.
He explicitly specifies that A is true as well as A => B.
Intuitively I suggest there should be an inequality, too, seeing as B|A is not necessarily independent of A.
B|A is not an event, so it makes no sense to talk about whether or not it is independent of A.
To see why this is a valid theorem, break it up into three possibilities: P(A & B) = x, P(A & ~B) = y, P(~A) = 1 - x - y.
Then P(A) = P(A & B) + P(A & ~B) =...
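The comment is cut off here; carrying the stated definitions through (whatever the exact theorem being defended), the three possibilities pin down the relevant quantities as

$$P(A) = x + y, \qquad P(A \wedge B) = x, \qquad P(B \mid A) = \frac{P(A \wedge B)}{P(A)} = \frac{x}{x+y},$$

so in particular $P(A)\,P(B \mid A) = x = P(A \wedge B)$, and $P(B \mid A) \ge P(A \wedge B)$ since $x + y \le 1$.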
I don't know if this is typical, but a professional trader recently stated in an email to me that he knew very little about Bitcoin and basically had no idea what to think of it. This may hint that the lack of interest isn't based on certainty that Bitcoin will flop, but simply on not knowing how to treat it and sticking to markets where they do have reasonably well-understood ways of making a profit, since exposure to risk is a limited resource.
I fully agree that is an interesting avenue of discussion, but it doesn't look much like what the paper is offering us.
Maybe I'm misunderstanding here, but it seems like we have no particular reason to suppose P=NP is independent of ZFC. Unless it is independent, its probability under this scheme must already be 1 or 0, and the only way to find out which is to prove or disprove it.
In ZF set theory, consider the following three statements.
I) The axiom of choice is false
II) The axiom of choice is true and the continuum hypothesis is false
III) The axiom of choice is true and the continuum hypothesis is true
None of these is provably true or false, so they all get assigned probability 0.5 under your scheme. This is a blatant absurdity, as they are mutually exclusive, so their probabilities cannot possibly sum to more than 1.
So induction gives the right answer 100s of times, and then gets it wrong once. Doesn't seem too bad a ratio.
I am indeed suggesting that an agent can assign utility, not merely expected utility, to a lottery.
I am suggesting that this is equivalent to suggesting that two points can be parallel. It may be true for your special definition of point, but it's not true for mine, and it's not true for the definition the theorems refer to.
Yes, in the real world the lottery is part of the outcome, but that can be factored in by assigning utility to the outcomes; we don't need to change our definition of utility when the existing one works (reading the rest of your post...
I'm not sure quite what the best response to this is, but I think I wasn't understanding you up to this point. We seem to have a bit of a level mixing problem.
In VNM utility theory, we assign utility to outcomes, defined as a complete description of what happens, and expected utility to lotteries, defined as a probability distribution over outcomes. They are measured in the same units, but they are not the same thing and should not be compared directly.
VNM utility tells you nothing about how to calculate utility and everything about how to calculate expect...
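A minimal sketch of that distinction, with hypothetical outcomes and numbers: utility lives on outcomes, while expected utility lives on lotteries and is computed from the outcome utilities.

```python
# Hypothetical utilities over fully specified outcomes.
utility = {"win_car": 100.0, "win_goat": 5.0, "win_nothing": 0.0}

def expected_utility(lottery):
    """lottery: dict mapping outcome -> probability (probabilities sum to 1)."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

lottery_A = {"win_car": 0.1, "win_nothing": 0.9}   # 10% chance of the car
lottery_B = {"win_goat": 1.0}                      # the goat for certain

print(expected_utility(lottery_A), expected_utility(lottery_B))   # 10.0 5.0
```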
it remains to show that someone with that preference pattern (and not pattern III) still must have a VNM utility function
Why does it remain to be shown? How does this differ from the claim that any other preference pattern that does not violate a VNM axiom is modelled by expected utility?
...Now consider the games involving chance that people enjoy. These either show (subjective probability interpretation of "risk") or provide suggestive evidence toward the possibility (epistemic probability interpretation) that some people just plain like risk.
So, when people say 'risk aversion', they can mean one of three different things:
I) I have a utility function that penalises world-histories in which I take risks.
II) I have a utility function which offers diminishing returns in some resource, so I am risk averse in that resource
III) I am risk averse in utility
Out of the three, (III) is irrational and violates VNM. (II) is not irrational, and is an extremely common preference among humans wrt some things but not others (money vs lives being the classic one). (I) is not irrational, but is pretty weird; I'm ...
I think we have almost reached agreement, just a few more nitpicks I seem to have with your current post.
the independence principle doesn't strictly hold in the real world, like there are no strictly right angle in the real world
It's pedantic, but these two statements aren't analogous. A better analogy would be:
"the independence principle doesn't strictly hold in the real world, like the axiom that all right angles are equal doesn't hold in the real world"
"there are no strictly identical outcomes in the real world, like there are no strict...
To me it's a single, atomic real-world choice you have to make:
To you it may be this, but the fact that this leads to an obvious absurdity suggests that this is not how most proponents think of it, or how its inventors thought of it.
Given that people can rationally have preferences that make essential reference to history and to the way events came about, why can't risk be one of those historical factors that matter? What's so "irrational" about that?
Nothing. Whoever said there was?
If your goal is to not be a thief, then expected utility theory recommends that you do not steal.
I suspect most of us do have 'do not steal' preferences on the scale of a few hundred pounds or more.
On the other hand, once you get to, say, a few hundred human lives, or the fate of the entire spe...
First, I did study mathematical logic, and please avoid such kind of ad hominem.
Fair enough
That said, if what you're referring to is the whole world state, the outcomes are, in fact, always different. Even if only because there is somewhere in your brain the knowledge that the choice is different.
I thought this would be your reply, but didn't want to address it because the comment was too long already.
Firstly, this is completely correct. (Well, technically we could imagine situations where the outcomes removed your memory of there ever having been a...
Gwern said pretty much everything I wanted to say to this, but there's an extra distinction I want to make
What you're doing is saying you can't use A, B, and C when there is dependency, but have to create subevents like C1="C when you are sure you'll have either A or C".
The distinction I made was things like A2="A when you prepare", not A2="A when you are sure of getting A or C". This looks like a nitpick, but is in fact incredibly important. The difference between my A1 and A2 is important; they are fundamentally diff...
The problem here is that you've not specified the options in enough detail. For instance, you appear to prefer going to Ecuador with preparation time to going without preparation time, but you haven't stated this anywhere. You haven't given the slightest hint whether you prefer Iceland with preparation time to Ecuador without. VNM is not magic: if you put garbage in, you get garbage out.
So to really describe the problem we need six options:
A1 - trip to Ecuador, no advance preparation
A2 - trip to Ecuador, advance preparation
B1 - laptop
B2 - laptop, but you ...
What makes you think you have a reliable way of fooling Omega?
In particular, I am extremely sceptical that simply not making your mind up, and then at the last minute doing something that feels random, would actually correspond to making use of quantum nondeterminism. Moreover, if individual neurons are reasonably deterministic, then regardless of quantum physics any human's actions can be predicted pretty much perfectly, at least on a 5-10 minute scale.
Alternatively, even if it is possible to be deliberately non-cooperative, the problem can just be changed...
Not quite always
http://www.boston.com/news/local/massachusetts/articles/2011/07/31/a_lottery_game_with_a_windfall_for_a_knowing_few/