That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance and yet does not lead to that preference. (It does capture your take on the term, though - it just doesn't produce 1A/2B.)
Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money has less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why?
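To make concrete what I mean by "diminishing utility", here is a minimal numeric sketch; the square-root utility function is just an assumed example, not something you committed to:

```python
import math

def u(x):
    # Assumed diminishing-utility function: doubling the money
    # yields less than double the utility.
    return math.sqrt(x)

# A fair gamble vs. a sure thing with the same expected money:
sure_thing = u(50)                    # sure $50   -> ~7.07
gamble = 0.5 * u(100) + 0.5 * u(0)    # 50% of $100 -> 5.0

print(sure_thing > gamble)  # True: the concave utility prefers the certainty
```

Any concave (subproportional) choice of u produces this kind of preference for the sure thing over a fair gamble, which is why I'm asking whether you see it as the same thing as risk-avoidance.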
The utility function here takes only the monetary reward as its input. Your idea that risk-avoidance can itself carry utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation - the percentage is not an input to the U() function - so the model falls short because the utility attaches only to the money and nothing else. (Gambling addicts are another example of a group of individuals for whom the risk might out-utilize the reward.) Security is, all other things being equal, preferred over insec...
The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality - not necessarily through any fault of the real people.
Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is, overall, a pretty successful strategy (and if I were looking to make some money, my mark woul...
Why would I not hold them responsible? They are the ones who are trying to make us responsible by giving us an opportunity to act, but their own opportunities are much more direct - after all, they created the situation that exerts the pressure on us. This line of thought is mainly meant to be argued in the terms of Fred, who has a problem with feeling responsible for this suffering (or non-pleasure) - it offers him a way out of the conundrum without relinquishing his compassion for humanity (i.e. I feel the ending as written is illogical, and I certainly think "...
The central problem in all of these thought experiments is the crazy notion that we should give a shit about the welfare of other minds simply because they exist and experience things analogously to the way we experience things.
Well, I see the central problem in the notion that we should care about something that happens to other people if we're not the ones doing it to them. Clearly, the aliens are sentient; they are morally responsible for what happens to these humans. While we certainly should pursue possible avenues to end the suffering, we shouldn't act as if we were the ones responsible for it.
I don't see how your points apply: I would have paid had I lost. Except if my hypothetical self is so much in debt that it can't reasonably spend $100 on an investment such as this - in which case Omega would have known in advance, and understands my nonpayment.
I do not consider the future existence of Omega as a factor at all, so it doesn't matter whether it self-destructs or not. And it is also a given that Omega is absolutely trustworthy (more than I could say for myself).
My view is that this may well be one of the undecidable propositions that Gödel has ...
The problem is easier to decide with a small change that also makes it more practical. Suppose two competing laboratories each design a machine intelligence and bid for a government contract to produce it. The government will evaluate the prototypes and choose one of them for mass production (the "winner", which gets multiplied); due to the R&D effort involved, the company that loses the bid will go into receivership, and the machine intelligence not chosen will be auctioned off but never reproduced (the "loser").
The question is: should the...
It is bad to apply statistics when you don't in fact have large numbers - we have just one universe (at least until the many-worlds theory is better established - and anyway, the exposition didn't mention it).
I think the following problem is equivalent to the one posed: It is late at night, you're tired, it's dark, and you're driving down an unfamiliar road. Then you see two motels, one on the right side of the road, one on the left, both advertising vacant rooms. You know from a visit years ago that one has 10 rooms and the other has 100, but you can't tell ...
"Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. "
Well then, the statistically expected (average) share any agent is going to get long-term is 1/10th of the pie. The simplest solution that ensures this is the equal division; anticipating this from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e. agrees not to realize more than their "share"), it is also stable - anyone who ponders upsetting it risks being the "odd man out" who eats ...
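(As a quick check of the 1/10 figure, here is a small simulation; the assumption that a random minimal majority of six forms each round and splits the pie equally is mine, purely for illustration:)

```python
import random

AGENTS, TRIALS = 10, 100_000
totals = [0.0] * AGENTS

# Assumed mechanism: each round a random minimal majority (6 of 10)
# forms and splits the whole pie equally among its members.
for _ in range(TRIALS):
    for member in random.sample(range(AGENTS), 6):
        totals[member] += 1 / 6

print([round(t / TRIALS, 3) for t in totals])  # each entry is close to 0.1
```

By symmetry, the same long-run average falls out of any mechanism that doesn't favour particular agents.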
I believe both of your computations are correct, and the fallacy lies in mixing up the payoff for the group with the payoff for the individual - which the frame of the problem as posed does suggest, with multiple identities that are actually the same person. More precisely, the probabilities for the individual are 90/10, but the probabilities for the groups are 50/50, and if you compute payoffs for the group (+$12/-$52), you need to use the group probabilities. (It would be different if the narrator ("I") offered the guinea pig ("you"...
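To spell out the arithmetic behind the mix-up (a sketch using only the figures quoted above; the point is which probabilities get paired with the group payoffs):

```python
# Group payoffs as given in the problem: +$12 in one case, -$52 in the other.
payoff_a, payoff_b = 12, -52

# Mixing the individual (90/10) probabilities with the group payoffs:
ev_mixed = 0.9 * payoff_a + 0.1 * payoff_b   # = +5.6, looks like a good deal

# Using the group (50/50) probabilities, the consistent pairing:
ev_group = 0.5 * payoff_a + 0.5 * payoff_b   # = -20.0

print(ev_mixed, ev_group)
```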
I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).
The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better under
An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made a sound or not has no empirical consequence.
Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.
beneath my notice
I'm referring to that. Sending that message is an implicit lie -- well, you could call it a "social fiction", if you like a less loaded word.
It is also a message that is very likely to be misunderstood (I don't yet know my way around lesswrong well enough to find it again, but I think there's an essay here someplace that deals with the likelihood of recipients understanding something completely different from what you intended to mean, and of you not being able to detect this because the interpretation you know shapes your perce...
In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally serves as an "antidote" to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can't be decided rationally.)
If, in Schelling's example, the guy who is left with the working radio set is moral, he might reason that "the other guy doesn't deserve the money if he doesn't work for it...
Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower."
I know it is possible to experience anger but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger at all, or to only feel the anger later, once distanced from the situation. I'm OK with being aware of the feeling and not acting on it, but getting to the point where you don't feel it is where I'm starting to...
Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.
First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.
It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on ...
Assuming the person who asks the question wants to learn something and not hold a Socratic argument, what they need is context. They need context to anchor the new information (that there's a word "red", in this case) to what they already know. You can give this context in the abstract or in the specific (the "one step up, one step down" method that jimrandomh describes above achieves this), but it doesn't really matter which. The more different ways you can find, the better the other person will understand, and the richer a concept they will take away...
One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the monetary reward and p the probability of getting it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
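As a quick check that it does do the trick, here is a sketch using what I remember as the figures from the original post (the dollar amounts and probabilities are assumed here, not part of my argument):

```python
def u(x, p):
    # Assumed patched utility: certainty gets a bonus, otherwise utility = money.
    return 2 * x if p == 1 else x

def eu(lottery):
    # Expected utility of a list of (probability, money) outcomes.
    return sum(p * u(x, p) for p, x in lottery)

g1A = [(1.0, 24_000)]                   # certain $24k
g1B = [(33/34, 27_000), (1/34, 0)]      # ~97% chance of $27k
g2A = [(0.34, 24_000), (0.66, 0)]
g2B = [(0.33, 27_000), (0.67, 0)]

print(eu(g1A) > eu(g1B))   # True -> 1A preferred
print(eu(g2B) > eu(g2A))   # True -> 2B preferred
```

So the 1A/2B pattern falls out, but only because the certainty bonus was put in by hand - which is the sense in which I doubt it models anything real.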
I know Settlers of Catan, and own it. It's been a while since I last playe...