All of mendel's Comments + Replies

One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
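A minimal sketch of that assignment, purely illustrative: the $24K/$27K amounts and the 34%/33% chances are the figures discussed elsewhere in this thread, and valuing a gamble as probability times U is my own assumption.

```python
# Illustrative sketch of the U(x, p) assignment described above:
# x is the monetary reward, p the probability of getting it;
# certainty (p == 1) gets double utility, everything else is linear in money.

def U(x, p):
    return 2 * x if p == 1 else x

def value(x, p):
    # value of the gamble "win x with probability p, otherwise nothing"
    return p * U(x, p)

gambles = {
    "1A": (24000, 1.0),      # $24,000 with certainty
    "1B": (27000, 33 / 34),  # 33/34 chance of $27,000
    "2A": (24000, 0.34),     # 34% chance of $24,000
    "2B": (27000, 0.33),     # 33% chance of $27,000
}

for name, (x, p) in gambles.items():
    print(name, round(value(x, p)))
# 1A (48000) beats 1B (~26206), while 2B (8910) beats 2A (8160):
# the 1A/2B pattern falls out of the certainty bonus.
```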

I know Settlers of Catan, and own it. It's been a while since I last playe... (read more)

1[anonymous]
The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty -- for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn't actually p=.1 or something! Ideally, we'd have a decaying factor of some sort that depends on the probabilities being close to 1 or 0.

The reason I asked is that it's very possible that a correct model of "attaching a utility to certainty" would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we'd at least stand a chance of formulating gambles clarifying our intuitions if we knew what the alternatives are.

If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility is still the same. Going for the higher expected value makes sense. I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with $27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way?

Obviously with the advisor situation, you have to take your advisee's biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king's satisfaction with it.
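One possible shape for such a decaying factor, offered only as an illustration: the exponential form and the steepness constant are assumptions of mine, not anything proposed in this thread.

```python
import math

def certainty_factor(p, strength=1.0, steepness=200.0):
    # Largest when p is exactly 0 or 1, and decaying quickly as the
    # probability moves away from either extreme.
    distance = min(p, 1 - p)
    return 1 + strength * math.exp(-steepness * distance)

def U(x, p):
    # money scaled by the certainty-dependent factor:
    # U(x, 1) == 2x as in the comment above, U(x, 0.9999) is still
    # nearly doubled, and U(x, 0.5) is essentially just x
    return x * certainty_factor(p)

for p in (1.0, 0.9999, 0.97, 0.5):
    print(p, round(certainty_factor(p), 4))
```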

That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance and does not lead to that preference. (It does lead to your reading of the term, though - it's just that your resulting preference isn't 1A/2B.)

Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if so, can you explain why?
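For concreteness, here is a small sketch of what a diminishing (concave) utility of money does with these gambles; log utility is just a convenient stand-in, not something either commenter has proposed.

```python
import math

def U(x):
    # a diminishing-utility example: concave in money
    return math.log(x + 1)

def EU(outcomes):
    # expected utility of a list of (probability, money) pairs
    return sum(p * U(x) for p, x in outcomes)

eu_1a = EU([(1.0, 24000)])
eu_1b = EU([(33 / 34, 27000), (1 / 34, 0)])
eu_2a = EU([(0.34, 24000), (0.66, 0)])
eu_2b = EU([(0.33, 27000), (0.67, 0)])

print(eu_1a > eu_1b, eu_2a > eu_2b)  # True, True: risk-averse, but consistent
# For any utility-of-money U, EU(2A) - EU(2B) = 0.34 * (EU(1A) - EU(1B)),
# so diminishing utility alone can make you cautious, yet it can never
# produce the 1A/2B preference pattern.
```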

0[anonymous]
I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion -- for myself, at least. [I realized that the math I wrote here was wrong. I'm going to try to revise it. In the meantime, another question. Do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?] Also, thinking about the paradox more, I've realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it?

You seem to have examples in mind?

2Pavitra
The lottery comes immediately to mind. You can't be absolutely sure that you'll lose.

The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the percentage is not an input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might out-utilize the reward is gambling addicts.) Security is, all other things being equal, preferred over insec... (read more)

1[anonymous]
Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 8 times the difference U($27K) - U($24K).

The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily through any fault of the real people.

Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark woul... (read more)

5[anonymous]
Not necessarily. It is assumed that receiving $24000 is equally good in either situation. Your utility function can ignore money entirely (in which case 1A2A is irrational because you should be indifferent in both cases). You can use the utility function which prefers not to receive monetary rewards divisible by 9: in this case, 1A>1B and 2A>2B is your best bet, giving you 100% and 34% chances to avoid 9s, rather than 0% chances. In general, your utility function can have arbitrary preferences on A and B separately; but no matter what, it will prefer 1A to 1B if and only if it prefers 2A to 2B.

As for the rest of your reply -- yes, it is true that real people use strategies ("heuristic" is the word used in the original post) that lead them to choose 1A and 2B. That's sort of why it's a paradox, after all. However, these strategies, which work well in most cases, aren't necessarily the best in all cases. The math shows that. What the math doesn't tell us is which case is wrong.

My own judgment, for this particular sum of money (which is high relative to my current income), is that choice 1A is correctly better than choice 1B, in order to avoid risk. However, choice 2A is also better than choice 2B, upon reflection, even though my intuitions tell me to go with 2B. This is because my intuitions aren't distinguishing 33% and 34% correctly. In reality, faced with the opportunity to earn amounts on the order of $20K, I should maximize my chances to walk away with something. In the first case, I can maximize them fully, to 100%, which triggers my "success!" instinct or whatever: I know I've done everything I can because I'm certain to get lots of money. In the second case, I don't get any satisfaction from the correct decision, because all I've done is improve my chances by 1%.

In general, the heuristic that 1% chances are nearly worthless is correct, no matter what's at stake: I can usually do better by working on something that will give me a 10% or 25% chance. In th
0wedrifid
The problem is not with the hypothetical. It is with the intuition - intuitions which really do prompt bad decisions in real-life circumstances along these lines.

Why would I not hold them responsible? They are the ones who are trying to make us responsible by giving us an opportunity to act, but their opportunities are much more direct - after all, they created the situation that exerts the pressure on us. This line of thought is mainly meant to be argued in the terms of Fred, who has a problem with feeling responsible for this suffering (or non-pleasure) - it offers him an out of the conundrum without relinquishing his compassion for humanity (i.e. I feel the ending as written is illogical, and I certainly think "... (read more)

The central problem in all of these thought experiments is the crazy notion that we should give a shit about the welfare of other minds simply because they exist and experience things analogously to the way we experience things.

Well, I see the central problem in the notion that we should care about something that happens to other people if we're not the ones doing it to them. Clearly, the aliens are sentient; they are morally responsible for what happens to these humans. While we certainly should pursue possible avenues to end the suffering, we shouldn't act as if we were the ones responsible for it.

0Perplexed
Interesting. Though in the scenario I suggested there is no suffering. Only an opportunity to deploy pleasure (ice cream). I'm curious as to your reasons why you hold the aliens morally responsible for the human clones - I can imagine several reasons, but wonder what yours are. Also, I am curious as to whether you think that the existence of someone with greater moral responsibility than our own acts to decrease or eliminate the small amount of moral responsibility that we Earthlings have in this case.

I don't see how your points apply: I would have paid had I lost. Except if my hypothetical self is so much in debt that it can't reasonably spend $100 on an investment such as this - in which case Omega would have known in advance, and understands my nonpayment.

I do not consider the future existence of Omega as a factor at all, so it doesn't matter whether it self-destructs or not. And it is also a given that Omega is absolutely trustworthy (more than I could say for myself).

My view is that this may well be one of the undecidable theorems that Goedel has ... (read more)

The problem is easier to decide with a small change that also makes it more practical. Suppose two competing laboratories design a machine intelligence and bid for a government contract to produce it. The government will evaluate the prototypes and choose one of them for mass-production (the "winner", getting multiplied); due to the R&D effort involved, the company that loses the bid will go into receivership, and the machine intelligence not chosen will be auctioned off, but never reproduced (the "loser").

The question is: should the... (read more)

It is bad to apply statistics when you don't in fact have large numbers - we have just one universe (at least until the many-worlds theory is better established - and anyway, the exposition didn't mention it).

I think the following problem is equivalent to the one posed: It is late at night, you're tired, and it's dark and you're driving down an unfamiliar road. Then you see two motels, one to the right of the street, one to the left, both advertising vacant rooms. You know from a visit years ago that one has 10 rooms, the other has 100, but you can't tell ... (read more)

"Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. "

Well then, the statistical expected (average) share any agent is going to get long-term is 1/10th of the pie. The simplest solution that ensures this is the equal division; anticipating this from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e. agrees not to realize more than their "share"), it is also stable - anyone who ponders upsetting it risks being the "odd man out" who eats ... (read more)

2AlephNeil
Your version of the story discards the most important ingredient: The fact that when you win the coin toss, you only receive money if you would have paid had you lost. As for Omega, all we know about it is that somehow it can accurately predict your actions. For the purposes of Counterfactual Mugging we may as well regard Omega as a mindless robot which will burn the money you give to it and then self-destruct immediately after the game. (This makes it impossible to pay because you feel obligated to Omega. In fact, the idea is that you pay up because you feel obligated to your counterfactual self.)

I believe both of your computations are correct, and the fallacy lies in mixing up the payoff for the group with the payoff for the individual - which the frame of the problem as posed does suggest, with multiple identities that are actually the same person. More precisely, the probabilities for the individual are 90/10, but the probabilities for the groups are 50/50, and if you compute payoffs for the group (+$12/-$52), you need to use the group probabilities. (It would be different if the narrator ("I") offered the guinea pig ("you"... (read more)
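Spelled out with the numbers given (just the arithmetic; the +$12/-$52 payoffs and the 90/10 vs. 50/50 probabilities are those stated above):

```python
# Group payoffs as stated: +$12 in one branch, -$52 in the other.
payoff_win, payoff_lose = 12, -52

# Group probabilities (50/50) applied to group payoffs:
ev_group = 0.5 * payoff_win + 0.5 * payoff_lose   # -20.0: the bet is a loss

# The fallacy: individual probabilities (90/10) applied to group payoffs:
ev_mixed = 0.9 * payoff_win + 0.1 * payoff_lose   # +5.6: looks good, wrongly

print(ev_group, ev_mixed)
```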

I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).

  • The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better under

... (read more)

An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made a sound or not has no empirical consequence.

Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.

1Manfred
Hm, yeah. The trouble is how the doctrine handles deductive logic - for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.

Yes, that's the post I was referring to. Thank you!

Of course, these analyses and exercises would also serve beautifully as use-cases and tests if you wanted to create an AI that can pass a Turing test for being rational. ;-)

beneath my notice

I'm referring to that. Sending that message is an implicit lie -- well, you could call it a "social fiction", if you like a less loaded word.

It is also a message that is very likely to be misunderstood (I don't yet know my way around lesswrong well enough to find it again, but I think there's an essay here someplace that deals with the likelihood of recipients understanding something completely different from what you intended to mean, but you not being able to detect this because the interpretation you know shapes your perce... (read more)

5Barry_Cotter
implicit lie vs. social fiction

I don't think these are normally useful ways of thinking about status posturing. Verbalising this stuff is a faux pas in the overwhelming majority of human social groups. I'm not sure if I disagree with you on whether the message is "very likely" to be misunderstood. In my limited experience, and with my below-average people-reading skills, I'd say that most status jockeying in non-intimate contexts is obvious enough for me to notice if I'm paying attention to the interaction.

The post you meant is probably Illusion of Transparency. I contend that it applies less strongly to in-person status jockeying than to lingual information transfer. I suggest you watch a clip of a foreign language movie if you disagree.
1wedrifid
This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren't quite able to navigate them smoothly yet. Mind you, unless there is a pre-existing differential in status or social skills in their favour, they will tend to come off slightly worse than you in the exchange. A costly punishment.

In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally acts as an "antidote" to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can't be decided rationally.)

If, in Schelling's example, the guy who is left with the working radio set is moral, he might reason that "the other guy doesn't deserve the money if he doesn't work for it... (read more)

0Viliam_Bur
Seems like morality is (inter alia) a heuristic for improving one's bargaining position by limiting one's options.

Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower."

I know it is possible to experience anger, but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger, or to only feel anger later, when distanced from the situation. I'm ok with being aware of the feeling and not acting on it, but to get to the point where you don't feel it is where I'm starting to... (read more)

0Cayenne
Mostly I don't even feel frustration, but instead sadness. I'd like to be able to help, but sometimes the best I can do is just be patient and try to explain clearly, and always immediately abandon my arguments if I find that I'm the one with the error. Edit - please disregard this post

My opinion? I'd not lie. You've noticed the attempt - why claim you didn't? Display your true reaction.

6wedrifid
Noticing the attempt and doing nothing is not a lie. It is a true reaction.

And yet, not to feel an emotion in the first place may obscure you to yourself - it's a two-sided coin. To opt to not know what you're feeling when I struggle to find out seems strange to me.

2Cayenne
I think you're misunderstanding what I said. I'm not obscuring my feelings from myself. I'm just aware of the moment when I choose what to feel, and I actively choose. I'm not advocating never getting angry, just not doing it when it's likely to impair your ability to communicate or function. If you choose to be offended, that's a valid choice... but it should also be an active choice, not just the default.

I find it fairly easy to be frustrated without being angry at someone. It is, after all, my fault for assuming that someone is able to understand what I'm trying to argue, so there's no point in being angry at them for my assumption. They might have a particularly virulent meme that won't let them understand... should I get mad at them for a parasite? It seems pointless.

Edit - please disregard this post

The problem with the downvote is that it mixes the messages "I don't agree" with "I don't think others should see this". There is no way to say "I don't agree, but that post was worth thinking about", is there? Short of posting a comment of your own, that is.

3lessdazed
I think there is a positive outcome from the system as it is, at least for sufficiently optimistic people. The feature is that it should be obvious that downvoting is mixed with those and other things, which helps me not take anything personally. Downvotes could be anything, and individuals have different criteria for voting, and as I am inclined to take things personally, this obviousness helps me. If I knew 50% of downvotes meant "I think the speaker is a bad person", every downvote might make me feel bad.

As downvotes currently could mean so many things, I am able to shrug them off. They could currently mean: the speaker is bad, the comment is bad, I disagree with the comment, I expect better from this speaker, it's not fair/useful for this comment to be rated so highly compared to a similar adjacent comment that I would rather people read instead / would like to promote as the communal norm, etc.

If one has an outlook that is pessimistic in a particular way, any mixing of multiple meanings into a single message will cause one to overreact as if the worst meaning were intended, and this sort of person would be most helped by ensuring each message has only one meaning.
3AdeleneDawner
I've been known to upvote in such cases, if the post is otherwise neutral-or-better. I like to see things here that are worth thinking about.
4Swimmer963 (Miranda Dixon-Luinenburg)
That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.

First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.

It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on ... (read more)

Assuming the person who asks the question wants to learn something and not hold a Socratic argument, what they need is context. They need context to anchor the new information (there's a word "red", in this case) to what they already know. You can give this context in the abstract and specific (the "one step up, one step down" method that jimrandomh describes above achieves this), but it doesn't really matter. The more different ways you can find, the better the other person will understand, and the richer a concept they will take away... (read more)