
Comment author: Michaelos 22 July 2014 04:02:05PM *  1 point [-]

Do other people display ambiguity aversion in all cases, or only when there are personal resources at stake?

Example. You've just found a discarded ticket to play two draws of the charity lottery! Here's how it works.

There are 90 balls: 30 red, and 60 that are either black or yellow in some unknown distribution. You may choose either:

1a) I pay Givewell's top charity $100 if you draw a red ball.

1b) I pay Givewell's top charity $100 if you draw a black ball.

And then for the subsequent draw we go to an entirely different urn, which may have an entirely different distribution of black and yellow balls (although still 30 red and 60 either black or yellow), and you choose either:

2a) I pay Givewell's top charity $100 if you draw a red or yellow ball.

2b) I pay Givewell's top charity $100 if you draw a black or yellow ball.

For some reason, the buttons are set to 1b and 2a. At no monetary cost, you can press the buttons to toggle to the other option and minimize ambiguity. It's perhaps a barely noticeable expenditure of calories, so the cost seems trivial: not even a penny.
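
(For concreteness, here is a quick sketch of the expected payouts to charity; the uniform prior over the black/yellow split is my own assumption, not part of the setup.)

    # Quick sketch of the expected charity payouts. The uniform prior over the
    # number of black balls (0..60) is my own assumption, not part of the setup.
    payout = 100
    n_balls, n_red = 90, 30
    expected_black = 30          # mean of a uniform prior over 0..60 black balls
    expected_yellow = 60 - expected_black

    ev_1a = payout * n_red / n_balls                       # red: a known 1/3 chance
    ev_1b = payout * expected_black / n_balls              # black: ambiguous, same mean
    ev_2a = payout * (n_red + expected_yellow) / n_balls   # red or yellow: ambiguous
    ev_2b = payout * 60 / n_balls                          # black or yellow: a known 2/3 chance

    print(ev_1a, ev_1b, ev_2a, ev_2b)  # ~33.3, ~33.3, ~66.7, ~66.7 -- switching buys no expected value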

1: Do you switch to the less ambiguous option at a trivial cost?

2: Instead of pressing the buttons, would you pay 1 penny to the person running the charity lottery, to do so in either case?

3: Would you be willing to pay 1 penny to switch in either case if the person running the charity lottery also gave you two pennies beforehand? You get to keep them if you don't use them.

When considering the previous situation, I don't feel particularly ambiguity averse at all, and I don't really feel the need to make any changes to the settings. But maybe other people do, so I thought I should check. And maybe it is weird of me to not feel ambiguity aversion about this, and I should check that as well.

Edit: Formatting and Grammar.

Comment author: Michaelos 21 July 2014 02:12:07PM 0 points [-]

Hmm. The results appear quite different if you allow communication and repeated plays. They also introduce something which seems slightly different from Trembling Hand (perhaps Trembling Memory?).

With communication and repeated plays:

Assume all potential Player 1's credibly precommit to flip a fair coin and, based on the toss, pick B half the time and C half the time.

All potential Player 2's would know this and, assuming they expect Player 1 to almost always follow the precommitment, would pick Y, because it maximizes their expected payout. (A 50% chance of 2 means an expected payout of 1, compared to picking X, where a 50% chance of 1 means an expected payout of 1/2.)

All potential Player 1's, following that precommitment universally and having Player 2's always pick Y, will get 2 half the time and 6 half the time, which yields an expected per-game payout of 4.

This seems better for everyone (by about one point per game) than Player 1 only ever choosing A.
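
Here is a minimal sketch of that arithmetic; the branch payoffs (2 or 6 for Player 1, 2 or 0 for Player 2, and roughly 3 for Player 1 from always playing A) are my reading of the game rather than anything authoritative.

    # Minimal sketch of the expected payouts described above. The branch payoffs
    # (2 or 6 for Player 1, 2 or 0 for Player 2, ~3 for always playing A) are
    # my reading of the game, not authoritative values.
    p = 0.5  # Player 1's fair coin: B half the time, C half the time

    # Player 2, expecting the precommitment, compares X and Y:
    p2_expected_Y = p * 2 + (1 - p) * 0   # = 1.0
    p2_expected_X = p * 1 + (1 - p) * 0   # = 0.5

    # Player 1, following the precommitment against a Y-playing Player 2:
    p1_expected = p * 2 + (1 - p) * 6     # = 4.0
    p1_always_A = 3                       # assumed baseline from always choosing A

    print(p2_expected_Y, p2_expected_X, p1_expected, p1_always_A)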

With Trembling Memory:

Assume further that some of the time, Player 2's memory trembles and they forget about the precommitments and the fact that this game is played repeatedly.

So if Player 2 suspects, for instance, that they MIGHT be in the case above but have forgotten important facts (they are incorrect about this being a one-off game and are incorrectly assessing the state of common knowledge, but they are correctly assessing the payoff structure of this particular game), then following those suspicions it would still make sense for them to choose Y, and it would also explain why Player 1 chose something that wasn't A.

However, nothing would seem to prevent Player 2 from suspecting other possibilities. (For instance, if Player 1 hits Player 2 with amnesia dust before every game, knowing that Player 2 will be forced into a memory tremble and will believe the above, Player 1 could play C every time, with Player 2 drawing predictably incorrect conclusions and playing Y to no benefit.)

I'm not sure how to model a situation with trembling memory, though, so I would not be surprised if I was missing something.

Comment author: Michaelos 11 July 2014 01:39:27PM 3 points [-]

I want to thank you for posting that link to Gwern's accumulated material. I was going to make a comment about the estimated adoption speeds, but before doing so I started reading Gwern's material and found that the information I was going to use to construct my comment was out of date.

Comment author: Michaelos 08 July 2014 04:01:54PM 0 points [-]

Should you consider "Not begging someone for something" denying them agency?

I mean, presumably, you would prefer your five-year-old not ask you for too much chocolate. But what if they say "But PARENT, if I DIDN'T beg you for too much chocolate, I would be denying you your agency!"

Because I have begged for things, not begged for things, and been begged from, in all sorts of circumstances. Possibly too many to list.

Comment author: James_Miller 07 July 2014 03:46:46PM 5 points [-]

The Great Filter argument and Fermi's paradox take into account the speed of light and the size and age of the galaxy. Both figure that there has been plenty of time for aliens to colonize the galaxy even if they traveled at, say, 1% of the speed of light. If our galaxy were much younger, or the space between star systems much bigger, there would not be a Fermi paradox and we wouldn't need to fear the Great Filter.
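
For a rough sense of the timescales involved (round illustrative figures, not precise astronomy):

    # Rough sketch of "plenty of time", using round illustrative numbers rather
    # than precise astronomy.
    galaxy_diameter_ly = 100_000   # rough Milky Way diameter in light years
    colonization_speed = 0.01      # fraction of the speed of light
    galaxy_age_years = 13e9        # rough age of the galaxy in years

    crossing_time_years = galaxy_diameter_ly / colonization_speed  # 10 million years
    print(crossing_time_years / galaxy_age_years)  # ~0.0008: well under 0.1% of the galaxy's age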

To directly answer the question of your second sentence, yes but only by a very small amount.

Comment author: Michaelos 07 July 2014 06:34:31PM 0 points [-]

I think that reading this and thinking it over helped me figure out a confusing math error I was making. Thank you!

Normally, to calculate the odds of a false negative, I would need the test accuracy, but I would also need the base rate.

I.e., if a test for the presence or absence of colonization is 99% accurate, evidence of colonization is present in 1% of stars (the base rate), and my test is negative, then I can compute the odds of a false negative.
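
A rough sketch of that computation, using the made-up numbers above and assuming the test is symmetric (99% sensitivity and 99% specificity):

    # Rough sketch of the false-negative computation, assuming the made-up
    # numbers above: 1% base rate, 99% sensitivity and 99% specificity.
    base_rate = 0.01     # P(colonized)
    sensitivity = 0.99   # P(test positive | colonized)
    specificity = 0.99   # P(test negative | not colonized)

    p_negative = (1 - sensitivity) * base_rate + specificity * (1 - base_rate)

    # Probability the star is actually colonized despite a negative test:
    p_false_negative = (1 - sensitivity) * base_rate / p_negative
    print(p_false_negative)  # ~0.0001, i.e. about 1 in 10,000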

However, in this case, I was attempting to determine "Given that our tests aren't perfectly accurate, what if the base rate of colonization isn't 0%?" and while that may be a valid question, I was using the wrong math to work on it, and it was leading me to conclusions that didn't make a shred of sense.

Comment author: Michaelos 07 July 2014 03:15:42PM 0 points [-]

I have a question. Does the probability that colonization of the universe with light-speed probes has already occurred, but only in areas where we would not have had enough time to notice it yet, affect the Great Filter argument?

For instance, assume the closest colonization with near-light-speed probes started 100 light years away from us, 50 years ago. When we look at the star where that colonization started, we wouldn't see evidence of near-light-speed colonization yet, because we're seeing light from 100 years ago, before they started.
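
A minimal sketch of that check (the 100 light years and 50 years are just the illustrative numbers above):

    # Minimal sketch: evidence travelling at (or just below) light speed is only
    # visible here if colonization started longer ago than its distance in
    # light years. The numbers are the illustrative ones above.
    def colonization_visible(distance_ly: float, years_since_start: float) -> bool:
        return years_since_start > distance_ly

    print(colonization_visible(distance_ly=100, years_since_start=50))   # False: not visible yet
    print(colonization_visible(distance_ly=100, years_since_start=150))  # True: the light has reached us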

I think a simpler way of putting this might be: "What is the probability our tests for colonial explosion are giving a false negative? If that probability were high, would it affect the Great Filter argument?"

Comment author: Michaelos 26 June 2014 01:05:39PM 4 points [-]

In reference to your request for thoughts: it seems like in both cases you could have parties switch their professed beliefs about the systems without actually switching their behavior. This kind of pivot can and does happen among some politicians. Should a reference to it be included?

Here are some potential examples:

Parties currently opposed to taking action on Anthropogenic climate change:

"Anthropogenic climate change has models which explain it and data which confirm the models. And it is good as a practical matter, because it is predicted to cause some areas to become more temperate, which will increase the yields of particular crops, so unlike the opposing party, we don't need to do anything to keep anthropogenic climate change from happening."

Parties currently opposed to keeping minimum wages low:

"Minimum wage caused unemployment has models which explain it and data which confirm the models. And is is good as a practical matter, because it is predicted to increase automation in low skill fields, which will increase yields of particular services, so unlike the opposing party, we don't need to keep the minimum wage low."

Comment author: DanArmak 24 June 2014 08:41:26PM 5 points [-]

I think there's a relevant difference here between being ignorant of actual data that you are aware exists (e.g. the color of hair), and being ignorant of the existence of alternative theories or models (e.g. possible alternative meanings of the word "color").

Comment author: Michaelos 25 June 2014 02:35:47PM 4 points [-]

That seemed to make sense to me at first, but I'm having a hard time actually finding a good dividing line to show the relevant difference, particularly since what seems like model ignorance for one question can be data ignorance for another.

For instance, here are possible statements about being ignorant about the question "What is my spouse's hair color?":

1: "I don't know your spouse's hair color."

2: "I don't know if your spouse has hair."

In this context, 1 seems like data ignorance, and 2 would seem like model ignorance.

But given a different question "Does my spouse have hair?"

2 is data ignorance, and 1 doesn't seem to be a well-phrased response.

And there appear to be multiple levels of this as well: For instance, someone might not know whether or not I have a spouse.

What is the best way to handle this? Is it to simply try to keep track of the number of assumptions you are making at any given time? That seems like it might help, since in general, models are defined by certain assumptions.

Comment author: Michaelos 24 June 2014 08:24:58PM 4 points [-]

If you can notice when you're confused, how do you notice when you're ignorant?

I think one tricky thing about this question is that there are cases where I am ALWAYS ignorant, and the question to ask instead is: is my ignorance relevant? I mean, I tried to give a short example of this with a simple question below, but ironically, I was ignorant about how many different ways you could be ignorant about something until I started trying to count them, and I'm likely still ignorant about it now.


For instance, take the question: What is my spouse's hair color?

Presumably, a good deal of people reading this are somewhat ignorant about that.

On the other hand, they probably aren't as ignorant as a blind visiting interstellar Alien, Samplix, who understands English but nothing about color, although Samplix has also been given an explanation of the hexadecimal color chart and has decided to guess that the RGB value of my spouse's hair is #66FF00.

But you could also have another blind alien, Sampliy, who wasn't even given a color chart, doesn't understand which words are colors and which aren't, and so goes to roughly the middle of a computer English/Interstellar dictionary and guesses "Mouse?"

Or another visiting Alien, Sampliz, who doesn't understand English and so responds with '%@%$^!'

And even if you know my spouse has black hair, you could get more specific than that:

For instance, a hair-analyzing computer might be able to determine that my spouse has approximately 300,000 hairs and that 99% of them happen to be the hexadecimal shade #001010, but another, more specific hair-analyzing computer might say that my spouse has 314,453 hairs, and 296,415 of them are hexadecimal shade #001010, and 10,844 of them are hexadecimal shade #001011, and...

And even if you were standing with that report from the second computer, saying "Okay, it finished its report, and I have this printout from an hour ago, so I am DEFINITELY not ignorant about your spouse's hair color."

Well, what if I told you my spouse just came back from a Hair salon?


The above list isn't exhaustive, but I think it establishes the general point. My spouse's hair color seems like the kind of question which someone could be ignorant about in fewer ways than something as confusing as consciousness, and yet... even spousal hair color is complicated.

Comment author: AlexMennen 18 June 2014 05:02:40PM 1 point [-]

Including Value X in the aggregation is easy: just include a term in the aggregated utility function that depends on the aggregation used in the future. The hard part is maximizing such an aggregated utility function. If Value X takes up enough of the utility function already, an AI maximizing the aggregation might just replace its utility function with Value X and start maximizing that. Otherwise, the AI would probably ignore Value X's preference to be the only value represented in the aggregation, since complying would cost it more utility elsewhere than it gains. There's no point to the lottery you suggest, since a lottery between two outcomes cannot have higher utility than either of the outcomes themselves. If Value X is easily satisfied by silly technicalities, the AI could build a different AI with the aggregated utility function, make sure that the other AI becomes more powerful than it is, and then replace its own utility function with Value X.
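
As a quick illustration of the lottery point (the utility numbers here are made up for the example): a lottery's expected utility is a weighted average of the outcome utilities, so it can never exceed the better outcome.

    # Minimal sketch with made-up utility numbers: a lottery's expected utility
    # is a weighted average, so it never exceeds the better of the two outcomes.
    u_aggregated = 10.0   # assumed utility of keeping the full aggregation
    u_value_x = 4.0       # assumed utility of switching entirely to Value X

    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        u_lottery = p * u_aggregated + (1 - p) * u_value_x
        assert u_lottery <= max(u_aggregated, u_value_x)
        print(f"p={p:.2f}: lottery utility = {u_lottery:.2f}")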

I don't think your Blue Cult example works very well, because for them, the preference for everyone to join the Blue Cult is an instrumental rather than terminal value.

Comment author: Michaelos 19 June 2014 05:48:37PM 1 point [-]

Thank you very much for helping me break that down!
