
Comment author: Squark 29 July 2014 07:55:30PM 0 points [-]

I don't understand what you mean by "utility bound". A bounded utility function is just a function which takes values in a finite interval.

Comment author: Michaelos 30 July 2014 12:34:48PM 0 points [-]

Let me try rephrasing this a bit.

What if, depending on other circumstances (say, the flip of a fair coin), your utility function can take values in either a finite (if heads) or an infinite (if tails) interval?

Would that entire situation be bounded, unbounded, neither, or is my previous question ill posed?

Comment author: PeerGynt 28 July 2014 09:29:21PM 7 points [-]

I'm fairly sure this comment was not exactly intended as a compliment, but I can think of worse insults than having my writing put in the same category as Nick Bostrom. As the author of the first of these parables, even I recognize that these two stories differ very significantly in quality.

The Blue and Green Martians parable was an attempt to discuss a question of ethics that is important to many members of this community, and which it is almost impossible to discuss elsewhere. The decision to use an analogy was an attempt to minimize mindkill. This did not succeed. However, I am fairly sure that if I had chosen not to use an analogy, the resulting flamewar would have been immense. This probably means that there are certain topics we just can't discuss, which feels distinctly suboptimal, but I'm not sure I have a better solution.

Comment author: Michaelos 29 July 2014 03:20:07PM 0 points [-]

What would you think of the following solution?

Announce: 'I would like to have conversations about the controversial topic of Pick-up Artistry. Because talking about it publicly can result in problems, if you want to talk with me about that topic, please send me a message stating your position on the topic.'

By keeping the invitation open like that and not stating your own position, it seems about as resistant to mindkill as you could get.

The downside is that private conversations don't have as many bounce effects. For instance, in the previously mentioned thread, Viliam_Bur essentially created a post which I don't think would ever be paralleled in a series of private conversations.

(Viliam_Bur's post for reference: http://lesswrong.com/r/discussion/lw/klx/ethics_in_a_feedback_loop_a_parable/b5oz)

Comment author: Squark 29 July 2014 09:40:36AM 1 point [-]

I'm afraid that this kind of reasoning cannot avoid the real underlying problem, namely that Solomonoff expectation values of unbounded utility functions tend to diverge, since utility grows as BB(n) while probability falls only as 2^{-n}.
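
To make the divergence explicit, here is a minimal sketch (my own notation; BB is the Busy Beaver function, and I'm using the standard 2^{-n} weighting for a hypothesis of description length n):

```latex
% A hypothesis of description length n gets prior weight on the order of 2^{-n},
% while an unbounded utility can grow as fast as the Busy Beaver function BB(n),
% so the expectation is bounded below by a divergent series:
\mathbb{E}[U] \;\gtrsim\; \sum_{n} 2^{-n}\, \mathrm{BB}(n) = \infty,
\qquad \text{since } \mathrm{BB}(n) \text{ eventually outgrows } 2^{n}.
```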

Comment author: Michaelos 29 July 2014 11:43:41AM *  0 points [-]

What if the utility function is bounded, but the bound itself is expandable without limit in at least some cases?

For instance, take a hypothetical utility function, Coinflipper bot.

Coinflipper bot has utility equal to the number of fair coins it has flipped.

Coinflipper bot has a utility bound equal to 2^(greatest number of consecutive heads on fair coins it has flipped + 1).

For instance, a particular example of Coinflipper bot might have flipped 512 fair coins, and its current record is 10 consecutive heads on fair coins, so its utility is 512 and its utility bound is 2^(10+1), or 2048.

On the other hand, a different instance of Coinflipper bot might have flipped 2 fair coins and gotten 2 tails, so it has a utility of 2 and a utility bound of 2^(0+1) = 2.
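
To make the example concrete, here is a minimal Python sketch of Coinflipper bot as described above (the class and method names are just my own illustration; the utility and bound rules are the ones stated here):

```python
import random

class CoinflipperBot:
    """Utility = number of fair coins flipped so far.
    Utility bound = 2^(longest run of consecutive heads so far + 1)."""

    def __init__(self):
        self.flips = 0
        self.current_streak = 0
        self.best_streak = 0

    def flip(self):
        self.flips += 1
        if random.random() < 0.5:  # heads
            self.current_streak += 1
            self.best_streak = max(self.best_streak, self.current_streak)
        else:  # tails
            self.current_streak = 0

    @property
    def utility(self):
        return self.flips

    @property
    def utility_bound(self):
        return 2 ** (self.best_streak + 1)

# The 512-flip example from above: a longest run of 10 heads gives utility 512
# and bound 2^(10+1) = 2048.  The 2-flip, 2-tails example gives utility 2 and
# bound 2^(0+1) = 2, so nothing forces the bound to stay above the utility.
bot = CoinflipperBot()
for _ in range(512):
    bot.flip()
print(bot.utility, bot.utility_bound)
```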

How would the math work out in that kind of situation?

Comment author: Squark 28 July 2014 06:49:03PM 3 points [-]

Just a sidenote, but IMO the solution to Pascal's mugging is simply using a bounded utility function. I don't understand why people insist on unboundedness.

Comment author: Michaelos 28 July 2014 08:48:08PM 1 point [-]

"I don't understand why people insist on unboundedness."

Possibly because there do appear to be potential solutions to Pascal's mugging that do not require bounding your utility function.

Example:

Suppose I claim that, in general, I find it reasonable to submit and give a Pascal's mugger the money, and that I am currently being Pascal's mugged and considering giving the mugger money.

I also consider: what is the chance that a future mugger will present a Pascal's mugging with a higher level of super-exponentiation, and that I won't be able to pay?

And I claim that the answer appears to be: terribly unlikely, but considering the risks of failing at a higher level of super-exponentiation, likely enough that I shouldn't submit to the current Pascal's mugger.

Except, that's ALSO true for the next Pascal's mugger.

So despite believing in Pascal's mugging, I act exactly as if I don't, and my claim that I 'believe' in Pascal's mugging doesn't actually pay rent (for the muggers).

End of Example.
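
(For concreteness, here is a toy version of that comparison, with purely made-up stand-in magnitudes; nothing below is from the original thought experiment, it just shows the ordering.)

```python
# Illustrative-only magnitudes: the point is the ordering, not the numbers.
disutility_threatened_now = 10 ** 10     # what the current mugger threatens
disutility_threatened_later = 10 ** 100  # what a future mugger with a "higher
                                         # level of super-exponentiation" threatens
p_bigger_future_mugging = 10 ** -20      # "terribly unlikely"

# Expected loss from paying now and therefore being unable to pay the
# hypothetical bigger future mugger:
expected_loss_if_i_pay_now = p_bigger_future_mugging * disutility_threatened_later

# Even at that tiny probability, the later threat dominates the current one,
# so I hold on to my money -- and the same comparison recurs for every mugger.
print(expected_loss_if_i_pay_now > disutility_threatened_now)  # True
```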

Since there exist examples like this and others that appear to solve Pascal's mugging without requiring a bounded utility function, a lot of people wouldn't want to accept a utility bound just because of the mugging.

Comment author: Michaelos 22 July 2014 04:02:05PM *  2 points [-]

Do other people display ambiguity aversion in all cases, or only when there are personal resources at stake?

Example. You've just found a discarded ticket to play two draws of the charity lottery! Here's how it works.

There are 90 balls: 30 red, and 60 either black or yellow in some unknown distribution. You may choose either:

1a) I pay Givewell's top charity $100 if you draw a red ball.

1b) I pay Givewell's top charity $100 if you draw a black ball.

And then on the subsequent draw we go to an entirely different urn, which may have an entirely different distribution of yellow and black balls (although still 30 red, 60 either black or yellow), and then either:

2a) I pay Givewell's top charity $100 if you draw a red or yellow ball.

2b) I pay Givewell's top charity $100 if you draw a black or yellow ball.

For some reason, the buttons are set to 1b and 2a. You can switch options to minimize ambiguity, at no monetary cost, by pressing the buttons to toggle to the other option. It's perhaps a barely noticeable expenditure of calories, so the cost seems trivial: not even a penny.

1: Do you switch to the less ambiguous option at a trivial cost?

2: Instead of pressing the buttons yourself, would you pay 1 penny to the person running the charity lottery to do so in either case?

3: Would you be willing to pay 1 penny to switch in either case if the person running the charity lottery also gave you two pennies beforehand? You get to keep them if you don't use them.

When considering the previous situation, I don't feel particularly ambiguity averse at all, and I don't really feel the need to make any changes to the settings. But maybe other people do, so I thought I should check. And maybe it is weird of me to not feel ambiguity aversion about this, and I should check that as well.
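
One way to see why this setup might not trigger much ambiguity aversion: under any prior that is symmetric over the black/yellow split (say, uniform over the number of black balls), every pair of options has the same expected payout, so switching buys nothing in expectation. Here is a minimal sketch of that calculation (my own illustration; the ball counts and the $100 payout are the ones above):

```python
# Ball counts from above: 90 balls, 30 red, 60 split between black and yellow
# in an unknown way.  Assume a uniform prior over that split.
payout = 100           # dollars to GiveWell's top charity
n_balls = 90
n_red = 30
splits = range(0, 61)  # possible counts of black balls (yellow = 60 - black)

def expected_payout(winning_balls):
    """winning_balls(black_count) -> number of balls that win the payout."""
    return payout * sum(winning_balls(b) for b in splits) / (len(splits) * n_balls)

ev_1a = expected_payout(lambda b: n_red)             # red wins
ev_1b = expected_payout(lambda b: b)                 # black wins
ev_2a = expected_payout(lambda b: n_red + (60 - b))  # red or yellow wins
ev_2b = expected_payout(lambda b: 60)                # black or yellow wins

print(ev_1a, ev_1b)  # both ~33.33: switching 1b -> 1a gains nothing in expectation
print(ev_2a, ev_2b)  # both ~66.67: switching 2a -> 2b gains nothing in expectation
```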

Edit: Formatting and Grammar.

Comment author: Michaelos 21 July 2014 02:12:07PM 1 point [-]

Hmm. The results appear quite different if you allow communication and repeated plays. They also introduce something which seems slightly different from the Trembling Hand (perhaps a Trembling Memory?).

With communication and repeated plays:

Assume all potential Player 1s credibly precommit to flip a fair coin and, based on the toss, pick B half the time and C half the time.

All potential Player 2s would know this and, assuming they expect Player 1 to almost always follow the precommitment, would pick Y, because that maximizes their expected payout. (A 50% chance of 2 means an expected payout of 1, compared to picking X, where a 50% chance of 1 means an expected payout of 1/2.)

All potential Player 1s, following that precommitment universally and with Player 2 always picking Y, will get 2 half the time and 6 half the time, for an expected per-game payout of 4.

This seems better for everyone (by about one point per game) than Player 1 only choosing A.
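
A quick check of that arithmetic, using only the numbers quoted in this comment (the full payoff matrix lives in the parent post and isn't reproduced here; the value of always playing A is inferred from the "one point per game" remark):

```python
# Player 2's expected payouts against the 50/50 B/C precommitment,
# per the figures above: Y pays 2 half the time, X pays 1 half the time.
ev_p2_Y = 0.5 * 2            # 1.0
ev_p2_X = 0.5 * 1            # 0.5

# Player 1's expected payout when following the precommitment with
# Player 2 always picking Y, versus always playing A (taken as ~3).
ev_p1_precommit = 0.5 * 2 + 0.5 * 6  # 4.0
ev_p1_always_A = 3

print(ev_p2_Y > ev_p2_X)                 # True: Y maximizes Player 2's payout
print(ev_p1_precommit - ev_p1_always_A)  # 1.0: the "one point per game" gain
```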

With Trembling Memory:

Assume further that some of the time, Player 2's memory trembles and he forgets about the precommitments and the fact that this game is played repeatedly.

So if Player 2 suspects, for instance, that they MIGHT be in the case above but have forgotten important facts (they are incorrect about this being a one-off game and are incorrectly assessing the state of common knowledge, but they are correctly assessing the current payoff structure of this particular game), then following those suspicions it would still make sense for them to choose Y, and it would also explain why Player 1 chose something that wasn't A.

However, nothing would seem to prevent Player 2 from suspecting other possibilities. (For instance, under the assumption that Player 1 hits Player 2 with amnesia dust before every game, knowing that Player 2 will be forced into a memory tremble and will believe the above, Player 1 could play C every time, with Player 2 drawing predictably incorrect conclusions and playing Y at no benefit.)

I'm not sure how to model a situation with trembling memory, though, so I would not be surprised if I was missing something.

Comment author: Michaelos 11 July 2014 01:39:27PM 3 points [-]

I want to thank you for posting that link to Gwern's accumulated material. I was going to make a comment about the estimated adoption speeds, but prior to that I started reading Gwern's material and found the information I was going to use to construct my comment was out of date.

Comment author: Michaelos 08 July 2014 04:01:54PM 0 points [-]

Should you consider 'not begging someone for something' to be denying them agency?

I mean, presumably, you would prefer that your five-year-old not ask you for too much chocolate. But what if they say, "But PARENT, if I DIDN'T beg you for too much chocolate, I would be denying you your agency!"

Because I have begged for things, not begged for things, and been begged from, in all sorts of circumstances. Possibly too many to list.

Comment author: James_Miller 07 July 2014 03:46:46PM 5 points [-]

The great filter argument and Fermi's paradox take into account the speed of light, and the size and age of the galaxy. Both figure that there has been plenty of time for aliens to colonize the galaxy even if they traveled at, say, 1% of the speed of light. If our galaxy were much younger or the space between star systems much bigger there would not be a Fermi paradox and we wouldn't need fear the great filter.

To directly answer the question of your second sentence, yes but only by a very small amount.

Comment author: Michaelos 07 July 2014 06:34:31PM 0 points [-]

I think that reading this and thinking it over helped me figure out a confusing math error I was making. Thank you!

Normally, to calculate the odds of a false negative, I would need the test accuracy, but I would also need the base rate.

I.e., if a test for the presence or absence of colonization is 99% accurate, evidence of colonization is present at 1% of stars, and my test is negative, then I can compute the odds of a false negative.
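
For concreteness, here is that computation sketched out with the numbers above (treating "99% accurate" as both 99% sensitivity and 99% specificity, which is an assumption on my part):

```latex
% P(colonized | negative test), by Bayes' theorem, with a 1% base rate:
P(C \mid \neg T)
  = \frac{P(\neg T \mid C)\,P(C)}{P(\neg T \mid C)\,P(C) + P(\neg T \mid \neg C)\,P(\neg C)}
  = \frac{0.01 \times 0.01}{0.01 \times 0.01 + 0.99 \times 0.99}
  \approx 1.0 \times 10^{-4}.
```

That is, roughly a 1-in-10,000 chance that a negative result is hiding actual colonization at that base rate.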

However, in this case, I was attempting to determine "Given that our tests aren't perfectly accurate, what if the base rate of colonization isn't 0%?" and while that may be a valid question, I was using the wrong math to work on it, and it was leading me to conclusions that didn't make a shred of sense.

Comment author: Michaelos 07 July 2014 03:15:42PM 0 points [-]

I have a question. Does the probability that the colonization of the universe with light speed probes has occurred, but only in areas where we would not have had enough time to notice it yet, affect the Great Filter argument?

For instance, assume the closest universal colonization to us with near-light-speed probes started 100 light years away in distance, 50 years ago in time. When we look at the star where colonization started, we wouldn't see evidence of near-light-speed colonization yet, because we're seeing light from 100 years ago, before they started.

I think a simpler way of putting this might be "What is the probability our tests for colonial explosion are giving a false negative? If that probability was high, would it affect the Great Filter Argument?"
