
Comment author: Benja 23 September 2012 10:46:05PM *  5 points [-]

If all the coins are quantum-mechanical, you should never quit; nor should you quit if all the coins are logical (digits of pi). If the first coin is logical ("what laws of physics are true?", in the LHC dilemma), the following coins are quantum, and your utility is linear in squared amplitude of survival, then again you should never quit. However, if your utility is logarithmic in squared amplitude (i.e., dying in half of your remaining branches seems equally bad no matter how many branches you have remaining), then you should quit if your first throw comes up heads.
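
One way to see the contrast between the two utility functions (a small illustration added here, not from the original comment): write m for the squared amplitude of the branches in which you survive. Then

```
U_linear(m) = m       =>  U_linear(m) - U_linear(m/2) = m/2     (shrinks as m shrinks)
U_log(m)    = log(m)  =>  U_log(m)    - U_log(m/2)    = log(2)  (constant)
```

Under the linear utility the cost of losing half your remaining measure shrinks in proportion to how much measure you have left, while under the log utility every halving costs the same fixed amount - which is the "equally bad no matter how many branches you have remaining" intuition above.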

Comment author: Brilliand 05 February 2016 04:08:01AM *  0 points [-]

I'm not getting the same result... let's see if I have this right.

If you quit if the first coin is heads: 50%*75% death rate from quitting on heads, 50%*50% death rate from tails

If you never quit: 50% death rate from eventually getting tails (minus epsilon from branches where you never get tails)

These death rates are fixed numbers rather than distributions, so switching to a logarithm isn't going to change which of them is larger.

I don't think the formula you link to is appropriate for this problem... it's dominated by the log(2^-n) factor, which fails to account for 50% of your possible branches being immune to death by tails. Similarly, your term for quitting damage fails to account for some of your branches already being dead when you quit. I propose this formula as more applicable.
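
A quick numeric check of the two death rates above (a sketch in Python; the 75% quitting penalty and the 50% death-on-tails figure are taken from this comment, not derived from the original problem):

```python
import math

p_heads = 0.5

# Strategy 1: quit if the first coin comes up heads.
death_quit = p_heads * 0.75 + (1 - p_heads) * 0.5   # = 0.625

# Strategy 2: never quit; eventually tails kills 50% of branches
# (minus epsilon for the all-heads branches, ignored here).
death_never_quit = 0.5

survive_quit = 1 - death_quit          # 0.375
survive_never = 1 - death_never_quit   # 0.5

# Each strategy yields a single fixed survival fraction, so any monotonic
# utility function (linear, logarithmic, ...) preserves the ordering.
assert (survive_never > survive_quit) == \
       (math.log(survive_never) > math.log(survive_quit))

print(f"quit-on-heads death rate: {death_quit:.3f}")
print(f"never-quit death rate:    {death_never_quit:.3f}")
```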

Comment author: Pentashagon 29 June 2012 11:42:55PM -1 points [-]

I'll guess that in your analysis, the base case of D and E's game being a tie vote on a (D=100, E=0) split results in a (C=0, D=0, E=100) split for three pirates, since E can blackmail C into giving up all the coins in exchange for staying alive? D may vote arbitrarily on a (C=0, D=100, E=0) split, so C must consider E to have the deciding vote.

If so, that means four pirates would yield (B=0, C=100, D=0, E=0) or (B=0, C=0, D=100, E=0) in a tie. E expects 100 coins in the three-pirate game and so wouldn't be a safe choice of blackmailer, but C and D expect zero coins in a three-pirate game, so B could choose between them arbitrarily. B can't give fewer than 100 coins to either C or D because they will punish that behavior with a deciding vote for death, and B knows this. It's potentially unintuitive for C because C's expected value in a three-pirate game is 0, but if C commits to voting against B for anything less than 100 coins, and B knows this, then B is forced to give either 0 or 100 coins to C. The remaining coins must go to D.

In the case of five pirates, C and D expect more than zero coins on average if A dies, because B may choose arbitrarily between C or D as blackmailer. B and E expect zero coins from the four-pirate game. A must maximize the chance that two or more pirates will vote for A's split. C and D have an expected value of 50 coins from the four-pirate game if they assume B will choose randomly, so an (A=0, B=0, C=50, D=50, E=0) split is no better than B's expected offer for C and D, and any fewer than 50 coins for C or D will certainly make them vote against A. I think A should offer (A=0, B=n, C=0, D=0, E=100-n) where n is mutually acceptable to B and E.

Because B and E have no relative advantage in a four-pirate game (both expect zero coins), they don't have leverage against each other in the five-pirate game. If B had a non-zero probability of being killed in a four-pirate game, then A should offer E more coins than B at a ratio corresponding to that risk. As it is, I think B and E would accept a fair split of n=50, but I may be overlooking some potential for E to blackmail B.

Comment author: Brilliand 30 September 2015 10:08:02PM 0 points [-]

In every case of the pirates game, the decision-maker assigns one coin to every pirate an even number of steps away from himself, and the rest of the coins to himself (with more gold than pirates, anyway; things can get weird with large numbers of pirates). See the Wikipedia article Kawoomba linked to for an explanation of why.
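
For reference, here is a minimal backward-induction sketch (in Python) of the standard version of the game described in the Wikipedia article - not the blackmail variant analyzed above. The function name and the assumption that there is plenty of gold relative to the number of pirates are mine:

```python
def pirate_splits(num_pirates, coins=100):
    """Standard pirate game: the senior pirate proposes a split; it passes
    with at least half the votes (the proposer votes for himself, ties pass);
    a rejected proposer is thrown overboard. Pirates value survival first,
    then gold, and otherwise prefer to see others thrown overboard.
    Assumes there is enough gold to bribe everyone who needs bribing."""
    split = [coins]  # base case: a lone pirate keeps everything
    for n in range(2, num_pirates + 1):
        # 'split' holds what each of the n-1 junior pirates would receive
        # if the current proposer were thrown overboard.
        votes_needed = (n + 1) // 2       # ceil(n/2), proposer included
        bribes_needed = votes_needed - 1  # the proposer's own vote is free
        # Cheapest votes to buy: the pirates who would get the least in the
        # next round; one coin more than that secures their vote (equal gold
        # is not enough, because of the bloodthirsty tie-break).
        order = sorted(range(n - 1), key=lambda i: split[i])
        new_split = [0] * (n - 1)
        spent = 0
        for i in order[:bribes_needed]:
            new_split[i] = split[i] + 1
            spent += new_split[i]
        split = [coins - spent] + new_split
    return split

print(pirate_splits(5))  # [98, 0, 1, 0, 1]: one coin to each pirate an even
                         # number of steps away, the rest to the proposer
```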

Comment author: Lumifer 16 September 2015 12:31:34AM 1 point [-]

"being downvoted to -4 makes it impossible to reply to those who replied to me"

It's quite possible, only requiring payment in your own karma points. If you're karma-broke, well....

Comment author: Brilliand 28 September 2015 07:25:41PM 2 points [-]

Seeing as how what I was saying was basically "let the poor starve", this ending seems strangely appropriate.

Comment author: TheAncientGeek 18 September 2015 12:40:29AM -1 points [-]

If you think AI researchers won't cooperate on friendly AI, then FAI is doomed. If people are going to cooperate, they can agree on restricting AI to oracles as well as any other measure.

Comment author: Brilliand 23 September 2015 05:52:22PM 0 points [-]

I'm trying to interpret this in a way that makes it true, but I can't make "AI researchers" a well-defined set in that case. There are plenty of people working on AI who aren't capable of creating a strong AI, but it's hard to know in advance exactly which few researchers are the exception.

I don't think we know yet which people will need to cooperate for FAI to succeed.

Comment author: taryneast 10 September 2015 12:41:56AM *  2 points [-]

"whether they're worth the cost of keeping alive." and this highlights the differences in our views.

our point of difference is in this whole basis of using practical "worth" as The way of deciding whether or not a person should live/die.

I can understand trying to minimise the birth of new people who are net-negative contributors to the world... but from my perspective, once they are born, it's worth putting some effort into supporting them.

Why? Because it's not their fault they were born the way they are, and they should not be punished for it. They need help to get along.

Sometimes the situation that put them in their needy state occurred after they were born - and again, it is still not their fault.

Another example to point out why I feel your view is unfair to people: imagine somebody who has worked all their life in an industry that has given amazing amounts of benefit to the world... but has only just now become obsolete. That person is now unemployed and, due to being near retirement age, unemployable. It's an industry in which they were never really paid very well, and their savings don't add up to enough to cover their ongoing living costs for very long.

Eventually, there will come a time when the savings run out and this person dies of starvation without our help.

I consider this not to be a fair situation, and I'd rather my tax dollars went to helping this person live a bit longer than to the next unnecessary war (drummed up to keep the current pollies in power).

Comment author: Brilliand 15 September 2015 09:50:10PM 0 points [-]

I've just made the unpleasant discovery that being downvoted to -4 makes it impossible to reply to those who replied to me (or to edit my comment). I'll state for the record that I disagree with that policy... and proceed to shut up.

Comment author: taryneast 07 September 2015 05:46:24AM *  2 points [-]

"sentient minds are remarkably easy to create"

I'm not sure I agree with this. It takes quite a lot of resources (time, energy, etc.) to create sentient minds at present... certainly to bring them to any reasonable state of maturity. After which, the people who put that time and effort in quite often get very attached to that new sentient mind - even if that mind is not a net-productive citizen.

The strategy you choose to follow in dividing up resources among sentient minds may be based on what you perceive to be their net productivity... and maybe you feel a strong need to push your ideas on others as "oughts" that you think they should follow (e.g. that people ought to earn every resource themselves)... but it's pretty clear that other people are following other strategies than your preferred one.

As a counter-example, a very large number of people (not including myself here) follow that old adage of "from each according to his abilities, to each according to his needs", which is just about the exact opposite of your own.

Comment author: Brilliand 09 September 2015 11:02:04PM 0 points [-]

[I've written two different responses to your comment. This one is more true to my state of mind when I wrote the comment you replied to.]

Consider this: a man gets a woman pregnant, the man leaves. The woman carries the child to birth, hands it over to an adoption agency. Raising the child to maturity is now someone else's problem, but it has those parents' genes. I do not want this to be a viable strategy. If some people choose this strategy, that only makes it more important to stop letting them cheat.

Comment author: Brilliand 09 September 2015 11:01:57PM *  0 points [-]

It's a lot of resources from the perspective of a single person, but I was thinking at a slightly larger scale. By "easy", I mean that manageable groups of people can do it repeatedly and be confident of success. Really, the fact that sentient minds can be valued in terms of resources at all is sufficient for my argument. (That value can then be ignored when assessing productivity, as it's a sunk cost.)

You seem to be looking in the wrong place with your "that people ought to earn every resource themselves" example - my opinion is that the people who have resources should not give those resources to people who won't make good use of them. That the people who lack resources will then have to earn them if they're to survive is an unavoidable consequence of that (and is my real goal here), but those aren't the people that I think ought to be changing things.

As for what strategies people actually follow, I think most people do what I'm saying they should do, on an individual level. Most people protect their resources, and share them only with those who they expect to be able to return the favor. On the group level, though, people lose track of how much things actually cost, and support things like welfare that help people regardless of whether they're worth the cost of keeping alive.

Comment author: [deleted] 03 November 2012 04:18:08PM -1 points [-]

So what you're saying is that the only reason this problem is a problem is that it hasn't been defined narrowly enough. You don't know what Omega is capable of, so you don't know which choice to make. So there is no way to logically solve the problem (with the goal of maximizing utility) without additional information.

Here's what I'd do: I'd pick up B, open it, and take A iff I found it empty. That way, Omega's decision of what to put in the box would have to incorporate the variable of what Omega put in the box, causing an infinite regress that will use all CPU cycles until the process is terminated. Although that'll probably result in the AI picking an easier victim to torment and not even giving me a measly thousand dollars.

Comment author: Brilliand 03 September 2015 09:36:48PM *  0 points [-]

If you look in box B before deciding whether to choose box A, then you can force Omega to be wrong. That sounds like so much fun that I might choose it over the $1000.

In response to Semantic Stopsigns
Comment author: Richard_Kulisz 22 November 2007 12:48:50AM 2 points [-]

Before the Big Bang is beyond the universe. Beyond the universe are other laws of physics. Which laws? All self-consistent laws. What are sets of laws of physics? They're mathematics. What is mathematics? Arbitrary symbol manipulation. And there you've reached a final stopping point. Because it isn't even intelligible to ask why there are symbols or why there is mathematical existence. They are meta-axiomatic, and there is nothing beyond or beneath them. More importantly, there is no meta-level above them because they are their own meta-level.

Comment author: Brilliand 01 September 2015 05:56:50PM 0 points [-]

This looks like equivocation between the math-like structure of the universe and mathematics itself - mathematics proper is something invented by humans, which happens to resemble the structure of the universe. Whatever is outside the universe is unknown, but probably can be discovered with considerable difficulty (and will probably be describable by mathematics, but will not be mathematics itself).

Comment author: wedrifid 15 January 2013 04:38:02PM 2 points [-]

"Though, formalizing this intuition is murder. Literally."

No, murder requires that you kill someone (there are extra moral judgements necessary, but the killing is rather unambiguous).

Comment author: Brilliand 28 August 2015 05:26:02PM 0 points [-]

I read that quote as saying "if you formalize this intuition, you wind up with the definition of murder". While not entirely true, that statement does meet the "kill" requirement.
