Posts like this reinforce the suspicion that LessWrong is a personality cult.
I disagree. This entire thread is so obviously a joke that one could only take it as evidence if one had already decided what to believe and was just looking for arguments.
It does show that EY is a popular figure around here, since nobody goes around starting Chuck Norris threads about random people, but that's hardly evidence for a cult. Hell, in the case of Norris himself, it's the opposite.
I've heard people talk about "Success Chains" - do something every day, and eventually you get a chain of successful days, and this helps pressure you to keep having successful days. Since this is generally a binary metric, it's better to have a single catastrophic failure than numerous smaller ones - it reduces the number of "breaks" in the chain and thus keeps that momentum going.
In other words, if I've failed my diet a bit, I might as well fail it severely - then my body will have tons of food, and it'll be easier to get "back on track" over the next few days.
Essentially, if your consequences don't scale correctly, then you want to cluster your failures. If you get written up for being late to work whether it's 15 minutes or 4 hours, you'd rather have one day where EVERYTHING goes wrong and you're 4 hours late than a week where you show up 15 minutes late every day. Since humans are horrible at scale, it's not surprising that even our internal consequences, guilt and the like, don't scale linearly with the size of the transgression.
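The write-up example can be turned into a toy calculation (the `write_ups` function and the numbers are mine, purely illustrative) showing how a binary per-incident penalty rewards clustering:

```python
# Toy model: the penalty is per incident, not per minute of lateness.
# Any lateness at all on a given day costs exactly one write-up.

def write_ups(late_minutes_per_day):
    """Binary penalty: count the days with any lateness at all."""
    return sum(1 for m in late_minutes_per_day if m > 0)

clustered = [240, 0, 0, 0, 0]     # one awful day, 4 hours late
spread    = [15, 15, 15, 15, 15]  # 15 minutes late every single day

print(write_ups(clustered))  # 1 write-up for 240 minutes of lateness
print(write_ups(spread))     # 5 write-ups for only 75 minutes total
```

Under a penalty that only counts incidents, the clustered week loses less despite containing more total lateness - which is the sense in which the consequences "don't scale correctly".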
If you want to get up early, and oversleep once, chances are, you'll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.
Small failures always becoming huge ones also works as a deterrent - if you know that the single cookie that bends your diet will end up with you eating the whole jar and canceling the diet altogether, you'll be much more likely to avoid even small deviations like the plague next time.
Wait, if you're regarding the elimination of war, famine, and disease as consolation prizes for creating a wFAI, what are people expecting from a sFAI?
God. Either with or without the ability to bend the currently known laws of physics.
This steelman argument doesn't address the main issue with Pascal's mugging - anyone in real life who makes you such an offer is virtually certain to be lying or deluded.
This was my argument when I first encountered the problem in the Sequences. I didn't post it here because I haven't yet figured out what this post is about (I'd have to sit down and concentrate on the notation and the author's message, and I haven't done that yet), but my first thought on reading Eliezer's claim that it's a hard problem was that as the number of potential victims increases, the chance of the claim being true decreases (until it hits a hard limit: the chance that the claimant has a machine that can produce infinitely many victims without consuming any resources). And the decrease in chance isn't just due to the improbability of a random person having a million torture victims - it also comes from the condition that a random person with a million torture victims also, for some reason, wants $5 from you.
Where is the flaw here? What makes the mugging important, despite how correct my gut reaction appears to me?
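The gut reaction above can be made concrete with a toy expected-value sketch (the base rate and the 1/n² falloff are made-up numbers, chosen only for illustration): if your credence in the mugger's claim falls off faster than the claimed harm grows, the expected harm shrinks as the claim gets more grandiose.

```python
# Toy model of the mugging: expected number of victims if you refuse,
# weighted by a credence that shrinks as the claim grows more grandiose.
# The 1e-10 base rate and the quadratic falloff are hypothetical.

def credence(n_victims):
    """Credence that a claimant with n victims is telling the truth."""
    return 1e-10 / (n_victims ** 2)

def expected_harm(n_victims):
    # Shrinks like 1/n: bigger threats carry LESS expected harm here.
    return credence(n_victims) * n_victims

for n in [10, 1_000_000, 10 ** 100]:
    print(n, expected_harm(n))
```

Whether a principled prior actually falls off this fast, rather than more slowly than the claimed numbers grow, is exactly the point in dispute.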
To be fair, I think this is more a fact about the medium (written stories) than about the characters. It's much easier to write something in which your protagonist reacts rather than being the first mover. This would not necessarily be the case when extrapolating the character outside the context (e.g. when faced with a dictatorship, a superhero may act to overthrow it).
The point is that a superhero can't take preemptive action. The author can invent a situation where a raid is possible, but for the most part, Superman must destroy the nuke after it has been launched - preemptively destroying the launch pad instead would look like an act of aggression from the hero. And going and killing the general before he orders the strike is absolutely out of the question. This is fine for a superhero, but most of us can't stop nukes in flight.
A dictatorship is different because aggression from the villain is everywhere anyway - and it's guaranteed that we will be shown at least one poor farm girl assaulted by soldiers before our hero takes action against the mastermind. Only when the villain is breaking the rules egregiously and constantly is the hero allowed to bend them a bit.
If you have a situation with both an antihero and a hero in it, the hero can be easily predicted - as opposed to the antihero, who is actually allowed to plan. Superheroes end up quite simple: the rules they obey are so strict that they can only take one course of action (their choices tend to be about whether to follow the rules at all, not between two courses of action that are both allowed). And that course of action often isn't the most effective one.
Looking back, I feel kind of disappointed with the way Akon negotiated this one. I feel like any one of the following could have really made things go better for all parties involved.
Asking the Super-Happies for "the gift of Untranslatable 2", perhaps by sharing thoughts via affectionate skin contact first, and then talking about further compromise. I just don't understand how this unwound so quickly; it seems like ensuring that all parties have equal means of negotiation and empathy would come first. Humanity might have had a much easier time understanding the pain of their children if they could feel it as well, making them more likely to see the Super-Happy point of view in a way still compatible with human values. If necessary, simply lie about the nature of the gift, or even outright state "Feel the emotions of others at the touch of a palm! Better living through Plasmids!".
Give Untranslatable 2 to the Baby-Eaters. Really, the Impossible Possible World struck gold with the Super-Happies. If the adult Baby-Eaters had the same capacity to feel the pain of others as the Super-Happies did, then The Winnowing wouldn't last very long afterward.
Request that the Humanity and Baby-Eater populations be put into some sort of stasis while negotiations take place among smaller groups to figure out what can be done. Akon could assist in whatever subterfuge might be needed.
Point out that, given Humanity's more fragmented nature (compared to the Super-Happies), Akon being an unexceptional decision-maker is not actually an advantage. He's not exactly made of stern stuff, and anyone who hears about the situation is likely to turn to whoever can come up with a better solution - an exceptional decision-maker. The Super-Happies didn't consider that exceptional can also mean better, since humans can't transfer skills via sex.
Point out that a lot of people are going to commit suicide from the offer. The Super-Happies ought to have taken the rebuke of both parties (whether explicit or not) as a sign that their method of negotiation was just wrong, instead of trying to force the method onto both.
I can definitely agree with 5, and to some extent with 3. With 4, it didn't seem to me when I read this months ago that the Superhappies would be willing to wait; it works as a part of 3 (get a competent committee together to discuss after stasis has bought time), but not by itself.
I found it interesting on my first reading that the Superhappies are modeled as a desirable future state, though I never formulated a comprehensive explanation for why Eliezer might have chosen to do that. Probably to avoid overdosing the Lovecraft. It definitely softens the blow from modifying humanity's utility function to match their own.
You definitely hit the nail on the head with 5. Finding the other guy's pain and highlighting it, as well as showing how your offer helps what they actually care about, is both a basic and a vital negotiation technique. Call me when I'm organizing the first contact mission; I might have a space diplomat seat ready for you.
Wrap it in emotions. People understand emotions.
What do you mean, specifically? "Having fun" aside, being emotional about a game is socially harmful/uncool in the same way a precommitment can be.
-Hanlon's razor - I always start from the assumption that people seek the happiness of others once their own basic needs are met, then go from there. Helps me avoid the "rich people/fanatics/foreigners/etc are trying to kill us all [because they're purely evil and nonhuman]" conspiracies.
-"What would happen if I apply x a huge number of times?" - taking things to the absurd level helps expose the trend, and it's one of my favourite heuristics. Yes, it ignores the middle of the function, but more often than not, the value at x->infinity is all that matters. And when it isn't, the middle tends to be obvious anyway.
When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize - which I consider a rationalistic sin, not a virtue.
Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example "the virtue of determinism"?
It seems to scale with willpower: for some people, "a single small failure once per month" is going to be an impossible goal, but "multiple small failures OR one big failure" is an option. If and only if one is dealing with THAT choice, a single big failure seems to do a lot less damage to motivation.
If you've got different anecdotes then I think we'll just have to agree to disagree. If you've got studies saying I'm wrong, I'm happy to accept that I'm wrong - I know it worked, since I used this to help fix my spouse's sleep cycle, but that doesn't mean it worked for the reasons I think. :)
I agree - you can get over some slip-ups, depending on how easy the thing you're attempting is relative to your motivation.
As you said, it's a chain - the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you're trying, a hard reset is sensible because it saves time on an already doomed attempt AND makes the next one easier (due to the deterrent effect).