Comment author: AlexanderRM 05 October 2015 01:46:36AM 0 points [-]

Just want to mention, regarding #8: after a year and a half of reading LW and the like, I still haven't accomplished this one. Admittedly this is more of a willpower challenge (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone to agree that it's important would get them past the "huh, that's interesting" stage of what you term "philosophical garbage", but it's still hard.

Granted, I should mention that I at least hope LW material will affect how I act once I graduate college, get a job, and start earning money beyond what I need to survive. I was already convinced that I ought to donate as much as possible to various causes, but LW has probably affected which causes I'll choose.

Comment author: DanielH 11 November 2013 11:14:08AM 0 points [-]

Be sufficiently averse to the fire department and see if that suggests anything.

I do believe it suggests libertarianism. But I can't be sure, as I can't simply "be sufficiently averse" any more than I can force myself to believe something.

Still, that one seems to be a fairly reasonable sentence. If I were to learn only that one of these had been used in an LW article (by coincidence, not by a direct causal link), I would guess it was either that one or "I won't socially kill you".

Comment author: AlexanderRM 02 October 2015 11:17:46PM 0 points [-]

I would be amazed if Scott Alexander has not used "I won't socially kill you" at some point. Certainly he's used some phrase along the lines of "people who won't socially kill me".

...and in fact, I checked, and the original article uses it with basically the meaning I would have expected: "knowing that even if you make a mistake, it won't socially kill you." That particular phrase was pretty much lifted, just with the object changed.

In response to comment by DanielLC on Sympathetic Minds
Comment author: JulianMorrison 12 February 2013 10:46:11PM 0 points [-]

Now learn the Portia trick, and don't be so sure that you can judge power in a mind that doesn't share our evolutionary history.

Also watch the Alien movies, because those aren't bad models of what a maximizer would be like if it were somewhere between animalistic and nearly human. Xenomorphs are basically xenomorph-maximizers. In the fourth movie, the scientists try to cut a deal. The xenomorph queen plays along - until she doesn't. She's always, always plotting. Not evil, just purposeful, with purposes that are inimical to ours. (I know, generalizing from fictional evidence - this isn't evidence, it's a model to give you an emotional grasp.)

Comment author: AlexanderRM 02 October 2015 10:54:43PM 0 points [-]

The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human, a human tribe, or a human state would, although I think part of what constrained that in human evolution was 1. the difficulty of constantly checking whether it's worth it to betray your allies, and 2. the riskiness of trying when you're only barely past the point where you think it's worth it. There are also the other humans/other nations around, which might or might not have an analogue in interstellar politics.

...although I've just reminded myself that this discussion is largely pointless anyway, since the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The closest things we have a significant chance of encountering are a first-strike-wins situation or a MAD situation (which I define as "a first strike would win, but the other side can see it coming and retaliate"), both of which change the dynamics drastically. (I suppose the reasoning is valid in first-strike-wins, except that in that situation the other side will never tell you their opinion on morality, and you're unlikely to know with certainty that the other side is an optimizer without them telling you.)

Comment author: kevin_p 09 January 2014 02:22:05PM 20 points [-]

It seems to be known under the name of the equal treatment fallacy in various blogs and articles, although none of them are from particularly respectable sources. Other examples are the right of homosexuals to marry a member of the opposite gender, the right of Soviet citizens to criticize the president of the USA, and Anatole France's famous statement that "in its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets, and steal loaves of bread".

Comment author: AlexanderRM 23 September 2015 01:49:00AM 0 points [-]

It seems like the Linux user example (and possibly the Soviet citizen example, but I'm not sure) is in a broader category than the equal treatment fallacy, because homosexuality and poverty are things one can't change (or at least, that's the assumption on which criticism of the equal treatment fallacy is based).

Although I suppose my interpretation may have been different from the intended one: I read it as "the OSX user has the freedom to switch to Linux and modify the source code of Linux", i.e. both the Linux user and the OSX user have the choice of either OS. Obviously the freedom to modify Linux while continuing to use OSX would be the equal treatment fallacy.

Comment author: Yvain2 21 September 2008 01:31:35AM 1 point [-]

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but I think the general point still holds.

Comment author: AlexanderRM 07 September 2015 07:30:17PM 0 points [-]

Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's cat, the branches would already have split before the attack happened.

Comment author: Greg2 20 September 2008 10:39:25PM 7 points [-]

Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?

Comment author: AlexanderRM 07 September 2015 07:24:16PM 1 point [-]

I wouldn't describe a result that eliminates the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective form of quantum suicide (two species that want the same resources meet, flip a coin, and the loser kills themselves; enforcement might be a problem) if every species invariably experiments with them before leaving its home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theory, that seems to indicate that applying it isn't optimal (if your goal is to maximize the number of world-branches with good outcomes). But then I realized that humans can observe other humans and what sorts of things tend to kill them, and hear about those things from other humans while growing up, so we almost never have close calls with death frequently enough to need the anthropic principle. If a human were exploring an unknown environment with unknown dangers by themselves, and tried to take the anthropic principle into account... that would be pretty terrifying.
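(A toy simulation of that last point; the hazard probability and counts are made up for illustration, not taken from the post. The idea: a lone explorer's own record contains zero deaths by construction, no matter how lethal the hazard, which is exactly why they'd need anthropic reasoning, while a society that watches other explorers sees the true toll.)

    import random

    # Illustrative assumption: each hazard encounter kills with probability
    # P_DEATH = 0.2 (a made-up value), over ENCOUNTERS encounters.
    random.seed(0)
    P_DEATH = 0.2
    ENCOUNTERS = 10
    EXPLORERS = 100_000

    deaths = 0
    for _ in range(EXPLORERS):
        if any(random.random() < P_DEATH for _ in range(ENCOUNTERS)):
            deaths += 1

    # Outside observers (or a society sharing stories) see the true danger...
    print(f"fraction of explorers killed: {deaths / EXPLORERS:.2%}")
    # ...but every survivor's personal history contains zero deaths,
    # regardless of how large P_DEATH actually is.
    print("deaths in any individual survivor's own record: 0")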

Comment author: [deleted] 01 January 2012 07:56:03PM 1 point [-]

You mean the North Paw?

In response to comment by [deleted] on Insufficiently Awesome
Comment author: AlexanderRM 03 September 2015 03:40:16PM 0 points [-]

I'd be interested to hear from other LessWrongians whether anyone has bought this and whether it lives up to the description (and also whether this model produces a faint noise constantly audible to others nearby, like the test belt did); I'm the sort of person who measures everything in dead African children, so I'm a bit hesitant about the $149 even if it is exactly as awesome as the article implied.

On the other hand, the "glasses that turn everything upside down" interest me somewhat; my perspective on that is rather odd: I'm wondering how they would interact with my mental maps of places, specifically because I'm a massive geography buff and have an absurdly detailed mental map of the whole world, which I've noticed has a fixed north=up orientation. Obviously those glasses probably won't shake the built-in direction (if I just get used to them), but I'd still be interested to see what they do.

In response to comment by TimS on Dead Child Currency
Comment author: RomeoStevens 15 January 2012 12:30:56AM 3 points [-]

The depressing reality is that the child in "The Emperor's New Clothes" would have been lynched.

Comment author: AlexanderRM 03 September 2015 01:23:59AM 0 points [-]

The specific story described is perfectly plausible, because it involves political rather than social pressure, and (given the technology level and the like) the emperor's guards can't kill everybody in the crowd, so once everyone starts laughing they're safe. As a metaphor for social pressure, however, it is overly optimistic by a long shot.

Comment author: TimS 10 January 2012 03:37:30AM *  3 points [-]

I'm not sure if the dynamic I was referencing has a specific description. But it is the case that in ordinary society, X can be true, everyone can know X is true, and someone declaring X is true will receive negative feedback. Cognitive-behavioral therapy might call it a part of the avoidance dynamic.

All I'm really trying to say is that college students who lack self-reflection can be giant pains for college professors. And someone upset by being singled out (per your example) has a reasonable justification for the emotional reaction, which the idiot the link discusses definitely does not.

In response to comment by TimS on Dead Child Currency
Comment author: AlexanderRM 03 September 2015 01:21:36AM 0 points [-]

I would really like to know the name for that dynamic if it has one, because that's very useful.

Comment author: wedrifid 10 January 2012 02:22:32AM 0 points [-]

Highly unlikely. A random unknown person, probably outside of your jurisdiction, "will die"? We don't even have reason to believe that the button press on the magical box causes the death, rather than being associated with it via a Newcomblike prediction. This is a test of ethics, not much of a legal problem at all.

Comment author: AlexanderRM 02 September 2015 11:25:51PM 0 points [-]

It seems like, if such buttons (paying out money exclusively to the person pushing them) became widespread and easily available, governments ought to band together to prevent their being pressed, and the only reasons they might fail to do so would be coordination problems (or possibly the difficulty of proving that the buttons kill people), not any belief that button-pushing is OK. If they failed to do so (keeping in mind these are buttons that don't also do the charity thing), the inevitable result would be the total extermination of the human race (assuming the buttons paid out goods with inherent value, so that the collapse of society and the shortage of other humans wouldn't interfere with pressing them).
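(A rough toy model of that extermination claim, with made-up numbers: if each living person presses at some steady average rate and every press kills one random person, the population shrinks by a constant fraction per step, so it hits zero in finite time for any positive rate.)

    # Toy model: per time step, each living person presses PRESS_RATE times
    # on average, and each press kills one uniformly-random person, so the
    # population decays geometrically. PRESS_RATE = 0.1 is a made-up value.
    population = 7_000_000_000.0
    PRESS_RATE = 0.1

    steps = 0
    while population >= 1:
        population -= population * PRESS_RATE  # deaths this step
        steps += 1

    print(f"population drops below one person after {steps} steps")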

However, I agree with your point that this is about ethics, not law.
