If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether that position is wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*.
*For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argum...
"He who builds his cart behind closed gates will find it not suited to the tracks outside the gates."
-Chinese proverb (unattributed), quoted by Chen Duxiu in "Call to Youth" (1915).
The way to signal LW ingroupness would be to say "signaling progressiveness", but that does cover it fairly well. I suspect the logic is roughly: our current prison system (imprisoning people for 12 years for a first-time drug offense) is bad in the direction of imprisoning far too many people; so opposing the current prison system is good; so opposing it more is even better; and the most you can oppose it is to support abolishing all prisons.
(actually there might be something of an argument to be made that...
I know I'm 5 years late on this, but on the off chance someone sees this, I just want to mention that I found Yvain's/Scott Alexander's essay on the subject incredibly useful*.
The tl;dr: Judge your actions by universalizability more than by direct utilitarianism. His suggestion is 10%, mainly because it's a round number that's easy to coordinate around and to have people give as that exact figure. Once you've done that, the problems that would be solved by everyone donating 10% of their income to efficient charities are the responsibility of other people who...
It's also worth noting that "I would set off a bomb if it would avert or shorten the Holocaust, even if it would kill a bunch of babies" would still answer the question... or maybe it wouldn't, because the whole point of the question is that you might be wrong that it would end the war. Compare "I would set off a bomb and kill a bunch of innocent Americans if it would end American imperialism", which has a surprising tendency not to end American imperialism, and in fact to make it worse.
Overall I think if everyone followed a he...
I think the first two of those at least can be read in any combination of sarcastic/sincere*, which IMO is the best way to read them. I need to take a screenshot of those two and share them on some internet site somewhere.
I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And it's supposed to find, out of that set of possible worlds, the one you would most like, or... something along those lines.) I'm not sure that would rule out every bad outcome, but... I think it probably would. Besides the obvious...
A simpler and more practical (and possibly legal) idea for abusing knowledge of irrational charity: instead of asking for money to save countless children, ask for money to save one specific child.
If one circulated a message on the internet saying that donations could save the life of a specific child, then obviously there would be laws against using the money for something unrelated. But if you simply, say, A: lied about why they were in danger of dying, B: overstated the amount of money needed, C: left out the nationality of the child, and D: ...
This is probably a bit late, but in a general sense Effective Altruism sounds like what you're looking for, although the main emphasis there is the "helping others as much as possible" rather than the "rationalists" part; still, there's significant overlap between the communities. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere which is devoted to that specifically, altho...
Just want to mention, re: #8: after a year and a half of reading LW and the like, I still haven't accomplished this one. Admittedly this is more of a willpower/challenge thing (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone that it's important would get them past the "huh, that's interesting" point of what you term "philosophical garbage", but it's still hard.
Granted I should mention that I at least hope that LW stuff will affect how...
I would be amazed if Scott Alexander has not used "I won't socially kill you" at some point. Certainly he's used some phrase along the lines of "people who won't socially kill me".
...and in fact, I checked, and the original article has basically the meaning I would have expected: "knowing that even if you make a mistake, it won't socially kill you." That particular phrase was pretty much lifted, just with the object changed.
The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human, a human tribe, or a human state would, although I think part of the evolutionary constraint on that is 1. it being difficult to constantly check whether it's worth it to betray your allies, and 2. it being risky to try when you're just barely past the point where you think it's worth it. Also there's the oth...
It seems like the Linux user example (and possibly the Soviet citizen example, but I'm not sure) is... in a broader category than the equal treatment fallacy, because homosexuality and poverty are things one can't change (or at least, that's the assumption on which criticizing the equal treatment fallacy is based).
Although I suppose my interpretation may have been different from the intended one: I read it as "the OSX user has the freedom to switch to Linux and modify the source code of Linux", i.e. both the Linux user and the OSX user have the choice of either OS. Obviously "the freedom to modify Linux while continuing to use OSX" would be the equal treatment fallacy.
Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's Cat, the branches would already have split before the attack happened.
I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective form of quantum suicide (two species that want the same resources meet, flip a coin, and the loser kills itself; this might have enforcement problems) if every species invariably experiments with them before leaving its home planet.
On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theo...
I'd be interested to hear from other LessWrongians whether anyone has bought this and whether it lives up to the description (and also whether this model produces a faint noise constantly audible to others nearby, like the test belt). I'm the sort of person who measures everything in dead African children, so at $149... I have reservations even if it is exactly as awesome as the article implied.
On the other hand, the "glasses that turn everything upside down" interest me somewhat; my perspective on that is rather odd: I'm wondering how that would interact with my m...
The specific story described is perfectly plausible, because it involves political pressure rather than social pressure, and (given the technology level and the like) the emperor's guards can't kill everybody in the crowd, so once everyone starts laughing they're safe. As a metaphor for social pressure, however, it is certainly overly optimistic by a long shot.
I would really like to know the name for that dynamic if it has one, because that's very useful.
It seems like, in the event that such buttons (say, ones that paid out money exclusively to the person pushing) became widespread and easily available, governments ought to band together to prevent the pressing of those buttons, and the only reason they might fail to do so would be coordination problems (or possibly the difficulty of proving that the buttons kill people), not objections that button-pushing is OK. If they failed to do so (keeping in mind these are buttons that don't also do the charity thing), that would inevitably result in th...
Interesting: he makes the argument that progress in physical areas of technology (transportation, chemistry, etc.) has slowed in part due to government regulation (which would explain why computers and the internet have been the one area progressing drastically). But the United States has never been the source of all or even the majority of the world's new inventions, so an explanation focused on the U.S. government can't fill that large a gap (although I suppose a slowdown of a third or even more would be explained).
Any information on what the situatio...