All of AlexanderRM's Comments + Replies

Interesting: He makes the argument that progress in physical areas of technology (transportation, chemistry, etc.) has slowed in part due to government regulation (which would explain why computers and the internet have been the one area progressing drastically). But the United States has never been the source of all or even the majority of the world's new inventions, so an explanation focused on the U.S. government can't fill that large a gap (although, I suppose, a slowdown of 1/3rd or even more would be explained).

Any information on what the situatio... (read more)

0ChristianKl
If you look at the area of transportation, innovation in trains does happen outside of the US. Japan manages to build better trains over time, and even our European trains are better than those of the past. There's a general thought that Detroit did worse than German or Japanese carmakers. In general, Europe has also seen an increase in regulation. Europe outlawed GMOs. Germany banned nuclear plants. German culture is often even more critical of new technology than the US.

If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether that position is wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*.

*For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argum... (read more)

"He who builds his cart behind closed gates will find it not suited to the tracks outside the gates."

-Unattributed (Chinese) proverb, quoted by Chen Duxiu in "Call to Youth" 1915.

The way to signal LW ingroupness would be to say "signaling progressiveness", but that does cover it fairly well. I suspect the logic is roughly that our current prison system (imprisoning people for 12 years for a first-time drug offense) is bad in the direction of imprisoning far too many people, so opposing our current prison system is good, so opposing the current prison system more is even better, and the most you can oppose the prison system is to support abolishing all prisons.

(actually there might be something of an argument to be made that... (read more)

0Lumifer
From comments at Marginal Revolution: "People will only tolerate so much bad policy before they start demanding bad counter-policy".

I know I'm 5 years late on this, but on the off chance someone sees this, I just want to mention I found Yvain's/Scott Alexander's essay on the subject incredibly useful*.

The tl;dr: Use universalizability for your actions more so than direct utilitarianism. His suggestion is 10% for various reasons, mainly that it's a round number that's easy to coordinate around and have people give that exact number. Once you've done that, the problems that would be solved by everyone donating 10% of their income to efficient charities are the responsibility of other people who... (read more)

It's also worth noting that "I would set off a bomb if it would avert or shorten the Holocaust even if it would kill a bunch of babies" would still answer the question... ...or maybe it wouldn't, because the whole point of the question is that you might be wrong that it would end the war. See for comparison "I would set off a bomb and kill a bunch of innocent Americans if it would end American imperialism", which has a surprising tendency to not end American imperialism and in fact make it worse.

Overall I think if everyone followed a he... (read more)

I think the first two of those at least can be read in any combination of sarcastic/sincere*, which IMO is the best way to read them. I need to take a screenshot of those two and share them on some internet site somewhere.

I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically such that: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And it's supposed to find, out of that set of possible worlds, the one you would most like, or... something along those lines.) I'm not sure that would rule out every bad outcome, but... I think it probably would. Besides the obvious... (read more)

A more practical and simple (and possibly legal) idea for abusing knowledge of irrational charity: Instead of asking for money to save countless children, ask for money to save one specific child.

If one circulated a message on the internet saying that donations could save the life of a specific child, obviously if you then used the money for something unrelated there would be laws against that. But if you simply, say, A: lied about why they were in danger of dying, B: overstated the amount of money needed, C: left out the nationality of the child, and D: ... (read more)

2Richard_Kennaway
You have just rediscovered the idea, "I know, why not just lie!" On which, see this. I predict that (a) you would be found out, (b) if it came to court, the court would convict (fraud in a good cause is still fraud), and (c) so would the forum of public opinion. ETA: See also.

This probably is a bit late, but in a general sense Effective Altruism sounds like what you're looking for; although the main emphasis there is the "helping others as much as possible" rather than the "rationalists" part, there's still a significant overlap in the communities. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere which is devoted to that specifically, altho... (read more)

Just want to mention @ #8: After a year and a half of reading LW and the like I still haven't accomplished this one. Admittedly this is more like a willpower/challenge thing (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone to agree that it's important would get them past the point of what you term "philosophical garbage", where they go "huh, that's interesting", but it's still hard.

Granted I should mention that I at least hope that LW stuff will affect how... (read more)

I would be amazed if Scott Alexander has not used "I won't socially kill you" at some point. Certainly he's used some phrase along the lines of "people who won't socially kill me".

...and in fact, I checked, and the original article has basically the meaning I would have expected: "knowing that even if you make a mistake, it won't socially kill you." That particular phrase was pretty much lifted, just with the object changed.

The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human or a human tribe or human state would- although I think part of the constraints on that with human evolution are 1. it being difficult to constantly check if it's worth it to betray your allies, and 2. it being risky to try when you're just barely past the point where you think it's worth it. Also there's the oth... (read more)

It seems like the Linux user (and possibly the Soviet citizen example, but I'm not sure) is... in a broader category than the equal treatment fallacy, because homosexuality and poverty are things one can't change (or, at least, that's the assumption on which criticizing the equal treatment fallacy is based).

Although, I suppose my interpretation may have been different from the intended one, as I read it as "the OSX user has the freedom to switch to Linux and modify the source code of Linux", i.e. both the Linux and OSX users have the choice of either OS. Obviously the freedom to modify Linux and keep using OSX would be the equal treatment fallacy.

Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's Cat, the branches would have split already before the attack happened.

I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective use of quantum suicide (two species which want the same resources meet, flip a coin, and the loser kills themselves; this might have problems with enforcement) if every species invariably experiments with them before leaving their home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theo... (read more)

I'd be interested to hear from other LessWrongians if anyone has bought this and if it lives up to the description (and also if this model produces a faint noise constantly audible to others nearby, like the test belt); I'm the sort of person who measures everything in dead African children, so $149... I'm a bit hesitant about it even if it is exactly as awesome as the article implied.

On the other hand, the "glasses that turn everything upside down" interest me somewhat; my perspective on that is rather odd- I'm wondering how that would interact with my m... (read more)

The specific story described is perfectly plausible, because it involves political pressure rather than social, and (due to the technology level and the like) the emperor's guards can't kill everybody in the crowd, so once everyone starts laughing they're safe. However, as a metaphor for social pressure it certainly is overly optimistic by a long shot.

I would really like to know the name for that dynamic if it has one, because that's very useful.

It seems like in the event that, for example, such buttons that paid out money exclusively to the person pushing became widespread and easily available, governments ought to band together to prevent the pressing of those buttons, and the only reason they might fail to do so would be coordination problems (or possibly the question of proving that the buttons kill people), not primarily objections that button-pushing is OK. If they failed to do so (keeping in mind these are buttons that don't also do the charity thing) that would inevitably result in th... (read more)

I just want to say that even though I generally disagree with these objections to donation*, I really love the "You can't just throw nutrients into ecosystem and expect a morally good outcome." bit and will try to remember/save that in the future. It's rather interesting that Malthusianism is completely accepted without comment in ecology and evolution, but seems to be widely hated when brought up in political or social spheres, so maybe phrasing it in ecosystem terms will make people more liable to accept it. It would probably be best to introduce the co... (read more)

Worth noting that the dead baby value is very different from the actual amount at which most Westerners value the lives of white, middle-class people from their own country. In fact, pretty much the whole point of the statistic is that it's SHOCKINGLY low. I suppose we could hope that Dead Baby currency would result in a reduction in that discrepancy... although I think in the case of the actual example given, the Malthusians* have a point in that it would dramatically increase access to life-prolonging things without increasing access to birth ... (read more)

0juliawise
If the demographic transition continues, I'm not too worried about Malthusian scenarios. It seems that people who are less worried about their children being wiped out by disease have fewer children. Another option is interventions that improve lives without saving them, such as deworming.

Alternative rephrasing: $4,000 is given to your choice of either one of the top-rated charities for saving lives, or one of the top-rated charities for distributing birth control (or something else that reduces population growth).

That means a pure reduction on both sides in number of people on the planet, and- assuming there are currently too many people on the planet- a net reduction in suffering in the long run as there are fewer people to compete with each other, plus the good it does in the short run to women who don't have to go through unwante... (read more)

Note that the Reversal Test is written with the assumption of consequentialism, where there's an ideal value for some trait of the universe, whereas the whole point of the trolley problem is that the only problem is deontological, assuming the hypothetical pure example where there are no unintended consequences.

However, the Reversal Test of things like "prevent people from pulling the lever" is still useful if you want to make deontologists question the action/inaction distinction.

I was about to give the exact same example of the soldier throwing himself on a grenade. I don't know where the idea of his actions being "shameful" even comes up.

The one thing I realize from your comment is that there's the dishonesty of his actions, and if lots of people did this, insurance companies would start catching on and it would stop working, plus it would make life insurance that much harder to work with. But it didn't sound like the original post was talking about that with "shameful"; it sounds like they were suggesting (or assum... (read more)

I know I'm 8 years late on this (only started reading LessWrong a year ago)- does anyone have a good, snappy term for the quality of humor being funny regardless of the politics? There have been times when I was amused by a joke despite disagreeing with the political point, and wanted to make some comment along the lines of "I'm a [group attacked by the joke] and this passes the Yudkowsky Test of being funny regardless of the politics", but I think "Yudkowsky test" isn't a good term (for one thing, I have no idea if Yudkowsky actually c... (read more)

The assumption is that people start doing things that match with their stated beliefs- so, for instance, people who claim to oppose genocide would actually oppose genocide in all cases, which is the whole point of thinking hypocrisy is bad. Causing people to no longer be hypocrites by making them instead give up their stated beliefs would just make for a world which was more honest but otherwise not dramatically improved.

Incidentally, on the joking side: If atheists did win the religious war, they could then use this statement in a completely serious and logical context: https://www.youtube.com/watch?v=FmmQxXPOMMY

Worth elaborating: If all religious people were non-hypocritical and did exactly what the religion they claim to follow commands, there would probably be an enormous initial drop in violence, followed by any religions that follow commandments like "thou shalt not kill" without exception being wiped out, with religions advocating holy war and the persecution of heretics getting the eventual upper hand (although imperfectly adapted religions might potentially be able to hold off the better-adapted ones through strength of numbers- for instance, if a... (read more)

0ndvo
Is it correct to say that he who is not coherent is hypocritical? I'm used to thinking of a hypocrite as someone who does not apply to himself the criteria he wants to apply to others. I can think of some reasons why it is not plausible to find people who are completely coherent:
* people are not aware of all their ideas at the same time;
* people change their minds;
* people can hold inconsistent ideas, at least when they are not aware of it.
Another thought: human minds are the environment where memes develop, but one should notice that memes are also the environment in which humans act. That means that even while firmly believing something to be wrong, someone can still decide to do it, and vice-versa.
1Richard_Kennaway
Sounds like the history of Europe and the Islamic world. Except that no-one ever did get the upper hand, neither for Christianity vs. Islam, nor the splits within those faiths. Anyone want to go back to the time of the Crusades? If the only thing in favour of an idea is how wonderful the world would be if everyone followed it, it's a bad idea.

I think that might help somewhat- thinking of rationality as something you do rather than something you are is definitely good regardless- but there's still the basic problem that your self-esteem is invested in rationality. Rationality requires you to continually be willing to doubt your core values and consider that they might be wrong, and if your core values are wrong, then you haven't gotten any use up to that point out of your rationality. I don't think it's just a matter of realizing you were wrong once and recovering self-esteem out of the fact th... (read more)

I just want to mention that the thing about a human trying to self-modify their brain in the manner described and with all the dangers listed could make an interesting science fiction story. I couldn't possibly write it myself and am not even sure what the best method of telling it would be- probably it would at least partially include something like journal entries or just narration from inside the protagonist's head, to illustrate what exactly was going on.

Especially if the human knew the dangers perfectly well, but had some reason they had to try anyway... (read more)

Why would a command economy be necessary to avoid that? Welfare Capitalism- you run the economy with laissez-faire except you tax some and give it away to poor people, who can then spend it as they wish as if they'd earned it in laissez-faire economics- would work just fine. As mechanization increases, you gradually increase the welfare.

It won't be entirely easy to implement politically, mainly because of our ridiculous political dichotomy where you can either understand basic economics or care about poor people, but not both.

Since we're citing sources I'l... (read more)

An important distinction that jumps out to me- if we slowed down all technological progress equally, that wouldn't actually "buy time" for anything in particular- I can't think of anything we'd want to be doing with that time besides either 1. researching other technologies that might help with avoiding AI (I can't think of many at the moment, though; one that comes to mind is technologies that would allow downloading or simulating a human mind before we build AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building... (read more)

My impression of the thought experiment is that there's supposed to be no implication that their side winning the war would be any better than the other side winning. Their side winning is explicitly about maintaining social status and authority. "Keep harm at a low level" might mean "lower than a Hobbesian war of all against all", not necessarily low by our standards. It seems like maybe the thought experiment could be improved by explicitly rephrasing it to make their nation a pretty terrible place by our standards and winning the w... (read more)

Interesting observation: You talked about that in terms of the effects of banning sweatshops, rather than talking about it in terms of the effects of opening them. It's of course the exact same action and the same result in every way- deontological as well as consequentialist- but it changes from "causing people to work in horrible sweatshop conditions" to "leaving people to starve to death as urban homeless", so it switches around the "killing vs. allowing to die" burden. (I'm not complaining, FYI; I think it's actually an excellent technique. Although maybe it would be better if we came up with language to list two alternatives neutrally with no burden of action.)

"consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven."

I fully agree with this (as someone who doesn't believe in heaven and hell, but is a consequentialist), and also would point out that it's not that different from the way many people who believe in heaven and hell already act (especially if you look at people who strongly believe in them; ignore anyone who doesn't take their own chances of he... (read more)

"Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?"

I can't speak for Yvain but as someone who fully agreed with his use of that test, I would describe mysel... (read more)

0Morendil
This later piece is perhaps relevant.

"The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being to be gladiators themselves)."

It seems to me that, specifically, gladiatorial games that wouldn't lower utility would require that people not revolt against the system since they accept the risk of being forced into the games as the pric... (read more)

0Jiro
In the case of prostitution, similar arguments apply to some extent to all jobs, but "to some extent" refers to very different degrees. My test would be as follows: ask how much people would have to be paid before they would be willing to take the job (in preference to a job of some arbitrary but fixed level of income and distastefulness). Compare that amount to the price that the job actually gets in a free market. The higher the ratio gets, the worse the moral hazard. I would expect both prostitution and being a gladiator to score especially low in this regard.

I would say yes, we should re-examine it.

The entertainment value of forced gladiatorial games on randomly-selected civilians... I personally would vote against them because I probably wouldn't watch them anyway, so it would be a clear loss for me. Still, for other people voting in favor of them... I'm having trouble coming up with a really full refutation of the idea in the Least Convenient Possible World hypothetical where there's no other way to provide gladiatorial games, but there are some obvious practical alternatives.

It seems to me that voluntary gl... (read more)

So you're suggesting one should always use ask culture in response to questions, but be careful about which culture you use when asking questions? That sounds like a decent strategy overall. However, from the descriptions people have been giving, it seems to me that you aren't supposed to refuse requests in guess culture (that's why it's offensive to make a request someone doesn't want to agree to).

Now, I'm probably both biased personally against guess culture and being influenced by other people who are more on the ask side describing it here, but it se... (read more)

I don't have much to contribute here personally; just want to note that Yvain has an excellent diagram on the "inferential distances" thing: http://squid314.livejournal.com/337475.html

(Also, the place he linked it from: http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/ is probably the more obviously relevant thing to moral debates in politics and the like.)

I would say that for someone who accepts liberal ideas (counting most conservatives in western countries), this seems like a very useful argument for convincing them of this: If we always used intuitional morality, we would currently have morality that disagrees with their intuitions (about slavery being wrong, democracy being good, those sorts of things).

Of course, as a rational argument it makes no sense. It just appeals to me because my intuitions are Consequentialist and I want to try to convince others to follow Consequentialism, because it will lead to better outcomes.

It seems to me that Utilitarianism can be similar to the way you describe Kant's approach: Selecting a specific part of our intuitions- "Actions that have bad consequences are bad"- ignoring the rest, and then extrapolating from that. Well, that and coming up with a utility function. Still, it seems to me that you can essentially apply it logically to situations and come up with decisions based on actual reasoning: You'll still have biases, but at least (besides editing utility functions) you won't be editing your basic morality just to follow yo... (read more)

Isn't "the psychology of the discoursing species" another way of saying "moral intuitions"? Or at least, those are included in the umbrella of that term.

0torekp
Yes, they're included. Well said. I believe this way of putting it, however, supports my criticism of the phrase "ultimately grounded in our moral intuitions;" the phrase is badly incomplete.

As a side note, I'd like to say I'd imagine nearly all political beliefs throughout history have had people citing every imaginable form of ethics as justifications, and furthermore without even distinguishing between them. From what I understand, the vast majority of people don't even realize there's a distinction (I myself didn't know about non-consequentialist ideas until about 6 months ago, actually).

BTW, I would say that an argument about "the freedom to own slaves" is essentially an argument that slavery being allowed is a terminal value, although I'd doubt anyone would argue that owning of slaves is itself a terminal value.

"My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided [i]not[/i] to implement the measure."

Does anyone know of a cit... (read more)

"But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual."

A question for comparison: would you rather have a 1/Googolplex chance of being tortured for 50 years, or lose 1 c... (read more)
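
(A rough sketch of the expected-value arithmetic behind this comparison, with a made-up disutility ratio purely for illustration; the calculation is done in log10 because a googolplex is far too large to represent as an ordinary number.)

```python
# Illustrative only: assumes 50 years of torture is "merely" 10^24 times
# worse than losing one cent; the offered probability is 1/googolplex,
# i.e. 1 / 10^(10^100).
log10_probability = -(10 ** 100)   # log10 of 1/googolplex
log10_torture_vs_cent = 24         # hypothetical disutility ratio (log10)

# Expected disutility of the gamble, measured in "cents lost", on a log10 scale:
log10_expected_loss = log10_probability + log10_torture_vs_cent

# The gamble costs less in expectation than losing a cent unless torture is
# assigned a disutility more than a googolplex times that of the lost cent.
print(log10_expected_loss < 0)  # True
```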

2Jiro
Whenever I drive, I have a greater than 1/googolplex chance of getting into an accident which would leave me suffering for 50 years, and I still drive. I'm not sure how to measure the benefit I get from driving, but there are at least some cases where it's pretty small, even if it's not exactly a cent.

As I understand it, the math is in the dust speck's favor because EY used an arbitrarily large number such that it couldn't possibly be otherwise.

I think a better comparison would be between 1 second of torture (which I'd estimate is worth multiple dust specks, assuming it's not hard to get them out of your eye) and 50 years of torture, in which case yes, it would flip at around 1.5 billion people. That is of course assuming that you don't have a term in your utility function where sharing of burdens is valuable- I assume EY would be fine with that but would insist that you implement it in the intermediate calculations as well.
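
(A minimal sketch of where the ~1.5 billion figure comes from, assuming, as this comparison does, that each person-second of torture counts equally toward the total.)

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds

# One person tortured for 50 years, counted in person-seconds:
torture_seconds = 50 * SECONDS_PER_YEAR
print(f"{torture_seconds:.2e}")          # ~1.58e+09

# Under straight aggregation, one second of torture each for more than
# roughly 1.58 billion people would outweigh 50 years of torture for one person.
```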

A better metaphor: What if we replaced "getting a dust speck in your eye" with "being horribly tortured for one second"? Ignore the practical problems of the latter, just say the person experiences the exact same (average) pain as being horribly tortured, but for one second.

That allows us to directly compare the two experiences much better, and it seems to me it eliminates the "you can't compare the two experiences" objection- except of course with the long-term effects of torture, I suppose; to get a perfect comparison we'd need a torture... (read more)

Note here that the difference is between the deaths of currently-living people, and preventing the births of potential people. In hedonic utilitarian terms it's the same, but you can have other utilitarian schemes (ex. choice utilitarianism as I commented above) where death either has an inherent negative value, or violates the person's preferences against dying.

BTW note that even if you draw no distinction, your thought experiment doesn't necessarily prove the Repugnant Conclusion. The third option is to say that because the Repugnant Conclusion is false,... (read more)

I think the dust motes vs. torture comparison makes sense if you imagine a person being bombarded with dust motes for 50 years. I could easily imagine a continuous stream of dust motes being as bad as torture (although possibly the lack of variation would make it far less effective than what a skilled torturer could do).

Based on that, Eliezer's belief is just that the same number of dust motes spread out among many people is just as bad as one person getting hit by all of them. Which I will admit is a bit harder to justify. One possible way to make the argument is to... (read more)
