Interesting: He makes the argument that progress in physical areas of technology (transportation, chemistry, etc.) has slowed in part due to government regulation (which would explain why computers and the internet have been the one thing progressing drastically). But the United States has never been the source of all, or even the majority, of the world's new inventions, so an explanation focused on the U.S. government can't fill that large a gap (although, I suppose, a slowdown of a third or even more would be explained).

Any information on what the situation has been in other countries? I wouldn't be surprised if the entire First World has trended towards drastically more regulation, which would indeed leave only the places with fewer inventors and little capital to invest or consumer money to spend able to experiment with technologies in those fields (if true, the implications for the chance of changing the situation aren't as bright as if it's just the United States). Still, this is something that has to be remembered in any discussion of technology, or for that matter any question of this type. More generally, there seems to be little tendency (among Americans at least) to check on or be aware of other countries on all sorts of questions, and the few times they are brought up it's usually as a single anecdote to reinforce the speaker's point (and even these are less common than one would expect). That seems to be a serious impediment to actually figuring out problems.

If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether those positions are wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*.

*For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argument that there might be conservation of tribalism. It might be possible to reduce one's identity down to a general framework for coming up with good meta-level beliefs, and avoid identifying with object-level beliefs altogether.

"He who builds his cart behind closed gates will find it not suited to the tracks outside the gates."

-Unattributed Chinese proverb, quoted by Chen Duxiu in "Call to Youth" (1915).

The way to signal LW ingroupness would be to say "signaling progressiveness", but that does cover it fairly well. I suspect the logic is roughly: our current prison system (imprisoning people for 12 years for a first-time drug offense) is bad in the direction of imprisoning far too many people, so opposing the current prison system is good, so opposing it more is even better, and the most you can oppose the prison system is to support abolishing all prisons.

(Actually, there might be something of an argument that, in order to fight a policy way too far to one side of good policy, it can be useful in some cases to overcompensate and bring a policy too far to the other side into the discussion. But I think in a politically polarized environment like the US that's bad overall: the overwhelming majority of people who hear such an argument will be people who were already convinced of a decent policy and will be sent too far to one side by it, while the people who would actually have their beliefs brought closer to a good policy by hearing the counter-narrative either won't hear it, or will use it to strawman the opposition.)

I know I'm 5 years late on this, but on the off chance someone sees this, I just want to mention that I found Yvain's/Scott Alexander's essay on the subject incredibly useful*.

The tl;dr: Use universalizability for your actions more so than direct utilitarianism. His suggestion is 10%, mainly because it's a round number that's easy to coordinate around and to get people to give as that exact figure. Once you've done that, the problems that would be solved by everyone donating 10% of their income to efficient charities are the responsibility of the other people who are donating less than that amount (I'd also suggest trying to spread the message as much as possible, as I'm doing here).

Of course it'd be better to donate more of your income. I would say that if feeling bad about donating 10% causes you to donate more, then... donate more. If it just causes you to feel like you'll never be good enough so you don't even try, it's useless and you'd do more good by considering yourself completely absolved. 10% is also incredibly useful for convincing people who aren't already convinced of unlimited utilitarian duty to donate to efficient charity.

*http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

It's also worth noting that "I would set off a bomb if it would avert or shorten the Holocaust, even if it would kill a bunch of babies" would still answer the question... or maybe it wouldn't, because the whole point of the question is that you might be wrong that it would end the war. See for comparison "I would set off a bomb and kill a bunch of innocent Americans if it would end American imperialism", which has a surprising tendency not to end American imperialism and in fact to make it worse.

Overall, I think that if everyone followed a heuristic of "never kill babies", the world would be better on average. However, you could get a problem if only the carefully moral people follow that rule while the less careful don't, and end up winning. For a consequentialist, a good rule would be "any ethical injunction which causes itself to be defeated cannot be used". At the very least, the heuristic of "don't violate Geneva Convention-like agreements (the kind that restrict war to make it less horrible) which the other side has stuck to" seems reasonable, although it's less clear in cases where a few enemy soldiers individually violate the agreement, or where being the first to violate it gives a major advantage and you're worried the other side might do so.

I think the first two of those at least can be read in any combination of sarcastic/sincere*, which IMO is the best way to read them. I need to take a screenshot of those two and share them on some internet site somewhere.

I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built it (and the AI is supposed to find, out of that set of possible worlds, the one you would most like, or something along those lines). I'm not sure that would rule out every bad outcome, but... I think it probably would. Besides the obvious "other humans have different preferences from the guy building the AI" (maybe the AI is ordered to do a similar thing for each human individually), can anyone think of ways this would go badly?

A more practical and simple (and possibly legal) idea for abusing knowledge of irrational charity: instead of asking for money to save countless children, ask for money to save one specific child.

If you circulated a message on the internet saying that donations could save the life of a specific child and then used the money for something unrelated, there would obviously be laws against that. But if you simply, say, A: lied about why the child was in danger of dying, B: overstated the amount of money needed, C: left out the child's nationality, and D: used the money to save a large number of children, do you think a court would convict you for that?

Directing the money towards some cause where the child-saving is a lot less direct, like technological research or SIAI, would probably get you hit for lying, but for something like fighting malaria it might be incredibly useful.

This is probably a bit late, but in a general sense Effective Altruism sounds like what you're looking for, although the main emphasis there is on the "helping others as much as possible" part rather than the "rationalists" part; there's still significant overlap between the communities. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere devoted to that specifically, although it might be just a single person's blog rather than a community forum.

Incidentally, if you did find (or found) a specific community along those lines, I'd be interested in joining it myself.
