
Comment author: ChristianKl 06 April 2016 09:26:02PM 2 points [-]

If there is going to be a government role in getting innovation started, people have to believe philosophically that it’s possible to plan.

Peter Thiel

Comment author: AlexanderRM 15 April 2016 09:54:13PM 0 points [-]

Interesting: he makes the argument that progress in physical areas of technology (transportation, chemistry, etc.) has slowed in part due to government regulation (which would explain why computers and the internet have been the one thing progressing drastically). But the United States has never been the source of all, or even the majority, of the world's new inventions, so an explanation focused on the U.S. government can't fill that large a gap (although I suppose it could explain a slowdown of a third or even more).

Any information on what the situation has been in other countries? I wouldn't be surprised if the entire First World has trended towards drastically more regulation, which would leave the ability to experiment with technologies in those fields only to places with fewer inventors and little capital to invest or consumer money to spend (if true, the implications for the chance of changing the situation aren't as bright as if it were just the United States). Still, this is something that has to be remembered in any discussion of technology, or for that matter any question of this type. More generally, there seems to be a lack of tendency (among Americans at least) to check on or be aware of other countries in all sorts of questions, and the few times they are brought up it's usually as a single anecdote to reinforce the speaker's point (and even those are less common than one would expect). That seems to be a serious impediment to actually figuring out problems.

Comment author: ZankerH 07 April 2016 12:10:28PM *  0 points [-]

A perfect example of a fully general counter-argument!

Comment author: AlexanderRM 15 April 2016 09:22:23PM 0 points [-]

If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether that position is wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*.

*For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argument that there might be conservation of tribalism. It might be possible to reduce one's identity down to a general framework for coming up with good meta-level beliefs, and avoid identifying with object-level beliefs entirely.

Comment author: AlexanderRM 15 April 2016 09:15:54PM 5 points [-]

"He who builds his cart behind closed gates will find it not suited to the tracks outside the gates."

-Unattributed Chinese proverb, quoted by Chen Duxiu in "Call to Youth" (1915).

Comment author: PeerInfinity 01 May 2010 10:03:51PM *  11 points [-]

I was pondering that article about Zachary Baumkletterer again.

Summary: Zachary Baumkletterer is the guy who had so much empathy for the starving people in the world, and felt so guilty about being so much more fortunate than them, that he voluntarily lowered himself to their standard of living and donated the rest of his income and possessions to charity (which charity? that's critically important!). Unfortunately, that meant that he was starving himself to death.

One way to resolve this situation would have been for Zachary's boss to give him a budget specifically for food, explaining that he must use all of it on food, must not give any of it away, etc. This budget would qualify as a business expense, since it directly affects Zachary's productivity. Or, if the boss really couldn't afford to give him any raise at all, he could have allocated part of Zachary's current salary as a food budget.

Another option would have been for his boss to threaten to fire him if he refuses to eat enough to stay healthy and productive.

Another option would have been for the people who know Zach to invite him to talk and eat with them. He would have had a hard time refusing an opportunity to talk with other people about his mission.

Another option would have been for the people who know him to offer to donate two dollars to charity for every one dollar that Zach spent on himself, up to a certain limit.

Spoiler: But no one, not even Zachary himself, thought of any of these options. They sent him to a mental institution instead. This story fills me with rage. Lots of rage. Even though I'm pretty sure it's fictional.

Two facts remain though:

1) Zachary's strategy is obviously suboptimal from a utilitarian perspective. If Zachary starves himself to death, then he won't be able to help anyone else at all.

2) If everyone in the world followed Zachary's strategy, of lowering their standard of living to match the poorest people in the world, and donated the rest of their income to helping raise the standard of living of the poorest people in the world, then the world would be... a whole lot better than it is now. Though of course there are still some ways that this scenario could go wrong.

Anyway, if Zachary had thought of any of the points I mentioned here, and actually suggested them to people, allowing himself to stay alive without compromising his ethics... then I would have considered him a hero. I would have considered him a role model. Someone who I would like to be more like. Someone who I would actively try to be more like. Someone who I could compare my own performance with, to see how well I'm doing.

But the Zachary that was actually described in the story was... a misguided fanatic. An example of why you sometimes need to resist your sense of empathy. An excuse to be lazy and apathetic. An excuse to be selfish.

Anyway, personally, I think I'll just stick with utilitarianism: instead of trying to lower my standard of living as much as I can possibly get away with, just out of a sense of guilt, I'll try to maximize the net utility of my actions, even if this means sometimes spending money on luxuries that I could easily live without (for example, in cases where not buying the thing would cost too much willpower). I'll try not to let my sense of empathy and guilt cause me to do stupid and counter-productive things.

Comment author: AlexanderRM 14 November 2015 10:31:38PM *  1 point [-]

I know I'm 5 years late on this, but on the off chance someone sees this, I just want to mention that I found Yvain's/Scott Alexander's essay on the subject incredibly useful*.

The tl;dr: use universalizability for your actions more than direct utilitarianism. His suggestion is 10% of income, for various reasons, mainly that it's a round number that's easy to coordinate around and to get people to give that exact amount. Once you've done that, the problems that would be solved by everyone donating 10% of their income to efficient charities are the responsibility of the people who are donating less than that (I'd also suggest trying to spread the message as much as possible, as I'm doing here).

Of course it'd be better to donate more of your income. I would say that if feeling bad about donating only 10% causes you to donate more, then... donate more. If it just causes you to feel like you'll never be good enough, so that you don't even try, then it's useless and you'd do more good by considering yourself completely absolved. The 10% figure is also incredibly useful for convincing people who aren't already convinced of an unlimited utilitarian duty to donate to efficient charity.

*http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

Comment author: Autolykos 02 November 2015 02:42:25PM 2 points [-]

There probably was a time when killing Hitler had a significant chance of ending the war by enabling peace talks (allowing some high-ranking German generals/politicians to seize power while plausibly denying having wanted this outcome). The window was probably short, though, and opened a bit after '42. I'd guess any time between the Battle of Stalingrad (where Germany stopped winning) and the Battle of Kursk (which made Soviet victory inevitable) should have worked: everyone involved should rationally have preferred a white peace to the very real possibility of a bloody stalemate. Before that window, Germany would not have accepted; after it, the Soviets wouldn't have.

Comment author: AlexanderRM 10 November 2015 06:52:28PM 0 points [-]

It's also worth noting that "I would set off a bomb if it would avert or shorten the Holocaust, even if it would kill a bunch of babies" would still answer the question... or maybe it wouldn't, because the whole point of the question is that you might be wrong that it would end the war. Compare "I would set off a bomb and kill a bunch of innocent Americans if it would end American imperialism", where the bomb has a surprising tendency not to end American imperialism and in fact to make it worse.

Overall, I think that if everyone followed a heuristic of "never kill babies", the world would be better on average. However, you could get a problem if only the carefully moral people follow that rule while the less careful don't, and end up winning. For a consequentialist, a good rule would be "any ethical injunction which causes itself to be defeated cannot be used". At the very least, the heuristic of "don't violate Geneva Convention-like agreements that restrict war to make it less horrible, as long as the other side has stuck to them" seems reasonable, although it's less clear in cases where a few enemy soldiers individually violate such an agreement, or where being the first to violate it gives a major advantage and you're worried the other side might do so.

Comment author: Kaj_Sotala 19 February 2010 04:57:54PM *  7 points [-]

The funny thing is that this comment and all three above it could either be read straight or as sarcastic / indirect.

I'm having slight difficulty determining which ones should be read as which.

Comment author: AlexanderRM 03 November 2015 12:00:37AM 0 points [-]

I think the first two of those at least can be read in any combination of sarcastic/sincere*, which IMO is the best way to read them. I need to take a screenshot of those two and share them on some internet site somewhere.

Comment author: Eliezer_Yudkowsky 30 May 2013 05:18:11AM 9 points [-]

Easier to do by just squishing someone, actually.

Comment author: AlexanderRM 08 October 2015 09:24:32PM 0 points [-]

I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And it's supposed to find, out of that set of possible worlds, the one you would most like, or... something along those lines.) I'm not sure that would rule out every bad outcome, but... I think it probably would. Besides the obvious "other humans have different preferences from the guy building the AI" (maybe the AI is ordered to do a similar thing for each human individually), can anyone think of ways this would go badly?

Comment author: Desrtopa 04 April 2011 03:03:47PM 13 points [-]

Damn, does SIAI have any kids they can push down a well?

... I can never run for public office.

Comment author: AlexanderRM 05 October 2015 02:02:41AM 0 points [-]

A more practical and simple (and possibly legal) idea for abusing knowledge of irrational charity: instead of asking for money to save countless children, ask for money to save one specific child.

If you circulated a message on the internet saying that donations could save the life of a specific child, and then used the money for something unrelated, there would obviously be laws against that. But if you simply, say, A: lied about why the child was in danger of dying, B: overstated the amount of money needed, C: left out the nationality of the child, and D: used the money to save a large number of children, do you think a court would convict on that?

Directing the money towards some cause where the child-saving is a lot less direct, like technological research or SIAI, would probably get hit for lying, but for something like fighting malaria this might be incredibly useful.

Comment author: Giles 04 April 2011 03:41:35PM 4 points [-]

paulfchristiano,

I'm probably one of the people you're trying to reach. I want to help people and I am aware that the approaches favoured by society may well not work. But doing the right thing is really hard.

However many arguments you present to me it's still really hard. For me, it's not a lack of argument that stands in the way of doing the right thing.

What I want is a community of rationalists who are interested in helping others as much as possible. Does such a thing already exist, ready-made? Either as a subset of LW or independent of it?

I can't help feeling that such a thing would help your cause immensely. However good your arguments are, people will want to know "what do I do next?" And they will be much happier with the answer "come meet my friends and throw some ideas around" than "give all your money to SIAI".

Comment author: AlexanderRM 05 October 2015 01:55:03AM 0 points [-]

This is probably a bit late, but in a general sense Effective Altruism sounds like what you're looking for, although the main emphasis there is on the "helping others as much as possible" part rather than the "rationalists" part; there's still a significant overlap between the communities, though. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere which is devoted to that specifically, although it might be just a single person's blog rather than a community forum.

Incidentally, if you did find (or found) a specific community along those lines, I'd be interested in joining it myself.

Comment author: AlexanderRM 05 October 2015 01:46:36AM 0 points [-]

Just want to mention, regarding #8: after a year and a half of reading LW and the like, I still haven't accomplished this one. Admittedly this is more of a willpower/challenge thing (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone that it's important would get them past the stage of what you term "philosophical garbage", where they just go "huh, that's interesting"; but it's still hard.

Granted, I should mention that I at least hope LW stuff will affect how I act once I graduate college, get a job, and start earning money beyond what I need to survive. I was already convinced that I ought to donate as much as possible to various causes, but LW has probably affected which causes I'll choose.
