
Comment author: complexmeme 17 May 2016 03:55:20PM 4 points

"Amount of EA money sent to top four GiveWell charities" might be low because GiveWell itself is not included in that list. (I ended up putting my donation to GiveWell under "other", which while technically accurate, wasn't ideal.) In addition to GiveWell specifically, it would have been worth having an option for Effective Altruism's sort of giving (charities directed at obvious, cost-effective ways of saving the lives of / improving the quality of life for the world's poorest), but not to organizations specifically recommended by GiveWell.

Comment author: complexmeme 29 November 2014 05:25:52AM 26 points

after a bit of searching I can't find a definitive post describing the concept

The idiom used to describe that concept in social psychology is "idiosyncrasy credits", so searching for that phrase produces more relevant material (though as far as I can tell nothing on Less Wrong specifically).

Comment author: John_Maxwell_IV 29 July 2013 05:01:35AM 2 points

Do you have a source for the claim that this act was due to industry lobbying as opposed to risk aversion?

Comment author: complexmeme 17 August 2013 03:00:58AM 2 points

I can see why you think I was making that implicit claim, though that wasn't quite the point I was trying to make.

I don't know to what extent the regulation mentioned in the Wikipedia article I linked to was influenced by industry lobbying versus concern about other sorts of risks to infrastructure or public safety. I'm also not sure the precise cause of such regulation's passage matters much to its durability in the face of the potential benefits of adopting a new technology. Maybe it does, but the specific example of "limit[ing cars] to the same speed as horses" in the original post seems to present that as something that never happened, not just as something that happened for different reasons.

Comment author: complexmeme 24 July 2013 06:14:42PM 1 point

The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.

I wouldn't be surprised if that were the case, and I'd be very surprised if the end of cheap (or at least much cheaper) petroleum had nothing to do with it.

Comment author: complexmeme 24 July 2013 05:48:15PM 29 points

If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.

That's not a new thing; that sort of regulation actually happened!

Comment author: Yosarian2 24 July 2013 02:20:16PM *  11 points

People who think that automation is currently increasing unemployment don't generally just talk about jobs lost during the Great Recession. They see an overall trend of reduction in employment and wages since at least 2000.

You're absolutely right that the recession was caused by a financial shock. The thing is, a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operations more efficiently with fewer workers, and that happens in every recession. The difference might be that this time it is easier than ever before for employers to figure out how to do more with fewer workers (because of the internet, automation, computers, etc.), so even when demand starts to come back up as GDP grows again, they apparently still don't need to hire many workers.

The economists making the automation argument aren't saying that automation caused the Great Recession or the loss of jobs that happened then; they tend to think it's a long-running trend, one that was partly hidden for a few years by the housing bubble, but that the Great Recession accelerated by increasing the pressure on employers to find ways to be more cost-effective.

Edit: the main assumption EY is making in this article seems to be here:

Since it should take advanced general AI to automate away most or all humanly possible labor

and I don't think that's true. I think that a majority of the labor done today, whether physical or intellectual, is basically a series of routine or repeatable tasks, and that a big chunk of it could be done by narrow-AI software, robotics, or internet-based logistics.

Anyway, you wouldn't really have to automate most or all of human labor to create an unemployment crisis; if we hit long-term unemployment levels of 20%-30%, that would probably not be sustainable without some fairly significant social and economic changes.

Comment author: complexmeme 24 July 2013 04:18:48PM 9 points

They see an overall trend of reduction in employment and wages since at least 2000.

And also wage stagnation in contrast to continuing productivity gains since the 1970s.

Comment author: ad2 04 February 2009 08:01:24PM 0 points

The Superhappies could have transformed humanity and the Babyeaters without changing themselves or their way of life in the slightest, and no one would have been able to stop them.

Why would I care about whether the Superhappies change themselves to appreciate literature or beauty? What I want is for them to not change me.

All their "fair-mindedness" does is guarantee that I will be changed again, also against my will, the next time they encounter strangers.

Comment author: complexmeme 26 December 2012 04:27:57PM 2 points

that I will be changed again, also against my will, the next time

The next time, it presumably wouldn't be against your will, due to the first set of changes.

Comment author: complexmeme 14 August 2012 06:01:14PM *  0 points

"You have brain damage" is also a theory with perfect explanatory adequacy.... Why not?

This led me to think of two alternate hypotheses:

One is that the same problem underlying the second factor ("abnormal belief evaluation") is at fault: self-evaluation of abnormal beliefs involves the same sort of self-modelling needed for a theory like "I have brain damage" to seem explanatory (or even coherent). The other is that there are separate systems for self-evaluation and belief-probability-evaluation that are both damaged in the case of such delusions.

One might take the Capgras delusion and similar as evidence that those systems at least overlap, but there's some visibility bias involved, since people who hold beliefs that seem (to them) to be both probable and crazy are likely to conceal those beliefs (see someonewrongonthenet's comment).

Comment author: ViEtArmis 24 July 2012 03:51:30PM 3 points

Your lackey proposes as follows: “I move that we vote upon the following: that if this motion passes unanimously, all members of the Board resign immediately and are given a reasonable compensation; that if this motion passes 4-1 that the Director who voted against it must retire without compensation, and the four directors who voted in favor may stay on the Board; and that if the motion passes 3-2, then the two 'no' voters get no compensation and the three 'yes' voters may remain on the board and will also get a spectacular prize - to wit, our company's 51% share in your company divided up evenly among them.”

Considering the reasoning that ends in "everyone is kicked off the board," wouldn't they all talk about it for a few minutes and then reject the proposal 4-1 (or maybe 3-2)?
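
A minimal sketch of the individual-incentive reasoning, with payoff numbers that are my own illustrative assumptions rather than anything from the post: for any single director, voting "yes" is at least as good as voting "no" no matter how the other four vote, which is what drives the all-yes, everyone-resigns outcome unless the board coordinates the way you describe.

```python
# Illustrative payoffs (my own assumptions, not from the original post):
# 0 = forced out with no compensation, 1 = resign with compensation,
# 2 = keep your board seat, 3 = keep your seat plus a share of the 51% prize.

def payoff(my_vote, other_yes_votes):
    yes_total = other_yes_votes + (1 if my_vote == "yes" else 0)
    if yes_total <= 2:        # motion fails: status quo, everyone keeps their seat
        return 2
    if yes_total == 5:        # passes unanimously: everyone resigns, compensated
        return 1
    if my_vote == "yes":      # passes 4-1 or 3-2 and I voted yes
        return 3 if yes_total == 3 else 2
    return 0                  # passes and I voted no: forced out, uncompensated

# "Yes" is at least as good as "no" whatever the other four directors do...
assert all(payoff("yes", k) >= payoff("no", k) for k in range(5))
# ...yet if all five follow that logic, the motion passes 5-0 and everyone ends
# up with 1 (resign) instead of the 2 (keep their seat) they get by jointly
# voting it down.
```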

Comment author: complexmeme 24 July 2012 06:56:48PM 1 point

Agreed. I'm pretty sure that even if the other board members didn't see the exact nature of the trap, they'd still find it obvious that it was a trap, especially considering the source.

Comment author: ArisKatsaris 11 July 2012 12:09:44PM 2 points

That last "if you know the other person cooperated" is unnecessary, in a True Prisoner's Dilemma each player prefers defecting in any circumstance.

Not quite: e.g., if you're playing a True Prisoner's Dilemma against a copy of yourself, you prefer cooperating, because you know your choice and your copy's choice will be identical, but you don't know what that choice will be before you actually make it.

If you don't know for sure that the choices will be identical, but there's some other logical connection that makes it, say, 99% certain they'll be identical (e.g. your copies were not created at that particular moment but a month ago, and were allowed to read different random books in the meantime), then one could argue you're still better off preferring cooperation.
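
To make that concrete, here's a minimal sketch of the expected-payoff comparison. The payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 for defecting against a cooperator, 0 for cooperating against a defector) are conventional values I'm assuming for illustration; the 0.99 is the correlation figure from the comment above.

```python
# Assumed payoff matrix (payoff to "me"); these numbers are illustrative,
# not from the original discussion.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move, p_match=0.99):
    """Expected payoff if the copy plays my move with probability p_match."""
    other = "D" if my_move == "C" else "C"
    return p_match * PAYOFF[(my_move, my_move)] + (1 - p_match) * PAYOFF[(my_move, other)]

print(expected_payoff("C"))  # 0.99 * 3 + 0.01 * 0 = 2.97
print(expected_payoff("D"))  # 0.99 * 1 + 0.01 * 5 = 1.04
```

Whether that correlation should actually enter the decision, rather than just being a fact about the outcome while defection still causally dominates, is exactly the point at issue in the reply below.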

Comment author: complexmeme 12 July 2012 03:34:28PM 0 points

Given the context, I was assuming the scenario being discussed was one where the two players' decisions are independent, and where no one expects they may be playing against themselves.

You're right that the game changes if a player thinks that their choice influences (or, arguably, predicts) their opponent's choice.
