Roko comments on (One reason) why capitalism is much maligned - Less Wrong

Post author: multifoliaterose 19 July 2010 03:48AM


Comment author: multifoliaterose 19 July 2010 02:13:27PM 8 points

"And Jeff Bezos spends his money on Blue Origin, which furthers the cause of the human race as a whole."

This seems good to me from the little that I know.

"fathers in Africa spend their disposable income on 'wine, cigarettes and prostitutes'"

See point 2 of http://blog.givewell.org/2010/05/26/thoughts-on-moonshine-or-the-kids/

"Only the super-rich have a demonstrated psychological capability to spend large amounts of their time and money on the greater good."

In my opinion the overall giving record of the super-rich is appalling, and I strain to find a meaningful sense in which the above statement is true. It's not clear that the super-rich have demonstrated more psychological capability to spend time and money on the greater good than fathers in Africa have.

According to http://features.blogs.fortune.cnn.com/2010/06/16/gates-buffett-600-billion-dollar-philanthropy-challenge/

"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."

It should be kept in mind that (a) there are a few very big donors who drag the mean up and (b) much of the money donated by the super-rich is donated for signaling reasons without a view toward maximizing positive impact.

"Note also that Peter Thiel has paid more money to SIAI than all other human beings combined, and that the Future of Humanity Institute is paid for almost entirely by British billionaire James Martin."

It's not clear that funding SIAI and FHI has positive expected value.

At http://blog.givewell.org/2009/05/07/small-unproven-charities/ Holden Karnofsky points out that

"[Funding a small charity carries a risk that] it succeeds financially but not programmatically – that with your help, it builds a community of donors that connect with it emotionally but don’t hold it accountable for impact. It then goes on to exist for years, even decades, without either making a difference or truly investigating whether it’s making a difference. It eats up money and human capital that could have saved lives in another organization’s hands.

As a donor, you have to consider this a disaster that has no true analogue in the for-profit world. I believe that such a disaster is a very common outcome, judging simply by the large number of charities that go for years without ever even appearing to investigate their impact. I believe you should consider such a disaster to be the default outcome for a new, untested charity, unless you have very strong reasons to believe that this one will be exceptional."

The "saving lives" reference may not be relevant, but the fact remains that by funding SIAI and FHI when these organizations have not demonstrated high levels of accountability, donors to these organizations may systematically increase rather than decrease existential risk.

See Holden's remarks on SIAI at the comment linked under http://blog.givewell.org/2010/06/29/singularity-summit/

"Our hunter-gatherer intuitions about equality are based on assumptions of zero-sum games and technological standstill, and are almost completely counterproductive in this modern, highly positive-sum, highly complex world."

Agree with this.

At the same time, I would say that too much inequality may be bad for economic growth: in practice, it seems to give rise to political instability and to interfere with the ability of very bright children born to poor parents to make the most of their talents.

Comment deleted 19 July 2010 02:24:41PM
Comment author: multifoliaterose 19 July 2010 02:31:52PM 6 points

What SIAI/FHI are trying to do has very high expected value. But unaccountable charities often exhibit gross inefficiency at accomplishing their stated goals, so donating to organizations with low levels of accountability may hurt the very causes those charities work toward: the charities balloon, making it harder for more promising organizations working on the same causes to emerge.

Comment deleted 19 July 2010 03:33:54PM
Comment author: multifoliaterose 19 July 2010 03:36:31PM 2 points

I don't think that SIAI and FHI are less-than-averagely accountable. I think that the standard for accountability in the philanthropic world is in general very low, and that there's an opportunity for rationalists to raise it by insisting that the organizations they donate to demonstrate high levels of accountability.

Comment author: Vladimir_Nesov 19 July 2010 02:36:42PM 6 points

"I can't imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value."

SIAI has a higher risk of producing uFAI than your average charity.

Comment deleted 19 July 2010 03:47:55PM
Comment author: Vladimir_Nesov 19 July 2010 03:59:35PM 4 points

They could be dangerously deluded, for example, even if their aim is right. Currently, I don't believe they are, but I gave an example of how you could possibly come to the conclusion that SIAI has negative expected value.

Comment author: FAWS 19 July 2010 03:59:02PM 3 points

Maybe FAI is impossible, humanity's only hope is to avoid the emergence of any superhuman AIs, fooming is difficult and slow enough for that to be a somewhat realistic prospect, and an almost-friendly AI is a lot more dangerous because it is less likely to be destroyed in time?

Comment author: Vladimir_Nesov 19 July 2010 04:05:03PM 3 points

Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult than for humanity to grow up for itself.)

Comment author: FAWS 19 July 2010 04:10:17PM 0 points

(Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult than for humanity to grow up for itself.)

Could you rephrase that? I have no idea what you are saying here.

Comment author: Vladimir_Nesov 19 July 2010 04:14:34PM 5 points

FAI is a device for producing good outcomes. Humanity itself is such a device, to some extent. FAI as AI is an attempt to make that process more efficient: to understand the nature of good and design a process for producing more of it. If it's in practice impossible to develop such a device significantly more efficient than humanity, then we just let the future play out, guarding it against known failure modes, such as AGIs with arbitrary goals.

Comment author: FAWS 19 July 2010 04:20:41PM 2 points

Thank you, now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.

Comment deleted 19 July 2010 04:04:13PM
Comment author: Vladimir_Nesov 19 July 2010 05:11:54PM 1 point

Now what kind of civilized rational conversation is that?

Comment deleted 19 July 2010 03:45:00PM