multifoliaterose comments on (One reason) why capitalism is much maligned - Less Wrong

Post author: multifoliaterose 19 July 2010 03:48AM


Comment author: multifoliaterose 19 July 2010 02:13:27PM *  8 points [-]

And Jeff Bezos spends his money on Blue Origin which furthers the cause of the human race as a whole

This seems good to me from the little that I know.

fathers in Africa spend their disposable income on "wine, cigarettes and prostitutes",

See point 2 of http://blog.givewell.org/2010/05/26/thoughts-on-moonshine-or-the-kids/

Only the super-rich have a demonstrated psychological capability to spend large amounts of their time and money on the greater good.

In my opinion the overall giving record of the super-rich is appalling and I strain to find a meaningful sense in which the above statement is true. I don't think that it's clear that the super-rich show more demonstrated psychological capability to spend time and money on the greater good than fathers in Africa do.

According to http://features.blogs.fortune.cnn.com/2010/06/16/gates-buffett-600-billion-dollar-philanthropy-challenge/

"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."

It should be kept in mind that (a) there are a few very big donors who drag the mean up and (b) much of the money donated by the super-rich is donated for signaling reasons without a view toward maximizing positive impact.

Note also that Peter Thiel has paid more money to SIAI than all other human beings combined, and that the Future of Humanity Institute is paid for almost entirely by British billionaire James Martin.

It's not clear that funding SIAI and FHI has positive expected value.

At http://blog.givewell.org/2009/05/07/small-unproven-charities/ Holden Karnofsky points out that

"[Funding a small charity carries a risk that] it succeeds financially but not programmatically – that with your help, it builds a community of donors that connect with it emotionally but don’t hold it accountable for impact. It then goes on to exist for years, even decades, without either making a difference or truly investigating whether it’s making a difference. It eats up money and human capital that could have saved lives in another organization’s hands.

As a donor, you have to consider this a disaster that has no true analogue in the for-profit world. I believe that such a disaster is a very common outcome, judging simply by the large number of charities that go for years without ever even appearing to investigate their impact. I believe you should consider such a disaster to be the default outcome for a new, untested charity, unless you have very strong reasons to believe that this one will be exceptional."

The "saving lives" reference may not be relevant, but the fact remains that by funding SIAI and FHI when these organizations have not demonstrated high levels of accountability, donors to these organizations may systematically increase rather than decrease existential risk.

See Holden's remarks on SIAI at the comment linked under http://blog.givewell.org/2010/06/29/singularity-summit/

Our hunter-gatherer intuitions about equality are based on assumptions of zero sum games and technological standstill, and are almost completely counterproductive in this modern, highly-positive-sum, highly-complex world.

Agree with this.

At the same time, I would say that too much inequality may be bad for economic growth. In practice, too much inequality seems to give rise to political instability and interferes with the ability of very bright children born to poor parents to make the most of their talents.

Comment author: xamdam 19 July 2010 04:35:35PM *  5 points [-]

And Jeff Bezos spends his money on Blue Origin which furthers the cause of the human race as a whole

This seems good to me from the little that I know.

No need to reply to this red herring about the spending habits of the super-rich; they are largely irrelevant to your argument (that capitalism is still the better system).

But once we go down that road...

"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."

It's a good counterpoint to Roko's fantasy about the kindness of billionaires. I suspect he fell for availability bias with his space program idea and Bezos. BTW, the Blue Origin investment is not even close to closing his income gap with the average Joe. Buffett, who is giving away all of his money to be managed by the tech-smart Gates, would have made a better example (which again supports the availability-bias point ;).

Still, the real economics of it is that the super-rich by and large do not take away much from society, with some exceptions. This is because they either buy goods and services for themselves or invest. This per se adds a single cycle to the money circulation rate, which is not huge. The exceptions come when they spend obscene amounts on essentially single-use goods that need to be produced anew, such as building palaces for themselves and their whole f*g royal extended family (sorry for the emotion, but this is what I feel for the oil sheiks). Somewhat counter-intuitively, if they compete on existing luxuries, such as Michelangelo paintings, and spend a billion dollars on them, the harm is pretty minimal, since few societal resources need to be wasted on these goods.

All in all I have heard of very few billion-dollar self-indulgent spenders. The rest of the money gets invested, often in new startups/technologies that your mutual fund will not invest in, which is in fact a very valuable service.

Comment author: multifoliaterose 19 July 2010 07:22:36PM 2 points [-]

Thanks for your interesting response. If you have any relevant references concerning what billionaires do with their money, I would appreciate them.

If super-rich people really do reinvest most of their money in startups/technologies, then their disinclination toward charitable spending may not be problematic at all. It's occurred to me that investment in startups/technologies may be more cost-effective than donations to virtually all presently existing charities (even the ones that GiveWell recommends, which I presently donate to).

At the same time, if the situation is as you describe, then why don't billionaires make this point more often to increase their public adulation?

Comment author: xamdam 19 July 2010 08:42:57PM *  1 point [-]

Most of my data is just plain logic and some reading of biographies/news.

I imagine it's actually pretty hard to spend a billion dollars on yourself, because each thing that you acquire, if it is of any value above rubbish, carries management overhead. These people have teams managing their staff; owning too much stuff can get pretty annoying.

I do not know what they invest in in general, I suspect hedge funds and VC firms, if not their own business expansion, since these can provide greater returns with small risk if you are rich enough to diversify. What I can say is that I and other ordinary folk do not invest in startups, as I cannot diversify that risk enough and cannot afford time for due diligence etc. It's up to the rich to provide Angel/VC funding.

Comment author: multifoliaterose 19 July 2010 09:00:59PM 1 point [-]

Thanks for your response.

•I have the same impression that rich people can't spend too much money on themselves. But I remain concerned that they may split their fortunes many ways among their children, grandchildren, great-grandchildren etc., who all spend a lot of money on luxury goods. It would be good to have some data on this point.

•Hedge funds may skew wealth on account of picking up "quarters on the sidewalk" that otherwise would have been distributed randomly among members of the population. Wealth skewing seems to be bad for (economic growth)/(political stability)/(average quality of life). On the one hand, hedge funds may stabilize the economy by suppressing bubbles; on the other hand, they may destabilize it by leveraging a lot of funds and occasionally messing up. These things are complicated.

•Angel/VC funding is probably good.

•I would like to see super-rich people systematically using their money to achieve maximum positive social impact. Angel/VC funding should have some positive social impact, but since the market system does not take into account externalities & because there are tragedy of the commons issues in the market system, I think that super-rich people could be benefiting the world much more than they are now if they were actively trying to benefit the world rather than just trying to make more money.

Comment deleted 19 July 2010 05:44:55PM *  [-]
Comment author: xamdam 19 July 2010 06:02:02PM *  3 points [-]

I venture that if you put this data next to the marginal utility of money, the 1% donation of ordinary people will look way more charitable than the 8% of the super-rich.
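The marginal-utility comparison above can be sketched numerically. The toy model below is purely illustrative and every figure in it is an assumption: log utility above a hypothetical $45k subsistence floor, a hypothetical $50k ordinary income, and $345M as simply the $138B total from the IRS quote divided evenly across the 400 taxpayers.

```python
import math

SUBSISTENCE = 45_000  # assumed consumption floor; utility = ln(income - floor)

def utility_sacrifice(income, fraction):
    """Utility given up by donating `fraction` of income, under
    log utility measured above the subsistence floor."""
    donation = fraction * income
    return math.log(income - SUBSISTENCE) - math.log(income - donation - SUBSISTENCE)

ordinary = utility_sacrifice(50_000, 0.01)       # 1% of an assumed $50k income
rich = utility_sacrifice(345_000_000, 0.08)      # 8% of the $138B/400 average

print(f"ordinary sacrifices {ordinary:.3f} utils, super-rich {rich:.3f}")
```

Note that the conclusion hinges on the subsistence floor: under pure log utility with no floor, the sacrifice depends only on the fraction given, so 8% would cost more than 1% at any wealth level. The floor is one way to formalize Buffett's "I never gave away a dollar I actually needed."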

Buffett said, rather honestly (after declaring his intention of giving away 99%), something along the lines of "don't look at me for charity advice, I never gave away a dollar I actually needed". You have to discount super-rich giving quite steeply on the altruism scale.

Additional accounting note: the 8% comes from American data while your 1% figure is from the UK, and according to you Americans give waaay more, so 8x is not the true ratio. Also it's not known how much of the super-rich giving goes to churches, as you point out.

Just to point out, we are arguing not about the altruism of the super-rich but about their usefulness in a capitalist society; they are not only a necessary evil but are actually useful because of their investment profile.

Comment deleted 19 July 2010 06:06:29PM [-]
Comment author: xamdam 19 July 2010 06:44:30PM *  2 points [-]

Agreed, but this still does not indicate any general altruism of the super-rich. Pragmatically, you're better off hitting them up for $10M than me for $100, even if I am giving up more utils in the process. Individually, Thiel deserves credit for far-sightedness, of course.

Comment deleted 19 July 2010 04:15:45PM *  [-]
Comment author: multifoliaterose 19 July 2010 04:26:31PM 4 points [-]

Okay, fine: I currently believe that funding SIAI and FHI has expected value near zero but my belief on this matter is unstable and subject to rapid change with incoming evidence.

Comment author: Vladimir_Nesov 19 July 2010 04:52:23PM *  6 points [-]

As I see it, most of current worth of SIAI is in focusing attention on the problem of FAI, and it doesn't need to produce any actual research on AI to make progress on that goal. The mere presence of this organization allows people like me to (1) recognize the problem of FAI, something you are unlikely to figure out or see as important on your own and (2) see the level of support for the cause, and as a result be more comfortable about seriously devoting time to studying the problem (in particular, extensive discussion by many smart people on Less Wrong and elsewhere gives more confidence that the idea is not a mirage).

Initially, most of the progress in this direction was produced personally by Eliezer, but now SIAI is strong enough to carry on. Publicity causes more people to seriously think about the problem, which will eventually lead to technical progress, if it's possible at all, regardless of whether current SIAI is capable of making that progress.

This makes current SIAI clearly valuable, because whatever is the truth about possible paths towards FAI, it takes a significant effort to explore them, and SIAI calls attention to that task. If SIAI can make progress on the technical problem as well, more power to them. If other people begin to make technical progress, they now have the option of affiliating with SIAI, which might be a significant improvement over personally trying to fight for funding on FAI research.

Comment author: multifoliaterose 19 July 2010 05:14:55PM *  6 points [-]

Not all publicity is good publicity. The majority of people whom I've met outside of Less Wrong who have heard of SIAI think that the organization is full of crazy people. A lot of these people are smart. Some of these people have Ph.D.'s from top-tier universities in the sciences.

I think that SIAI should be putting way more emphasis on PR, networking within academia, etc. This is in consonance with a comment by Holden Karnofsky here:

To the extent that your activities will require “beating” other organizations (in advocacy, in speed of innovation, etc.), what are the skills and backgrounds of your staffers that are relevant to their ability to do this?

I'm worried that SIAI's poor ability to make a good public impression may poison the cause of existential risk in the mind of the public and dissuade good researchers from studying existential risk. There are some very smart people who it would be good to have working on Friendly AI who, despite their capabilities, care a lot about their status in broader society. I think that it's very important that an organization that works toward Friendly AI at least be well regarded by a sizable minority of people in the scientific community.

Comment author: andreas 19 July 2010 06:50:58PM 13 points [-]

In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as the Singularity University. With its $25k tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.

We need to make it easier to distinguish the preference and decision theory research program as an attempt to solve a hard problem from the larger cluster of singularity ideas, which, even in the intelligence explosion variety, are not essential.

Comment author: Utilitarian 25 July 2010 04:46:30AM 6 points [-]

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

Comment author: ata 25 July 2010 07:10:44AM *  3 points [-]

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Agreed; I've had similar thoughts. Given recent popular coverage of the various things called "the Singularity", I think we need to accept that it's pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction that anyone can think of, centered primarily around Kurzweil's predictions.

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

I disagree somewhat there. Its ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, being carried out in service of that goal. Its day-to-day activities may not look like what people might imagine when they think of an AI research institute, but that's because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it's fair to describe SIAI as still being fundamentally about FAI (at least to anyone who's adequately prepared to think about FAI).

Describing it as "a philosophy institute researching hugely important fundamental questions" may give people the wrong impressions, if it's not quickly followed by more specific explanation. When people think of "philosophy" + "hugely important fundamental questions", their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. ("Philosophy" is another term I'm inclined toward avoiding these days.) When I've had to describe SIAI in one phrase to people who have never heard of it, I've been calling it an "artificial intelligence think-tank". Meanwhile, Michael Vassar's Twitter describes SIAI as a "decision theory think-tank". That's probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where "decision theory" already refers to an interesting established field that's relevant to AI but doesn't share with "artificial intelligence" the connotations of missed goals, science fiction geekery, anthropomorphism, etc.

Comment author: Vladimir_Nesov 19 July 2010 05:17:49PM *  4 points [-]

I'm pretty sure usable suggestions for improvement are welcome. About ten years ago there was only the irrational version of Eliezer, who had just recently understood that the problem existed, while right now we have some non-crazy introductory and scholarly papers, and a community that understands the problem. The progress seems to be in the right direction.

If you asked the same people about the idea of FAI fifteen years ago, say, they'd label it crazy just the same. SIAI gets labeled automatically, by association with the idea. Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you'd need to at least point out specific actions to attempt this argument).

Comment author: multifoliaterose 19 July 2010 06:28:24PM 2 points [-]

Good point - I will write to SIAI about this matter.

I actually agree that up until this point progress has been in the right direction. My thinking is that SIAI has attracted a community consisting of a very particular kind of person, may have achieved near-saturation within this population, and that consequently SIAI as presently constituted may have outlived the function that you mention. This is the question of room for more funding.

Agree with

Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you'd need to at least point out specific actions to attempt this argument).

There are things that I have in mind but I prefer to contact SIAI about them directly before discussing them in public.

Comment author: whpearson 19 July 2010 06:14:45PM 2 points [-]

I think there are many people who worry about AI in one form or another. They may not do very informed worrying and they may be anthropomorphising, but they still worry and that might be harnessable. See Stephen Hawkings on AI.

SIAI's emphasis on the singularity aspect of the possible dangers of AI is unfortunate, as it requires people to get their heads around this. It alienates the people who just worry about the robot uprising, or about their jobs being stolen, or about being outcompeted evolutionarily.

So let's say instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them. In the process of doing that it would have to develop theories of FAI in order to say whether a system was going to have human-like values stably.

The emphasis placed upon a very hard takeoff just makes it less approachable and look more wacky to the casual observer.

Comment author: Vladimir_Nesov 19 July 2010 07:33:13PM *  1 point [-]

Safe robots have nothing whatsoever to do with FAI. Saying otherwise would be incompetent, or a lie. I believe that there need not be an emphasis of hard takeoff, but likely for reasons not related to yours.

Comment author: thomblake 19 July 2010 07:37:41PM 5 points [-]

Agreed. My dissertation is on moral robots, and one of the early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).

Comment author: whpearson 19 July 2010 07:41:10PM 2 points [-]

Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?

Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, with separate divisions for robotics and FAI. Gain the trust of the average scientist/technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.

Comment author: Vladimir_Nesov 19 July 2010 07:57:29PM 2 points [-]

Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?

I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.

Call your research institute something like the Institute for the prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist/technology aware person by doing a good job on robotics and they are more likely to trust you when it comes to FAI.

A truly devious plan.

Comment author: cupholder 20 July 2010 02:41:00AM 1 point [-]

I think that's a clever idea that deserves more eyeballs.

Comment author: FAWS 19 July 2010 07:51:02PM 0 points [-]

Nothing whatsoever is a bit strong. About as much as preventing tiger attacks and fighting malaria, perhaps?

Comment author: Vladimir_Nesov 19 July 2010 07:54:37PM 1 point [-]

Saving tigers from killer robots.

Comment author: Soki 19 July 2010 05:10:37PM *  1 point [-]

This video addresses this question: Anna Salamon's 2nd Talk at Singularity Summit 2009 -- How Much it Matters to Know What Matters: A Back of the Envelope Calculation
It is 15 minutes long, but you can skip to 11m37s.

Edit: added the name of the video, thanks for the remark Vladimir.

Comment author: Vladimir_Nesov 19 July 2010 05:15:23PM 5 points [-]

The link above is Anna Salamon's 2nd Talk at Singularity Summit 2009 "How Much it Matters to Know What Matters: A Back of the Envelope Calculation."

(You should give some hint of the content of a link you give, at least the title of the talk.)

Comment deleted 19 July 2010 04:48:34PM *  [-]
Comment author: multifoliaterose 19 July 2010 05:04:13PM 2 points [-]

Okay, let's try again. My current belief is that at present, donations to SIAI are a less cost effective way of accomplishing good than donating to a charity like VillageReach or StopTB which improves health in the developing world.

My internal reasoning is as follows:

Roughly speaking, the potential upside of donating to SIAI (whatever research SIAI would get done) is outweighed by the potential downside (the fact that SIAI could divert funding away from future existential risk organizations). By way of contrast, I'm reasonably confident that there's some upside to improving health in the developing world (keep in mind that historically, development has been associated with political stability and with getting more smart people into the pool of people thinking about worthwhile things), and giving to accountable, effectiveness-oriented organizations will raise the standard for accountability across the philanthropic world (including existential risk charities).

I wish that there were better donation opportunities than VillageReach and StopTB and I'm moderately optimistic that some will emerge in the near future (e.g. over the next ten years) but I don't see any at the moment.

Comment deleted 19 July 2010 05:27:50PM [-]
Comment author: multifoliaterose 19 July 2010 06:07:28PM 2 points [-]

Good question. I haven't considered this point - thanks for bringing it to my consideration!

Comment deleted 19 July 2010 06:28:05PM *  [-]
Comment author: multifoliaterose 19 July 2010 07:09:27PM *  6 points [-]

•I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.

•I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).

Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.

For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I'm not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don't think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.

•I think that the reason that governments are not researching existential risk and artificial intelligence is because (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.

•Thanks for mentioning the Lifeboat foundation.

Comment deleted 19 July 2010 08:59:49PM *  [-]
Comment deleted 19 July 2010 08:22:17PM *  [-]
Comment author: Vladimir_Nesov 19 July 2010 07:38:48PM 1 point [-]

My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.

I suspect that helping failed states efficiently and sustainably is very difficult, possibly more so than developing FAI as a shortcut. Of course, it's a completely different kind of challenge.

Comment deleted 19 July 2010 08:09:23PM *  [-]
Comment author: FAWS 19 July 2010 04:27:56PM 0 points [-]

I read "not clear that X has positive expected value" as something like "I'm not sure an observer with perfect knowledge of all relevant information, but not of future outcomes would assign X a positive expected value."

Comment deleted 19 July 2010 04:56:57PM *  [-]
Comment author: FAWS 19 July 2010 05:04:12PM 0 points [-]

To clarify: no knowledge of things like the state of individual electrons or photons, and therefore no knowledge of future "random" (chaos theory) outcomes. This was one of the possible objections I had considered, but decided against addressing in advance; turns out I should have.

Comment author: Vladimir_Nesov 19 July 2010 05:06:55PM 0 points [-]

Logical uncertainty is also something you must fight on your own. Like you can't know what's actually in the world, if you haven't seen it, you can't know what logically follows from what you know, if you didn't perform the computation.

Comment author: FAWS 19 July 2010 05:16:34PM 0 points [-]

And that was the other possible objection I had thought of!

I had meant to include that sort of thing in "relevant knowledge", but couldn't think of any good way to phrase it in the 5 seconds I thought about it. I wasn't trying to make any important argument, it was just a throwaway comment.

Comment author: Vladimir_Nesov 19 July 2010 05:23:33PM *  0 points [-]

And that was the other possible objection I had thought of!

I don't understand what this refers to. (Objection to what? What objection? In what context did you think of it?)

Comment author: FAWS 19 July 2010 06:20:25PM *  0 points [-]

I commented on the objection that being unsure whether the expected value of something is positive conflicts with the definition of expected value with:

I read "not clear that X has positive expected value" as something like "I'm not sure an observer with perfect knowledge of all relevant information, but not of future outcomes would assign X a positive expected value."

When writing this I thought of two possible objections/comments/requests for clarification/whatever:

  1. That perfect knowledge implies knowledge of future outcomes.

  2. Your logical uncertainty point (though I had no good way to phrase this).

I briefly considered addressing them in advance, but decided against it. Both whatevers were made in fairly rapid succession (though yours apparently not with that comment in mind?), so I definitely should have.

There is no way that short throwaway comment deserved a seven post comment thread.

Comment deleted 19 July 2010 02:24:41PM *  [-]
Comment author: multifoliaterose 19 July 2010 02:31:52PM 6 points [-]

What SIAI/FHI are trying to do has very high expected value, but in general, because unaccountable charities often exhibit gross inefficiency at accomplishing their stated goals, donating to organizations with low levels of accountability may hurt the causes that the charities work toward (on account of resulting in the charities ballooning and making it harder for more promising organizations that work on the same causes to emerge).

Comment deleted 19 July 2010 03:33:54PM [-]
Comment author: multifoliaterose 19 July 2010 03:36:31PM *  2 points [-]

I don't think that SIAI and FHI are less-than-averagely accountable. I think that the standard for accountability in the philanthropic world is in general very low and that there's an opportunity for rationalists to raise it by insisting that the organizations that they donate to demonstrate high levels of accountability.

Comment author: Vladimir_Nesov 19 July 2010 02:36:42PM *  6 points [-]

I can't imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value.

SIAI has a higher risk of producing uFAI than your average charity.

Comment deleted 19 July 2010 03:47:55PM [-]
Comment author: Vladimir_Nesov 19 July 2010 03:59:35PM 4 points [-]

They could be dangerously deluded, for example, even if their aim is right. Currently, I don't believe they are, but I gave an example of how you could possibly come to a conclusion that SIAI has negative expected value.

Comment author: FAWS 19 July 2010 03:59:02PM *  3 points [-]

Maybe FAI is impossible, humanity's only hope is to avoid the emergence of any super-human AIs, fooming is difficult and slow enough for that to be a somewhat realistic prospect and almost friendly AI is a lot more dangerous because it is less likely to be destroyed in time?

Comment author: Vladimir_Nesov 19 July 2010 04:05:03PM *  3 points [-]

Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult than for humanity to grow up by itself.)

Comment author: FAWS 19 July 2010 04:10:17PM 0 points [-]

(Of course, FAI is possible in principle, by its very problem statement, but might be more difficult than for humanity grow up for itself.)

Could you rephrase that? I have no idea what you are saying here.

Comment author: Vladimir_Nesov 19 July 2010 04:14:34PM *  5 points [-]

FAI is a device for producing good outcome. Humanity itself is such a device, to some extent. FAI as AI is an attempt to make that process more efficient, to understand the nature of good and design a process for producing more of it. If it's in practice impossible to develop such a device significantly more efficient than humanity, then we just let the future play out, guarding it against known failure modes, such as AGI with arbitrary goals.

Comment author: FAWS 19 July 2010 04:20:41PM 2 points [-]

Thank you, now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.

Comment deleted 19 July 2010 04:04:13PM *  [-]
Comment author: Vladimir_Nesov 19 July 2010 05:11:54PM *  1 point [-]

Now what kind of civilized rational conversation is that?

Comment deleted 19 July 2010 03:45:00PM *  [-]