multifoliaterose comments on (One reason) why capitalism is much maligned - Less Wrong

Post author: multifoliaterose 19 July 2010 03:48AM


Comment author: multifoliaterose 19 July 2010 06:07:28PM 2 points [-]

Good question. I hadn't considered this point - thanks for bringing it to my attention!

Comment deleted 19 July 2010 06:28:05PM *  [-]
Comment author: multifoliaterose 19 July 2010 07:09:27PM *  6 points [-]

•I think that at the margin a highly accountable existential risk charity would definitely be better than a third-world charity. I could imagine that if a huge amount of money were flooding into the study of existential risk, it would be more cost-effective to send money to the developing world.

•I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (because the most talented researchers are very rare, very few people are currently thinking about these things, and I believe the correlation between currently thinking about these things and having talent is weak).

Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.

For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I'm not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don't think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.

•I think that the reason governments are not researching existential risk and artificial intelligence is that (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.

•Thanks for mentioning the Lifeboat Foundation.

Comment deleted 19 July 2010 08:59:49PM *  [-]
Comment author: multifoliaterose 19 July 2010 09:10:40PM 2 points [-]

•I agree that there's a gap between when rich individuals see the benefits of existential risk research and when the general public does.

•The gap may nevertheless be inconsequential relative to the time that it will take to build a general AI.

•I presently believe that it's not desirable for general AI research to be done in secret. Secret research proceeds more slowly than open research, and we may be "on the clock" because of existential risks unrelated to general AI. In my mind this factor outweighs the arguments that Eliezer has advanced for general AI research being done in secret.

Comment author: CronoDAS 19 July 2010 09:26:06PM 2 points [-]

That, and secrets are damn hard to keep. In all of history, there has only been one military secret that has never been exposed, and that's the composition of Greek fire. Someone is going to leak.

Comment deleted 19 July 2010 08:22:17PM *  [-]
Comment author: multifoliaterose 19 July 2010 08:46:14PM *  4 points [-]

"Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?"

Yes, I buy this argument.

"If so, then it is hard to see how anything other than existential risks matters."

The question is just whether donating to an existential risk charity is the best way to avert existential risk.

•I believe that political instability is conducive to certain groups desperately racing to produce and use powerful technologies. This suggests that promoting political stability reduces existential risk.

•I believe that when people are leading lives that they find more fulfilling, they make better decisions, so that improving quality of life reduces existential risk.

•I believe that, all else being equal, economic growth reduces "existential risk in the broad sense." By this I mean that economic growth may prevent astronomical waste.

Of course, as a heuristic it's more important that technologies develop safely than that they develop quickly, but one could still imagine that at some point, the marginal value of an extra dollar spent on existential risk research drops so low that speeding up economic growth is a better use of money.

•Of the above three points, the first two are more compelling than the third, but the third could still play a role. I also believe that political stability, quality of life, and economic growth are pairwise correlated, so that it's possible to address the three simultaneously.

•As I said above, at the margin I think that a good charity devoted to studying existential risk should be getting more funding, but at present I do not believe that a good charity devoted to studying existential risk could cost-effectively absorb arbitrarily many dollars.

Comment author: rhollerith_dot_com 19 July 2010 10:18:20PM *  0 points [-]

"Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?"

I do. In fact, I assign a person certain to be born a million years from now about the same intrinsic value as a person who exists today, though there are a lot of ways in which my doing good for a person who exists today has significant instrumental value that doing good for a person certain to be born a million years from now does not.

Comment author: Vladimir_Nesov 19 July 2010 07:38:48PM 1 point [-]

My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.

I suspect helping failed states efficiently and sustainably is very difficult, possibly more so than developing FAI as a shortcut. Of course, it's a completely different kind of challenge.

Comment deleted 19 July 2010 08:09:23PM *  [-]
Comment author: Vladimir_Nesov 19 July 2010 09:10:00PM *  1 point [-]

I disagree strongly. You can repeatedly get it wrong with failed states and learn from your mistakes. The utility cost of each failure is additive, whereas the first FAI failure is fatal.

Distinguish the difficulty of developing an adequate theory from the difficulty of verifying that a theory is adequate. It's failure at the latter that might lead to disaster, while not failing requires a lot of informed rational caution. On the other hand, not inventing an adequate theory doesn't directly lead to a disaster, and failure to invent an adequate theory of FAI is something you can learn from (the story of my life for the last three years).