multifoliaterose comments on (One reason) why capitalism is much maligned - Less Wrong
•I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.
•I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).
Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.
For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I'm not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don't think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.
•I think that the reason that governments are not researching existential risk and artificial intelligence is that (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.
•Thanks for mentioning the Lifeboat Foundation.
•I agree that there's a gap between when rich individuals see the benefits of existential risk research and when the general public sees the benefits of existential risk research.
•The gap may nevertheless be inconsequential relative to the time that it will take to build a general AI.
•I presently believe that it's not desirable for general AI research to be done in secret. Secret research proceeds more slowly than open research, and we may be "on the clock" because of existential risks unrelated to general AI. In my mind this factor outweighs the arguments that Eliezer has advanced for general AI research being done in secret.
That, and secrets are damn hard to keep. In all of history, there has only been one military secret that has never been exposed, and that's the composition of Greek fire. Someone is going to leak.
Yes, I buy this argument.
The question is just whether donating to an existential risk charity is the best way to avert existential risk.
•I believe that political instability is conducive to certain groups desperately racing to produce and utilize powerful technologies. This points in the direction of promotion of political stability reducing existential risk.
•I believe that when people are leading lives that they find more fulfilling, they make better decisions, so that improving quality of life reduces existential risk.
•I believe that (all else being equal), economic growth reduces "existential risk in the broad sense." By this I mean that economic growth may prevent astronomical waste.
Of course, as a heuristic it's more important that technologies develop safely than that they develop quickly, but one could still imagine that at some point, the marginal value of an extra dollar spent on existential risk research drops so low that speeding up economic growth is a better use of money.
•Of the above three points, the first two are more compelling than the third, but the third could still play a role, and I believe that there's a correlation between each pair of political stability, quality of life, and economic growth, so that it's possible to address the three simultaneously.
•As I said above, at the margin I think that a good charity devoted to studying existential risk should be getting more funding, but at present I do not believe that a good charity devoted to studying existential risk could cost effectively absorb arbitrarily many dollars.
I do. In fact, I assign a person certain to be born a million years from now about the same intrinsic value as a person who exists today, though there are a lot of ways in which my doing good for a person who exists today has significant instrumental value which doing good for a person certain to be born a million years from now does not.