JoshuaZ comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM

Comment author: JoshuaZ 28 December 2010 09:14:45PM

> My concerns about the SIAI are mostly about their competence. It seems rather easy for me to imagine another organisation in the SIAI's niche doing a much better job. Are 63 chapters of a Harry Potter fanfic really helping, for instance?

That isn't an SIAI thing; that's Eliezer's thing. But if you really want to know, anecdotal evidence suggests that HPMoR is helping raise the general sanity waterline. Not only has it drawn more people to LW in general, I can personally attest to it helping friends of mine revise irrational beliefs.

(Also, Tim, I know you are very fond of capitalizing "DOOM" and certain other phrases, but the rest of us find it distracting and disruptive. Could you please consider not doing it here?)

> Also, if they think using fear of THE END OF THE WORLD is a good way to stimulate donations, I would be very interested to see information about the effect on society of such marketing. Will it produce a culture of fear? What about the risks of caution?

I'm not sure why you think they regard doomsday predictions as a good way to stimulate donations. They are simply being honest about their goals. Empirically, existential risk is not a great motivator for raising money: look, for example, at how much trouble people concerned with asteroid impacts have getting funded (although now that the WISE survey is complete, we're in much better shape at understanding and handling that risk).

> My general impression is that spreading the DOOM virus around is rarely very constructive. It may well be actively harmful.

So should people not say what they are honestly thinking?

> In financial markets, prophesying market crashes may actually help make them happen, since the whole system works like a big rumour mill - and if a crash is coming, it makes sense to cash in and buy gold - and, if everyone does that, then the crash happens. A case of the self-fulfilling prophecy. The prophet may look smug - but if only they had kept their mouth shut!

Yes, that can happen in markets. What is the analogy here? Is there a situation where simply talking about the risk of unFriendly AI will somehow make unFriendly AI more likely? (And note, improbable decision-theory basilisks don't count.)

> Have the DOOM merchants looked into this kind of thing? Where are their reassurances that prophesying DOOM - and separating passing punters from their cash in the process - is a harmless pastime, with no side effects?

If your standard is that they have to show there are no side effects, that's a pretty high bar. How certain do they need to be? To return to the asteroid example: thanks to the WISE mission, we are now tracking about 95% of all asteroids that could pose an extinction threat if they impacted, and a much higher percentage of those in the most threatening orbits. But whenever we spend money on anything else, we might be missing that remaining small percentage. We'll feel really stupid if our donations to any cause turn out not to matter because we missed one of those asteroids; if a big asteroid hits the Earth tomorrow, we'll feel really dumb. By the same token, we'll feel really stupid if tomorrow someone makes an approximation of AIXI devoted to playing WoW that goes foom - the fact that we have the asteroids charted won't make any difference then.

No matter how good our estimates are, there's a chance we'll be wrong. And no matter what happens, there will be side effects, if only because we have a finite set of resources: the more we talk about any one issue, the less we focus on the others. And yes, obviously, if fooming turns out not to be an issue, there will have been negative side effects. So where is the line?