Wei_Dai comments on The $125,000 Summer Singularity Challenge - Less Wrong

20 Post author: Kaj_Sotala 29 July 2011 09:02PM




Comment author: Wei_Dai 31 July 2011 02:15:03AM 1 point [-]

You said "I therefore consider the above-mentioned approach more effective.", but if all you're claiming is that the above-mentioned approach ("a well-reasoned statistical approach with good software engineering methodologies") has a P*0.9 probability of causing an existential disaster, and not claiming that it has a significant chance of causing a positive Singularity, then why do you think funding such projects is effective for reducing existential risk? Is the idea that each such project would displace a "hacked together" project that would otherwise be started?

Comment author: jsteinhardt 31 July 2011 04:07:54PM *  0 points [-]

EDIT: I originally misinterpreted your post slightly, and corrected my reply accordingly.

Not quite. The hope is that such a project will succeed before any other hacked-together project succeeds. More broadly, the hope is that partial successes using principled methodologies will encourage their wider adoption in the AI community as a whole, and more to the point that a contingent of highly successful AI researchers advocating Friendliness can change the overall mindset of the field.

The default is a hacked-together AI project. SIAI's FAI research is trying to displace this, but I don't think they will succeed (my information on this is purely outside-view, however).

An explicit instantiation of some of my calculations:

SIAI approach: 0.1% chance of replacing P with 0.1P

Approach that integrates with the rest of the AI community: 30% chance of replacing P with 0.9P

In the first case, P is basically staying constant (0.999P + 0.001*0.1P ≈ 0.9991P); in the second case it is being replaced with 0.7P + 0.3*0.9P = 0.97P.
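The expected-risk arithmetic above can be sketched as follows. This is a minimal illustration, not part of the original comment; the function name and figures simply restate the numbers given (with probability p_success the baseline risk P is replaced by factor*P, otherwise P is unchanged):

```python
def expected_risk_multiplier(p_success, factor):
    # Expected multiplier on the baseline existential risk P:
    # with probability p_success the risk becomes factor * P,
    # with probability (1 - p_success) it stays at P.
    return (1 - p_success) * 1.0 + p_success * factor

# SIAI approach: 0.1% chance of replacing P with 0.1P
siai = expected_risk_multiplier(0.001, 0.1)        # 0.9991 -- basically P

# Integrated approach: 30% chance of replacing P with 0.9P
integrated = expected_risk_multiplier(0.30, 0.9)   # 0.97

print(f"SIAI: {siai:.4f}P, integrated: {integrated:.2f}P")
```

On these numbers the integrated approach yields the larger expected reduction in P, which is the point of the comparison.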