Will_Newsome comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog | 04 March 2012 06:06AM

Comment author: Will_Newsome | 29 March 2012 10:56:46PM | 2 points

The same action can make the immediate risk worse but the probability of eventually winning higher.

Near/Far. Long-term effects aren't predictable and shouldn't be traded for more predictable short-term losses. In my experience this fails the Predictable Retrospective Stupidity test: even when you try to factor in structural uncertainty, you still end up getting burned. And even if you still want to make such a tradeoff, you should halt all research until you've reached agreement, or a natural stopping point, with Wei Dai or others who have reservations. Stop, melt, catch fire; don't destroy the world.

(Disclaimer: This comment is fueled by a strong emotional reaction to contingent personal details that might or might not, upon further reflection, deserve to be treated as substantial evidence for the policy I recommend.)

Comment author: Vladimir_Nesov | 29 March 2012 11:11:04PM | 4 points

Just to make clear what specific idea this is about: Wei points out that researching FAI might increase UFAI risk, and suggests that FAI therefore shouldn't be researched. My reply is to the effect that while FAI research might increase UFAI risk within any given number of years, it also decreases the risk of never solving FAI (which, IIRC, I put at something like 95% if we research it pre-WBE and 97% if we don't).

Comment author: wedrifid | 30 March 2012 07:00:16AM | 2 points

When I analyzed this problem previously, my reasoning matched Nesov's here.