HamletHenna comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM


Comments (161)


Comment author: [deleted] 23 April 2012 05:10:39PM * 0 points

He didn't say other architectures would be no good, he said they're less likely to be safe.

He thinks the distribution P(outcome | do(complete Oracle AI project)) isn't as sharply peaked at Weirdtopia as P(outcome | do(complete FAI project)); the Oracle AI distribution puts more weight on regions like "Lifeless universe", "Eternal Torture", "Rainbows and Slow Death", and "Failed Utopia".
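To make the shape of that comparison concrete, here is a toy sketch. All of the numbers are invented purely for intuition; nothing in the original comment commits to specific probabilities.

```python
# Two hypothetical outcome distributions, conditioned on completing
# each project. Numbers are invented for illustration only.
p_fai = {
    "Weirdtopia": 0.80,
    "Lifeless universe": 0.05,
    "Eternal Torture": 0.01,
    "Rainbows and Slow Death": 0.04,
    "Failed Utopia": 0.10,
}
p_oracle = {
    "Weirdtopia": 0.40,
    "Lifeless universe": 0.20,
    "Eternal Torture": 0.05,
    "Rainbows and Slow Death": 0.15,
    "Failed Utopia": 0.20,
}

# Both are proper distributions over the same outcome regions...
assert abs(sum(p_fai.values()) - 1.0) < 1e-9
assert abs(sum(p_oracle.values()) - 1.0) < 1e-9

# ...but the Oracle AI distribution shifts probability mass away from
# the peak at Weirdtopia and toward the bad regions.
assert p_fai["Weirdtopia"] > p_oracle["Weirdtopia"]
```

The claim is only about the relative shapes of the two conditional distributions, not about any particular values.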

However, "complete FAI" isn't an actionable procedure, so he evaluates the chance of completion conditional on the different actions he can actually take. "Not worth pursuing because non-implementable" means the available FAI-supporting actions don't have a reasonable chance of actually producing Friendly AI, which discounts the peak at valuable futures in the outcome distribution relative to the idealized do(complete FAI). And supposedly he has some other available Oracle-AI-supporting strategy which fares better.
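The implementability argument can be sketched the same way. Again, every number here is invented: the point is only that a worse conditional outcome distribution can still win once you multiply by the probability that the action actually completes the project.

```python
# Invented completion probabilities for the actions actually available,
# and invented probabilities of a good outcome given completion.
p_complete = {"pursue FAI": 0.01, "pursue Oracle AI": 0.20}
p_good_given_complete = {"pursue FAI": 0.80, "pursue Oracle AI": 0.40}

def p_good(action):
    """P(good outcome | do(action)) = P(complete | action) * P(good | complete),
    treating 'project never completes' as a neutral status quo."""
    return p_complete[action] * p_good_given_complete[action]

# FAI's outcome distribution is better conditional on completion...
assert p_good_given_complete["pursue FAI"] > p_good_given_complete["pursue Oracle AI"]

# ...but a low completion probability discounts that peak, so the
# Oracle-supporting action can come out ahead overall.
assert p_good("pursue Oracle AI") > p_good("pursue FAI")
```

This is the sandwich logic in miniature: a mediocre outcome you can actually reach can beat a great outcome you almost certainly can't.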

Eating a sandwich isn't as cool as building an interstellar society with wormholes for transportation, but I'm still going to make a sandwich for lunch, because it's actually going to work and will probably be okay-ish.