Polymeron comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: Polymeron 24 May 2012 08:55:04AM 0 points

My point was that the AI is likely to start performing social experiments well before it is capable of even the conversation you depicted. It wouldn't know how much it doesn't know about humans.

Comment author: TheOtherDave 24 May 2012 01:13:22PM 0 points

(nods) Likely.

And I agree that humans might be able to detect attempts at deception in a system at that stage of its development. I'm not vastly confident of that, though.

Comment author: Polymeron 26 May 2012 06:01:19AM 0 points

I have likewise adjusted down my confidence that detecting such deception would be as easy or as inevitable as I previously anticipated. So I would no longer say I am "vastly confident" in it, either.

Still, it's good to have this buffer between creating an AI and total global catastrophe!

Comment author: TheOtherDave 26 May 2012 03:05:13PM 0 points

Sure... a process with an N% chance of global catastrophic failure is definitely better than a process with an (N+delta)% chance.