Luke_A_Somers comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong

Post author: Pablo_Stafforini 09 July 2014 01:51PM


Comments (89)


Comment author: Luke_A_Somers 11 July 2014 01:24:15PM *  0 points

Having to start over from scratch would be a very significant impediment. Don't forget that we're talking about the pre-super-intelligence phase, here.

So, no, I don't think I missed the point at all.

Comment author: [deleted] 12 July 2014 07:00:45AM *  0 points

Gah, no, my point wasn't about starting over from scratch at all. It was that most AGI architectures include self-modification as a core and inseparable part of the architecture -- for example, by running previously evolved thinking processes. You can't just say "we'll disable the self-modification for safety's sake" -- you'd be giving it a total lobotomy!

I was then only making a side point that even if you designed an architecture that didn't self-modify -- unlikely for performance reasons -- it would still eventually discover how to wire itself into self-modification. So that doesn't really solve the safety issue on its own.

Comment author: Luke_A_Somers 12 July 2014 01:03:58PM 0 points

I was disagreeing with the claim that that architectural change would not help with the safety issue.