Luke_A_Somers comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong Discussion
Having to start over from scratch would be a very significant impediment. Don't forget that we're talking about the pre-superintelligence phase here.
So, no, I don't think I missed the point at all.
Gah, no, my point wasn't about starting over from scratch at all. It was that most AGI architectures include self-modification as a core and inseparable part of the architecture -- for example, by running previously evolved thinking processes. You can't just say "we'll disable the self-modification for safety's sake" -- you'd be giving it a total lobotomy!
I was then only making a side point that even if you designed an architecture that didn't self-modify -- unlikely for performance reasons -- it would still eventually discover how to wire itself into self-modification. So that doesn't really solve the safety issue on its own.
I was disagreeing with the claim that that architectural change would not be helpful on the safety issue.