
Luke_A_Somers comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong Discussion

8 Post author: Pablo_Stafforini 09 July 2014 01:51PM




Comment author: Luke_A_Somers 10 July 2014 09:24:08PM *  1 point

I'd dispute your last paragraph. An AI whose core architecture doesn't involve examining and modifying its own source code can still do so if you let it. That's like keeping the fissionable material wrapped in graphite, as in a pebble-bed reactor.

I would class baking it in from the start not as simply having fissionable material, but more as juggling bare, slightly subcritical blocks of Plutonium.

Comment author: [deleted] 11 July 2014 04:54:06AM *  0 points

That's missing the point, I'm afraid. What I meant was that the operation of the AI itself necessarily involves modifying its own "source code." The act of (generalized!) thinking is itself self-modifying. An artificial general intelligence is capable of solving any problem, including the problem of artificial general intelligence. And the architectures of most actual AGIs involve modifying internal behavior based on the output of thinking processes, in a way that is Turing complete. So even if you didn't explicitly program the machine to modify its own source code (though any efficient AGI would need to), it could learn or stumble upon a self-aware, self-modifying method of thinking, even if that involves something as convoluted as using its memory database as a read/write store and updating belief networks as gates in an emulated CPU.
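The "belief networks as gates" claim rests on a standard observation: a unit that merely computes a weighted sum and fires past a threshold can implement NAND, which is functionally complete, so a system that only "adjusts weights" can in principle build arbitrary logic (and, given read/write memory, a Turing-complete machine). A minimal Python sketch of that observation; the unit and weights here are illustrative, not any particular AGI architecture:

```python
def threshold_unit(weights, bias, inputs):
    """A single weight-adjusting unit: weighted sum pushed through a step function."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s + bias > 0 else 0

def nand(a, b):
    # Weights chosen so the unit fires unless both inputs are 1.
    return threshold_unit([-1.0, -1.0], 1.5, [a, b])

# NAND is functionally complete: any Boolean circuit can be composed from it,
# so nothing stops a weight-tuning system from emulating arbitrary computation.
truth_table = [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(truth_table)  # [1, 1, 1, 0]
```

Learning the right weights is exactly the "stumble upon" route described above: the gates need not be designed in, only reachable by weight adjustment.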

Comment author: Luke_A_Somers 11 July 2014 01:24:15PM *  0 points

Having to start over from scratch would be a very significant impediment. Don't forget that we're talking about the pre-super-intelligence phase, here.

So, no, I don't think I missed the point at all.

Comment author: [deleted] 12 July 2014 07:00:45AM *  0 points

Gah, no, my point wasn't about starting over from scratch at all. It was that most AGI architectures include self-modification as a core and inseparable part of the architecture. For example, by running previously evolved thinking processes. You can't just say "we'll disable the self-modification for safety's sake" -- you'd be giving it a total lobotomy!

I was then only making a side point that even if you designed an architecture that didn't self-modify -- unlikely, for performance reasons -- it would still eventually discover how to wire itself into self-modification. So that doesn't really solve the safety issue on its own.

Comment author: Luke_A_Somers 12 July 2014 01:03:58PM 0 points

I was disagreeing with the claim that that architectural change would not be helpful on the safety issue.

Comment author: TheAncientGeek 13 July 2014 09:39:27PM -1 points

You put source code in scare quotes. Most AIs don't literally modify their source code; they just adjust weightings or whatever... essentially data.

Comment author: [deleted] 13 July 2014 11:01:55PM *  0 points

I don't disagree with this comment. The scare quotes are there because the AI wouldn't literally be editing the C++ (or whatever) code directly, which is the sort of thing a reader might imagine when I say "editing source code." Rather, it would probably manipulate encodings of thinking processes in some sort of easy-to-analyze, recombinant programming language, as well as adjust weighting vectors, as you mention. There's a reason LISP, where code is data and data is code, is the traditional (or stereotypical) language of artificial intelligence, although personally I think a more strongly typed concatenative language would be a better choice. Such a language is what the AI would use to represent its own thinking processes, and what it would manipulate to "edit its own source code."