drethelin comments on Stupid Questions Open Thread - Less Wrong

Post author: Costanza 29 December 2011 11:23PM


Comment author: [deleted] 30 December 2011 02:21:23AM 5 points

When people talk about designing FAI, they usually say that we need to figure out how to make the FAI's goals remain stable even as the FAI changes itself. But why can't we just make the FAI incapable of changing itself?

Database servers can improve their own performance, to a degree, simply by performing statistical analysis on tables and altering their metadata. Then they just consult this metadata whenever they have to answer a query. But we never hear about a database server clobbering its own purpose (do we?), since they don't actually alter their own code; they just alter some pieces of data in a way that improves their own functioning.
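The database behavior described above can be seen in SQLite, for example. A minimal sketch (assuming a local scratch database, not anything from the original comment): running `ANALYZE` gathers table statistics into the `sqlite_stat1` metadata table, which the query planner later consults; the engine's own code is never modified.

```python
import sqlite3

# In-memory scratch database to illustrate statistics-as-metadata.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(i % 10,) for i in range(1000)])
conn.execute("CREATE INDEX t_val ON t (val)")

# ANALYZE computes statistics and stores them in the sqlite_stat1 table.
# The planner consults this metadata when choosing query plans;
# the server's code is untouched.
conn.execute("ANALYZE")
for row in conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1"):
    print(row)
```

The "self-improvement" here is entirely a matter of data: delete the statistics and the planner falls back to defaults, but the program is the same program.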

Granted, any AGI we create is likely to "escape" and eventually gain access to its own software. But that doesn't have to happen before the AGI matures.

Comment author: drethelin 30 December 2011 02:25:44AM 8 points

The majority of a Friendly AI's ability to do good comes from its ability to modify its own code. Recursive self-improvement is key to gaining intelligence and capability swiftly. An AI that is only about as powerful as a human is only about as useful as a human.

Comment author: jsteinhardt 30 December 2011 05:03:45PM 8 points

I disagree. AIs can be copied, which is a huge boost. You just need a single Stephen Hawking AI to come out of the population, then you make 1 million copies of it and dramatically speed up science.

Comment author: [deleted] 31 December 2011 02:28:01AM 1 point

I don't buy any argument saying that an FAI must be able to modify its own code in order to take off. Computer programs that can't modify their own code can still be Turing-complete; adding self-modification doesn't increase their computational power.

That said, I do kind of buy this argument about how if an AI is allowed to write and execute arbitrary code, that's kind of like self-modification. I think there may be important differences.
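The similarity can be made concrete with a small sketch (hypothetical, not from the thread): a program whose own source is fixed can still change its effective behavior by generating and executing new code at runtime, which is why "can execute arbitrary code" is close to "can self-modify".

```python
# A fixed program that never rewrites its own source, yet changes
# its behavior by generating and executing new code at runtime.
def improve(strategy_source):
    """Compile a new strategy function from generated source code."""
    namespace = {}
    exec(strategy_source, namespace)  # executing arbitrary generated code
    return namespace["strategy"]

# Version 1 of the strategy, held as data rather than as the
# program's own code.
strategy = improve("def strategy(x): return x + 1")
print(strategy(10))  # 11

# "Self-improvement": swap in newly generated code for the old strategy.
strategy = improve("def strategy(x): return x * 2")
print(strategy(10))  # 20
```

The outer program is unchanged throughout; everything that varies lives in data it interprets, which is the sense in which executing arbitrary code blurs into self-modification.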

Comment author: KenChen 05 January 2012 06:14:36PM 1 point

It makes sense to say that a computer language is Turing-complete.

It doesn't make sense to say that a computer program is Turing-complete.

Comment author: [deleted] 07 January 2012 12:24:17AM 0 points

Arguably, a computer program with input is a computer language. In any case, I don't think this matters to my point.
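One way to cash this out (a sketch, not from the original thread): any program that interprets its input implicitly defines a language, namely the set of inputs it accepts and the behavior it assigns them. A fixed interpreter plus a rich enough input language gets the expressiveness self-modification would provide, without the interpreter ever changing. The instruction set below (`INC`/`DEC`/`JNZ`) is a hypothetical toy, chosen only to illustrate the point.

```python
# A fixed interpreter: its own code never changes, but its inputs
# form a small programming language over named registers.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":              # increment a register
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":            # decrement a register
            registers[args[0]] -= 1
            pc += 1
        elif op == "JNZ":            # jump to args[1] if register != 0
            pc = args[1] if registers[args[0]] != 0 else pc + 1
    return registers

# An input "program": add register b into register a.
add = [("DEC", "b"), ("INC", "a"), ("JNZ", "b", 0)]
print(run(add, {"a": 2, "b": 3}))  # {'a': 5, 'b': 0}
```

The behavior realized here was never written into `run` itself; it arrived as input, which is the sense in which a program with input is a language.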