NancyLebovitz comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM




Comment author: Nick_Tarleton 24 April 2010 10:00:34PM 2 points

> It's still an idea I'm working on, but it's plausible that any AI which is trying to accomplish something complicated in the material world will pursue knowledge of math, physics, and engineering. Even an AI which doesn't have an explicitly physical goal (maybe it's a chess program) still might get into physics and engineering in order to improve its own functioning.

This does seem very likely. See Steve Omohundro's "The Basic AI Drives" for one discussion.

> What I'm wondering is whether Friendliness might shake out from more general goals.

This seems very unlikely; see Value is Fragile.

(One possible exception: game theory, combined with the possibility of being in a simulation, might give rise to a general rule like "treat your inferiors as you would be treated by your superiors" that would restrain arbitrary AIs.)

Comment author: NancyLebovitz 24 April 2010 10:30:11PM 1 point

Whether boredom is a universally pro-survival trait for any entity capable of feeling it (I've heard that turtles will starve if they aren't given enough variety in their food) is a topic worth investigating. I'd bet that having some outward focus, rather than only wanting particular internal states, reliably increases the chances of survival.

On the other hand, "treat your inferiors as you would be treated by your superiors" is assuredly not a reliable method of doing well in a simulation, just considering the range of human art and the popularity of humor based on humiliation.

Are you more entertaining if you torture Sims or if you build the largest possible sustainable Sim city? It depends on the audience.