TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: TheAncientGeek 18 May 2015 07:16:47PM 2 points [-]

It's got to confer some degree of dumbness avoidance.

In any case, MIRI has already conceded that superintelligent AIs won't misbehave through stupidity. They maintain the problem is motivation ... the Genie KNOWS but doesn't CARE.

Comment author: OrphanWilde 18 May 2015 08:23:42PM 1 point [-]

It's got to confer some degree of dumbness avoidance.

Does it? On what grounds?

In any case, MIRI has already conceded that superintelligent AIs won't misbehave through stupidity. They maintain the problem is motivation ... the Genie KNOWS but doesn't CARE.

That's putting an alien intelligence in human terms; the very phrasing inappropriately anthropomorphizes the genie.

We probably won't go anywhere without an example.

Market economics ("capitalism") is an intelligence system which is very similar to the intelligence system Richard is proposing. Very, very similar; it's composed entirely of independent nodes (seven billion of them) which each provide their own set of constraints, and promote or demote information as it passes through them based on those constraints. It's an alien intelligence which follows Richard's model which we are very familiar with. Does the market "know" anything? Does it even make sense to suggest that market economics -could- care?
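(The node-based model described above can be sketched as toy code. Everything here is illustrative, invented for this comment; it is not any actual AI architecture, just the "independent nodes, each applying its own constraints, promoting or demoting information as it passes through" picture:)

```python
import random

class Node:
    """One independent participant with its own private constraint."""

    def __init__(self, bias):
        self.bias = bias  # this node's private constraint (+1 or -1)

    def evaluate(self, signal):
        # Promote the signal if it satisfies this node's constraint,
        # demote it otherwise.
        return signal + 1 if signal * self.bias > 0 else signal - 1

def propagate(signal, nodes):
    # Pass the piece of information through every node in turn;
    # the final value reflects the aggregate of all the local
    # promotions and demotions, with no central evaluator anywhere.
    for node in nodes:
        signal = node.evaluate(signal)
    return signal

# Seven billion nodes in the real market; a thousand here.
nodes = [Node(random.choice([-1, 1])) for _ in range(1000)]
result = propagate(1.0, nodes)
```

Nothing in this system "knows" or "cares" about anything; there is only the aggregate behavior of the local constraints.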

Does the market always arrive at the correct conclusions? Does it even consistently avoid stupid conclusions?

How difficult is it to program the market to behave in specific ways?

Is the market "friendly"?

Does it make sense to say that the market is "stupid"? Does the concept "stupid" -mean- anything when talking about the market?

Comment author: TheAncientGeek 19 May 2015 07:52:25AM *  0 points [-]

It's got to confer some degree of dumbness avoidance.

Does it?

On what grounds?

On the grounds of the opposite meanings of dumbness and intelligence.

That's putting an alien intelligence in human terms; the very phrasing inappropriately anthropomorphizes the genie.

Take it up with the author.

Does it make sense to say that the market is "stupid"? Does the concept "stupid" -mean- anything when talking about the market?

Economic systems affect us because we are part of them. How is some neither-intelligent-nor-stupid system in a box supposed to affect us?

And if AIs are neither-intelligent-nor-stupid, why are they called AIs?

And if AIs are alien, why are they able to do comprehensible and useful things like winning Jeopardy and guiding us to our destinations?

Comment author: OrphanWilde 19 May 2015 01:12:30PM 1 point [-]

On the grounds of the opposite meanings of dumbness and intelligence.

Dumbness isn't merely the opposite of intelligence.

Take it up with the author,

I don't need to.

Economic systems affect us because we are part of them. How is some neither-intelligent-nor-stupid system in a box supposed to affect us?

Not really relevant to the discussion at hand.

And if AIs are neither-intelligent-nor-stupid, why are they called AIs?

Every AI we've created so far has resulted in the definition of "AI" being changed to not include what we just created. So I guess the answer is a combination of optimism and the word "AI" having poor descriptive power.

And if AIs are alien, why are they able to do comprehensible and useful things like winning Jeopardy and guiding us to our destinations?

What makes you think an alien intelligence should be useless?

Comment author: TheAncientGeek 21 May 2015 12:16:38PM 0 points [-]

What makes you think that a thing designed by humans to be useful to humans, which is useful to humans would be alien?

Comment author: OrphanWilde 21 May 2015 02:37:20PM -1 points [-]

Because "human" is a tiny piece of a potential mindspace whose dimensions we mostly haven't even identified yet.

Comment author: TheAncientGeek 22 May 2015 02:28:06PM *  1 point [-]

That's about a quarter of an argument. You need to show that AI research is some kind of random shot into mind space, and not anthropomorphically biased for the reasons given.

Comment author: OrphanWilde 22 May 2015 02:36:57PM -1 points [-]

The relevant part of the argument is this: "whose dimensions we mostly haven't even identified yet."

If we created an AI mind which was 100% human, as far as we've yet defined the human mind, we have absolutely no idea how human that AI mind would actually behave. The unknown unknowns dominate.

Comment author: TheAncientGeek 23 May 2015 02:41:14PM 0 points [-]

"Alien" isn't the most transparent term to use for human unknowns.