TheAncientGeek comments on The Brain as a Universal Learning Machine - Less Wrong

Post author: jacob_cannell 24 June 2015 09:45PM


Comment author: Richard_Loosemore 26 June 2015 04:31:16PM *  1 point [-]

A very good question indeed. Although ... there is a depressing answer.

This is a core-belief issue. For some people (like Yudkowsky and almost everyone at MIRI), artificial intelligence must be about the mathematics of intelligence, and without the utility-function approach that entire paradigm collapses. Seriously: it all comes down like a house of cards.

So this is a textbook case of a Kuhn/Feyerabend-style clash of paradigms. It isn't a matter of "Okay, so utility functions might not be the best approach, so let's search for a better way to do it." It is more a matter of "Anyone who thinks that an AI cannot be built using utility functions is a crackpot." It is a core belief in the sense that it is not allowed to be false. It is unthinkable, so rather than try to defend it, its adherents personally attack those who deny it. (I don't say this because of personal experience; I say it because that kind of thing has been observed over and over when paradigms come into conflict.)

Here, for example, is a message sent to the SL4 mailing list by Yudkowsky in August 2006:

Dear Richard Loosemore:

When someone doesn't have anything concrete to say, of course they always trot out the "paradigm" excuse.

Sincerely, Eliezer Yudkowsky.

So the immediate answer to your question is that it will never be treated as a matter of urgency, it will be denied until all the deniers drop dead.

Meanwhile, I went beyond that problem and outlined a solution, soon after I started working in this field in the mid-80s. And by 2006 I had clarified my ideas enough to present them at the AGIRI workshop held in Bethesda that year. The MIRI (then called SIAI) crowd were there, along with a good number of other professional AI people.

The response was interesting. During my presentation the SIAI/MIRI bunch repeatedly interrupted with rude questions or pointed, very loud, laughter. Insulting laughter. Loud enough to make the other participants look over and wonder what the heck was going on.

That's your answer, again, right there.

But if you want to know what to do about it, the paper I published after the workshop is a good place to start.

Comment author: TheAncientGeek 28 June 2015 05:51:02PM -1 points [-]

I don't see it as dogmatism so much as a verbal confusion. The ubiquity of UFs can be defended using a broad (implicit) definition, but the conclusions typically drawn about types of AI danger and methods of AI safety relate to a narrower definition, where a UF is:

  • Explicitly coded, and/or
  • Fixed and unupdateable, and/or
  • "Thick", containing detailed descriptions of goals.
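The broad/narrow distinction above can be made concrete with a minimal sketch (all names and the toy "paperclip" domain are hypothetical, chosen only for illustration). The first agent has a UF in the narrow sense: an explicitly coded, fixed, "thick" scoring function that is a literal piece of the program. The second agent is just a heuristic rule with no UF anywhere in its code, yet its consistent preferences mean it can still be *described* as maximizing some utility function in the broad sense.

```python
# Narrow sense: an explicitly coded, fixed, "thick" utility function.
# It is a concrete object in the source code, spelling out goals in detail,
# and is never updated at runtime.
def explicit_utility(state):
    """Hand-written scoring of world states (hypothetical example)."""
    return 10 * state.get("paperclips", 0) - 1 * state.get("energy_used", 0)

def narrow_agent(actions, state, transition):
    # Picks the action whose predicted outcome maximizes the coded UF.
    return max(actions, key=lambda a: explicit_utility(transition(state, a)))

# Broad sense: a bag of heuristics with no explicit UF anywhere in the code.
# Because its choices are consistent, an observer could still model it
# "as if" it maximized some utility function.
def broad_agent(actions, state, transition):
    for a in actions:
        if transition(state, a).get("paperclips", 0) > state.get("paperclips", 0):
            return a
    return actions[0]

# Toy transition model (hypothetical) to exercise both agents.
def transition(state, action):
    made = 1 if action == "make" else 0
    return {"paperclips": state.get("paperclips", 0) + made,
            "energy_used": state.get("energy_used", 0) + 1}
```

On this toy problem the two agents happen to behave identically, which is the point of the confusion: observing behavior alone cannot tell you whether the narrow-sense UF (the kind the safety arguments target) actually exists inside the system.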