shminux comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong

Post author: Eliezer_Yudkowsky 06 June 2013 08:24PM 55 points


Comment author: Eliezer_Yudkowsky 27 June 2013 10:06:20PM 5 points

> There's essentially only one existing example of an entity with general intelligence: a human. I think that our prior should be that the first AGI will have internal structure analogous to that of a human. Here I'm not suggesting that an AGI will have human values by default: I'm totally on board with your points about the dangers of anthropomorphization in that context. Rather, what I mean is that I envisage the first AGI as having many interacting specialized modules.

Okay. This sounds like you're trying to make up your own FAI theory in much the same fashion as Holden (and it's different from Holden's, of course). Um, what I'd like to do at this point is take out a big Hammer of Authority and tell you to read "Artificial Intelligence: A Modern Approach" so your mind would have some better grist to feed on as to where AI is and what it's all about. If I can't do that... I'm not really sure where I could take this conversation. I don't have the time to personally guide you to understanding of modern AI starting from that kind of starting point. If there's somebody else you'd trust to tell you about AI, with more domain expertise, I could chat with them and then they could verify things to you. I just don't know where to take it from here.

On the object level I will quickly remark that some of the first attempts at heavier-than-air flying-machines had feathers and beaks and did not work very well; that 'interacting specialized modules' is Selling Nonapples; that there is an old discussion in cognitive science about the degree of domain specificity in human intelligence; and that the idea that 'humans are the only example we have' is generally sterile, for reasons I've already written about but can't remember the links for offhand (hopefully someone else does). It might be in "Levels of Organization in General Intelligence"; I generally consider that pretty obsolete, but it might be targeted to your current level.

Comment author: shminux 27 June 2013 10:36:17PM -1 points

Just wondering why you see Jonah Sinick as being of high enough status to be worth explaining to him what's been discussed on LW repeatedly. Or maybe I'm totally misreading this exchange.

Comment author: JonahSinick 27 June 2013 11:00:39PM 0 points

I'm puzzled as to what you think I'm missing: can you say more?

Comment author: Kawoomba 27 June 2013 11:09:46PM 2 points

Matching "first AGI will [probably] have internal structure analogous to that of a human" and "first AGI [will probably have] many interacting specialized modules" in a literal (cough uncharitable cough) manner, as evidenced by "heavier-than-air flying-machines had feathers and beaks". Your phrasing hints at an anthropocentric architectural bias, analogous to the one you specifically distance yourself from regarding values.

Maybe you should clarify that part; it's crucial to the current misunderstanding. It's not clear whether by "interacting specialized modules" you'd also include "Java classes not corresponding to anything 'human' in particular", or whether you'd expect a "thalamus-module".

Comment author: JonahSinick 27 June 2013 11:33:58PM 2 points

Matching "first AGI will [probably] have internal structure analogous to that of a human" and "first AGI [will probably have] many interacting specialized modules" in a literal (cough uncharitable cough) manner, as evidenced by "heavier-than-air flying-machines had feathers and beaks". Your phrasing hints at an anthropocentric architectural bias, analogous to the one you specifically distance yourself from regarding values.

I think that people should make more of an effort to pay attention to the nuances of others' statements rather than relying on simple pattern matching.

> Maybe you should clarify that part; it's crucial to the current misunderstanding. It's not clear whether by "interacting specialized modules" you'd also include "Java classes not corresponding to anything 'human' in particular", or whether you'd expect a "thalamus-module".

There's a great deal to write about this, and I'll do so at a later date.

To give you a small taste of what I have in mind: suppose you ask, "How likely is it that the final digit of the Dow Jones will be 2 in two weeks?" I've never thought about this question, and I have no pre-existing Bayesian prior. What my brain does is amalgamate:

  1. The Dow Jones index varies in a somewhat unpredictable way.
  2. The last digit is especially unpredictable.
  3. Two weeks is a really long time for unpredictable things to happen in this context.
  4. The last digit could be one of 10 values between 0 and 9.
  5. The probability of a randomly selected digit between 0 and 9 being 2 is 10%.

Different parts of my brain generate the different pieces, and another part of my brain combines them. I'm not using a single well-defined Bayesian prior, nor am I satisfying a well-defined utility function.
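To make the picture concrete, here is a minimal Python sketch of that amalgamation. The module names and the combination rule are purely illustrative inventions of mine, not a claim about how brains or any actual AI architecture work:

```python
# Toy sketch: specialized "modules" each contribute one of the numbered
# pieces above, and a separate combiner merges them into an estimate.
# All names and the combination rule here are illustrative inventions.

def volatility_module():
    # Pieces 1-3: over a two-week horizon the index, and especially its
    # last digit, is effectively unpredictable, so no digit is favored.
    return {"any_digit_favored": False}

def range_module():
    # Piece 4: the last digit is one of the 10 values 0 through 9.
    return {"num_outcomes": 10}

def uniform_module(num_outcomes):
    # Piece 5: with nothing favoring any digit, assign each 1/num_outcomes.
    return 1.0 / num_outcomes

def combiner():
    # Merge the pieces: since the volatility module reports no favored
    # digit, fall back on the uniform assignment over the range.
    if not volatility_module()["any_digit_favored"]:
        return uniform_module(range_module()["num_outcomes"])
    raise NotImplementedError("weighing informative signals is the hard part")

print(combiner())  # 0.1 -- a 10% estimate that the final digit is 2
```

The point is that no single module holds a prior over Dow Jones digits; the ~10% answer only exists once the combiner has merged the pieces.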

Comment author: shminux 27 June 2013 11:57:08PM 0 points

I don't want to comment on the details, as this is way outside my area of expertise, but I do want to point out that you appear to be a victim of the bright dilettante fallacy. You appear to think that your significant mathematical background makes you an expert in an unrelated field without having to invest the time and effort required to get up to speed in it.

Comment author: JonahSinick 28 June 2013 12:04:19AM 0 points

I don't claim to have any object level knowledge of AI.

My views on this point are largely based on what I've heard from people who work on AI, together with introspection as to how I and other humans reason, and the role of heuristics in reasoning.

Comment author: pop 15 July 2013 06:53:20AM 0 points

Maybe something to do with Jonah being previously affiliated with GiveWell?