shminux comments on Reflection in Probabilistic Logic - Less Wrong

63 Post author: Eliezer_Yudkowsky 24 March 2013 04:37PM




Comment author: shminux 31 March 2013 08:05:15AM 1 point

I'm also generally skeptical of the sentiment "build an intelligence which mimics a human as closely as possible." This is competing with the principle "build things you understand,"

Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced (but possibly shouldn't be)?

Comment author: Eugine_Nier 02 April 2013 04:46:24AM 9 points

Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced

It was possible to achieve heavier-than-air flight without reproducing the flexible wings of birds.

Comment author: shminux 02 April 2013 06:52:31AM 5 points

Right, an excellent point. Biology can be unnecessarily messy.

Comment author: Kawoomba 31 March 2013 09:18:15AM 2 points

A good understanding of the design principles may be enough, or of the organisation into cortical columns and the like. The rest is partly a mess of evolutionary hacks, such as "let's put the primary visual cortex at the back of the brain" (excuse the personification), and probably not integral to a sufficient understanding. So I guess my question would be what granularity of "understanding" you're referring to. 'So that it can be reproduced' seems too low a bar: suppose we found some alien technology that we could reproduce strictly by copying it, without having any idea how it actually worked.

Do you 'understand' large RNNs that exhibit strange behavior because you understand the underlying mechanisms and could use them to create other RNNs?
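To make the point concrete, here is a minimal sketch (not from the original thread; all names and sizes are illustrative) of a vanilla RNN: the entire "underlying mechanism" is a one-line update rule that anyone can read and reimplement, yet that alone says little about what a large trained network will actually do over many timesteps.

```python
import numpy as np

# Vanilla RNN cell: the whole mechanism is h_t = tanh(W_h @ h_{t-1} + W_x @ x_t).
# Knowing this rule is "understanding" in one sense, but the long-run dynamics
# of a large network are not transparent from the rule alone.
rng = np.random.default_rng(0)
n_hidden, n_input = 64, 8
W_h = rng.normal(0.0, 1.2 / np.sqrt(n_hidden), (n_hidden, n_hidden))
W_x = rng.normal(0.0, 1.0 / np.sqrt(n_input), (n_hidden, n_input))

def step(h, x):
    """One application of the update rule we fully 'understand'."""
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(n_hidden)
for t in range(200):
    h = step(h, rng.normal(size=n_input))

# Every operation above is elementary, but predicting the trajectory of h
# without simply running the loop is a different kind of understanding.
```

The gap between the two senses of "understand" is exactly the one at issue: the low-level rule is fully known, while the high-level behavior must still be studied empirically.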

There is a sort of trade-off: you can't go too basic and still consider yourself to understand the higher-level abstractions in a meaningful way, just as the physical layer of the TCP/IP stack in principle encapsulates all the necessary information but is still ... user-unfriendly. Otherwise we could say we understand a human brain perfectly just because we know the laws that govern it at the physical level.

I shouldn't comment when sleep deprived ... ignore at your leisure.