Manfred comments on The Brain as a Universal Learning Machine - Less Wrong

Post author: jacob_cannell 24 June 2015 09:45PM

Comment author: [deleted] 22 June 2015 03:13:41AM  4 points

Thank you for this overview. A couple of thoughts:

  1. There is a recent and interesting result by Miller et al. (2015, MIT) supporting the hypothesis that the cortex doesn't process tasks in highly specialized modules, which is perhaps some evidence for a ULM in the human brain.

  2. The importance of redundancy in biological systems might be another piece of evidence for ULMs.

  3. You write that "Infant emotions appear to simplify down to a single axis of happy/sad", which I think is not true. Surprise, fear, and embarrassment, for example, also appear very early (I can't find a citation for this, sorry).

  4. Minor nitpick: I think it is clumsy to say "a ULM is more powerful than a TM because a ULM can automatically program itself", since a TM can likely emulate a ULM; the TM might just be a bad model for it (bad in the sense of representational efficiency). See the sketch after this list.

  5. Designing a body for a superintelligence will possibly still be a difficult task. What makes humans friendly is, I think, largely a result of (1) a dependency on others due to the need to maintain a body and to interact with other people (conversation, physical contact), and (2) empathy. That is, being emotionally disconnected from other humans is one way to turn against them. If you don't have these emotional responses deeply built into a body for a ULM, the AI will probably turn out to be indifferent towards humans, leading to a large set of other problems.

  6. You write that Yudkowsky's box problem is a strawman and a distraction. How do you arrive at this conclusion exactly?
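
On point 4, the emulation claim can be made concrete with a minimal, hypothetical sketch (mine, not from the post): a fixed procedure that never changes can still reproduce a learner that "programs itself", because the learner's program is just data that the fixed procedure rewrites. The names (`fixed_interpreter`, the toy task) are invented for illustration.

```python
# Minimal sketch (assumption: a toy linear-regression learner stands in
# for the ULM). The interpreter below is a fixed, unchanging program,
# yet the behaviour it produces changes because it rewrites the stored
# parameters; this is the sense in which a TM can emulate a
# "self-programming" ULM, even if it is an inefficient model of one.

import random

def fixed_interpreter(task, weights, steps=5000, lr=0.01):
    """Fixed loop that updates the learner's 'program' (its weights)."""
    for _ in range(steps):
        x = random.uniform(-1, 1)             # sample an input
        y_true = task(x)                       # environment's answer
        y_pred = weights[0] * x + weights[1]   # learner's current answer
        error = y_pred - y_true
        # the "self-programming" step: the stored data is rewritten
        weights[0] -= lr * error * x
        weights[1] -= lr * error
    return weights

# Usage: the same fixed interpreter adapts itself to different tasks.
learned = fixed_interpreter(lambda x: 3 * x + 2, weights=[0.0, 0.0])
print(learned)  # roughly [3, 2]
```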

Comment author: Manfred 22 June 2015 08:49:52AM 3 points

> You write that Yudkowsky's box problem is a strawman and a distraction. How do you arrive at this conclusion exactly?

Since I don't think we can make a very realistic sandbox (at least not in the near future), perhaps the idea is to have an AI design that is known to work similarly with and without interaction with the world (looking at training data sampled from an environment versus the environment itself). Then, putatively, we could test the AI in the non-interactive case before getting anywhere near an AI-box scenario.
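
To make that test concrete, here is a rough, hypothetical sketch (my illustration, not something from the post): run the same policy on a frozen sample of states drawn from an environment, then in the environment itself with its actions feeding back, and compare summary statistics of its behaviour before granting any interactive access. The toy environment, the policy, and `environment_step` are stand-ins.

```python
# Sketch of the non-interactive vs. interactive comparison (all toy
# components below are assumptions made for illustration).

import random

def environment_step(state, action):
    """Toy dynamics: the next state depends on the chosen action."""
    return state + action + random.gauss(0, 0.1)

def policy(state):
    """Stand-in for the AI design under test."""
    return -0.5 * state

# 1. Non-interactive case: act on states sampled ahead of time.
sampled_states = [random.gauss(0, 1) for _ in range(1000)]
offline_actions = [policy(s) for s in sampled_states]

# 2. Interactive case: the policy's actions feed back into the environment.
state, online_actions = 0.0, []
for _ in range(1000):
    action = policy(state)
    online_actions.append(action)
    state = environment_step(state, action)

# 3. Compare behaviour across the two regimes before any real deployment.
mean_offline = sum(offline_actions) / len(offline_actions)
mean_online = sum(online_actions) / len(online_actions)
print(f"offline mean action: {mean_offline:.3f}")
print(f"online mean action:  {mean_online:.3f}")
```

In practice one would compare much richer behavioural statistics, but the two loops show the distinction between looking at data sampled from an environment and acting in the environment itself.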