Sebastian_Hagen comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong

Post author: KatjaGrace 16 September 2014 01:00AM




Comment author: paulfchristiano 16 September 2014 04:23:55AM 5 points

I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.

My own best guess is that the computational work that humans are doing while they do the "thinking" tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth's characterization.

I would be really curious to get the perspectives of AI researchers involved with work in the "thinking" domains.

Comment author: Sebastian_Hagen 16 September 2014 07:55:20PM 4 points

I find it particularly important because of the example of automating research, which is probably the task I care most about.

Neither math research nor programming and debugging has been taken over by AI so far, and none of these require any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the "thinking" parts remain hard to implement in AI.

Comment author: xrchz 03 October 2014 08:53:56PM 2 points

They're not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving and program synthesis.
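To make the "program synthesis" keyword concrete, here is a minimal sketch of the enumerative flavor of the idea: search a tiny expression language for a program consistent with a handful of input/output examples. The DSL of three operations and the example pairs are invented for illustration, not taken from any of the research mentioned above.

```python
# Toy enumerative program synthesis: enumerate pipelines of small
# operations until one matches every (input, output) example.
from itertools import product

# Candidate building blocks: unary functions over integers (illustrative DSL).
OPS = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the shortest pipeline of ops consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for pipeline in product(OPS, repeat=depth):
            def run(x, pipeline=pipeline):
                for name in pipeline:
                    x = OPS[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return pipeline
    return None

# Find a program mapping 2 -> 9 and 3 -> 16: add one, then square.
print(synthesize([(2, 9), (3, 16)]))  # ('add1', 'square')
```

Real program synthesis systems replace this brute-force enumeration with pruning, types, or constraint solving, but the specification-by-examples framing is the same.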

Comment author: JonathanGossage 17 September 2014 05:57:23PM 2 points

Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think that the coding and debugging parts will not require AGI levels of intelligence; however, deciding what to do definitely needs at least human-like capacity for most non-trivial problems.

Comment author: KatjaGrace 22 September 2014 03:20:18AM 2 points

I'm not sure what you mean when you say 'determining what the program needs to do' - this sounds very general. Could you give an example?

Comment author: LeBleu 07 October 2014 08:42:03AM 0 points

Most programming is not about writing the code; it is about translating a human description of the problem into a computer description of the problem. This is also why all attempts so far to make a system so simple that "non-programmers" can program it have failed. The difficult aptitude for programming is the ability to think abstractly and systematically, recognizing which parts of a human description of the problem need to be translated into code, and which unspoken parts also need to be translated into code.
