ikrase comments on Supposing you inherited an AI project... - Less Wrong Discussion

-5 Post author: bokov 04 September 2013 08:07AM

Comment author: ikrase 04 September 2013 08:34:23AM *  -1 points [-]

This sounds like almost nothing; it reminds me of the person who wrote a command-line interpreter and a language interpretation/synthesis library, leaving only the problem of figuring out how to write code that comes up with intelligent responses to questions. Frankly, what is described here sounds like something that guy in my robotics class could write in a week in Python. In fact, it sounds suspiciously like an assignment from an elementary programming class for non-computer-science majors that I took.

(This assumes that A.1 and A.2 are NOT provided!)

You still need to write some drivers and interpretation scripts, on top of which go basic perception, on top of which go basic human thoughts, on top of which go culture and morality, on top of which go CEV or whatever other major human morality system you want to use to make the AI friendly.

It also... sounds like this thing doesn't even search the hypothesis space. Which makes it much safer and much, much less useful.

Edit: Actually, I realize that this is more substantial, and I wish to apologize for the condescension. But the OP still sounds like the job is not being sliced even slightly down the middle, and like it would take a lot of time, work, and additional machinery to make even something as simple and useless as a chatterbot.

Comment author: Mitchell_Porter 04 September 2013 01:08:47PM 0 points [-]

The scripts (A) are like utility functions, and the program (B) is a general problem solver that can maximize/satisfice any utility function. So B must be powerful.

Comment author: ikrase 05 September 2013 07:54:34AM -1 points [-]

It sounds... lower level than that; more like some kind of numerical optimization thingie that needs you to code up the world before you even get to utility functions.

Comment author: Mitchell_Porter 05 September 2013 09:18:44AM 0 points [-]

You're right, in the sense that there's nothing here about how to generate accurate representations of the world. According to A.2, the user provides the representations. But even if the program is just a numerical optimizer, it's a powerful one, because it's supposed to be able to optimize an arbitrary function (arbitrary network of nodes, as represented in the script).

So it's as if the unfinished AI project already has the part of the code that will do the heavy lifting when problems are solved, and what remains to be done - which is still both important and difficult - is everything that involves transmitting intentions correctly to this AI core, and ensuring that all that raw power isn't used in the service of the wrong goals.
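The idea in this exchange — a generic optimizer core that knows nothing about the world, where all the "heavy lifting" of representation lives in the user-supplied objective — can be sketched as follows. This is a minimal illustration, not anything from the OP's actual project: the function names and the hill-climbing method are my own assumptions.

```python
import random

def hill_climb(objective, x0, step=0.1, iters=1000, seed=0):
    """Generic black-box maximizer: it knows nothing about the domain.

    The caller supplies `objective` (playing the role of the "script" /
    utility function) and a starting point; the optimizer just searches.
    """
    rng = random.Random(seed)
    x, best = list(x0), objective(x0)
    for _ in range(iters):
        # Propose a small random perturbation and keep it if it improves.
        cand = [xi + rng.uniform(-step, step) for xi in x]
        val = objective(cand)
        if val > best:
            x, best = cand, val
    return x, best

# The entire "world model" lives in the objective the user writes;
# the optimizer core is indifferent to what is being maximized.
utility = lambda v: -((v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2)
x, u = hill_climb(utility, [0.0, 0.0])
```

The point of the sketch: swapping in a different `utility` changes what gets optimized without touching the optimizer at all, which is why getting the objective right (the remaining "important and difficult" part) matters so much.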

Comment author: bokov 04 September 2013 12:40:37PM *  0 points [-]

You still need to write some drivers and interpretation scripts, on top of which go basic perception,

What is the distinction between these and A.2?

on top of which go basic human thoughts,

What are those, what is the minimum set of capabilities within that space that are needed for our goals, and why are they needed?

on top of which go culture

What is it, and why is it needed?

and morality, on top of which go CEV

Is there any distinction, for the purposes of writing a world-saving AI?

If there is, it implies that the two will sometimes give conflicting answers. Is that something we would want to happen?

Comment author: ikrase 05 September 2013 07:52:31AM -1 points [-]

I'm mostly just rambling about stuff that is totally missing. Basically, I'm respectively referring to 'Don't explode the gas main to blow the people out of the burning building', 'Don't wirehead' and 'How do you utilitarianism?'.

Comment author: bokov 06 September 2013 03:20:38AM 0 points [-]

I understand. And if/when we crack those philosophical problems in a sufficiently general way, we will still be left with the technical problem: how do we represent the relevant parts of reality, and what we want out of it, in a computable form so the AI can find the optimum?