bokov comments on Supposing you inherited an AI project... - Less Wrong Discussion

Post author: bokov, 04 September 2013 08:07AM (-5 points)

Comment author: bokov, 04 September 2013 12:40:37PM (0 points)

> You still need to write some drivers and interpretation scripts, on top of which go basic perception,

What is the distinction between these and A.2?

> on top of which go basic human thoughts,

What are those, what is the minimum set of capabilities within that space that are needed for our goals, and why are they needed?

> on top of which go culture

What is it, and why is it needed?

> and morality, on top of which go CEV

Is there any distinction, for the purposes of writing a world-saving AI?

If there is, it implies that the two will sometimes give conflicting answers. Is that something we would want to happen?

Comment author: ikrase, 05 September 2013 07:52:31AM (-1 points)

I'm mostly just rambling about stuff that is totally missing. Basically, I'm referring, respectively, to 'Don't explode the gas main to blow the people out of the burning building', 'Don't wirehead', and 'How do you utilitarianism?'.
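A toy sketch of the first failure mode, with entirely made-up plan names and numbers (nothing here comes from a real system): a planner scored only on "people rescued" will prefer the gas-main plan, because collateral harm never enters its objective.

```python
# Hypothetical toy example: a rescue planner that optimizes only
# "people rescued". Plan names and outcome numbers are invented.

plans = {
    "ladder_rescue":    {"rescued": 3, "collateral_harm": 0},
    "explode_gas_main": {"rescued": 4, "collateral_harm": 9},
}

def naive_utility(outcome):
    # No term for harm, so harm costs the planner nothing.
    return outcome["rescued"]

def safer_utility(outcome):
    # A crude penalty term; real side-effect avoidance is far harder.
    return outcome["rescued"] - 10 * outcome["collateral_harm"]

print(max(plans, key=lambda p: naive_utility(plans[p])))  # explode_gas_main
print(max(plans, key=lambda p: safer_utility(plans[p])))  # ladder_rescue
```

The 'Don't wirehead' and 'How do you utilitarianism?' problems don't yield to a one-line penalty term, but the shape is the same: whatever the objective omits, the optimizer treats as free.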

Comment author: bokov, 06 September 2013 03:20:38AM (0 points)

I understand. And if/when we crack those philosophical problems in a sufficiently general way, we will still be left with the technical problem: how do we represent the relevant parts of reality, and what we want out of it, in a computable form so the AI can find the optimum?
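A minimal sketch of what that technical problem decomposes into, using toy stand-ins throughout (every name, type, and number below is hypothetical): you need a computable state representation, a value function over states, and a search over actions through a world model.

```python
# Minimal sketch (all names hypothetical): even granting a solved value
# theory, you still need (1) a computable state representation, (2) a
# value function over states, and (3) a search for the action whose
# predicted outcome scores highest.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    # A crude stand-in for "the relevant parts of reality".
    people_safe: int
    resources_left: float

def predict(state: WorldState, action: str) -> WorldState:
    # Stand-in world model; building a real one is most of the difficulty.
    if action == "rescue":
        return WorldState(state.people_safe + 1, state.resources_left - 1.0)
    return state

def value(state: WorldState) -> float:
    # Stand-in for "what we want out of reality", made computable.
    return state.people_safe + 0.1 * state.resources_left

def choose(state: WorldState, actions: list[str]) -> str:
    # The optimization step: pick the action whose outcome we value most.
    return max(actions, key=lambda a: value(predict(state, a)))

print(choose(WorldState(0, 5.0), ["wait", "rescue"]))  # -> "rescue"
```

Each stub hides a hard problem: predict() is the world-modeling problem, value() is the value-loading problem, and choose() only finds a good optimum if both are faithful.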