somervta comments on Open thread, September 2-8, 2013 - Less Wrong

0 Post author: David_Gerard 02 September 2013 02:07PM




Comment author: Strilanc 02 September 2013 10:44:37PM 4 points

You're demonstrating a whole bunch of misconceptions Eliezer has covered in the sequences. In particular, you're talking about the AI in terms of fuzzy high-level human concepts like "morals" and "philosophies" instead of in terms of algorithms and code.

I suggest you try to write code that "figures out a worthwhile moral goal" (without pre-supposing a goal). To me that sounds as absurd as writing a program that writes the entirety of its own code: you're going to run into a bit of a bootstrapping problem. The result is not the best program ever, it's no program at all.

Comment author: Oscar_Cunningham 02 September 2013 11:04:17PM 6 points

To me that sounds as absurd as writing a program that writes the entirety of its own code:

This is totally possible, you just do something like this:

Write the following out twice, the second time in quotes: "Write the following out twice, the second time in quotes: "

It's called a Quine.
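Oscar_Cunningham's English sentence is the classic informal quine. The same trick translates directly into code: store a template of the program in a string, then print the template with a quoted copy of itself substituted in. A minimal Python version (not from the thread, just an illustration of the technique):

```python
# A quine: a program whose output is exactly its own source code.
# The string s is the program's "template"; %r inserts a quoted copy
# of s into itself, just like the second half of the English sentence.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this prints the two lines of the program verbatim, mirroring the structure of "Write the following out twice, the second time in quotes."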

Comment author: Strilanc 03 September 2013 02:32:52PM 4 points

To clarify: I meant that I, as the programmer, would not be responsible for any of the code. Quines output themselves, but they don't bring themselves into existence.

Good catch on that ambiguity, though.

Comment author: DanielLC 03 September 2013 04:06:36AM 4 points

That's what I thought of at first too.

I think he means a program that is the designer of itself. A quine is a program that you wrote which outputs a copy of itself.

Comment author: Darklight 02 September 2013 11:00:39PM -1 points

Well, I don't expect to need to write code that does that explicitly. A sufficiently powerful machine learning algorithm with sufficient computational resources should be able to:

1) Learn basic perceptions like vision and hearing.

2) Learn higher-level feature extraction to identify objects and create concepts of the world.

3) Learn increasingly higher-level concepts and how to reason with them.

4) Learn to reason about morals and philosophies.

Brains already do this, so it's reasonable to assume it can be done. And yes, I am advocating a bottom-up approach to A.I. rather than the top-down approach Mr. Yudkowsky seems to prefer.
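The four-stage bottom-up architecture Darklight describes can be sketched in a few lines. This is purely illustrative: the layer names and dimensions are invented for the example, and the "learning" here is just fixed random projections standing in for whatever algorithm would actually train each stage.

```python
import numpy as np

# Toy sketch of a bottom-up layered hierarchy: each layer maps the
# representation from the layer below into a more abstract one.
# Random weights are a placeholder for learned parameters.
rng = np.random.default_rng(0)

def make_layer(in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

perception = make_layer(64, 32)  # 1) raw percepts -> low-level features
objects    = make_layer(32, 16)  # 2) features -> object-level concepts
concepts   = make_layer(16, 8)   # 3) objects -> higher-level concepts
reasoning  = make_layer(8, 4)    # 4) concepts -> abstract "reasoning" layer

x = rng.normal(size=(1, 64))     # a toy "sensory" input vector
h = reasoning(concepts(objects(perception(x))))
print(h.shape)  # the top-level representation: a (1, 4) array
```

The sketch only shows the data flow, of course; the open question in the thread is whether stacking learned layers like this ever yields anything deserving the label "moral reasoning" at the top.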