
Oscar_Cunningham comments on Open thread, September 2-8, 2013 - Less Wrong Discussion

0 Post author: David_Gerard 02 September 2013 02:07PM



Comment author: Strilanc 02 September 2013 10:44:37PM 4 points

You're demonstrating a whole bunch of misconceptions Eliezer has covered in the sequences. In particular, you're talking about the AI using fuzzy high-level human concepts like "morals" and "philosophies" instead of thinking in terms of algorithms and code.

I suggest you try to write code that "figures out a worthwhile moral goal" (without pre-supposing a goal). To me that sounds as absurd as writing a program that writes the entirety of its own code: you're going to run into a bit of a bootstrapping problem. The result is not the best program ever, it's no program at all.

Comment author: Oscar_Cunningham 02 September 2013 11:04:17PM 6 points

To me that sounds as absurd as writing a program that writes the entirety of its own code:

This is totally possible, you just do something like this:

Write the following out twice, the second time in quotes: "Write the following out twice, the second time in quotes: "

It's called a Quine.
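The same trick works in actual code. Here is a minimal sketch of a quine in Python (my own illustration, not from the thread): the string `s` plays the role of the quoted instruction, containing a template of the whole program, and `%r` fills in a quoted copy of the string itself.

```python
# A two-line Python quine: a program whose output is its own source.
# The string s is a template for the whole program; %r inserts repr(s),
# i.e. the string again, in quotes, mirroring "the second time in quotes".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two lines of source above, quotes and all.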

Comment author: Strilanc 03 September 2013 02:32:52PM 4 points

To clarify: I meant that I, as the programmer, would not be responsible for any of the code. Quines output themselves, but they don't bring themselves into existence.

Good catch on that ambiguity, though.

Comment author: DanielLC 03 September 2013 04:06:36AM 4 points

That's what I thought of at first too.

I think he means a program that is the designer of itself. A quine is a program that you wrote that outputs a copy of itself.