Oscar_Cunningham comments on Open thread, September 2-8, 2013 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You're demonstrating a whole bunch of misconceptions Eliezer has covered in the sequences. In particular, you're talking about the AI in terms of fuzzy high-level human concepts like "morals" and "philosophies" rather than as algorithms and code.
I suggest you try to write code that "figures out a worthwhile moral goal" (without presupposing a goal). To me that sounds as absurd as writing a program that writes the entirety of its own code: you're going to run into a bootstrapping problem. The result is not the best program ever; it's no program at all.
This is totally possible, you just do something like this:
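(The code snippet from the original comment didn't survive archiving. As an illustration, a minimal Python quine, a program whose output is exactly its own source code, can be written like this:)

```python
# A self-reproducing program: the template string contains a repr
# placeholder (%r) that gets filled in with the template itself,
# and %% escapes the literal percent signs.
s = 's = %r\nprint(s %% s)'
print(s % s)  # prints these two lines of source exactly
```

Running this prints the program's own two lines verbatim, so piping the output back into the interpreter reproduces the same output again.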
It's called a Quine.
To clarify: I meant that I, as the programmer, would not be responsible for any of the code. Quines output themselves, but they don't bring themselves into existence.
Good catch on that ambiguity, though.
That's what I thought of at first too.
I think he means a program that is the designer of itself. A quine is a program you wrote that outputs a copy of itself.