timtyler comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Morality has a long tradition of negative phrasing. "Thou shalt not" dates back to biblical times. Many laws are prohibitions. Bad deeds often get given more weight than good ones. That is just part of the nature of the beast - IMHO.
That's nice, but it precisely fails to answer the issue I'm raising: what is a "friendly intelligence", in terms other than stating what it isn't? What answer makes the term less mysterious?
To paraphrase a radio conversation with one of SI's employees:
and then do find/replace on "human value" with Eliezer's standard paragraph:
Not that I agree this is the proper definition, just one which I've pieced together from SI's public comments.
The obvious loophole in your paraphrase is that this accounts for the atoms the humans are made of, but not for other atoms the humans are interested in.
But yes, this is a bit closer to an answer not phrased as a negation.
Here is the podcast where the Skeptics' Guide to the Universe (SGU) interviews Michael Vassar (MV) on 23-Sep-2009. The interview begins at 26:10 and the transcript below is 45:50 to 50:11.
The original quote had: "human-benefiting" as well as "non-human-harming". You are asking for "human-benefiting" to be spelled out in more detail? Can't we just invoke the 'pornography' rule here?
No, not if the claimed goal is (as it is) to be able to build one from toothpicks and string.
Right, but surely they'd be the first to admit that the details of how to do that just aren't yet available. They do have their 'moon-onna-stick' wishlist.