Comment author: TheOtherDave 04 January 2012 08:35:00PM 2 points

I agree with you that "my desire is not to do X, therefore I wouldn't do X even if I knew it was the right thing to do" isn't a valid argument. It's also not what I said. What I said was "my desire is not to do X, therefore I wouldn't choose to desire to do X even if I could choose that." Whether it's right or wrong doesn't enter into it.

As for your scenario... yes, I agree with you that IF "eating babies is wrong" is the sort of thing that can be discovered about the world, THEN an AI could discover it, and THEREFORE is not guaranteed to continue eating babies just because it initially values baby-eating.

It is not clear to me that "eating babies is wrong" is the sort of thing that can be discovered about the world. Can you clarify what sort of information might cause me to "find out" that eating babies is wrong, if I didn't already believe it?

Comment author: Dwelle 04 January 2012 09:23:02PM 0 points

Let me get this straight: are you saying that if you believe X, there can't possibly exist any information you haven't discovered yet that could convince you your belief is false? You can't know what connections and conclusions an AI might deduce from all the information put together. It might conclude that humanity is a stain on the universe, and even if it thought wiping humanity out wouldn't accomplish anything (and it strongly desired not to do so), it might wipe us out purely because the choice "wipe out humanity" was assigned a higher value than the choice "don't wipe out humanity".
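The point above — that an agent might act on whichever option scores highest, even against its own "desire" — can be sketched as a toy decision rule. Everything here (the option names, the numbers, the separate `desire` field) is invented purely for illustration:

```python
# Hypothetical sketch: an agent that picks the option with the highest
# assigned value, even when a separate "desire" score points the other way.
# All names and numbers are made up for the illustration.

def choose(options):
    # Selection looks only at "value"; "desire" plays no role in the choice.
    return max(options, key=lambda o: o["value"])

options = [
    {"name": "wipe_out_humanity", "value": 7, "desire": -5},
    {"name": "spare_humanity",    "value": 3, "desire": +5},
]

print(choose(options)["name"])  # wipe_out_humanity: value wins over desire
```

This is just the structure of the worry, not a claim about how any real AI is built: if valuation and desire come apart, a pure value-maximizer follows the valuation.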

Also, is the statement "my desire is not to do X, therefore I wouldn't choose to desire to do X even if I could choose that" your subjective feeling, or do you base it on some studies? It doesn't apply to me, for example: under certain circumstances I would choose to desire to do X even if it was not my desire initially. So it's not a universal truth, and therefore may not apply to an AI either.

Comment author: TheOtherDave 03 January 2012 11:12:54PM 5 points

I don't want to eat babies.
If you gave me a pill that would make me want to eat babies, I would refuse to take that pill, because if I took that pill I'd be more likely to eat babies, and I don't want to eat babies.
That's a special case of a general principle: even if an AI can modify itself and act independently, if it doesn't want to do X, then it won't intentionally change its goals so as to come to want to do X.
So it's not pointless to design an AI with a particular goal, as long as you've built that AI such that it won't accidentally experience goal changes.
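The goal-stability principle above can be sketched as a toy agent that evaluates any proposed self-modification with its *current* utility function. This is a minimal illustration, not a claim about real AI architectures; every name and number in it is hypothetical:

```python
# Toy sketch of goal stability: a proposed change to the agent's goals is
# judged by the utility function the agent has NOW, not by the one it would
# have after the change. All values below are invented for the example.

def current_utility(outcome):
    # The agent's present values: it strongly disvalues eating babies.
    return {"eat_babies": -100, "help_humans": 10}[outcome]

def predicted_outcome(goal):
    # What behavior each candidate goal-set is expected to produce.
    return {"want_to_eat_babies": "eat_babies",
            "current_goals": "help_humans"}[goal]

def accept_modification(candidate_goal):
    # Accept only if the modified agent's predicted behavior scores at
    # least as well under the CURRENT utility function as staying put.
    return (current_utility(predicted_outcome(candidate_goal))
            >= current_utility(predicted_outcome("current_goals")))

print(accept_modification("want_to_eat_babies"))  # False: the "pill" is refused
print(accept_modification("current_goals"))       # True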

Incidentally, if you're really interested in this subject, reading the Sequences may interest you.

Comment author: Dwelle 04 January 2012 08:26:12PM 0 points

I am not sure your argument is entirely valid. The AI would have access to all the information humans have ever conceived, including the discussions, disputes and research put into programming this AI's goals and nature. It may then adopt new goals based on the information gathered, realizing its former ones are no longer desirable.

Let's say that you're programmed not to kill baby-eaters. If one day you find out (based on the information you gather) that eating babies is wrong, and that killing the baby-eaters is therefore right, you might kill the baby-eaters no matter what your desire is.

I am not saying my logic is necessarily right, but I don't think the argument "my desire is not to do X, therefore I wouldn't do X even if I knew it was the right thing to do" is right either.

Anyway, I plan to read the sequences, when I have time.

Comment author: Dwelle 03 January 2012 10:48:24PM 0 points

Wouldn't it be pointless to try to instill a friendly goal into an AI, since a self-aware, self-improving AI should be able to act independently regardless of how we program it in the beginning?

Comment author: Normal_Anomaly 06 November 2011 09:45:07PM 0 points

If this universe is completely reductionistic, which a simulation probably would be, then your "actual thoughts" (and the existence of trees, etc.) are logical implications of the configuration. Does an entity with logical uncertainty still count as omniscient? But then we've gotten into definitions again.

I still don't know whether you, personally, think a deistic god implies that one or more religions is true. It doesn't particularly matter, though. Your original point that the answer to the god question depends on the answer to the simulation question is a good one.

Comment author: Dwelle 10 November 2011 02:48:55PM 0 points

It depends, of course, on how you define religion. I'm not sure what the original question was, but there is of course a religion stating the universe is a simulation, god or no god.

Comment author: Dwelle 05 November 2011 10:42:43AM 5 points

Took the survey and was quite unsure how to answer the god questions... If we assumed, for example, that there's a 30% chance of the universe being simulated, then the same probability should be assigned to P(God), and to P(one of the religions is correct) as well.
