djm comments on Assessors that are hard to seduce - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (12)
If we get to set the criteria of this man in the clouds, we can get useful behaviour. The main criticism of religion is that it's untrue, and that a dedicated observer will realise this. Here we know it to be untrue, but the AI cannot act on that knowledge (see my post on false thermodynamic miracles).
I agree that useful behavior could come of this - religion has always been a very effective control mechanism.
Unfortunately, it would be a challenging problem to maintain this control over an increasingly intelligent AI.
See http://lesswrong.com/r/discussion/lw/ltf/false_thermodynamic_miracles/
That would likely work for initial versions of an AI, but I still can't help feeling that this is just tampering with the signal, and that an advanced AI would detect this.
Would it not question the purpose of a utility function built around detecting thermodynamic miracles? How would that interact with its utility function's drive to detect tampering or false data?
If I saw a miracle, I would [hope] my thinking would follow the logic below:

a) it must be a trick/publicity stunt done with special effects

b) I am having some sort of dream / mental breakdown / psychotic episode

c) some other explanation I don't know of
I don't think an intelligent agent would or should jump to "it's a miracle", and I would be concerned about its response if/when it does realise that it has been tricked all along.
Probably, but it's not programmed to care about that.
Remember, it's not seeing a miracle. It's more that its decisions only matter if a miracle happened, so it's assuming that a miracle happened for decision making purposes.
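To make the mechanism concrete, here is a minimal sketch (all names and numbers are hypothetical, not from the original post): an agent whose utility is conditioned on an event E, the "thermodynamic miracle". In worlds where E is false, every action scores the same, so evidence that E was faked cannot change which action the agent prefers.

```python
def conditional_expected_utility(action, worlds):
    """Expected utility over only those worlds where the miracle occurred.

    `worlds` is a list of (probability, miracle_happened, payoff_fn) tuples,
    where payoff_fn maps an action to a payoff. Worlds without the miracle
    contribute nothing, so the agent decides as if the miracle is assumed.
    """
    total_prob = 0.0
    total_utility = 0.0
    for prob, miracle_happened, payoff_fn in worlds:
        if miracle_happened:
            total_prob += prob
            total_utility += prob * payoff_fn(action)
    if total_prob == 0.0:
        return 0.0  # no miracle-worlds: all actions are equally good
    return total_utility / total_prob

# Even if the agent is nearly certain the miracle was faked (99% here),
# its *ranking* of actions is unchanged, because only miracle-worlds
# enter the comparison.
worlds = [
    (0.99, False, lambda a: 0),  # miracle faked: payoff irrelevant
    (0.01, True, lambda a: {"cooperate": 10, "defect": 1}[a]),
]
best = max(["cooperate", "defect"],
           key=lambda a: conditional_expected_utility(a, worlds))
```

This is why "it's not seeing a miracle" matters: the agent's probability estimate that it is being tricked can be arbitrarily high without affecting its behaviour, since that probability mass never enters the action comparison.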