TheOtherDave comments on Fake Causality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for challenging my position. This discussion is very stimulating for me!
I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?
It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meantime, I'd understand if you wanted to disagree.
I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.
When is it appropriate to describe a natural agent as enjoying something?
As I said, when it secretes serotonin into its bloodstream and synapses.
That strikes me as a terrible definition of enjoyment - particularly because serotonin release isn't nearly as indicative of enjoyment as popular culture would suggest. Even using dopamine would be better (but still not particularly good).
I wasn't basing it on popular culture, but that doesn't mean I'm not wrong.
Do you have a better suggestion?
If not, I'd ask CuSithBell to please clarify her (or his) ideas without using controversially defined terminology (which was also my sentiment before).
My impression was 'her', not 'his'.
That's a big "ouch" on my part. Sorry. Lesson learned.
You didn't say; rather, you said (well, implied) that it wasn't appropriate to describe an artificial agent as enjoying something in that case. But, OK, you've said now. Thanks for clarifying.