TheOtherDave comments on Fake Causality - Less Wrong

Post author: Eliezer_Yudkowsky 23 August 2007 06:12PM


Comment author: royf 04 June 2012 11:27:53PM 0 points

Thanks for challenging my position. This discussion is very stimulating for me!

> Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?

> This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Godel or similar concerns stop it from altering any subset of those bits?

It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meantime, I'd agree if you wanted to disagree.

> You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?

I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.

Comment author: TheOtherDave 05 June 2012 01:17:33AM 0 points

When is it appropriate to describe a natural agent as enjoying something?

Comment author: royf 05 June 2012 01:47:13AM 0 points

As I said, when it secretes serotonin into its bloodstream and synapses.

Comment author: wedrifid 05 June 2012 02:47:01AM 0 points

> As I said, when it secretes serotonin into its bloodstream and synapses.

That strikes me as a terrible definition of enjoyment, particularly because serotonin release isn't nearly as indicative of enjoyment as popular culture would suggest. Even using dopamine would be better (but still not particularly good).

Comment author: royf 05 June 2012 03:09:23AM 0 points

I wasn't basing it on popular culture, but that doesn't mean I'm not wrong.

Do you have a better suggestion?

If not, I'd ask CuSithBell to please clarify her (or his) ideas without using controversially defined terminology (which was also my sentiment before).

Comment author: wedrifid 05 June 2012 03:46:10AM 0 points

> I'd ask CuSithBell to please clarify his ideas

My impression was 'her', not 'his'.

Comment author: royf 05 June 2012 03:49:20AM 0 points

That's a big "ouch" on my part. Sorry. Lesson learned.

Comment author: TheOtherDave 05 June 2012 03:47:37AM 0 points

You didn't say; rather, you said (well, implied) that it wasn't appropriate to describe an artificial agent as enjoying something in that case. But, OK, you've said now. Thanks for clarifying.