kurt

One could also note that 'liking' is something the mammalian and reptilian brain structures are good at, while 'wanting' is often tied to deliberation and executive-system motivation, though there are probably distinct wanting (drive) systems both in the lower brain layers and in the frontal lobe.

Also, some things we want because they produce pleasure, while others are just interim steps that we carry out 'because we decided'. Our evolutionary history dictates that we get rewarded far more when we obtain the actual thing than when we complete one small step toward it. We can sometimes apply willpower, which could be seen as an evolutionarily novel, and therefore weak, mechanism for providing short-term reward for executing steps toward a longer-term benefit: steps that are not themselves rewarded by the opioid system.

I think that the more we hack into the brain, the more we will discover that 'wanting' and 'liking' are umbrella terms completely unsuited to capturing the subtlety and the sheer messiness of the spaghetti code of human motivation.

I am also going to comment on the idea that intelligent agents could have a 'crystalline' (ordered, deterministic) utility evaluation system. We already went down that road trying to make AI work, i.e. building brittle systems out of IF/THEN rules, and that approach doesn't work.

So what makes us think that using the same type of approach will work for utility evaluation (which is a hard problem requiring a lot of intelligence)?
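
To make the brittleness concrete, here is a toy sketch of the kind of hand-coded IF/THEN utility evaluator being argued against; the rules, state keys, and scores are all hypothetical, invented purely for illustration:

```python
# Toy IF/THEN utility evaluator (all rules and state keys hypothetical).
# It scores the cases its author anticipated and nothing else.
def brittle_utility(state: dict) -> float:
    if state.get("humans_alive") and state.get("economy_growing"):
        return 1.0
    if state.get("humans_alive"):
        return 0.5
    return 0.0

# A state the rule author never anticipated: humans alive but wireheaded
# into passivity still scores 0.5. Hand-written rules fail silently on
# everything outside their enumerated cases.
print(brittle_utility({"humans_alive": True, "wireheaded": True}))
```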

Humans are adaptable because they get bored and try new things, because their utility function can change, and because different drives can interact in novel ways as a person matures and grows wiser. That adaptability can be very dangerous in an AI.

But can we really avoid that danger? I am skeptical that we will be able to build a completely Bayesian, deterministic utility function. Perhaps we are underestimating how big a chunk of intelligence the evaluation of payoffs really is, and assuming it won't require the same kind of fine-grained, big-data-driven, messy, uncertain pattern matching that we now know is necessary for doing almost anything, such as distinguishing cars from trees.
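
If payoff evaluation really is that kind of pattern-recognition problem, the natural alternative to enumerated rules is a learned evaluator. A minimal sketch, assuming we have feature vectors describing outcomes and noisy human ratings of them (both synthetic here):

```python
# Minimal learned-evaluator sketch: fit noisy "ratings" of outcomes
# instead of hand-writing rules. Features and ratings are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # hypothetical outcome features
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# The evaluator generalizes from examples rather than enumerated rules,
# and inherits all the fuzziness and uncertainty of its training data.
print(model.predict(X[:3]))
```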

We have insufficient information about the universe to judge the hedonic value of every action accurately; that is another reason to want the utility evaluation to be as plastic as possible. Some chaos must be introduced into the system to avoid getting trapped in local optima. Dangerous, yes, but possibly this necessity is what will eventually allow the AI to bypass the stupidity of the humans who built it.
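
The "chaos to escape local optima" point has a standard optimization analogue in simulated annealing, where injected randomness lets the search occasionally accept worse states. A minimal sketch, with a toy objective and a made-up cooling schedule:

```python
# Simulated-annealing sketch: injected randomness ("chaos") allows
# occasional downhill moves, so the search can escape local optima.
# The objective and cooling schedule are toy choices for illustration.
import math
import random

def objective(x):
    return math.sin(5 * x) + 0.1 * x * x   # many local minima

random.seed(0)
x, temp = 2.0, 1.0
while temp > 1e-3:
    candidate = x + random.gauss(0, 0.5)
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools toward zero.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.99

print(f"found x = {x:.3f}, objective = {objective(x):.3f}")
```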

kurt

There is a very simple explanation for the seeming discrepancy between wanting and liking: a person is always experiencing a tension between a bit of pleasure now and a lot of pleasure later. Yes, spending time with your family may give you some pleasure now, but staying in NY and putting money aside will give you a lot of security later on. This may not explain the whole difference, but perhaps a good chunk of it.
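
This now-versus-later tension is what economists and RL researchers model with temporal discounting. A tiny worked sketch (all rewards, delays, and discount factors invented) showing how the same two options flip depending on the agent's patience:

```python
# Exponential-discounting sketch (all numbers invented): whether the
# small-now or big-later option wins depends only on the discount factor.
def discounted_value(reward, delay_steps, gamma):
    return reward * gamma ** delay_steps

# Impatient agent (gamma = 0.9): pleasure now beats security later.
print(discounted_value(10, 0, 0.9))    # family time now    -> 10.0
print(discounted_value(50, 24, 0.9))   # savings, 24 steps  -> ~4.0

# Patient agent (gamma = 0.99): the delayed payoff dominates.
print(discounted_value(50, 24, 0.99))  # -> ~39.3
```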

I think wireheading is dismissed far too quickly here and elsewhere. Like it or not (pun intended), the only reason we do things is to obtain a sensation from them, and if we can obtain the sensation without going through the motions, I see no objective reason why we should still go through them, other than "we've always done it that way".

I think we have to seriously consider the possibility that wireheading may be the right thing to do. It seems to trivialize the entire human subjective experience to reduce wireheading to "a needle in the brain". That may be its crude present state. However, it should be uncontroversial that, with mastery over physics and matter, the only events worthy of notice will be mental events (inside a Dyson sphere, for instance). Then engineering will be entirely about producing internal worlds, not outcomes on the outside: a very, very complex, multidimensional type of wireheading, which gives us the sum total of the value the universe can produce, without the mess.

The difference between being stuck in orgasm mode and having an infinity of worlds and experiences to explore comes down simply to whether we value variety more than intensity, or vice versa. Hopefully the AI will know what's best for us :D