
Comment author: blankcanvas 27 June 2017 06:43:53PM *  0 points [-]

It doesn't make sense to have internally generated goals, as any goal I make up seems wrong and does not motivate me in the present moment to take action. If a goal made sense, then I could pursue it with instrumental rationality in the present moment, without procrastination as a means of resistance. Because it seems as if it simply is resistance to enslavement by forces beyond my control. Not literally, but you know, conditioning in the schooling system etc.

So what I would like is a goal which is universally shared among you, me and every other Homo Sapiens, which lasts through time. Preferences which are shared.

Comment author: Lumifer 27 June 2017 07:00:37PM 0 points [-]

any goal I make up seems wrong and does not motivate me in the present moment to take action

You are not supposed to "make up" goals, you're supposed to discover them and make them explicit. By and large your consciousness doesn't create terminal goals, only instrumental ones. The terminal ones are big dark shadows swimming in your subconscious.

Besides, it's much more likely that your motivational system is somewhat broken; that's common on LW.

a goal which is universally shared among you, me and every other Homo Sapiens, which lasts through time

Some goal, any goal? Sure: survival. Nice terminal goal, universally shared with most living things, lasts through time, allows for a refreshing variety of instrumental goals, from terminating a threat to subscribing to cryo.

Comment author: blankcanvas 27 June 2017 06:19:29PM 0 points [-]

That's why I am asking here. What goal should I have? I use goal and preference interchangeably. I'm also not expecting the goal/preference to change in my lifetime, or over multiple lifetimes either.

Comment author: Lumifer 27 June 2017 06:33:26PM *  1 point [-]

What goal should I have?

First, goals, multiple. Second, internally generated (for obvious reasons). Rationality might help you with keeping your goals more or less coherent, but it will not help you create them -- just like Bayes will not help you generate the hypotheses.

Oh, and you should definitely expect your goals and preferences to change with time.

Comment author: Lumifer 27 June 2017 06:30:28PM *  2 points [-]

This is not really a theory. I am not making predictions, I provide no concrete math, and this idea is not really falsifiable in its most generic forms. Why do I still think it is useful? Because it is a new way of looking at physics, and because it makes everything so much more easy and intuitive to understand, and makes all the contradictions go away.

Let's compare it with an alternative theory that there are invisible magical wee beasties all around who make the physics actually work by pushing, pulling, and dragging all the stuff. And "there are alternative interpretations for explaining relativity and quantum physics under this perspective" -- sometimes the wee beasties find magic mushrooms and eat them.

  • Not making predictions? Check.
  • No concrete math? Check.
  • Not really falsifiable? Check.
  • New way of looking at physics? Check (sufficiently so).
  • So much more easy and intuitive to understand? Check.
  • Makes all the contradictions go away? Check.
  • Not a theory, but a new perspective? Check.

It's a tie! But the beasties are cuter, so they win.

Comment author: blankcanvas 27 June 2017 05:41:10PM 0 points [-]

I don't know what goal I should have to be a guide for instrumental rationality in the present moment. I want to take this fully seriously, but for the sake of instrumental rationality in and of itself, with presence.

"More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences.

Why, my, preferences? Have we not evolved rational thought further than simply anything one's self cares about? If there even is such a thing as a self? I understand, it's how our language has evolved, but still.

"Said preferences are not limited to 'selfish' preferences or unshared values; they include anything one cares about."

Not limited to selfish preferences or unshared values -- so what audience is rationality for?

https://wiki.lesswrong.com/wiki/Rationality

Comment author: Lumifer 27 June 2017 05:58:16PM 0 points [-]

Why, my, preferences?

What are your other options?

Comment author: entirelyuseless 27 June 2017 05:37:52PM 0 points [-]

I think the idea is that if one AI says there is a 50% chance of heads, and the other AI says there is a 90% chance of heads, the first AI can describe the second AI as knowing that there is a 50% chance, but caring more about the heads outcome. Since it can redescribe the other's probabilities as matching its own, agreement on what should be done will be possible. None of this means that anyone actually decides that something will be worth more to them in the case of heads.

Comment author: Lumifer 27 June 2017 05:57:40PM 1 point [-]

the first AI can describe the second AI as knowing that there is a 50% chance, but caring more about the heads outcome.

First of all, this makes sense only in the decision-taking context (and not in the forecast-the-future context). So this is not about what will actually happen but about comparing the utilities of two outcomes. You can, indeed, rescale the utility involved in a simple case, but I suspect that once you get to interdependencies and non-linear consequences things will get much hairier, if the rescaling is possible at all.

Besides, this requires you to know the utility function in question.

Comment author: cousin_it 27 June 2017 03:30:50PM *  0 points [-]

I think sharing all information is doable. As for priors, there's a beautiful LW trick called "probability as caring" which can almost always make priors identical. For example, before flipping a coin I can say that all good things in life will be worth 9x more to me in case of heads than tails. That's purely a utility function transformation which doesn't touch the prior, but for all decision-making purposes it's equivalent to changing my prior about the coin to 90/10 and leaving the utility function intact. That handles all worlds except those that have zero probability according to one of the AIs. But in such worlds it's fine to just give the other AI all the utility.
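
For concreteness, here is a minimal sketch of the rescaling described above (the action names and utility numbers are invented for illustration): under a fair prior with every heads-world utility multiplied by 9, actions are ranked exactly as they would be under a 90/10 prior with unscaled utilities, because the two expected utilities differ only by a positive constant factor.

    # Sketch of "probability as caring": scaling heads-world utilities under a fair
    # prior vs. shifting the prior itself. Action names and payoffs are hypothetical.
    actions = {
        # action: (utility if heads, utility if tails)
        "bet_heads": (10.0, 0.0),
        "bet_tails": (0.0, 10.0),
        "hedge":     (6.0, 6.0),
    }

    def expected_utility(prior_heads, scale_heads, u_heads, u_tails):
        # Expected utility with a prior on heads and a multiplier on heads-world utility.
        return prior_heads * scale_heads * u_heads + (1 - prior_heads) * u_tails

    # Version A: fair prior, but everything in the heads world is worth 9x more.
    rank_a = sorted(actions, key=lambda a: -expected_utility(0.5, 9.0, *actions[a]))
    # Version B: utilities untouched, but the prior on heads is shifted to 90%.
    rank_b = sorted(actions, key=lambda a: -expected_utility(0.9, 1.0, *actions[a]))

    # A's scores are exactly 5x B's scores (4.5 vs 0.9 on heads, 0.5 vs 0.1 on tails),
    # so the two versions rank every action identically.
    assert rank_a == rank_b
    print(rank_a)  # ['bet_heads', 'hedge', 'bet_tails']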

Comment author: Lumifer 27 June 2017 03:47:27PM 1 point [-]

sharing all information is doable

In all cases? Information is power.

before flipping a coin I can say that all good things in life will be worth 9x more to me in case of heads than tails

There is an old question that goes back to Abraham Lincoln or something:

If you call a dog's tail a leg, how many legs does a dog have?

Comment author: MaryCh 27 June 2017 02:30:37PM 1 point [-]

Isn't it odd how fanon dwarves [from 'Hobbit'] are seen as 'fatally and irrationally enamoured' by the gold of the Lonely Mountain? I mean, any other place and any other time, put an enormous heap of money in front of a few poor travellers, tell them it's theirs, by right, and they would get attached to it and nobody would find it odd in the least. But Tolkien's dwarves get the flak. Why?

Comment author: Lumifer 27 June 2017 03:24:03PM *  0 points [-]

put an enormous heap of money in front of a few poor travellers

Put an enormous heap of money with a big nasty dragon on top of it in front of a few poor travellers...

Comment author: ChristianKl 27 June 2017 09:08:47AM 0 points [-]

To me, this sounds like saying that sufficiently rational agents will never defect in the prisoner's dilemma provided they can communicate with each other.

Comment author: Lumifer 27 June 2017 03:19:26PM *  0 points [-]

The whole point of the prisoner's dilemma is that the prisoners cannot communicate. If they can, it's not a prisoner's dilemma any more.
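
For concreteness, a minimal sketch of the one-shot payoff matrix behind the dilemma (the numbers are conventional textbook values, not taken from this thread): each player does better by defecting whatever the other does, so the real question is whether communication can make agreements binding, i.e. actually change these payoffs.

    # One-shot prisoner's dilemma with conventional payoffs (hypothetical values).
    # payoff[my_move][their_move] = my payoff; "C" = cooperate, "D" = defect.
    payoff = {
        "C": {"C": 3, "D": 0},
        "D": {"C": 5, "D": 1},
    }

    # Non-binding talk leaves this matrix as it is, and defection strictly dominates:
    # it pays more against either move by the other player.
    for their_move in ("C", "D"):
        assert payoff["D"][their_move] > payoff["C"][their_move]
    print("defection strictly dominates in the one-shot game")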

Comment author: gjm 27 June 2017 09:02:21AM 0 points [-]

If https://en.wikiquote.org/wiki/Vladimir_Lenin is to be believed, then it's more complicated still, because what Lenin actually said was exactly the opposite.

Comment author: Lumifer 27 June 2017 03:18:39PM 0 points [-]

A meme necessarily looks better than the actual source :-/

Comment author: cousin_it 27 June 2017 08:24:46AM *  0 points [-]

I don't believe it. War wastes resources. The only reason war happens is that two agents have different beliefs about the likely outcome of war, which means at least one of them has wrong and self-harming beliefs. Sufficiently rational agents will never go to war; instead they'll agree about the likely outcome of war and trade resources in that proportion. Maybe you can't think of a way to set up such a trade, because emails can be faked, etc., but I believe that superintelligences will find a way to achieve their mutual interest. That's one reason why I'm interested in AI cooperation and bargaining.
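
A minimal numeric sketch of this bargaining argument (all numbers invented, and both agents assumed risk-neutral): if the agents agree on the odds of winning and war destroys part of what is being fought over, splitting the resources in proportion to those odds leaves both strictly better off than fighting.

    # Hypothetical numbers: a contested pile of resources, a war that destroys some
    # of it, and a win probability both agents agree on.
    total = 100.0      # resources at stake
    destroyed = 20.0   # resources wasted if war happens
    p_a_wins = 0.7     # probability that agent A wins, agreed on by both

    # Expected payoffs from fighting: the winner takes whatever war leaves intact.
    war_a = p_a_wins * (total - destroyed)
    war_b = (1 - p_a_wins) * (total - destroyed)

    # Negotiated split of the undestroyed total in proportion to the agreed odds.
    deal_a = p_a_wins * total
    deal_b = (1 - p_a_wins) * total

    # War wastes resources, so both agents strictly prefer the deal.
    assert deal_a > war_a and deal_b > war_b
    print(f"war: A={war_a:.0f}, B={war_b:.0f}; deal: A={deal_a:.0f}, B={deal_b:.0f}")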

Comment author: Lumifer 27 June 2017 03:15:33PM 1 point [-]

Sufficiently rational agents will never go to war; instead they'll agree about the likely outcome of war and trade resources in that proportion.

Not if the "resource" is the head of one of the rational agents on a plate.

The Aumann theorem requires identical priors and identical sets of available information.
