Lumifer comments on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging - Less Wrong

20 Post author: Kaj_Sotala 16 September 2015 10:45AM




Comment author: Lumifer 18 September 2015 05:33:13PM 0 points

Let me see if I understand you correctly.

You have a matrix of (number of individuals) x (number of time-slices). Each matrix cell has a value ("happiness") that's constrained to lie in the [-1, 1] interval. You call the cell value "local utility", right?

And then you basically sum up the cell values, re-scale the sum to fit into a pre-defined range, and, in the process, add a transformation that makes sure the bounds are not sharp cut-offs but limits that you approach asymptotically.
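For concreteness, here is a minimal sketch of that aggregation in Python. The use of tanh as the squashing function, the `scale` parameter, and the function names are my own illustrative choices — the thread doesn't pin down a particular asymptotic transformation.

```python
import math

def bounded_total_utility(local_utilities, scale=100.0):
    """Sum per-cell 'local utility' values and squash the total.

    local_utilities: a matrix (list of rows) of values in [-1, 1],
    one row per individual, one column per time-slice.

    tanh maps the raw sum into (-scale, scale), so the bounds are
    limits approached asymptotically rather than sharp cut-offs,
    and small sums remain nearly unchanged (tanh(x) ~ x near 0).
    """
    raw = sum(sum(row) for row in local_utilities)
    return scale * math.tanh(raw / scale)

# Example: two individuals over three time-slices.
matrix = [[0.5, 0.9, -0.2],
          [1.0, -0.5, 0.3]]
print(bounded_total_utility(matrix))
```

Because tanh is strictly increasing, this preserves the ordering of outcomes by raw sum while guaranteeing the total can never exceed the bound, which is the property that matters for blocking Pascal's-Mugging-style blowups.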

As to the second part, I have trouble visualising the language in which the description-length approach would work as you want. It seems to me it will have to involve a lot of scaffolding, which might collapse under its own weight.

Comment author: gjm 18 September 2015 09:19:34PM 1 point

"You have a matrix ...": correct. "And then ...": whether that's correct depends on what you mean by "in the process", but it's certainly not entirely unlike what I meant :-).

Your last paragraph is too metaphorical for me to work out whether I share your concerns. (My description was extremely handwavy so I'm in no position to complain.) I think the scaffolding required is basically just the agent's knowledge. (To clarify a couple of points: not necessarily minimum description length, which of course is uncomputable, but something like "shortest description the agent can readily come up with"; and of course in practice what I describe is way too onerous computationally but some crude approximation might be manageable.)
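One crude way to picture the "shortest description the agent can readily come up with" idea is to weight an entity by 2 to the minus (length of its shortest available description), so shorter descriptions get exponentially more weight. Everything here — the `2**-length` form, using raw string length as a stand-in for description length, and the function name — is a hypothetical illustration, not anything the commenters committed to:

```python
def description_weight(description):
    """Toy proxy for description-length weighting.

    Assumes the 'shortest description the agent can readily come up
    with' is given as a string, and uses its character length as a
    stand-in for true description length (which is uncomputable).
    Shorter descriptions get exponentially larger weight.
    """
    return 2.0 ** (-len(description))

# "my wife" is a shorter description than "the girl next to me on a
# bus", so under this scheme it receives much more weight.
w_wife = description_weight("my wife")
w_stranger = description_weight("the girl next to me on a bus")
print(w_wife > w_stranger)
```

Lumifer's worry below can be read against this sketch: the weights only track subjective preference if the agent's description language is itself built around what the agent cares about.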

Comment author: Lumifer 18 September 2015 10:25:45PM 1 point

The basic issue is whether the utility weights ("description lengths") reflect the subjective preferences. If they do, it's an entirely different kettle of fish. If they don't, I don't see why "my wife" should get much more weight than "the girl next to me on a bus".

Comment author: gjm 19 September 2015 01:01:23AM 1 point

I think real people have preferences whose weights decay with distance -- geographical, temporal and conceptual. I think it would be reasonable for artificial agents to do likewise. Whether the particular mode of decay I describe resembles real people's, or would make an artificial agent tend to behave in ways we'd want, I don't know. As I've already indicated, I'm not claiming to be doing more than sketch what some kinda-plausible bounded-utility agents might look like.
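A distance-decaying weight of the kind described might be sketched as below. The exponential decay form, the three decay rates, and the idea of simply adding the three distances are illustrative assumptions; gjm explicitly doesn't commit to a particular mode of decay.

```python
import math

def preference_weight(geo_dist, time_dist, concept_dist,
                      decay=(0.1, 0.05, 0.5)):
    """Toy preference weight decaying with distance.

    Combines geographical, temporal, and conceptual distances into a
    single exponential decay. The decay rates (and the exponential
    form itself) are arbitrary illustrative choices: here conceptual
    distance is assumed to matter most per unit, temporal least.
    """
    g, t, c = decay
    return math.exp(-(g * geo_dist + t * time_dist + c * concept_dist))

# A nearby, present, conceptually close person outweighs a distant,
# far-future, conceptually remote one.
print(preference_weight(0, 0, 0) > preference_weight(100, 50, 10))
```

Any strictly decreasing positive function of each distance would do the same qualitative job; the exponential just makes the weights easy to combine multiplicatively across the three kinds of distance.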