private_messaging comments on Your existence is informative - Less Wrong

Post author: KatjaGrace 30 June 2012 02:46PM


Comment author: private_messaging 01 July 2012 10:15:11AM, 1 point

The large world issues seem kind of confused.

Suppose an ideal agent is using Solomonoff induction to predict its inputs. Models that locate the agent very far away, at positions separated by enormously large spatial distances, have to encode that distance somehow in order to predict the input you are getting. That makes every such model very large, and all of them combined contribute an incredibly tiny amount to the algorithmic probability.
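The quantitative point here can be sketched with a toy calculation. This is my own construction, not anything from the comment: I assume an Elias-gamma-style prefix-free code for the agent's spatial index, so pinpointing an agent at distance d costs about 2*log2(d) bits, and a model's prior weight 2^-bits falls off roughly as 1/d^2.

```python
def code_length_bits(d: int) -> int:
    """Elias-gamma-style prefix-free code length for an integer d >= 1."""
    return 2 * d.bit_length() - 1

def weight(d: int) -> float:
    """Prior weight 2^-K contributed by having to encode distance d."""
    return 2.0 ** -code_length_bits(d)

# Mass from models placing the agent within distance 2^10 of us:
near = sum(weight(d) for d in range(1, 1024))

# Combined mass of ALL models placing the agent much farther away:
far = sum(weight(d) for d in range(1024, 1 << 20))

print(near, far)  # near is close to 1; far is under 0.001
```

Under this toy code the combined contribution of every "agent is enormously far away" model is a vanishing sliver of the total, which is the sense in which those models can all be huge yet jointly negligible.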

If instead you do a confused version of Solomonoff induction, in which you seek an 'explanation' rather than a proper predictive model (anything that contains the agent somewhere inside it), then the whole notion breaks down and you get nothing useful out: you just get an iterator over all possible models. Or, if you skip past that low-level fundamental problem, you run into some form of big-universe issue, where you hit 'why bother if there's a copy of me somewhere far away' and 'what is the meaning of measurement if some version of me is measuring something wrong'. But ultimately, if you started from scratch this way, you would never even get that far, because you could never form even a remotely useful world model.

Comment author: KatjaGrace 01 July 2012 11:55:31PM, 2 points

I don't know what you mean by 'large world issues'.

Why is the agent's distance from you relevant to predicting its inputs? Why does a large distance imply huge complexity?

Comment author: paulfchristiano 02 July 2012 12:59:30AM, 1 point

A model for your observations consists (informally) of a model for the universe and then coordinates within the universe which pinpoint your observations, at least in the semantics of Solomonoff induction. So in an infinite universe, most observations must be very complicated, since the coordinates must already be quite complicated. Solomonoff induction naturally defines a roughly-uniform measure over observers in each possible universe, which very slightly discounts observers as they get farther away from distinguished landmarks. The slight discounting makes large universes unproblematic.
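The "roughly-uniform measure with slight discounting" can be illustrated with a toy prefix-free code over observer indices (again my own illustrative construction, not Paul's actual formalism): the extra description length for the n-th observer grows only logarithmically in n, yet the induced measure over infinitely many observers still sums to a finite total.

```python
def observer_bits(n: int) -> int:
    """Toy prefix-free code length (Elias-gamma style) for the n-th observer."""
    return 2 * n.bit_length() - 1

# The discount is slight: singling out the millionth observer costs only
# ~39 bits, versus 7 bits for the tenth.
print(observer_bits(10), observer_bits(10 ** 6))  # prints: 7 39

# Yet the measure is summable, so even an infinite universe yields a
# normalized, well-defined distribution over its observers.
total = sum(2.0 ** -observer_bits(n) for n in range(1, 1 << 21))
print(total)  # just under 1
```

A logarithmic bit cost means a polynomial (here ~1/n^2) discount per observer, which is mild enough to feel "roughly uniform" locally while still making the infinite sum converge.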

I wrote about these things at some point, here, though that was when I was just getting into them, and it now looks silly even to current me. But that's still the only framework I know for reasoning about big universes, splitting brains, and the Born probabilities.

Comment author: Vladimir_Nesov 03 July 2012 08:17:36AM, 2 points

But that's still the only framework I know for reasoning about big universes, splitting brains, and the Born probabilities.

I get by with none...

Comment author: Tyrrell_McAllister 03 July 2012 05:36:30PM, 0 points

Are you sure?

Comment author: Vladimir_Nesov 03 July 2012 09:57:26PM, 0 points

Consequentialist decision making on "small" mathematical structures seems relatively less perplexing (though still far from entirely clear), but I'm very much confused about what happens when there are too "many" instances of the decision's structure, or in the presence of observations, and I can't point to any specific "framework" that explains what's going on (apart from the general hunch that understanding math better clarifies these things, as it has so far).

Comment author: Tyrrell_McAllister 03 July 2012 10:06:06PM, 1 point

If X has a significant probability of existing, but you don't know at all how to reason about X, how confident can you be that your inability to reason about X isn't doing tremendous harm? (In this case, X = big universes, splitting brains, etc.)