Will_Newsome comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"far away" from what?
If you use your current location as a reference point, then the theory becomes non-updateless and incoherent, and falls apart. You don't "get" any starting point when you try to locate someone.
I think the universe implicitly defines a reference point in the physics. By way of illustration, I think Tegmark sometimes talks about an inflation scenario where an actually infinite space is the same as a finite bubble that expands from a definite point, but with different coordinates that mix up space and time; and in that case I think that definite point would be algorithmically privileged. But I'm even fuzzier on all this than before.
I think the focus on a physical reference point here is misguided. Something more conceptually well-founded might be a search for a logical reference point: using your existence, in some form at some level of abstraction, and your reasoning about that logical reference point, both as research of and as evidence about attractors in agentspace, via typical acausal means.
Vladimir Nesov's decision theory mailing list comments on the role of observational uncertainty in ambient-like decision theories seem relevant. Not to imply he wouldn't think what I'm saying here is complete nonsense.
In one of my imaginable-ideal-barely-possible worlds, Eliezer's current choice of "thing to point your seed AI at and say 'that's where you'll find morality content'" was tentatively determined to be what it currently nominally is (instead of tempting alternatives like "the thing that makes you think that your proposed initial dynamic is the best one" or "the thing that causes you to care about doing things like perfecting things like the choice of initial dynamic" or something) after he did a year straight of meditation on something like the lines of reasoning I suggest above, except honed to something like perfection-given-boundedness (e.g. something like the best you could reasonably expect to get at poker, given that most of your energy has to be put into retaining your top-5 FIDE chess rating while writing a bestselling popular science book).
I think it depends on the physics. Some have privileged points, some don't.
But surely given any scheme to assign addresses in an infinite universe, for every L there's a finite bubble of the universe outside of which all addresses are at least L in length?
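The counting argument behind this claim can be sketched in a few lines. A minimal sketch, assuming addresses are strings over some finite alphabet and each point gets a distinct address (the helper name is hypothetical):

```python
# Pigeonhole sketch: only finitely many addresses are shorter than L,
# so at most that many points can sit inside the "short-address bubble";
# every point outside it must have an address of length >= L.

def addresses_shorter_than(L, alphabet_size=2):
    """Count distinct strings of length < L over the alphabet.

    This is a finite number for any L, which is the whole argument:
    an infinite universe cannot fit inside it.
    """
    return sum(alphabet_size ** n for n in range(L))

# With a binary alphabet, fewer than 2**L points can be addressed
# in under L symbols.
print(addresses_shorter_than(10))  # 1023, i.e. 2**10 - 1
```

So whatever the addressing scheme, the set of points with addresses shorter than L is finite, which is what the comment's "finite bubble" amounts to.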
If a universe is tiled with a repeating pattern, then you can assign addresses to parts of the pattern, each address naming an infinite set of points.
I don't know how this applies to other universes.
If hypothetically our universe had a privileged point, what would you do if you discovered you were much farther away from it than average?
Naively, you wouldn't use some physical location, but instead logical descriptions in the space of algorithms given axioms you predict others will predict are Schelling points (using your own (your past) architecture/reasoning as evidence of course).
Naively, this is a question of ethics and not game theory, so I don't see why Schelling points should enter into it.
I thought "Schelling point" was used by the decision theory workshop folk; I may be wrong. Anyway, decision theory shares many aspects with cooperative game theory, as Wei Dai pointed out long ago, and many questions of ethics must be determined/resolved/explored by such (acausal) cooperation/control.
Relevance? (That people in group Y use a word doesn't obviously clarify why you used it.)
I mistakenly thought that Will Sawin was in said group and was thus expressing confusion that he wasn't already familiar with its broader not-quite-game-theoretic usage, or at least what I perceived to be a broader usage. Our interaction is a lot more easily interpreted in that light.
(I didn't understand what you meant either when I wrote that comment, now I see the intuition, but not a more technical referent.)
And if you meant that you don't see a more technical referent for my use of Schelling point, then there almost certainly isn't one, and thus it could be claimed that I was sneaking in technical connotations with my naive intuitions. Honestly, I thought I was referring to a standard term, or at least a standard concept.
The term is standard, it was unclear how it applies, the intuition I referred to is about how it applies.
Can you explain that intuition to me or point me to a place where it is explained or something?
Or, alternately, tell me that the intuition is not important?
The intuition that "Schelling points" are an at all reasonable or non-bastardized way of thinking about this, or the intuition behind the "this" I just mentioned? If the latter, I did preface it with "naively", and I fully disclaim that I do not have a grasp of the technical aspects, just aesthetics that are hard to justify or falsify; the only information I pass on that might be of practical utility to folk like you or Sawin will be ideas haphazardly stolen from others and subsequently half-garbled. If you weren't looking closely, you wouldn't see anything, and you have little reason to look at all. Unfortunately there is no way for me to disclaim that generally.
link? explanation? something of that nature?
EDIT: Private message sent instead of comment reply.