Notion of Valued Identity, Physically
Let's locally define "VI" (valued identity) as "whatever you want to preserve by means of personal immortality" (the "means" being things such as anti-aging, cryonics, mind uploading, etc.).
The question is: how do you define your VI physically, in a way that makes physical sense?
* Note: please avoid using the bare term "identity" unless you can define it non-vaguely (and even then it is better to use some different identifier).
* Edit: if (quite expectedly) you cannot give a precise answer, please at least point in the direction where you think it might lie (i.e. a way of finding and verifying that answer).
Rationally Agreeing to Disagree
By Aumann's agreement theorem (and related results), two rationalists with common priors who make their posteriors common knowledge can't agree to disagree.
However, there is a vast number of disagreements I know of that are not important enough to spend significant time discussing (to narrow the topic, I propose concentrating on cases where spending more than a few minutes of discussion is of doubtful value).
More generally, this is also a question of optimizing the amount of time (or mental resources) spent on updating one's probabilities about some particular topic: choosing the option with the maximal expected value of information divided by the value of the time spent, AFAIU (a possible formalization is sketched below).
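A minimal sketch of that stopping criterion, assuming a known expected-value-of-information curve and a constant value of time (both loud assumptions of mine, not part of the theorem):

```latex
% EVoI(t): expected value of the information gained after t minutes of
%          discussion (assumed known, which is the strong assumption here)
% c:       value of one minute of your time
t^\ast \;=\; \arg\max_{t \ge 0} \; \frac{\mathrm{EVoI}(t)}{c \, t}
% i.e. stop at the point of maximal information value per unit of
% time-value, matching the "EVoI divided by value of time" rule above.
```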
It seems rather obvious that, after discussing some topic for a few minutes, it is suboptimal to update only one's knowledge of the disagreement with the other person and not one's probabilities about the topic itself.
But the question is: what probability update is most appropriate in such a situation? Or, put slightly differently: what is the instrumentally optimal course of action when two rationalists disagree on some particular topic, given certain expectations (such as one's own probability that the other person is an honest rationalist) and a time limit?
In more detail: which initial expectations about the other person's knowledge can easily be updated? It is fairly simple to state some approximation of the size of one's evidence; it is possible, but harder, to state reasons to update expectations about one's own evidence being biased or unbiased. Also, would it be more correct to shift one's probability distribution toward the other person's beliefs, or toward uncertainty (a uniform distribution, AFAIU)? And how much probability can be shifted in such a short time, given a sufficiently complex topic; would that amount be tiny? (A toy sketch of both update styles follows.)
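To make the two candidate update styles concrete, here is a toy Python sketch (my own illustration, not derived from Aumann's theorem; the discrete hypotheses, self-reported evidence sizes, and scalar trust parameter are all assumptions):

```python
# Toy sketch: two ways to update a probability distribution after a
# short disagreement. All parameters are hypothetical modelling choices.

def pool_toward_them(p_me, p_them, n_me, n_them, trust):
    """Linear opinion pool: shift toward the other person's beliefs,
    weighted by their relative evidence size, discounted by trust."""
    w = trust * n_them / (n_me + n_them)  # weight given to their distribution
    return [(1 - w) * a + w * b for a, b in zip(p_me, p_them)]

def shrink_toward_uniform(p_me, trust):
    """Alternative: treat the disagreement as evidence that someone is
    confused, and shrink toward maximum uncertainty (uniform)."""
    k = len(p_me)
    return [(1 - trust) * a + trust * (1 / k) for a in p_me]

# Example: three hypotheses; I hold [0.7, 0.2, 0.1], they hold [0.2, 0.5, 0.3].
p_me, p_them = [0.7, 0.2, 0.1], [0.2, 0.5, 0.3]
print(pool_toward_them(p_me, p_them, n_me=10, n_them=10, trust=0.8))
# approx. [0.5, 0.32, 0.18]
print(shrink_toward_uniform(p_me, trust=0.3))
# approx. [0.59, 0.24, 0.17]
```

The open question above is exactly how `trust` and the evidence weights should be set from only a few minutes of conversation.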
Related references: http://wiki.lesswrong.com/wiki/Likelihood_ratio and most links two or three levels deep from there (and the post tags, of course). A related but probably unimportant chatlog (with an example of the situation) is here.
Multiverse and complexity of [laws of] the observed universe
Meta: this might be a request for an explanation of my ignorance (or just some madness) rather than a usable idea.
Given the multiverse hypothesis (universes with different physical constants/laws), the number of universes with an infinitely large set of laws is much larger than the number of universes with a finite set of laws (though both are infinite). But then wouldn't it be appropriate to (subjectively) expect to find oneself in a universe with an infinitely large set of laws? (One way to make the question precise is sketched below.)
(I initially asked this in IRC; the chatlog here might be useful for clarifying what I mean.)
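One hedged way to make the counting precise (my framing, not from the chatlog): since both classes are infinite, raw counting is ill-defined, and the question becomes which measure to put on the multiverse. For instance, under a Solomonoff-style simplicity prior over law-sets:

```latex
% Weight each finitely-describable law-set U by a simplicity prior,
% where \ell(U) is the length of the shortest description of U's laws:
P(U) \;\propto\; 2^{-\ell(U)}
% Law-sets with no finite description then get zero individual weight,
% so the comparison is about total measure per class, not cardinality.
```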