Comments

Luck10

You've arrived at the same conclusion I state. I say that caring about simulated minds explodes into paradoxes in my thought experiment, so we probably shouldn't. You came to the same conclusion, that caring about digital minds shouldn't be a priority, via the infinitesimal measure you introduce for digital minds. We're not in disagreement here.

Luck10

The hypervisor creates a bijection from the real numbers to virtual machines. So, at the abstraction level of the hypervisor's interface, the number of virtual machines has the cardinality of the continuum. Nobody says you have to think about this system only at that layer of abstraction, but at least at that layer there are uncountably many conscious minds. So how are you going to apply utilitarianism in this case? The only way to make utilitarianism still work here is to somehow claim that those minds don't count. And if you want to say that digital minds in general count, but that in this particular case they don't count infinitely, then you have to come up with some very convoluted ad-hoc logic. So I conclude that utilitarianism and moral patienthood of digital minds don't mix well together, and I discard the combination "utilitarianism + digital moral patients". Many of the remaining moral philosophies are unaffected by my thought experiment: utilitarianism + Orch OR is unaffected, virtue ethics is unaffected, moral egoism is unaffected.
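
For concreteness, here is a minimal sketch of the kind of interface I have in mind (my own illustration, not part of the original thought experiment; the names Hypervisor, VirtualMachine and vm are made up, and a Python float is only a stand-in for an arbitrary real-valued label):

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    """A lazily materialized VM, identified by a real-valued label."""
    label: float   # stand-in for an arbitrary real number
    state: dict

class Hypervisor:
    """Interface under which every real-valued label names its own VM.

    Nothing is computed until a VM is actually inspected, so at the level
    of this interface the system behaves as if continuum-many VMs exist,
    even though only the ones you ever touch get materialized.
    """
    def vm(self, label: float) -> VirtualMachine:
        # Distinct labels map to distinct VMs (a bijection at the interface
        # level); each VM's state is derived deterministically from its label.
        return VirtualMachine(label=label, state={"seed": label})

hv = Hypervisor()
print(hv.vm(0.5))      # the VM labelled by 0.5
print(hv.vm(3.14159))  # a different VM, labelled by 3.14159
```

Whether "exists at the interface level" is enough for moral patienthood is exactly the question the thought experiment is probing.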

Luck10

I claim that it is possible to create a program which can be interpreted as running an uncountably infinite number of simulations. Does this interpretation carry any weight for morality? Can a simulation be viewed as a conscious mind? These questions have different answers in different philosophical frameworks. And yes, it does create weird implications in those frameworks that answer "yes" to both questions. My response is to simply discard those frameworks and use something else. What about other sizes of infinity? I don't know. I expect that it is possible to construct such a hypervisor for any cardinality, but I'm not particularly interested in doing so, because I've already discarded the philosophical frameworks in which it would matter.

Luck10

I claim that it implies utilitarianism is not compatible with moral patienthood of digital minds. So one has to choose: either utilitarianism or the welfare of digital minds, but not both. Otherwise we get that every second we didn't dedicate to building an infinite number of happy minds is infinitely bad, and that after we have created an infinite number of happy minds, utilitarianism gives no further instructions on how to behave, because we are already infinitely virtuous and practically no action can change our total score, which is absurd. There are multiple ways out of it. First, if you want to keep utilitarianism, you can define moral patienthood more strictly, so that no digital mind can qualify as a moral patient. For example, you can say that Orch OR is correct and that any mind must be based on quantum-mechanical computation, otherwise it doesn't count. But I expect that digital minds will soon arrive and gain a lot of power; they won't like this attitude and will make it illegal. Another way is to switch to something other than utilitarianism, something that doesn't rely on a concept like "total happiness of everything".
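
To make the arithmetic behind "no action can change our total score" explicit (my own spelling-out, using extended-real arithmetic; nothing here is from the original comment):

```latex
U_{\text{total}} \;=\; \sum_{m \in M} u(m) \;=\; +\infty
\qquad\Longrightarrow\qquad
U_{\text{total}} + a \;=\; U_{\text{total}} + b \;=\; +\infty
\quad \text{for all finite } a, b .
```

Once the population M of happy minds is infinite (continuum-sized in the thought experiment), any two actions with finite impact receive the same total score, so the utilitarian ranking stops discriminating between them.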

Luck80

Good point. Mathematically, I'd put it this way: there are actually a lot of competing alternative theories, and "almost nothing ever happens" is one of them. From Solomonoff induction we know that

P(event|history) ∝ integral_{all theories} P(event|theory) * P(history|theory) * P(theory) d(theory)

Up to normalization by P(history), it means that we should weight each theory by the factor P(history|theory): the probability of our entire history of past observations given the theory.
What you're saying is that if a theory is very precise, then P(history|theory) will only be high if the history matches the theory very well. This is why an imprecise theory will often get a bigger weight than a precise but wrong one. The theory "almost nothing ever happens" is very imprecise, but that is exactly why its factor P(history|theory) will often be bigger than that of a precise but incorrect theory. I guess normies grasp this intuitively.
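
As a toy illustration (my own sketch, not from the thread; the theories and the numbers are invented), here is that weighting computed for a history of mostly uneventful days:

```python
from math import prod

# History: 30 days, an "event" happened on only 1 of them.
history = [0] * 29 + [1]

# Each toy "theory" just assigns a fixed per-day probability of an event.
theories = {
    "almost nothing ever happens": 0.02,    # imprecise but roughly right
    "events happen most days":     0.80,    # confident but wrong
    "events on about 1 day in 30": 1 / 30,  # close to the truth
}

def likelihood(history, p_event):
    """P(history | theory): product of the per-day probabilities."""
    return prod(p_event if day else 1 - p_event for day in history)

# With equal priors P(theory), the posterior weight of each theory is
# proportional to its likelihood P(history | theory).
weights = {name: likelihood(history, p) for name, p in theories.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name:30s} posterior weight ~ {w / total:.4f}")
```

The confident-but-wrong theory is crushed by the P(history|theory) factor, while the vague "almost nothing ever happens" theory stays competitive with the roughly correct one.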

Luck20

You have an oversimplified view of humanity's rationality. You see decisions that are harmful to humanity and conclude that they are irrational. But this logic only works under the assumption that humanity is a single individual. Decisions that are harmful to humanity are in most cases beneficial to the person making them, and therefore they are not irrational; they are selfish. This gives us much more hope, because persuading a rational selfish person with logic is entirely possible.

Luck10

Oh look, if we define the complexity as "the date when the hypothesis was published", then I can say that the prior probability that our Earth stands on top of a whale, on top of a turtle, on top of an elephant is the highest, because this hypothesis is the oldest. And Occam's razor becomes "don't propose new hypotheses". Trinitrotrololol)

Luck10

I find it funny that it works even in the continuous case: suppose we have a probability density defined on R^n (or any other set). Then, whatever bijection F: R <-> R^n we apply, the integral of the probability density along that parametrization must converge, so p(F(x)) cannot decay more slowly than 1/x. :)
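
Spelling out that last step (my own addition; I write q(x) for the density expressed in the coordinate x of the parametrization, assuming it is q that integrates to a finite total):

```latex
q(x) \ge 0, \qquad \int_{1}^{\infty} q(x)\,dx < \infty .
% If q stayed above c/x from some point on, the integral would diverge:
\text{if } q(x) \ge \tfrac{c}{x} \text{ for some } c > 0 \text{ and all } x \ge x_0 , \text{ then }
\int_{x_0}^{\infty} q(x)\,dx \;\ge\; \int_{x_0}^{\infty} \tfrac{c}{x}\,dx \;=\; \infty ,
```

which contradicts integrability; so q has to dip below c/x eventually, for every c > 0.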

Also, look: suppose the "real" universe is a random point x from some infinite set X, and say we are considering a finite set of hypotheses H. The probability that a uniformly random hypothesis h ∈ H is the one closest to x is 1/|H|. So the larger H is, the less likely it is that any particular point from it is the best description of our universe! Which gives us Occam's razor in terms of accuracy instead of correctness, and it works for uncountable sets of universes.
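
A quick numerical check of the 1/|H| claim (my own sketch; the point set, the dimension, and the distribution of x are arbitrary choices, since the argument only needs that ties happen with probability zero):

```python
import random

random.seed(0)

# A fixed finite hypothesis set H: points in the plane standing in for hypotheses.
H = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]

def closest(x, H):
    """Index of the hypothesis closest to the 'true universe' x."""
    return min(range(len(H)), key=lambda i: (H[i][0] - x[0])**2 + (H[i][1] - x[1])**2)

trials, hits = 100_000, 0
for _ in range(trials):
    x = (random.gauss(0, 1), random.gauss(0, 1))  # random "true" universe
    h = random.randrange(len(H))                  # uniformly random hypothesis from H
    if h == closest(x, H):                        # did we happen to pick the best one?
        hits += 1

print(hits / trials, "vs", 1 / len(H))  # both should be close to 0.2
```

The hit rate matches 1/|H| no matter how the points of H are arranged, because exactly one of them is closest to x (ties have probability zero) and a uniform pick lands on that one with probability 1/|H|.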

And in this case it is almost surely impossible to describe the universe in a finite number of symbols: there are only countably many finite descriptions, so if the distribution of x is atomless, they cover a set of probability zero.