Comment author: Mark_Lu 11 July 2012 12:20:02PM 0 points [-]

Okay, I have a "stupid" question. Why is a hypothesis represented by a longer binary sequence less likely to be the 'true' data generator? I read the part below, but I don't get the example. Can someone explain it a different way?

We have a list, but we're trying to come up with a probability, not just a list of possible explanations. So how do we decide what the probability is of each of these hypotheses? Imagine that the true algorithm is produced in a most unbiased way: by flipping a coin. For each bit of the hypothesis, we flip a coin. Heads will be 0, and tails will be 1. In the example above, 01001101, the coin landed heads, tails, heads, heads, tails, and so on. Because each flip of the coin has a 50% probability, each bit contributes ½ to the final probability.

Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need evidence to show that they were necessary.
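To make the arithmetic concrete, here is a minimal sketch of the coin-flip prior described in the quoted passage (the function name `prior` is just illustrative, not from the original text):

```python
from fractions import Fraction

def prior(bits):
    """Prior probability of a hypothesis encoded as a bit string:
    each bit is an independent fair coin flip, so P = (1/2)**len(bits)."""
    return Fraction(1, 2 ** len(bits))

print(prior("01001101"))                    # 8-bit hypothesis: 1/256
print(prior("0" * 34))                      # 34-bit hypothesis: 1/2**34
print(prior("01001101") / prior("0" * 34))  # ratio: 2**26, about 67 million
```

So each extra bit halves the prior, and the 8-bit hypothesis starts out roughly 67 million times more probable than the 34-bit one; only evidence can make up that gap.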

Comment author: TheOtherDave 29 June 2012 06:18:35PM 1 point [-]

My usual attitude is that our brains are not unified coherent structures, our minds still less so, and that just because I want X doesn't mean I don't also want Y where Y is incompatible with X.

So the search for some single thing in my brain that I can maximize in order to obtain full satisfaction of everything I want is basically doomed to failure, and the search for something analogous in my mind still more so, and the idea that the former might also be the latter strikes me as pure fantasy.

So I approach these sorts of thought experiments from two different perspectives. The first is "do I live in a world where this is possible?" to which my answer is "probably not." The second is "supposing I'm wrong, and this is possible, is it good?"

That's harder to answer, but if I take seriously the idea that everything I value turns out to be entirely about states of my brain that can be jointly maximized via good enough wireheading, then sure, in that world good enough wireheading is a fine thing and I endorse it.

Comment author: Mark_Lu 30 June 2012 09:05:48AM 1 point [-]

just because I want X doesn't mean I don't also want Y where Y is incompatible with X

In real life you are still forced to choose between X and Y, and through wireheading you can still cycle between X and Y at different times.

Comment author: Lukas_Gloor 28 June 2012 08:47:12PM 2 points [-]

Sure, existing people tend to have such preferences. But hypothetically it's possible that they didn't, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.

Comment author: Mark_Lu 28 June 2012 09:10:44PM *  1 point [-]

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

Comment author: Lukas_Gloor 28 June 2012 03:03:25PM 5 points [-]

If we don't allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.

That's Peter Singer's view: prior-existence rather than total. A problem here seems to be that creating a being in intense suffering would be ethically neutral; and if even the slightest preference for doing so exists, and if there were no resource trade-offs with other preferences, then creating that miserable being would be the right thing to do. One can argue that in the first millisecond after creating the miserable being, one would be obliged to kill it, and that, foreseeing this, one ought not to have created it in the first place. But that seems inelegant. And one could further imagine creating the being somewhere unreachable, where it's impossible to kill it afterwards.

One can avoid this conclusion by stipulating axiomatically that it is bad to bring into existence a being with a "life not worth living". But that still leaves problems: for one thing, it seems ad hoc, and for another, it would then not matter whether one brings into existence a happy child or one with a neutral life, which again seems highly counterintuitive.

The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence. Depending on how much pre-existing beings want to have children, it wouldn't necessarily entail complete anti-natalism, but the overall goal would at some point be a universe without unsatisfied preferences. Or is there another way out?

Comment author: Mark_Lu 28 June 2012 08:30:19PM 1 point [-]

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well don't existing people have a preference about there not being such creatures? You can have preferences that are about other people, right?

Comment author: Stuart_Armstrong 28 June 2012 12:10:34PM *  2 points [-]

I suppose I take it on faith that there's a lot of room for more advanced technology before we hit mathematical limits.

Yes, yes, much progress can (and will) be made formalising our intuitions. But we don't need to assume ahead of time that the progress will take the form of "better individual utilities and definition of summation" rather than "other ways of doing population ethics".

In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don't give credit for creating potential people, isn't most people's preference not to be killed enough to stop preference utilitarians from killing them?

Yes, the act is not morally neutral in preference utilitarianism. In those cases, we'd have to talk about how many people we'd have to create with satisficiable preferences, to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour.

If existing people understand the repugnant conclusion, then they will understand it is a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.

This is not preference total utilitarianism. It's something like "satisfying the maximal number of preferences of currently existing people". In fact, it's closer to preference average utilitarianism (satisfy the current majority preference) than to total utilitarianism (probably not exactly that either; maybe a little more path dependency).

So I don't see what you mean when you say this reasoning "pre-supposes total utilitarianism".

Most reasons for rejecting the reasoning that blocks the repugnant conclusion pre-suppose total utilitarianism. Without the double negative: most justifications of the repugnant conclusion pre-suppose total utilitarianism.

Comment author: Mark_Lu 28 June 2012 12:58:27PM 4 points [-]

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler and easier to satisfy preferences so that there's more preference-satisfying in the world?

Comment author: private_messaging 27 June 2012 03:47:07PM *  0 points [-]

Well, it sure uses linear intuition. 3^^^3 is bigger than the number of distinct states; it's far past the point where you are only duplicating exact copies of the dust-speck experience, so you could reasonably expect the disutility to flatten out.
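For readers unfamiliar with the notation, 3^^^3 is Knuth's up-arrow notation. A minimal sketch of the recursion follows; `up` is a hypothetical helper name, and only the smallest cases are actually computable:

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is plain exponentiation,
    and each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # base case of the iterated operation
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3**(3**3) = 7625597484987
# up(3, 3, 3) is 3^^^3 = 3^^7625597484987: a power tower of 3s
# about 7.6 trillion levels high, far beyond anything computable.
```

Even 3^^3 is already in the trillions, which is why 3^^^3 dwarfs any plausible count of distinct mind-states.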

One can go perverse and proclaim that one treats duplicates the same, but then if there's a button which replaces everyone's mind with the mind of the happiest person, you should press it.

I think the stupidity of utilitarianism is the belief that morality is about the state, rather than about dynamic processes and state transitions. A simulation of a pinprick slowed down 1,000,000 times is not ultra-long torture. Murder is a form of irreversible state transition. Morality as it exists is about state transitions, not about states.

Comment author: Mark_Lu 27 June 2012 04:29:11PM -1 points [-]

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.

Comment author: Vladimir_M 27 June 2012 01:09:16AM 0 points [-]

Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn't seem to be the case.

Comment author: Mark_Lu 27 June 2012 09:00:53AM 0 points [-]

Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations, these comparisons might become arbitrarily precise.