All of Mark_Lu's Comments + Replies

Mark_Lu00

Okay, I have a "stupid" question. Why is the longer binary sequence that represents the hypothesis less likely to be the 'true' data generator? I read the part below, but I don't get the example; can someone explain it in a different way?

We have a list, but we're trying to come up with a probability, not just a list of possible explanations. So how do we decide what the probability is of each of these hypotheses? Imagine that the true algorithm is produced in a most unbiased way: by flipping a coin. For each bit of the hypothesis, we flip a coin. Heads

...
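A minimal sketch (not from the original thread) of the coin-flip construction quoted above: each bit of a hypothesis costs one fair coin flip, so a hypothesis that takes n bits to write down is produced with probability 2^-n, and every extra bit halves its prior weight. The bit strings below are made-up examples.

```python
from fractions import Fraction

def coin_flip_prior(bits: str) -> Fraction:
    """Probability of producing exactly this bit string by flipping a fair coin once per bit."""
    return Fraction(1, 2) ** len(bits)

# Two made-up hypothesis encodings, one short and one long:
short_hypothesis = "1011"            # 4 bits
long_hypothesis = "101100111000"     # 12 bits

print(coin_flip_prior(short_hypothesis))   # 1/16
print(coin_flip_prior(long_hypothesis))    # 1/4096
# Each extra bit halves the prior probability, so the 12-bit hypothesis
# starts out 2^8 = 256 times less likely than the 4-bit one.
```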
2Viliam_Bur
This is a completely different explanation, and I am not sure about its correctness, but it may help your intuition.

Imagine that you randomly generate programs in a standard computer language. And -- contrary to the S.I. assumption -- let's say that all those programs have the same length N.

Let's try a few examples. For simplicity, let's suppose that the input is in variable x, and the programming language has only two commands: "x++" (increases x by one) and "return x" (returns the current value of x as the result). There are four possible programs:

x++; x++;
x++; return x;
return x; x++;
return x; return x;

The first program does not return a value. Let's say that it is not a valid program, and remove it from the pool. The second program returns the value "x+1". The third and fourth programs both return the value "x", because both are equivalent to the one-line program "return x".

It seems that the shorter programs have an advantage, because a longer program can give the same results as a shorter program. More precisely, for every short program there are infinitely many equivalent longer programs; as you increase the maximum allowed length N, you increase their numbers exponentially (though not necessarily with base 2; that is just a result of our example language having 2 instructions). This is why, in this example, the one-line program won the competition even though it wasn't allowed to participate.

OK, this is not a proof, just an intuition. Seems to me that even if you use something else instead of S.I., the S.I. may appear spontaneously anyway; so it is a "natural" distribution. (Similarly, the Gauss curve is a natural distribution for adding independent random variables. If you don't like it, just choose a different curve F(x), and then calculate what happens if you add together a thousand or a million random numbers selected by F(x)... the result will be more and more similar to the Gauss curve.)
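The counting argument above can be checked directly with a small sketch (mine, not Viliam_Bur's), assuming the same two-instruction language: enumerate every length-N program, run it, and tally the outputs. The value returned by the shortest program ("return x" alone) accounts for roughly half of all valid programs at every length N.

```python
from itertools import product
from collections import Counter

INSTRUCTIONS = ("x++", "return x")

def run(program, x):
    """Execute a tuple of instructions on input x; return the result, or None if it never returns."""
    for instr in program:
        if instr == "x++":
            x += 1
        else:  # "return x"
            return x
    return None  # no return statement -> not a valid program

def output_counts(n, x=0):
    """Count how many length-n programs produce each output value."""
    counts = Counter()
    for program in product(INSTRUCTIONS, repeat=n):
        result = run(program, x)
        if result is not None:
            counts[result] += 1
    return counts

for n in (2, 5, 10):
    print(n, dict(output_counts(n)))
# For n=2 this reproduces the comment's tally: output 0 ("return x") twice, output 1 once.
# As n grows, the behavior of the shortest program keeps the largest share of programs.
```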
Mark_Lu10

just because I want X doesn't mean I don't also want Y where Y is incompatible with X

In real life you are still forced to choose between X and Y, and through wireheading you can still cycle between X and Y at different times.

Mark_Lu00

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

Mark_Lu10

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well, don't existing people have a preference that there not be such creatures? You can have preferences that are about other people, right?

3Lukas_Gloor
Sure, existing people tend to have such preferences. But hypothetically they might not, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.
Mark_Lu40

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler, easier-to-satisfy preferences, so that there's more preference satisfaction in the world?

3Lukas_Gloor
Indeed, that's a very counterintuitive conclusion. It's the reason why most preference-utilitarians I know hold a prior-existence view.
Mark_Lu00

To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.

Right, but if/when we get to (partial) brain emulations (in large quantities), we might be able to do the same thing for 'morality' that we do today for recognizing cats with a computer.

2private_messaging
Agreed. We may even see how it is that certain algorithms (very broadly speaking) can feel pain etc., and actually start defining something agreeable from first principles.

Meanwhile, all that "3^^^3 people with dust specks is worse than one person tortured" stuff is to morality as scholasticism is to science. The only value it may have is in highlighting the problem with approximations and with handwavy reasoning: nobody said that the number of possible people is > 3^^^3 (which is false), even though that statement was part of the reasoning and should have been stated explicitly and then rejected, invalidating everything that followed. Alternatively, a statement that identical instances matter should have been made, which in itself leads to a multitude of really dumb decisions, whereby the life of a conscious robot that has thicker wires in its computer (or otherwise uses redundant hardware) is worth more.
Mark_Lu00

similar to trying to recognize cats in pictures by reading R,G,B number value array and doing some arithmetic

But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently or accurately as people do, but that's because brains have more efficient architectures and algorithms than today's general-purpose computers.
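As a rough sketch of the "read the R,G,B numbers and do arithmetic" framing (mine, not from the thread): a toy linear classifier flattens the pixel array and computes a weighted sum. Real systems learn far richer features than this, but the computation is still arithmetic on pixel values; the weights below are untrained placeholders and the output is meaningless until they are learned from labeled images.

```python
import numpy as np

def cat_score(image_rgb: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Toy linear classifier: flatten the R,G,B values and do arithmetic on them.

    image_rgb: array of shape (height, width, 3) with values in [0, 1].
    weights/bias: parameters that would normally be learned from labeled
    cat/dog images; here they are placeholders to show the computation.
    """
    x = image_rgb.reshape(-1)              # the raw R,G,B number array
    logit = float(x @ weights) + bias      # weighted sum of pixel values
    return 1.0 / (1.0 + np.exp(-logit))    # squash to a "probability of cat"

# Hypothetical usage with random data and untrained (random) parameters:
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
weights = rng.normal(scale=0.01, size=32 * 32 * 3)
score = cat_score(image, weights, bias=0.0)
print(f"cat probability: {score:.3f}")     # meaningless until the parameters are trained
```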

0private_messaging
Yes, it is of course possible in principle (in fact I am using cats as an example because Google just did exactly that). The point is that a person can't do anything equivalent to what the human visual cortex does in a fraction of a second, even working with paper and pencil for multiple lifetimes. Morality and immorality, just like cat recognition, rely on some innate human ability to connect symbols with reality.

Edit: To clarify. To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down. To tell which actions are moral or not, humans employ some method that is likewise hopelessly impossible for them to write down. All you can do is write down guidelines and add some example pictures of cats and dogs. Rules like utilitarianism are along the lines of "if the eyes have vertical slits, it's a cat": they mis-recognize a lizard as a cat and fail to recognize a cat that has closed its eyes. (There is also the practical matter of lawmaking, where you want to restrict the diversity of moral judgment to something sane, and thus you use principles like "if it doesn't harm anyone else, it's okay".)
Mark_Lu-20

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.

-5private_messaging
Mark_Lu10

Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from, e.g., some types of food. The more we understand about how different types of pleasure are implemented in the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations, these comparisons might become arbitrarily precise.

4Vladimir_M
You make it sound as if there is some signal or register in the brain whose value represents "pleasure" in a straightforward way. To me it seems much more plausible that "pleasure" reduces to a multitude of variables that can't be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared. That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.