Comments

Mark_Lu · 12y · 0 · 0

Okay, I have a "stupid" question. Why is the longer binary sequence that represents the hypothesis less likely to be the 'true' data generator? I read the part below, but I don't get the example; can someone explain it in a different way?

We have a list, but we're trying to come up with a probability, not just a list of possible explanations. So how do we decide what the probability is of each of these hypotheses? Imagine that the true algorithm is produced in a most unbiased way: by flipping a coin. For each bit of the hypothesis, we flip a coin. Heads will be 0, and tails will be 1. In the example above, 01001101, the coin landed heads, tails, heads, heads, tails, and so on. Because each flip of the coin has a 50% probability, each bit contributes ½ to the final probability.

Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need evidence to show that they were necessary.
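To make the quoted argument concrete: under that coin-flip prior, a specific hypothesis of length L bits gets prior probability (1/2)^L, so every extra bit halves it. A minimal sketch (Python, purely illustrative) of the 8-bit vs. 34-bit comparison:

```python
# Illustrative sketch of the coin-flip prior described above: a specific
# bit string of length L has prior probability (1/2) ** L, so each extra
# bit halves the prior.

def coin_flip_prior(num_bits: int) -> float:
    """Prior probability of one particular hypothesis that is num_bits long."""
    return 0.5 ** num_bits

p_short = coin_flip_prior(8)    # 2**-8  ~= 0.0039
p_long = coin_flip_prior(34)    # 2**-34 ~= 5.8e-11

# The 8-bit hypothesis starts out 2**26 (~67 million) times more probable
# a priori; it would take a lot of evidence to justify those 26 extra bits.
print(p_short / p_long)         # 67108864.0
```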

Mark_Lu · 12y · 1 · 0

just because I want X doesn't mean I don't also want Y where Y is incompatible with X

In real life you are still forced to choose between X and Y; with wireheading you could at least cycle between X and Y at different times.

Mark_Lu · 12y · 0 · 0

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

Mark_Lu · 12y · 1 · 0

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well, don't existing people have a preference for there not being such creatures? You can have preferences that are about other people, right?

Mark_Lu · 12y · 4 · 0

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler, easier-to-satisfy preferences, so that there's more preference satisfaction in the world?

Mark_Lu · 12y · 0 · 0

To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.

Right, but if/when we get to (partial) brain emulations in large quantities, we might be able to do for 'morality' the same thing we do today when we get a computer to recognize cats.

Mark_Lu · 12y · 0 · 0

similar to trying to recognize cats in pictures by reading R,G,B number value array and doing some arithmetic

But a computer can recognize cats by reading pixel values in pictures, can't it? Maybe not as efficiently and accurately as people, but that's because brains have more efficient architectures/algorithms than today's general-purpose computers.
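As a toy illustration of "reading pixel values and doing some arithmetic" (not how serious vision systems work, and using synthetic stand-in arrays rather than real photos), a nearest-class-mean classifier over flattened R,G,B values might look like this:

```python
import numpy as np

# Toy sketch: classify images by comparing flattened R,G,B pixel arrays
# to the mean image of each class. The data is synthetic; real cat/dog
# recognition needs far richer features, but it is still arithmetic
# over pixel values.

rng = np.random.default_rng(0)

def make_images(mean_color, n=50, size=(16, 16, 3)):
    """Generate n noisy images centered on a per-channel mean color."""
    return rng.normal(loc=mean_color, scale=30.0, size=(n, *size))

cats = make_images(mean_color=[120, 100, 90])   # fake "cat" color statistics
dogs = make_images(mean_color=[90, 110, 130])   # fake "dog" color statistics

cat_mean = cats.reshape(len(cats), -1).mean(axis=0)
dog_mean = dogs.reshape(len(dogs), -1).mean(axis=0)

def classify(image):
    """Nearest class mean over the flattened pixel array."""
    pixels = image.reshape(-1)
    dist_cat = np.linalg.norm(pixels - cat_mean)
    dist_dog = np.linalg.norm(pixels - dog_mean)
    return "cat" if dist_cat < dist_dog else "dog"

test_image = make_images(mean_color=[120, 100, 90], n=1)[0]
print(classify(test_image))   # usually "cat" on this synthetic data
```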

Mark_Lu · 12y · -2 · 0

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.

Mark_Lu · 12y · 1 · 0

Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from, e.g., some types of food. The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations, such comparisons might become arbitrarily precise.
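A purely hypothetical sketch of what such a comparison could look like, assuming we could ever record comparable "pleasure-related" activation vectors from two brains or brain simulations (the data below is random stand-in noise, and cosine similarity is just one arbitrary choice of metric):

```python
import numpy as np

# Hypothetical illustration only: compare two people's "pleasure-related"
# activation patterns, represented here as random stand-in vectors rather
# than real neural recordings.

rng = np.random.default_rng(1)

person_a = rng.normal(size=1000)                         # hypothetical activation pattern
person_b = person_a + rng.normal(scale=0.3, size=1000)   # similar pattern with individual variation

def cosine_similarity(x, y):
    """One crude measure of how similar two activation patterns are."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

print(cosine_similarity(person_a, person_b))  # close to 1.0 -> similar patterns
```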