I still don't see the actual practical benefit of suffering. I've lived a very sheltered life, physically and emotionally. I've never needed stitches or had my heart broken, I've always been pretty low-key emotionally, and I don't feel like I'm missing anything.

Besides, what are we going to do NEXT time we run into a more advanced race of aliens? I suppose we can just keep blowing up starlines, but what happens if they get the jump on us, like the Superhappies got the jump on the Babyeaters? It seems like we need powerful allies much more than we need our precious aches and pains.

I preferred the Superhappy ending, and in fact the story nudged me further in that direction. I guess I don't really get what the big deal about pain and suffering is; there's no physical pain on the internet, and it seems to work just fine.

If, instead of ten experiments per set, there were only three, who here would pick theory B instead?

Part of the problem here is that the situation presented is an extremely unusual one. Unless scientist B's theory is deliberately idiotic, experiment 21 has to strike at a point of contention between two theories which otherwise agree, and it has to be the only experiment out of 21 which does so. On top of that, both scientists have to pick one of these theories, and they have to pick different ones. Even if those theories are the only ones which make any sense, and they're equally likely from the available data, your chance of ending up in the situation the problem presents is less than 1/100.
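
To put toy numbers on that "less than 1/100" figure (every probability below is a guess of mine, not something derivable from the puzzle itself):

```python
# Back-of-the-envelope sketch of how unlikely the puzzle's setup is.
# All three probabilities are assumed toy values, not derived quantities.
p_one_contested = 0.4         # two sensible theories disagree on exactly one experiment
p_contested_is_21st = 1 / 21  # ...and it happens to be the one not yet run
p_scientists_split = 0.5      # ...and the two scientists pick opposite theories

print(p_one_contested * p_contested_is_21st * p_scientists_split)
# ~0.0095, i.e. just under 1/100
```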

Even if you're given 21 data points that follow a pattern predictable from the first 10, and you deliberately set out to construct a theory that fits the first 20 but not the 21st, it's quite tricky to do. I would be surprised if anyone could produce even a single example of the situation presented in this puzzle (or an analogous one with even more experiments) ever occurring in the real world.
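
For what it's worth, the one mechanical way I know of to manufacture such a theory is transparent sabotage: take the simple theory and tack on a term that vanishes at every observed experiment. A quick sketch in Python (the linear law and the numbers are made up purely for illustration):

```python
import math

def theory_a(x):
    # Scientist A's simple theory, fit to the first 10 experiments.
    # (The linear law is a made-up stand-in for whatever A's theory is.)
    return 2 * x + 3

def theory_b(x):
    # Sabotaged theory: the extra term is zero for x = 1..20, so it agrees
    # with theory A on every experiment actually run so far...
    bump = math.prod(x - i for i in range(1, 21))
    return theory_a(x) + bump

assert all(theory_a(x) == theory_b(x) for x in range(1, 21))  # identical on all 20

print(theory_a(21))  # 45
print(theory_b(21))  # 45 + 20! = 2432902008176640045
```

But a theory built that way is exactly the "deliberately idiotic" kind ruled out above, which is the point: nobody arrives at one honestly.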

Unless experiment 21 is of a different nature than experiments 1-20. A different level of precision, say. Then I'd go with scientist B, because with more data he can make a model that's more precise, and if precision suddenly matters a lot more, it's easy to see how he could be right and A could be wrong.

The question is whether the likelihood that the 21st experiment will validate the best theory constructed from all 20 data points and invalidate the best theory constructed from the first 10 (which, remember, also fits the other ten) is greater than the likelihood that scientist B is just being dumb.

The likelihood of the former is very hard to calculate, but it's definitely less than 1/11; in other words, over 90% of the time the first theory will still be, if not the best possible theory, good enough to predict the result of one more experiment. The likelihood that a random scientist, who has 20 data points and a theory that explains them, will come up with a different theory that is total crap is easily more than 1 in 10.
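
One way to make those numbers concrete is Laplace's rule of succession (my choice of tool for grounding the bound, not necessarily the only route to it):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule of succession: after s successes in n trials, the
    # probability of success on the next trial (uniform prior) is (s+1)/(n+2).
    return Fraction(successes + 1, trials + 2)

# Theory A made 10 genuine out-of-sample predictions and got all 10 right,
# so its chance of failing on experiment 21 comes out around:
print(1 - rule_of_succession(10, 10))  # 1/12, comfortably below 1/11

# And in the three-experiment variant asked about above, the failure
# bound balloons to 1/5, which is why theory B gets more tempting there:
print(1 - rule_of_succession(3, 3))    # 1/5
```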

Ergo, we trust theory A.

I still think CEV is dangerously vague. I can't really hold up anything as an alternative, and I agree that all the utility functions that have been offered so far have fatal flaws in them, but pointing at some humans with brains and saying "do what's in there, kind of! but, you know, extrapolate..." doesn't give me a lot of confidence.

I've asked this before without getting an answer, but can you break down CEV into a process with discrete ordered steps that transforms the contents of my head into the utility function the AI uses? Not just a haphazard pile of modifiers (knew more, thought faster, were more the people we would wish we were if we knew what we would know if we were the people we wanted to be), but an actual flowchart or something.
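
To be concrete about the shape of the answer I'm looking for, here's a skeleton in which every step is a named hole. The function names are placeholders of my own invention, not anything from an actual CEV specification; filling them in is precisely the question:

```python
# Purely illustrative skeleton of the requested breakdown. Every function
# below is a hypothetical placeholder -- specifying them IS the open problem.

def cev_utility_function(my_brain_contents, everyone_elses_extrapolations):
    prefs = extract_preferences(my_brain_contents)  # step 1: read off current values
    prefs = extrapolate_knowledge(prefs)            # step 2: "knew more"
    prefs = extrapolate_reflection(prefs)           # step 3: "thought faster"
    prefs = extrapolate_ideals(prefs)               # step 4: "were more the people
                                                    #          we wished we were"
    return cohere([prefs] + everyone_elses_extrapolations)  # step 5: blend across humanity

def extract_preferences(brain):
    raise NotImplementedError("how, exactly?")

def extrapolate_knowledge(prefs):
    raise NotImplementedError("more of which knowledge, applied in what order?")

def extrapolate_reflection(prefs):
    raise NotImplementedError("thought faster about which questions?")

def extrapolate_ideals(prefs):
    raise NotImplementedError("by whose standard of who we wish we were?")

def cohere(all_prefs):
    raise NotImplementedError("what happens when extrapolated volitions conflict?")
```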