All of Norman_Noman's Comments + Replies

I still don't see the actual practical benefit of suffering. I've lived a very sheltered life, physically and emotionally. I've never needed stitches, never had my heart broken; I've always been pretty low-key emotionally, and I don't feel like I'm missing anything.

Besides, what are we going to do NEXT time we run into a more advanced race of aliens? I suppose we can just keep blowing up starlines, but what happens if they get the jump on us, like the superhappies got the jump on the babyeaters? It seems like we need powerful allies much more than we need our precious aches and pains.

I preferred the superhappy ending, and in fact the story nudged me further in that direction. I guess I don't really get what the big deal about pain and suffering is; there's no physical pain on the internet, and it seems to work just fine.

If, instead of ten experiments per set, there were only three, who here would pick theory B instead?

Part of the problem here is that the situation presented is an extremely unusual one. Unless scientist B's theory is deliberately idiotic, experiment 21 has to strike at a point of contention between two theories which otherwise agree, and it has to be the only experiment out of 21 which does so. On top of that, both scientists have to pick one of these theories, and they have to pick different ones. Even if those theories are the only ones which make any sense, and they're equally likely from the available data, your chance of ending up in the situation t...

The question is whether the likelihood that the 21st experiment validates the best theory constructed from 20 data points and invalidates the best theory constructed from 10 data points (when the latter also fits the other ten) is greater than the likelihood that scientist B is just being dumb.

The likelihood of the former is very hard to calculate, but it's definitely less than 1/11; in other words, more than 10/11 of the time (over 90%), the first theory will still be, if not the best possible theory, good enough to predict the result of one more experiment. The likelihood ...
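To make that kind of estimate concrete, here's a rough Monte Carlo sketch. Everything about the setup -- a noisy quadratic law, polynomial fits standing in for "theories", a fixed tolerance for what counts as a correct prediction -- is an illustrative assumption of mine, not anything specified in the original problem. It just checks how often a theory that fit ten experiments and then correctly predicted ten more also gets experiment 21 right:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_law(x):
    # Hypothetical underlying law the scientists are trying to discover.
    return 0.5 * x**2 - 2.0 * x + 3.0

def run_trial(noise=0.5, tol=2.0):
    x = rng.uniform(0.0, 10.0, size=21)              # conditions of experiments 1-21
    y = true_law(x) + rng.normal(0.0, noise, size=21)
    # Scientist A: lowest-degree polynomial "theory" that fits the first 10 experiments.
    for degree in range(1, 6):
        coeffs = np.polyfit(x[:10], y[:10], degree)
        if np.max(np.abs(np.polyval(coeffs, x[:10]) - y[:10])) < tol:
            break
    # Keep only runs matching the story: A's theory also predicted experiments 11-20.
    if np.max(np.abs(np.polyval(coeffs, x[10:20]) - y[10:20])) >= tol:
        return None
    # Did it also predict experiment 21?
    return abs(np.polyval(coeffs, x[20]) - y[20]) < tol

outcomes = [r for r in (run_trial() for _ in range(20000)) if r is not None]
print(f"runs matching the story: {len(outcomes)}, "
      f"fraction also predicting experiment 21: {np.mean(outcomes):.3f}")
```

The exact fraction obviously depends on the noise level and tolerance I made up; the structure of the check is the point: condition on the story's setup, then see how often one more prediction holds.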

I still think CEV is dangerously vague. I can't really hold up anything as an alternative, and I agree that all the utility functions that have been offered so far have fatal flaws in them, but pointing at some humans with brains and saying "do what's in there, kind of! but, you know, extrapolate..." doesn't give me a lot of confidence.

I've asked this before without getting an answer, but can you break down CEV into a process with discrete ordered steps that transforms the contents of my head into the utility function the AI uses? Not just a haph...