In response to False Laughter
Comment author: RobinHanson 22 December 2007 01:40:20PM 14 points

Would jokes where Dilbert's pointy-haired boss says idiotic things be less funny if the boss were replaced by a co-worker? If so, does that suggest bosses are Hated Enemies, and Dilbert jokes bring false laughter?

In response to comment by RobinHanson on False Laughter
Comment author: CG_Morton 15 August 2011 11:32:13AM 0 points

I'd call that character humor, where the character of the boss is funny because of his exaggerated stupidity. It wouldn't be funny if the punchline were just the boss getting hit in the face by a pie (well, beyond the inherent humor of pie-to-face situations). Besides, most of the co-workers say idiotic things too!

Comment author: Hul-Gil 19 June 2011 12:22:26AM *  1 point

I think there would be more overall pleasure if mankind continued on its merry way. It might be possible to wirehead the entire human population for the rest of the universe's lifespan, for instance; any scenario that ends the human race would necessarily have less pleasure than that.

But would I want the entire human race to be wireheaded against their will? No... I don't think so. It's not the worst fate I can think of, and I wouldn't say it's a bad result; but it seems sub-optimal. I value pleasure, but I also care about how we get it - even I would not want to be just a wirehead, but rather a wirehead who writes and explores and interacts.

Does this mean I value things other than pleasure, if I think pleasure is the Holy Grail but it still matters how it is attained? I'm not certain. I suppose I'd say my values reduce to pleasure first and freedom second: a scenario in which everyone can choose how to obtain their pleasure is better than one in which everyone obtains a forced pleasure, but the latter is better than one in which everyone is free but most are not pleasured.

I'm not certain whether my freedom-valuing is necessary or just a relic, though. At least it (hopefully) protects against moral error by letting others choose their own paths.

Comment author: CG_Morton 19 June 2011 08:56:03PM 1 point

The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to predict what you would most enjoy more accurately than you can yourself.

I agree with the intuition that most people value freedom, and so would prefer a free pleasure over a forced one if the amount of pleasure were the same. But I think it's a situational intuition, one that may not hold in the future. (And is a value really a value if it's situational?)

Comment author: DanielLC 27 December 2009 07:49:43PM 0 points

I had the same problem.

I think it would need some genetic algorithm to figure out roughly how "close" it is to the solution, then build a tree of what happens after every combination of moves out to some depth, and take the branch that looks closest to the solution.

It would update its estimate of closeness as it searches. For example, if it's five moves away from something that looks about 37 moves away from finishing, then it's about 42 moves away now.

The problem with this is that when you start it, it will have no idea how close anything is to the solution except the solution itself, and there's no way it's getting to that by chance.

Essentially, you'd have to cheat: start by giving it almost-solved Rubik's cubes, and slowly give it more randomized ones. It won't learn on its own, but you can teach it pretty easily.
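[A minimal sketch of the scheme DanielLC describes, for concreteness. Every name here is illustrative rather than anything he specified: a lookup table stands in for the learned closeness estimator, and a simple ring of positions stands in for the cube, since a real cube model would be long.]

    import random

    def train_estimator(solved, neighbors, scrambles, max_depth=20):
        # A lookup table stands in for the learned "closeness" estimator.
        estimate = {solved: 0}
        for depth in range(1, max_depth + 1):      # curriculum: easy positions first
            for _ in range(scrambles):
                state = solved
                for _ in range(depth):             # scramble `depth` random moves
                    state = random.choice(neighbors(state))
                # Greedy lookahead: which neighbor looks closest to solved?
                best = min(neighbors(state),
                           key=lambda s: estimate.get(s, float("inf")))
                if best in estimate:
                    # Bootstrap update from the comment above: one move away
                    # from a position that looks k moves from finishing means
                    # about k + 1 moves away now.
                    estimate[state] = min(estimate.get(state, float("inf")),
                                          estimate[best] + 1)
        return estimate

    # Toy stand-in state space: positions on a ring, where "solved" is 0.
    N = 12
    est = train_estimator(0, lambda s: [(s - 1) % N, (s + 1) % N], scrambles=50)
    print(est)   # estimates grow with distance from 0, as intended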

Comment author: CG_Morton 14 June 2011 04:47:48PM 0 points

The difficulty of solving a Rubik's cube is exactly that it doesn't respond to simple heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves from complete (by the methods I looked up, at least). In general, -humans- solve a Rubik's cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute-force a solution much faster than it could manipulate the cube.

The more interesting question (I think) is how it figures out a model for the cube in the first place. What makes the cube a good problem is that it's designed to match human pattern intuitions (in that we prefer the colors to match, and we quickly notice the seams that we can rotate through), but an AI has no such intuitions.
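[On the brute-force point: the classic way to brute-force such puzzles optimally is iterative-deepening A* (IDA*), which Korf used to find optimal Rubik's cube solutions. A generic sketch follows, with a trivial string-sorting puzzle standing in for the cube, since the cube model and heuristic are the long parts in practice; all names are illustrative.]

    def ida_star(start, is_goal, neighbors, h):
        # Iterative-deepening A*: depth-first search with a growing f = g + h
        # cutoff. Memory-light, which is why it suits huge spaces like the cube.
        def search(path, g, bound):
            node = path[-1]
            f = g + h(node)
            if f > bound:
                return f                 # cutoff exceeded: report it upward
            if is_goal(node):
                return path              # success: return the move sequence
            minimum = float("inf")
            for nxt in neighbors(node):
                if nxt in path:          # avoid trivial back-and-forth cycles
                    continue
                result = search(path + [nxt], g + 1, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
            return minimum

        bound = h(start)
        while True:
            result = search([start], 0, bound)
            if isinstance(result, list):
                return result
            if result == float("inf"):
                return None              # no solution exists
            bound = result               # deepen to the next-smallest cutoff

    # Toy demo: "solve" a scrambled string by swapping adjacent characters.
    def swaps(s):
        return [s[:i] + s[i + 1] + s[i] + s[i + 2:] for i in range(len(s) - 1)]

    goal = "abc"
    # Admissible heuristic: one swap fixes at most two misplaced positions.
    h = lambda s: sum(c != g for c, g in zip(s, goal)) // 2
    print(ida_star("cab", lambda s: s == goal, swaps, h))
    # -> ['cab', 'acb', 'abc']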

Comment author: DanielLC 04 May 2011 03:32:58AM 0 points

I have a theory that I will post this comment. By posting the comment, I'm seeking evidence to confirm the theory. If I post the comment, the probability I assign to the theory will be higher than before.

Similarly, in Newcomb's problem, I seek evidence that box A has a million dollars, so I refrain from taking box B. There was money in box B, but I didn't take it, because that would give me evidence that box A was empty.

In short, there's one exception to this: when your choice is the evidence.

Comment author: CG_Morton 11 June 2011 08:16:29AM 0 points

The simple answer is that your choice is also probabilistic. Let's say your disposition is one that makes it very likely you will choose to take only box A. Then this fact about yourself becomes evidence for the proposition that A contains a million dollars. Likewise, if your disposition is to take both, it provides evidence that A is empty.

Now let's say that you're pretty damn certain that this Omega guy is who he says he is, and that he was able to predict this disposition of yours; then your decision to take only A stands as strong evidence that the box contains the million dollars. Likewise with the decision to take both.

But what if, you say, I already expected to be the kind of person who would take only box A? That is, what if the probability distribution over my expected dispositions was 95% "only box A" and 5% "both boxes"? Well, then it follows that your prior over the contents of box A is 95% that it contains the million and 5% that it is empty. As a result, the likely case of you actually choosing to take only box A need only have a small effect on your expectation of the contents of the box (a change of ~.05, to reach ~1). But if you introspect and find that really, you're the kind of person who would take both, then your expectation that the box has the million dollars drops by exactly 19 (= .95/.05) times as much as it would have been raised by the opposite evidence (resulting in ~0 chance that it contains the million). Making the less likely choice creates a much greater change in expectation, while the more common choice induces a smaller one, since you already expected the result of that choice.
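[The arithmetic checks out; a quick numeric sketch of it, assuming for simplicity that Omega is a perfect predictor:]

    from fractions import Fraction

    p_onebox = Fraction(95, 100)   # prior that I'm the kind of person who one-boxes
    p_million = p_onebox           # perfect predictor: P(A has the million) = P(one-box)

    swing_up = 1 - p_million       # observing "I one-box" raises P(million) to 1
    swing_down = p_million - 0     # observing "I two-box" drops it to 0
    print(swing_down / swing_up)   # 19: the unlikely choice moves belief 19x as far

    # Conservation of expected evidence: averaged over what you expect to
    # choose, the posterior equals the prior.
    print(p_onebox * 1 + (1 - p_onebox) * 0)   # 19/20, i.e. the original 0.95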

Hope that made sense.
