9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly calculate that 2/(9+2) of Virtuists are male, but mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely to lead to the answer given.
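Explicitly: the correct calculation gives 2/(9+2) = 2/11 ≈ 0.18, while adding 9 and 2 to get 12 gives 2/12 = 1/6 ≈ 0.17.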
Of course, it's irrelevant what the actual mistake was since the idea was to see if you'll let your biases sway you from the correct answer.
The later Ed Stories were better.
In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.
Good point, but not actually answering the question. I guess what I'm asking is: given a single use of the time machine (Primer-style, you turn it on...
Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.
Suppose you have a CD with infinite storage space - if this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated. Do the following:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.
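As code, the two rules are simply (a minimal sketch; how the CD actually gets sent back is left abstract, since that is the whole puzzle):

```python
def one_pass(arrived, incoming_number=None):
    """One pass through the protocol: decide what to write on the outgoing CD."""
    if not arrived:
        return 0                    # nothing arrived: write a 0 and send it back
    return incoming_number + 1      # a CD with X arrived: write X + 1 and send that back

one_pass(arrived=False)                         # -> 0
one_pass(arrived=True, incoming_number=41)      # -> 42
```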
What is the probability distribution of t...
The flaw I see is: why couldn't the super happies make separate decisions for humanity and the baby eaters?
I don't follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.
And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.
For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) A...
I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.
Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.
I agree with Wilson's conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.
Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state "Th...
(1) I'm not hurting other people, only myself
But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
I reject treating human life, or preservation of the human life, as a "terminal goal" that outweighs the "intermediate goal" of human freedom.
Hmm... not a viewpoint that I share, but one that I empathise with easily. I approve of freedom because it allows people to make the choices that make them happy, and because choice itself makes them happy. So freedom is valuable to me because it leads to happiness.
I can see where you're coming from though. I suppose we can just accept that our utility functions are different but not contradictory, and move on.
I don't think you meant to write "against", I think you probably meant "for" or "in favor of".
Typo, thanks for spotting it.
Also, I'm not entirely sure that Less Wrong wants to be used as a forum for politics.
I posted this on LessWrong instead of anywhere else because you can be trusted to remain unbiased to the best of your ability. I had completely forgotten that part of the wiki though; it's been a while since I actively posted on LW. Thanks for the reminder.
Good point. Since karma is gained by making constructive and insightful posts, any "exploit" that let one generate a lot of karma in a short time would either be quickly reversed or result in the "karma hoarder" becoming a very helpful member of the community. I think this post is more a warning that you may lose karma from making such polls, though since it's possible to gain or lose hundreds of points by making a post to the main page, this seems irrelevant.
Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?
If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.
and I think other mathematicians I've met are generally bad with numbers
Let me add another data point to your analysis: I'm a mathematician, and a visual thinker. I'm not particularly "good with numbers", in the sense that if someone says "1000 km" I have to translate that to "the size of France" before I can continue the conversation. Similarly with other units. So I think this technique might work well for me.
I do know my times tables though.
Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.
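A toy sketch of the kind of adjustment I mean (the log-utility model and all the numbers here are invented purely for illustration):

```python
# Two charities whose utility from x dollars donated so far is a * log(1 + x),
# so the marginal utility per dollar is a / (1 + x).
charities = {"A": {"a": 5.0, "total": 0.0},
             "B": {"a": 3.0, "total": 0.0}}

def marginal_utility_per_dollar(c):
    return c["a"] / (1.0 + c["total"])

# Agents arrive one at a time, each donating $1 to whichever charity currently
# offers the most utility per marginal dollar -- i.e. they adjust as they go.
for _ in range(1000):
    best = max(charities.values(), key=marginal_utility_per_dollar)
    best["total"] += 1.0

print({name: round(marginal_utility_per_dollar(c), 5)
       for name, c in charities.items()})
# Both marginal utilities end up roughly equal: the equilibrium.
```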
Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.
I don't think it's quite the same. The underlying mathematics is the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).
Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is 1000p + (1-p)*1000000. For maximum gain, take p = 0; i.e. always take only box B.
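Spelled out: 1000p + (1-p)*1000000 = 1000000 - 999000p, which decreases as p increases, so it is maximised at p = 0.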
EDIT: Assuming money is proportional to utility.
This is a very interesting read. I have, on occasion, been similarly aware of my own subsystems. I didn't like it much; there was a strong impulse to reassert a single "self", and I wouldn't be able to function normally in that state. Moreover, some parts of my psyche belonged to several subsystems at once, which made it apparently impossible to avoid bias (at least for the side that wanted to avoid bias).
In case you're interested, the split took the form of a debate between my atheist leanings, my Christian upbringing, and my rationalist "judge". In decreasing order of how much they were controlled by emotion.
we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.
Let's be Bayesian about this.
Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.
Prior: it is unlikely that there are no/few UFAIs in our galaxy/universe. Likelihood: if they do exist, it is unlikely that they would not already have assimilated us.
I don't have enough information to give exact probabi...
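The shape of the update, though, is just Bayes' theorem:

P(UFAIs nearby | not assimilated) = P(not assimilated | UFAIs nearby) * P(UFAIs nearby) / P(not assimilated)

A high prior P(UFAIs nearby) gets multiplied by a very small likelihood P(not assimilated | UFAIs nearby), so the posterior can still come out low.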
Three years late, but: there doesn't even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.