There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow... I haven't seen the episode in question, so I can't say who I think they took the biggest inspiration from.
9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly calculate that 2/(9+2) of Virtuists are male, but mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely to lead to the answer given.
Of course, it's irrelevant what the actual mistake was since the idea was to see if you'll let your biases sway you from the correct answer.
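For concreteness, a minimal check of the arithmetic above (nothing assumed beyond the numbers already stated in the comment):

```python
# Of the 16 people present, 9 are female Virtuists and 2 are male Virtuists.
female_virtuists = 9
male_virtuists = 2

correct = male_virtuists / (female_virtuists + male_virtuists)  # 2/11 ≈ 0.18
mistaken = male_virtuists / 12                                   # 2/12 = 1/6 ≈ 0.17

print(correct, mistaken)
```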
The later Ed Stories were better.
In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.
Good point, but not actually answering the question. I guess what I'm asking is: given a single use of the time machine (Primer-style, you turn it on...
I wasn't reasoning under NSCP, just trying to pick holes in cousin_it's model.
Though I'm interested in knowing why you think that one outcome is "more likely" than any other. What determines that?
You make a surprisingly convincing argument for people not being real.
Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.
Suppose you have a CD with infinite storage space. (If this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated.) Do the following:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.
What is the probability distribution of t...
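A minimal sketch of why this protocol has no self-consistent outcome, assuming a Novikov-style requirement that the message received must equal the message sent; the function names and the finite search range are illustrative, not from the original comment:

```python
# The protocol: map the incoming message to the outgoing message.
def respond(incoming):
    if incoming is None:     # nothing arrived from the future
        return 0             # write 0 on the CD and send it back
    return incoming + 1      # a CD with X arrived: send back X+1

# A self-consistent timeline requires respond(m) == m for some message m.
candidates = [None] + list(range(1000))  # "nothing" plus the first 1000 integers
fixed_points = [m for m in candidates if respond(m) == m]
print(fixed_points)  # [] -- no finite self-consistent value exists
```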
The flaw I see is this: why couldn't the super happies make separate decisions for humanity and the babyeaters?
I don't follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.
And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.
For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) A...
I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.
Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.
They came impressively close considering they didn't have any giant shoulders to stand on.
Yep. If nothing of what Archimedes did counts as ‘science’, you're using an overly narrow definition IMO.
I think it's more the point that some of us have more dislikable alleles than others.
Yeah, that should work.
The latter one doesn't work at all, since it sounds rather like you're ignoring the very advice you're trying to give.
I agree with Wilson's conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.
Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state "Th...
Or accept that each map is relevant to a different area, and don't try to apply a map to a part of the territory that it wasn't designed for.
And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.
Does it matter? People read Glenn Beck's books; this both raises awareness about the Singularity and makes it a more "mainstream" and popular thing to talk about.
I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.
Actually, x=y=0 still catches the same flaw; it just catches another one at the same time.
My personal philosophy in a nutshell.
Not all of them. Which applies to Old Testament gods too, I guess: the Bible is pretty consistent with that "no killing" thing.
Possible corollary: I can change my reality system by moving to another planet.
How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?
(1) I'm not hurting other people, only myself
But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
Well, I was at the time I wrote the comment. I wrote it specifically to get LW's opinions on the matter. I am now pro-choice.
And doesn't our society consider that children can't make legally binding statements until they're 16 or 18?
Oh for crying out loud. Please tell me it's fixed now.
I think it's been blown rather out of proportion by political forces, so what you're describing seems very likely.
Agreed.
I reject treating human life, or preservation of the human life, as a "terminal goal" that outweighs the "intermediate goal" of human freedom.
Hmm... not a viewpoint that I share, but one that I empathise with easily. I approve of freedom because it allows people to make the choices that make them happy, and because choice itself makes them happy. So freedom is valuable to me because it leads to happiness.
I can see where you're coming from though. I suppose we can just accept that our utility functions are different but not contradictory, and move on.
And a fetus lacks the sentience which makes humans so important, so killing it, while still undesirable, is less so than the loss of freedom which is the alternative. Thanks! I'm convinced again.
I don't think you meant to write "against", I think you probably meant "for" or "in favor of".
Typo, thanks for spotting it.
Also, I'm not entirely sure that Less Wrong wants to be used as a forum for politics.
I posted this on LessWrong instead of anywhere else because you can be trusted to remain unbiased to the best of your ability. I had completely forgotten that part of the wiki though; it's been a while since I actively posted on LW. Thanks for the reminder.
I naturally take a stance against abortion. It's easy to see why: a woman's freedom is much more important than another human's right to live
Fixed, thanks.
Good point. Since karma is gained by making constructive and insightful posts, any "exploit" that let one generate a lot of karma in a short time would either be quickly reversed or result in the "karma hoarder" becoming a very helpful member of the community. I think this post is more a warning that you may lose karma from making such polls, though since it's possible to gain or lose hundreds of points by making a post to the main page this seems irrelevant.
Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?
If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.
and I think other mathematicians I've met are generally bad with numbers
Let me add another data point to your analysis: I'm a mathematician, and a visual thinker. I'm not particularly "good with numbers", in the sense that if someone says "1000 km" I have to translate that to "the size of France" before I can continue the conversation. Similarly with other units. So I think this technique might work well for me.
I do know my times tables though.
Weiner has a blog? My life is even more complete.
No, then too.
IIRC, he uses this joke several times.
And if you reject science, you conclude that scientists are out to get you. The boot fits; upvoted.
Point taken.
Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.
Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.
In Dirk Gently's universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won't.
I made a prediction with sha1sum 0000000000000000000000000000000000000000. It's the prediction that sha1sum will be broken. I'll only reveal the exact formulation once I know whether it was true or false.
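The joke aside, the underlying commitment scheme is real: publish the hash now, reveal the text later. A minimal sketch, with a made-up prediction string for illustration:

```python
import hashlib

prediction = b"SHA-1 will have a published collision before 2020."
commitment = hashlib.sha1(prediction).hexdigest()
print(commitment)  # publish this now; reveal `prediction` later so anyone can verify it
```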
Out of curiosity, which time was Yudkowsky actually telling the truth? When he said those five assertions were lies, or when he said the previous sentence was a lie? I don't want to make any guesses yet. This post broke my model; I need to get a new one before I come back.
It is a process lesson, not a lesson about facts.
But, if you have to know the facts, it is easy enough to click on the provided link to the Meyer article and find out. Which, I suppose, is another process lesson.
Sorry, my mistake. I misread the OP.
I don't think it's quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).
Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is 1000p + 1000000(1-p). For maximum gain, take p = 0; i.e. always take only box B.
EDIT: Assuming money is proportional to utility.
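A minimal sketch of that calculation, assuming (as the EDIT says) that money is proportional to utility:

```python
# Expected gain as a function of p, the probability of taking both boxes.
def expected_gain(p):
    return p * 1_000 + (1 - p) * 1_000_000

for p in (0.0, 0.5, 1.0):
    print(p, expected_gain(p))  # maximised at p = 0: always take only box B
```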
The first time you enter the room, the boxes are both empty, so you can't ever get more than $1,000,000. But you're otherwise correct.
Er... yes. But I don't think it undermines my point that we are unlikely to be assimilated by aliens in the near future.
This is a very interesting read. I have, on occasion, been similarly aware of my own subsystems. I didn't like it much; there was a strong impulse to reassert a single "self", and I wouldn't be able to function normally in that state. Moreover, some parts of my psyche belonged to several subsystems at once, which made it apparently impossible to avoid bias (at least for the side that wanted to avoid bias).
In case you're interested, the split took the form of a debate between my atheist leanings, my Christian upbringing, and my rationalist "judge". In decreasing order of how much they were controlled by emotion.
we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.
Let's be Bayesian about this.
Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.
It is unlikely that there are no (or few) UFAIs in our galaxy or the wider universe; but if they do exist, it is unlikely that they would not already have assimilated us.
I don't have enough information to give exact probabi...
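A minimal sketch of the update being described; the prior and likelihood values below are placeholders, since the comment itself breaks off before giving numbers:

```python
# Hypotheses: H = "many UFAIs in our galaxy", not-H = "no/few UFAIs".
prior_many = 0.9                    # placeholder prior: UFAIs are probably out there
p_not_assimilated_if_many = 0.01    # placeholder: if many exist, surviving this long is unlikely
p_not_assimilated_if_few = 1.0      # if there are none/few, not being assimilated is expected

# Observation: Earth has not been assimilated.
evidence = (prior_many * p_not_assimilated_if_many
            + (1 - prior_many) * p_not_assimilated_if_few)
posterior_many = prior_many * p_not_assimilated_if_many / evidence
print(posterior_many)  # ≈ 0.08: the observation shifts weight toward "no/few UFAIs"
```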
Three years late, but: there doesn't even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.