When a claim lacks evidence, testing it is difficult, weird, and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my stand...
Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.
So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)
If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.
If false - then the rational assessment would be to disbelieve such claims. But for most su...
I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.
In particular, let me quote the final paragraph:
...There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them
No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there aren't people who are moderately weirded out, but not quite enough to leave.
It's extremely unlikely that there are people who aren't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.
Not quite. I outlined the things that have to be going on for me to be making a decision.
In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.
If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's prob...
Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).
The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.
If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds ...
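Completing that arithmetic is mechanical; here is a minimal sketch (assuming, as above, that all three tests came back positive and are independent given the true state):

```python
# Odds-form Bayesian update with the likelihood ratios given above.
from fractions import Fraction

likelihood_ratios = [Fraction(3, 2), Fraction(4, 1), Fraction(5, 3)]  # tests A, B, C
posterior_odds = Fraction(1, 19)  # prior odds: Pr[disease] : Pr[healthy] = 1:19

for lr in likelihood_ratios:
    posterior_odds *= lr  # multiply in each test's evidence

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, float(posterior_prob))  # 10/19, i.e. Pr[disease] ~ 0.345
```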
If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?
On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!"
Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.)
Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that wi...
This ought to be verified by someone to whom the ideas are genuinely unfamiliar.
I know that's what you're trying to say because I would like to be able to say that, too. But here are the problems we run into.
Try writing down "For all x, some number of subtract-1's brings it to 0". We can write "∀x. ∃y. F(x,y) = 0", but in place of F(x,y) we want "the result of y iterations of subtracting 1 from x". This is not something we could write down in first-order logic.
We could write down sub(x,y,0) (in your notation) in place of F(x,y)=0 on the grounds that it ought to mean the same thing as "y iterations of su
Repeating S n times is not addition: addition is the thing defined by those axioms, no more, and no less. You can prove the statements:
∀x. plus(x, 1, S(x))
∀x. plus(x, 2, S(S(x)))
∀x. plus(x, 3, S(S(S(x))))
and so on, but you can't write "∀x. plus(x, n, S(S(...n...S(x))))" because that doesn't make any sense. Neither can you prove "For every x, x+n is reached from x by applying S to x some number of times" because we don't have a way to say that formally.
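To make the "and so on" concrete, here is a minimal sketch (just a mechanical way of writing the schema down; the choice of Python is incidental) of how each concrete n yields its own separate sentence, generated from outside the theory rather than quantified over inside it:

```python
# Each concrete n gives a distinct first-order sentence; there is no single
# sentence in which n ranges over "numbers of applications of S".
def numeral(n):
    """The numeral for n: S applied n times to 0."""
    term = "0"
    for _ in range(n):
        term = f"S({term})"
    return term

def iterate_S(n, var="x"):
    """var with S applied n times."""
    term = var
    for _ in range(n):
        term = f"S({term})"
    return term

for n in range(1, 4):
    print(f"∀x. plus(x, {numeral(n)}, {iterate_S(n)})")
# ∀x. plus(x, S(0), S(x))
# ∀x. plus(x, S(S(0)), S(S(x)))
# ∀x. plus(x, S(S(S(0))), S(S(S(x))))
```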
From outside the Peano Axioms, where we have our own notion of "number", we ...
What makes you think that decision making in our brains is free of "regular certainty in physics"? Deterministic systems such as weather patterns can be unpredictable enough.
To be fair, if there's some butterfly-effect nonsense going on where the exact position of a single neuron ends up determining your decision, that's not too different from randomness in the mechanics of physics. But I hope that when I make important decisions, the outcome is stable enough that it wouldn't be influenced by either of those.
I'd say this is not needed: when people say "Snow is white", we know that it really means "Snow seems white to me", so saying it as "Snow seems white to me" adds length without adding information.
Ah, but imagine we're all-powerful reformists that can change absolutely anything! In that case, we can add a really simple verb that means "seems-to-me" (let's say "smee" for short) and then ask people to say "Snow smee white".
Of course, this doesn't make sense unless we provide alternatives. For inst...
Insurance makes a profit in expectation, but an insurance salesman does have some tiny chance of bankruptcy, though I agree that this is not important. What is important, however, is that an insurance buyer is not guaranteed a loss, which is what distinguishes it from other Dutch books for me.
Prospect theory and similar ideas are close to an explanation of why the Allais Paradox occurs. (That is, why humans pick gambles 1A and 2B, even though this is inconsistent.) But, to my knowledge, while utility theory is both a (bad) model of humans and a guide to ho...
Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There's no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value "Having $1 million" and "Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise".
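As a toy illustration of that point (assuming, purely for the sake of the example, a concave utility function u(w) = √w; nothing hinges on that particular choice):

```python
# Both options have the same expected wealth, but a concave utility
# prefers the sure million.
from math import sqrt

u = sqrt  # one concave choice among many

certain = u(1_000_000)                      # "Having $1 million"
gamble = 0.5 * u(2_000_000) + 0.5 * u(0)    # 50% of $2 million, 50% of nothing

print(round(certain), round(gamble))  # 1000 vs 707: the sure million wins
```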
Insurance does take advantage of this, and it's weird in that both the insurance salesman and the buyers of insurance end up better off in expec...
When it comes to neutral geometry, nobody's ever defined "parallel lines" in any way other than "lines that don't intersect". You can talk about slopes in the context of the Cartesian model, but the assumptions you're making to get there are far too strong.
As a consequence, no mathematicians ever tried to "prove that parallel lines don't intersect". Instead, mathematicians tried to prove the parallel postulate in one of its equivalent forms, of which some of the more compelling or simple are:
The sum of the angles in a trian
Understandable, perhaps. In mathematics, it is very easy to say understandable things that are simply false. In this case, those false things become nonsense when you realize that the meaning of "parallel lines" is "lines that do not intersect".
You might say that even if an explanation gets these facts completely wrong, it is still a good explanation if it makes you think the right things. I say that such an explanation goes against the spirit of all mathematics. It is not enough that your argument is understandable, for many understandabl...
Only a single mile to the mile? I've seen maps in biology textbooks that were much larger than that.
Okay, then interpret my answer as "rape and murder are bad because they make others sad, and making others sad is bad by definition".
You can always keep asking why. That's not particularly interesting.
It occurs to me that we can express this problem in the following isomorphic way:
1. Omega makes an identical copy of you.
2. One copy exists for a week. You get to pick whether that week is torture or nirvana.
3. The other copy continues to exist as normal, or maybe is unconscious for a week first, and depending on what you picked for step 2, it may lose or receive lots of money.
I'm not sure how enlightening this is. But we can now tie this to the following questions, which we also don't have answers to: is an existence of torture better than no existence at all? And is an existence of nirvana good when it does not have any effect on the universe?
Yes, this, exactly.
I do nice things for myself not because I have deep-seated beliefs that doing nice things for myself is the right thing to do, but because I feel motivated to do nice things for myself.
I'm not sure that I could avoid doing those things for myself (it might require willpower I do not have) or that I should (it might make me less effective at doing other things), or that I would want to if I could and should (doing nice things for myself feels nice).
But if we invent a new nice thing to do for myself that I don't currently feel motivated to...
I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?
I am not trying to fight the hypothetical, I am trying to explain why one's intuition cannot resist fighting it. This makes the answer I give seem unintuitive.
So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has merely 99% accuracy.
Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.
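To make the comparison explicit, writing V for however much you value not committing suicide (a number I'm supplying for illustration, not part of the original setup):

EV(precommit) ≈ 0.99 × $1000 − 0.01 × V = $990 − 0.01V
EV(don't precommit) ≈ 0.01 × $1000 = $10

so precommitting only comes out ahead if V is below roughly $98,000, i.e. if you value your life at less than that.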
Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice....
Result spoilers: Fb sne, yvxvat nypbuby nccrnef gb or yvaxrq gb yvxvat pbssrr be pnssrvar, naq gb yvxvat ovggre naq fbhe gnfgrf. (Fbzr artngvir pbeeryngvba orgjrra yvxvat nypbuby naq yvxvat gb qevax ybgf bs jngre.)
I haven't done the responsible thing and plotted these (or, indeed, done anything else besides take whatever correlation coefficient my software has seen fit to provide me with), so take with a grain of salt.
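For what it's worth, the responsible version would look something like the sketch below (the column names are made up for illustration, and kept generic to avoid un-rot13ing the results; the real poll export has whatever fields it has):

```python
# Plot the raw data and compute the correlation, rather than trusting
# a bare correlation coefficient.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

df = pd.read_csv("poll_results.csv")        # hypothetical export of the poll
x, y = df["trait_a"], df["trait_b"]         # hypothetical column names

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3f}")

plt.scatter(x, y, alpha=0.3)  # eyeball the scatter before believing r
plt.xlabel("trait A")
plt.ylabel("trait B")
plt.show()
```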
I believe editing polls resets them, so there's no reason to do it if it's just an aesthetically unpleasant mistake that doesn't hurt the accuracy of the results.
Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.
The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.
What, I wonder, are the ...
That wasn't obvious to me. It's certainly false that "people who use the strategy of always paying have the same odds of losing $1000 as people who use the strategy of never paying". This means that the oracle's prediction takes its own effect into account. When asking about my future, the oracle doesn't ask "Will Kindly give me $1000 or die in the next week?" but "After hearing a prophecy about it, will Kindly give me $1000 or die in the next week?"
Hearing the prediction certainly changes the odds that the first clause will come...
You're saying that it's common knowledge that the oracle is, in fact, predicting the future; is this part of the thought experiment?
If so, there's another issue. Presumably I wouldn't be giving the oracle $1000 if the oracle hadn't approached me first; it's only a true prediction of the future because it was made. In a world where actual predictions of the future are common, there should be laws against this, similar to laws against blackmail (even though it's not blackmail).
(I obviously hand over the $1000 first, before trying to appeal to the law.)
Given that I remember spending a year of AP statistics only doing calculations with things we assumed to be normally distributed, it's not an unreasonable objection to at least some forms of teaching statistics.
Hopefully people with statistics degrees move beyond that stage, though.
There are varieties of strawberries that are not sour at all, so I suppose it's possible that you simply have limited experience with strawberries. (Well, you probably do, since you don't like them; but maybe that's the reason you don't think they're sour, as opposed to some fundamental difference in how you taste things.)
I actually don't like the taste of purely-sweet strawberries; the slightly-sour ones are better. A very unripe strawberry would taste very sour, but not at all sweet, and its flesh would also be very hard.
Do you have access to the memory wiping mechanism prior to getting your memory wiped tomorrow?
If so, wipe your memory, leaving yourself a note: "Think of the most unlikely place where you can hide a message, and leave this envelope there." The envelope contains the information you want to pass on.
Then, before your memory is wiped tomorrow, leave yourself a note: "Think of the most unlikely place where you can hide a message, and open the envelope hidden there."
Hopefully, your two memory-wiped selves should be sufficiently similar that ...
Wouldn't you forget the password once your memories are wiped?
In an alternate universe, Peter and Sarah could have had the following conversation instead:
P: I don't know the numbers.
S: I knew you didn't know the numbers.
P: I knew that you knew that I didn't know the numbers.
S: I still don't know the numbers.
P: Now I know the numbers.
S: Now I also know the numbers.
But I'm worried that my version of the puzzle can no longer be solved without brute force.
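For what it's worth, brute force is at least cheap here. Below is a sketch of the check, assuming the standard setup (two numbers with 2 ≤ x ≤ y ≤ 99, Peter knows the product, Sarah knows the sum); I haven't verified that this variant actually pins down a unique pair:

```python
# Filter candidate pairs by each statement in the alternate dialogue.
from collections import Counter

pairs = [(x, y) for x in range(2, 100) for y in range(x, 100)]

def prod_counts(cands):
    return Counter(x * y for x, y in cands)

def sum_counts(cands):
    return Counter(x + y for x, y in cands)

all_prods = prod_counts(pairs)

# P: "I don't know the numbers." -- the product is ambiguous.
s1 = [p for p in pairs if all_prods[p[0] * p[1]] > 1]

# S: "I knew you didn't know." -- every pair with this sum has an ambiguous product.
ok_sums = {s for s in sum_counts(pairs)
           if all(all_prods[x * y] > 1 for x, y in pairs if x + y == s)}
s2 = [p for p in s1 if p[0] + p[1] in ok_sums]

# P: "I knew that you knew that I didn't know." -- every pair with this
# product has a sum satisfying the previous statement.
ok_prods = {pr for pr in all_prods
            if all(x + y in ok_sums for x, y in pairs if x * y == pr)}
s3 = [p for p in s2 if p[0] * p[1] in ok_prods]

# S: "I still don't know the numbers." -- the sum is still ambiguous.
sc = sum_counts(s3)
s4 = [p for p in s3 if sc[p[0] + p[1]] > 1]

# P: "Now I know the numbers." -- the product is now unique.
pc = prod_counts(s4)
s5 = [p for p in s4 if pc[p[0] * p[1]] == 1]

# S: "Now I also know the numbers." -- the sum is now unique.
sc2 = sum_counts(s5)
s6 = [p for p in s5 if sc2[p[0] + p[1]] == 1]

print(s6)  # pairs consistent with the whole conversation
```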
I believe I have it. rot13:
Sbyq naq hasbyq gur cncre ubevmbagnyyl, gura qb gur fnzr iregvpnyyl, gb znex gur zvqcbvag bs rnpu fvqr. Arkg, sbyq naq hasbyq gb znex sbhe yvarf: vs gur pbearef bs n cncre ner N, O, P, Q va beqre nebhaq gur crevzrgre, gura gur yvarf tb sebz N gb gur zvqcbvag bs O naq P, sebz O gb gur zvqcbvag bs P naq Q, sebz P gb gur zvqcbvag bs N naq Q, naq sebz Q gb gur zvqcbvag bs N naq O.
Gurfr cnegvgvba gur erpgnatyr vagb avar cvrprf: sbhe gevnatyrf, sbhe gencrmbvqf, naq bar cnenyyrybtenz. Yrg gur cnenyyrybtenz or bar cneg, naq tebhc rnpu ge...
Desensitization training is great if it (a) works and (b) is less bad than the problem it's meant to solve.
(I'm now imagining Alice and Carol's conversation: "So, alright, I'll turn my music down this time, but there's this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we'd both be happier.")
Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.
I wanted to upvote you for amusing me, but I changed my vote to one I think you would prefer.
What if we assume a finite universe instead? Contrary to what the post we're discussing might suggest, this actually makes recurrence more reasonable. To show that every state of a finite universe recurs infinitely often, we only need to know one thing: that every state of the universe can be eventually reached from every other state.
Is this plausible? I'm not sure. The first objection that comes to mind is entropy: if entropy always increases, then we can never get back to where we started. But I seem to recall a claim that entropy is a statistical law: i...
To Bob, I would point out that:
Contrary to C, it is easy to prove that you have an ear or mental condition that makes you sensitive to noise; a note from a doctor or something suffices.
Contrary to D, in case such a condition exists, "toughening up and growing a thicker skin" is not actually a possible response. In some cases, it appears that loud noises make the condition worse. Even when this is not the case, random exposure to noises at the whim of the environment doesn't help.
I realize that you are appealing to a metaphor, but I think that these points often apply to the unmetaphored things as well.
Regarding my style: many philosophies have both a function and a form. In writing, some philosophies have a message to convey and a style that it is often conveyed in. There is a style to Objectivist essays, Maoist essays, Buddhist essays, and often there is a style to Less Wrong essays. I wrote my egoist essay in the egoist style, in honor of those egoists who led to me, including Max Stirner, Dora Marsden, Apio Ludd, and especially Malfew Seklew. Egoism - it's not for everybody.
The things that make your writing style unapproachable are not features of &...
That's true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes.
In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again.
I also w...
Part of it might just be the order. Compare that paragraph to the following alternative:
The rationality of Rationality: AI to Zombies isn't about using cold logic to choose what to care about. Reasoning well has little to do with what you're reasoning towards. If your goal is to annihilate as many puppies as possible, then this kind of rationality will help you annihilate more puppies. But if your goal is to enjoy life to the fullest and love without restraint, then better reasoning (whether hot or cold, whether rushed or relaxed) will also help you do so.
I'm not sure that regretting correct choices is a terrible downside, depending on how you think of regret and its effects.
If regret is just "feeling bad", then you should just not feel bad for no reason. So don't regret anything. Yeah.
If regret is "feeling bad as negative reinforcement", then regretting things that are mistakes in hindsight (as opposed to correct choices that turned out bad) teaches you not to make such mistakes. Regretting all choices that led to bad outcomes hopefully will also teach this, if you correctly identify mi...
My proposal: the idea that goodness or evil are substances, and that they can be formed into magic objects such as a sword made of pure evil.
Of course, some novels also subvert this delightfully. Patricia Wrede's The Seven Towers, for instance, is all about exactly what goes wrong when you try to make a magical object out of pure good.
(Edit: that is, Wrede does not literally spend the whole book talking about this problem. It is merely mentioned as backstory. But still.)
What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.
In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and Joe's friends would not be sad if he died, then I am okay with Joe's death. So there is no other hidden factor that moves me.
I'm not sure what the observation that I do not give all of my money away to charity has to do with anything.
I don't think that's true in any important way.
I might say: "Killing Joe is bad because Joe would like not to be killed, and enjoys continuing to live. Also, Joe's friends would be sad if Joe died." This is not a sophisticated argument. If an atheist would have a hard time making it, it's only because one feels awkward making such an unsophisticated argument in a debate about morality.
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvant...