
Comment author: raisin 01 April 2014 03:45:37PM *  22 points [-]

Richard Feynman claimed that he wasn't exceptionally intelligent, but that he focused all his energies on one thing. Of course he was exceptionally intelligent, but he makes a good point.

I think one way to improve your intelligence is to actually try to understand things in a very fundamental way. Rather than just accepting the kind of trite explanations that most people accept - for instance, that electricity is electrons moving along a wire - try to really find out and understand what is actually happening, and you'll begin to find that the world is very different from what you have been taught and you'll be able to make more intelligent observations about it.

http://www.reddit.com/r/askscience/comments/e3yjg/is_there_any_way_to_improve_intelligence_or_are/c153p8w

reddit user jjbcn on trying to improve your intelligence


If you're not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It's free to download!

http://www.feynmanlectures.caltech.edu/

It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:

The intervening years might have glazed their memories with a euphoric tint, but about 80 percent recall Feynman's lectures as highlights of their college years. “It was like going to church.” The lectures were “a transformational experience,” “the experience of a lifetime, probably the most important thing I got from Caltech.” “I was a biology major but Feynman's lectures stand out as a high point in my undergraduate experience … though I must admit I couldn't do the homework at the time and I hardly turned any of it in.” “I was among the least promising of students in this course, and I never missed a lecture. … I remember and can still feel Feynman's joy of discovery. … His lectures had an … emotional impact that was probably lost in the printed Lectures.”

Comment author: rstarkov 15 April 2014 01:59:37PM 2 points [-]

Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.

Comment author: rstarkov 01 March 2014 02:27:24PM 2 points [-]

One useful definition of Bayesian vs Frequentist that I've found is the following. Suppose you run an experiment; you have a hypothesis and you gather some data.

  • if you try to obtain the probability of the data, given your hypothesis (treating the hypothesis as fixed), then you're doing it the frequentist way
  • if you try to obtain the probability of the hypothesis, given the data you have, then you're doing it the Bayesian way.

I'm not sure whether this view holds up to criticism, but if it does, I certainly find the latter much more interesting than the former.
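
To make the contrast concrete, here is a minimal sketch in Python (my toy example, not part of the original comment; the coin-flip data and the uniform prior are assumptions): the frequentist-style quantity is the likelihood of fixed data under a fixed hypothesis, while the Bayesian-style quantity is a posterior over hypotheses given that data.

    from math import comb

    n, k = 100, 60  # assumed data: 60 heads in 100 flips of a coin of unknown bias

    # Frequentist-style question: P(data | hypothesis), with the hypothesis fixed.
    p0 = 0.5  # hypothesis: the coin is fair
    likelihood = comb(n, k) * p0**k * (1 - p0)**(n - k)
    print(f"P(data | p = {p0}) = {likelihood:.4f}")

    # Bayesian-style question: P(hypothesis | data), via Bayes' rule over a
    # grid of candidate biases with a uniform prior.
    grid = [i / 100 for i in range(1, 100)]
    joint = [comb(n, k) * p**k * (1 - p)**(n - k) / len(grid) for p in grid]
    evidence = sum(joint)
    posterior = [j / evidence for j in joint]
    mode, mode_prob = max(zip(grid, posterior), key=lambda t: t[1])
    print(f"Posterior mode: p = {mode:.2f} (grid probability {mode_prob:.4f})")

Same data, two different questions: the first treats the hypothesis as fixed and asks how surprising the data are; the second treats the data as fixed and asks how plausible each hypothesis is.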

Comment author: rstarkov 05 December 2013 02:52:21AM 15 points [-]

This has been the most fun, satisfying survey I've ever been part of :) Thanks for posting this. Can't wait to see the results!

One question I'd find interesting is closely related to the probability of life in the universe. Namely: if we were to meet a randomly sampled spacefaring lifeform, what are the chances that its intelligence, both in its "ways" and in its general level of smarts, would be similar enough to ours for us to communicate meaningfully?

Given that I enjoyed taking part in this, may I suggest that more frequent and in-depth surveys on specialized topics might be worth doing?

Comment author: rstarkov 26 July 2013 03:15:31PM 8 points [-]

Maybe we've finally reached the point where there's no work left to be done

If so, this is superb! This is the end goal. A world in which there is no work left to be done, so we can all enjoy our lives, free from the requirement to work.

The idea that work is desirable has been hammered into our heads so hard that a world where nobody has to work being the ultimate goal now sounds like a really, really dubious proposition. But it is the goal. Not a world in which everyone works. That world sucks. That's the world in which 85% of us live today.

In response to Taboo Your Words
Comment author: rstarkov 08 February 2013 07:52:42AM *  2 points [-]

I first read this about two years ago and it has been an invaluable tool ever since. I'm sure it has saved countless hours of pointless arguments around the world.

When I realise that an argument is stuck on an inconsistency in how we each interpret a specific word, applying this tool instantly makes the argument a lot more productive when it really is about the meaning of the word (it turns out it can be unobvious that the actual disagreement is about what a specific word means). In other cases it simply helps us get back on the right track instead of being distracted by the meaning of a word that is actually beside the point.

It does occasionally take a while to convince the other party that I'm not trying to fool or trick them when I suggest we apply this method. Another observation is that the article on Empty Labels has transformed my attitude towards the meaning of words, so when it turns out we disagree about meanings, I instantly lose interest, and this can confuse the other party.

Solving the two envelopes problem

32 rstarkov 09 August 2012 01:42PM

Suppose you are presented with a game. You are given a red and a blue envelope with some money in each. You are allowed to ask an independent party to open both envelopes, and tell you the ratio of blue:red amounts (but not the actual amounts). If you do, the game master replaces the envelopes, and the amounts inside are chosen by him using the same algorithm as before.

You ask the independent observer to check the amounts a million times, and find that half the time the ratio is 2 (blue has twice as much as red), and half the time it's 0.5 (red has twice as much as blue). At this point, the game master discloses that in fact, the way he chooses the amounts mathematically guarantees that these probabilities hold.

Which envelope should you pick to maximize your expected wealth?

It may seem surprising, but with this set-up, the game master can choose to make either red or blue have a higher expected amount of money in it, or make the two the same. Asking the independent party as described above will not help you establish which is which. This is the surprising part and is, in my opinion, the crux of the two envelopes problem.
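
Here is a minimal sketch of one algorithm the game master could use (my construction, for illustration; the full post may use a different one). Both variants give ratio 2 with probability 1/2 and ratio 0.5 with probability 1/2, yet one favours blue in expectation and the other favours red:

    import random

    def play(blue_favoured):
        # Two equally likely scenarios; the trick is to correlate which
        # envelope holds double with how large the amounts are.
        if random.random() < 0.5:
            return (100, 200) if blue_favoured else (1, 2)     # (red, blue), ratio 2
        else:
            return (2, 1) if blue_favoured else (200, 100)     # (red, blue), ratio 0.5

    for flag, label in [(True, "blue favoured"), (False, "red favoured")]:
        results = [play(flag) for _ in range(100_000)]
        e_red = sum(r for r, b in results) / len(results)
        e_blue = sum(b for r, b in results) / len(results)
        p_ratio_2 = sum(b == 2 * r for r, b in results) / len(results)
        print(f"{label}: E(red) ~ {e_red:.1f}, E(blue) ~ {e_blue:.1f}, "
              f"P(ratio = 2) ~ {p_ratio_2:.3f}")

In both variants the observed ratio statistics are identical, so sampling ratios alone cannot tell you which envelope has the higher expectation.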

continue reading »
Comment author: VincentYu 04 August 2012 05:58:31PM *  16 points [-]

But we know from basic probability theory that for two independent random variables, E(X/Y) > 1 does actually imply E(X) > E(Y).

This does not follow; a counterexample:

Suppose X and Y are independent random variables, with X taking the values {2,100}, and Y the values {1,150}, each value with probability 0.5 (i.e., each of X and Y follows a two-point distribution, taking each of its values with p = 0.5). Then we have
E(X/Y) = (2/1 + 100/1 + 2/150 + 100/150) / 4 = 25.67 > 1,
but
E(X) = 51 < 75.5 = E(Y).

Keep in mind that the equation E(1/Y) = 1/E(Y) does not hold in general, because taking the inverse is not a linear transformation. To evaluate the expectation after a nonlinear transformation, one requires not just the original expectation, but the full distribution. (I can't be sure this is what you did, but misapplying this gives: E(X/Y) = E(X)E(1/Y) = E(X)/E(Y). The first equality holds when X and 1/Y are uncorrelated, e.g. when X and Y are independent, but the second equality does not hold in general.)
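
A quick numeric check of the counterexample and of the nonlinearity point (values copied from above):

    from itertools import product

    xs, ys = [2, 100], [1, 150]
    pairs = list(product(xs, ys))  # 4 equally likely (x, y) pairs, by independence

    E_X = sum(xs) / len(xs)                                  # 51.0
    E_Y = sum(ys) / len(ys)                                  # 75.5
    E_X_over_Y = sum(x / y for x, y in pairs) / len(pairs)   # ~25.67

    print(E_X_over_Y > 1, E_X < E_Y)   # True True: E(X/Y) > 1 yet E(X) < E(Y)

    E_inv_Y = sum(1 / y for y in ys) / len(ys)
    print(E_inv_Y, 1 / E_Y)            # ~0.5033 vs ~0.0132: E(1/Y) != 1/E(Y)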

Comment author: rstarkov 05 August 2012 06:58:22PM 2 points [-]

Addressed by making a few edits to the "Solution" section. Thank you!

Comment author: Douglas_Knight 04 August 2012 04:39:50PM *  4 points [-]

It is aimed at people with only basic command of probabilities

Such people have probably not heard of the two envelopes problem and thus the title is not informative for them.* It seems to me that it would be useful to put something in the title to indicate that this post is about probabilities and maybe that it is fairly introductory. I could be wrong about how people read this site. Maybe everyone clicks through to the article and reads as far as the italicized section, but it seems low cost to give it a more informative title.

* Also, people who do associate "two envelopes" with probability are not very likely to think of this particular problem, for what it's worth.

Added: "this site" = discussion. If the plan is to move this to main, the structure makes more sense. But I still think the title could be better.

Comment author: rstarkov 05 August 2012 03:02:14PM *  1 point [-]

All fair points. I did want to post this to main, but decided against it in the end. Didn't know I could move it to main afterwards. Will work on the title, after I've fixed the error pointed out by VincentYu.

Comment author: hairyfigment 31 August 2011 02:02:07AM 0 points [-]

Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.

Seems to me the language of this rules out faked video. And to explain it as a newsletter scam would, I think, require postulating 2^100 civilizations that have contact with Omega but not each other. Note that we already have some reason to believe that a powerful and rational observer could predict our actions early on.

So you tell me what we should expect here.

Comment author: rstarkov 31 August 2011 03:17:55PM 0 points [-]

I've reviewed the language of the original statement and it seems that the puzzle is set in essentially the real world with two major givens, i.e. facts in which you have 100% confidence.

Given #1: Omega was correct on the last 100 occurrences.

Given #2: Box B is already empty or already full.

There is no leeway left for quantum effects, or for your choice affecting in any way what's in box B. You cannot make box B full by consciously choosing to one-box. The puzzle says so, after all.

If you read it like this, then I don't see why you would possibly one-box. Given #2 already implies the solution. The 100 successful predictions must have been achieved either through a very low probability event or through a trick, e.g. by offering the bet only to those people whose answer you can already predict, say by reading their LessWrong posts.

If you don't read it like this, then we're back to the "gooey vagueness" problem, and I will once again insist that the puzzle needs to be fully defined before it can be attempted - for example, by removing both givens and instead specifying exactly what you know about those past 100 occurrences. Were they definitely not done on plants? Was there sampling bias? Am I considering this puzzle as an outside observer, or am I imagining myself as part of that universe? In the latter case I have to put some doubt into everything, as I could be hallucinating. These things matter.

With such clarifications, the puzzle becomes a matter of your confidence in the past statistics vs. your confidence about the laws of physics precluding your choice from actually influencing what's in box B.
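
For what it's worth, the expected-value comparison behind that trade-off can be made explicit. A small sketch (my framing of the standard payoffs, not the commenter's argument): let p be your credence that Omega's prediction matches your actual choice.

    # Standard Newcomb payoffs: $1,000 in box A (always), $1,000,000 in box B
    # iff Omega predicted one-boxing. p = credence that the prediction is right.

    def ev_one_box(p):
        return p * 1_000_000

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5, 0.5005, 0.9, 0.999):
        print(f"p = {p}: one-box EV = {ev_one_box(p):>11,.0f}, "
              f"two-box EV = {ev_two_box(p):>11,.0f}")

    # Indifference at p = 1001/2000 = 0.5005; any higher credence favours
    # one-boxing on this evidential reading, which is exactly the reading the
    # comment above disputes, since Given #2 makes the contents of box B
    # independent of your choice.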

Comment author: FAWS 29 August 2011 05:07:36PM 0 points [-]

Why is it important to you that the success rate be a frequentist probability rather than just a Bayesian one?

Comment author: rstarkov 31 August 2011 12:14:24AM 0 points [-]

I'm not sure I understand correctly, but let me phrase the question differently: what sort of confidence do we have in "99.9%" being an accurate value for Omega's success rate?

From your previous comment I gather the confidence is absolute. This removes one complication while leaving the core of the paradox intact. I'm just pointing out that this isn't very clear in the original specification of the paradox, and that clearing it up is useful.

To explain why it's important, let me indeed think of an AI, as hairyfigment suggested. Suppose someone says they have let 100 previous AIs flip a fair coin 100 times each, and it came out heads every single time, because they have magic powers that make it so. This someone presents me with video evidence of this feat.

If faced with this in the real world, an AI coded by me would still bet close to 50% on tails if offered a flip of its own fair coin against this person, because I have strong evidence that this someone is a cheat and their video evidence is fake - something I know from a huge amount of background information that was not explicitly part of this scenario.

However, when discussing such scenarios, it is sometimes useful to assume hypothetical scenarios unlike the real world. For example, we could stipulate that this someone has actually performed the feat, and that there is absolutely no doubt about that. That's impossible in our real world, but it's useful for the sake of discussing Bayesianism. Surely any Bayesian's AI would expect heads with high probability in this hypothetical universe.

So, are we looking at "Omega in the real world where someone I don't even know tells me they are really damn good at predicting the future", or "Omega in some hypothetical world where they are actually known with absolute certainty to be really good at predicting the future"?
