Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: vinayak 15 May 2012 04:26:41AM 1 point [-]

I have read this post before and agreed with it. But reading it again just now has raised new doubts.

I still agree that beliefs should pay rent in anticipated experiences, but I am no longer sure that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some vibration, the answer is yes. For the second person, the test is to monitor every person living on earth and see if their brains did the kind of auditory processing that the falling tree would make them do. Since the first person's test has turned out to be positive and the second person's test has turned out to be negative, they say "yes" and "no" respectively as answers to the question, "Did the tree make any sound?"

So the problem here doesn't seem to be an absence of rent in anticipated experiences. There is still a problem: there is no single anticipated experience about which the two people anticipate opposite outcomes, even though one says the tree makes a sound and the other says it doesn't. But that seems to happen for a different reason.

Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made a sound. For example, if X is positive, he concludes that the tree made a sound, otherwise that it didn't; if Y is negative, he concludes it did not make a sound; and so on. Here, X could be "caused air vibrations", for example. For all other kinds of observations, A has a don't-care protocol, i.e., they say nothing about the sound. Similarly, person B has a set X', Y', Z' of crucial observations, and all other observations lie in his set of don't-cares. The problem here is just that X, Y, Z are completely disjoint from X', Y', Z'. Thus, even though A and B differ in their opinions about whether the tree made a sound, there is no single aspect where they would anticipate completely opposite experiences.
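To make the point concrete, here is a toy sketch of the disjoint-crucial-observations idea. The world state and the two decision rules are entirely hypothetical; the point is only that both rules look at different parts of the same world and so never anticipate opposite outcomes for the same observation.

```python
# Hypothetical sketch: A and B each answer "did the tree make a sound?"
# using disjoint crucial observations; all names here are illustrative.

# The world state after the tree falls with no one around:
world = {
    "air_vibrations_detected": True,            # a detector near the tree fires
    "auditory_processing_in_any_brain": False,  # nobody heard it
}

def a_says_sound(w):
    # A's crucial observation: air vibrations; everything else is don't-care
    return w["air_vibrations_detected"]

def b_says_sound(w):
    # B's crucial observation: some brain did auditory processing
    return w["auditory_processing_in_any_brain"]

print(a_says_sound(world))  # True  -> A answers "yes"
print(b_says_sound(world))  # False -> B answers "no"
```

A and B disagree verbally, yet for every individual observation they either agree or one of them simply doesn't care.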

In response to SotW: Be Specific
Comment author: vinayak 03 April 2012 07:36:12PM *  2 points [-]

How about this:

People are divided into pairs. Say A and B are in one pair. A gets a map of something that's fairly complex but not too complex. For example, an apartment with a sufficiently large number of rooms. A's task is to describe this to B. Once A and B are both satisfied with the description, B is asked questions about the place the map represented. Here are examples of questions that could be asked:

How many left turns do you need to make to go from the master bedroom to the kitchen?

Which one is the washroom nearest to the game room?

You are sitting in room1 and you want to go to room2. You have some guests sitting in room3 and you want to avoid them. Can you still manage to reach room2?

You can also just simulate the story about Y Combinator and Paul Graham. Show a new web-service to person A and ask him to describe it to person B. Finally ask B questions about the web service.

In both cases, the accuracy with which B answers the questions is directly proportional to the quality of A's description.

I think two variants can be tried. In the first one, A does not know what questions will be given to B. In the second one, he does, but he is prohibited from directly including the answers as a part of his description.

Comment author: vinayak 12 November 2011 03:38:35AM 0 points [-]

I will come too.

Comment author: scdoctor 05 June 2011 10:26:36PM 2 points [-]

Hi, I'd like to start a meetup in Waterloo, Ontario, Canada on Wednesdays at 7pm at Whole 'lotta Gelata in Uptown Waterloo starting June 15. I'm not sure where to post that announcement (or how many LW people there are in the area), but here is as good a place as any to start...

Comment author: vinayak 12 June 2011 11:51:57AM 1 point [-]

Hey, I live in Waterloo too. I will join. (Perhaps not this one, but any subsequent ones after the 24th this month that are organized in Waterloo.) Please keep me posted and let me know if you need any help in organizing this.

Comment author: Divide 23 June 2010 11:14:41PM 5 points [-]

I thought I'd share my pick-thing-to-do-at-random app that helps somewhat. You just add things, and then it shows them to you at random. You can click to commit to do something for a while, or just flick to another thing if you can't do that one now. I've added hundreds of both timewasters and productive activities there, and it's quite cool to run this kind of lottery to determine what to do now.

Obviously it won't work if you just keep flicking until you happen upon a favorite timewaster, nor when you have something that needs to be done right now. It's also essential to have clearly defined activities, even if one of them is just "think really hard about what to do about <whatever> and make that a new activity". Tell me what you think.

http://things-be-done.appspot.com/ (google login needed for persistent storage, but you can play without logging in, data will be associated with a cookie left in your browser (and will be transferred once you do login))

Comment author: vinayak 24 June 2010 06:08:28PM 0 points [-]

Pretty neat. Thanks!

Comment author: vinayak 22 June 2010 12:11:59PM 10 points [-]

If you have many things to do and you are wasting time, number those things from 1 to n, assign n+1 to wasting time, and then use http://random.org to generate a random number between 1 and n+1 (both included) to decide what you should do. This adds some excitement and often works.

Comment author: Morendil 09 June 2010 05:02:00PM *  6 points [-]

Please reply to this comment if you intend to participate, and are willing and able to free up a few hours per week or fortnight to work through the suggested reading or exercises.

Please indicate where you live, if you would be willing to have some discussion IRL. My intent is to facilitate an online discussion here on LW but face-to-face would be a nice complement, in locations where enough participants live.

(You need not check in again here if you have already done so in the previous discussion thread, but you can do so if you want to add details such as your location.)

Comment author: vinayak 10 June 2010 03:12:58PM 0 points [-]

I live in Waterloo, Ontario (Canada). Does anyone live nearby?

Comment author: vinayak 10 June 2010 03:08:49PM 0 points [-]

I'm in too.

Comment author: Daniel_Burfoot 01 May 2010 11:56:38PM -2 points [-]

Can someone please show me exactly how to do that?

The problem with your question is that the event you described has never happened. Normally you would take a dataset and count the number of times the event occurs versus the number of times it does not, and that ratio gives you the probability.

So to get estimates here you need to be creative with the definition of events. You could count the number of times a global war started in a decade. Going back to, say, 1800 and counting the two world wars and the Napoleonic Wars, that would give about 3/21. If you wanted to make yourself feel safe, you could count the number of nukes used compared to the number that have been built. You could count the number of people killed in particular historical events and fit a power law to the distribution.
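The first counting estimate above is just a frequency ratio; a sketch, using the numbers from the comment:

```python
# Frequency estimate of "a global war starts in a given decade",
# following the comment: 3 global wars in the ~21 decades since 1800.
global_wars = 3          # Napoleonic Wars, WWI, WWII
decades = 21             # roughly 1800 through 2010

p_war_per_decade = global_wars / decades
print(round(p_war_per_decade, 3))  # about 0.143
```

As the comment notes, a different choice of event definition (nukes used vs. nukes built, deaths per event) would give a very different number.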

But nothing is going to give you the exact answer. Probability is exact, but statistics (the inverse problem of probability) decidedly isn't.

Comment author: vinayak 02 May 2010 05:20:38AM 3 points [-]

Consulting a dataset and counting the number of times the event occurred would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now; can I get back to you later?"

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could be used as evidence in favour or against the hypothesis. And then one would do the necessary Bayesian updates.
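The procedure described above (prior, evidence, update) can be written out as a toy calculation. All the numbers here are made up purely for illustration; nothing in the thread supplies real values:

```python
# A toy Bayesian update for H = "nuclear detonation before 2020".
# Every number below is a made-up placeholder, not an actual estimate.

prior = 0.005            # P(H): prior belief in the hypothesis
p_e_given_h = 0.8        # P(E | H): likelihood of the evidence if H is true
p_e_given_not_h = 0.3    # P(E | not H)

# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 4))
```

The hard part, as the comment says, is not this arithmetic but choosing the prior and the likelihoods for a real-world statement.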

I know how to do this for the simple cases of balls in a bin etc. But I get confused when it comes to forming beliefs about statements that are about the real world.

Comment author: Jack 01 May 2010 06:30:42PM *  4 points [-]

Well, to begin with we need a prior. You can choose one of two wagers. In the first, 1,000,000 blue marbles and one red marble are put in a bag. You get to remove one marble; if it is the red one, you win a million dollars. If it is blue, you get nothing. In the second wager, you win a million dollars if a nuclear weapon is detonated under non-testing and non-accidental conditions before 2020. Otherwise, nothing. In both cases you don't get the money until January 1st 2021. Which wager do you prefer?

If you prefer the nuke bet, repeat with 100,000 blue marbles; if you prefer the marbles, try 100,000,000. Repeat until the two wagers are approximately equal in their estimated value to you.

Edit: Commenters other than vinayak should do this too so that he has someone to exchange information with. I think I stop at maybe 200:1 against nuking.
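The adjust-the-marbles procedure above is essentially a binary search for the crossover point. A sketch, where `prefers_nuke_bet` stands in for the human judgment at each step (here it's a stub hard-coded to odds of about 200:1, matching the figure in the comment):

```python
# Binary search for the marble count at which the two wagers feel equal.
# `prefers_nuke_bet` is a stand-in for the judgment a person would make;
# the hard-coded 1/201 is only there to make the sketch runnable.

def prefers_nuke_bet(n_blue, believed_p_nuke=1 / 201):
    # The nuke bet wins when P(nuke) exceeds the marble odds, 1/(n_blue + 1)
    return believed_p_nuke > 1 / (n_blue + 1)

lo, hi = 1, 1_000_000
while lo < hi:
    mid = (lo + hi) // 2
    if prefers_nuke_bet(mid):
        hi = mid          # prefer the nuke bet: too many marbles, try fewer
    else:
        lo = mid + 1      # prefer the marbles: too few, try more
print(lo)  # crossover near 201 blue marbles, i.e. roughly 200:1 against
```

In practice each comparison is answered by introspection rather than a function, but the convergence logic is the same.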

In response to comment by Jack on Open Thread: May 2010
Comment author: vinayak 01 May 2010 11:39:26PM 0 points [-]

So 200:1 is your prior? Then where's the rest of the calculation? Also, how exactly did you come up with the prior? How did you decide that 200:1 is the right place to stop? Or, in other words, can you claim that if a completely rational agent had the same information you have right now, that agent would also arrive at a prior of 200:1? What you have described is just a way of measuring how strongly you believe something. But what I am asking is how you decide how strong your belief should be.
