All of vinayak's Comments + Replies

vinayak10

Haha! Very curious to know how this turns out!

1ShannonFriedman
Really great! Our first guest, Evelyn (http://lesswrong.com/lw/dm4/berkely_visit_report/), is amazing, as is her friend Copt who came with her. The second of three women who have booked flights to visit is here right now along with her brother; she's a physics major from Vienna, Austria. The third is someone who is into math, Japanese literature, meditation, and programming! I have also had several contacts via the post with people who are either tangentially interested or who would like to visit at later dates.
vinayak10

I have read this post before and agreed with it. But I read it again just now and have new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether a tree makes a sound or not is to place an air vibration detector in the vicinity of the tree and check it later. If it did detect some... (read more)

vinayak30

How about this:

People are divided into pairs. Say A and B are in one pair. A gets a map of something that's fairly complex but not too complex: for example, an apartment with a sufficiently large number of rooms. A's task is to describe this to B. Once A and B are both satisfied with the description, B is asked questions about the place the map represented. Here are examples of questions that could be asked:

How many left-turns do you need to make to go from the master bedroom to the kitchen?

Which one is the washroom nearest to the game room?

You are sittin... (read more)

vinayak10

Hey, I live in Waterloo too. I will join. (Perhaps not this one, but any subsequent ones after the 24th this month that are organized in Waterloo.) Please keep me posted and let me know if you need any help in organizing this.

vinayak140

If you have many things to do and you are wasting time, then you should number those things from 1 to n, assign n+1 to wasting time, and then use http://random.org to generate a random number between 1 and n+1 (1 and n+1 included) to decide what you should do. This adds some excitement and often works.
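(A minimal sketch of the same trick in Python, using the standard library's random module as a stand-in for random.org; the task list is made up for illustration.)

```python
import random

# Hypothetical to-do list; the last entry is the n+1-th option, "waste time".
tasks = [
    "write the report",   # 1
    "answer emails",      # 2
    "clean the desk",     # 3
    "waste time",         # n+1
]

# Picking uniformly from the list is equivalent to drawing a number in 1..n+1.
print("Do this now:", random.choice(tasks))
```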

8Divide
I thought I'd share my pick-thing-to-do-at-random app that helps somewhat. You just add things and then it shows them to you at random. You can click to commit to do something for a while, or just flick to another thing if you can't do that now. I've added hundreds of both timewasters and productive activities there and it's quite cool to do this kind of lottery to determine what to do now. Obviously it won't work if you just keep flicking until you happen upon a favorite timewaster, nor when you have something that needs to be done now. It's also essential to have clearly defined activities, even if it's just "think really hard about what to do and make that a new activity" or whatever. Tell me what you think. http://things-be-done.appspot.com/ (google login needed for persistent storage, but you can play without logging in; data will be associated with a cookie left in your browser and will be transferred once you do log in)
vinayak00

I live in Waterloo, Ontario (Canada). Does anyone live nearby?

vinayak40

Consulting a dataset and counting the number of times the event occurred and so on would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now; can I get back to you later?"

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would ident... (read more)

3Mass_Driver
If you haven't already, you might want to take a look at Bayes' Theorem by Eliezer.

As sort of a quick tip about where you might be getting confused: you summarize the steps involved as (1) come up with a prior, (2) identify potential evidence, and (3) update on the evidence. You're missing one step. You also need to check to see whether the potential evidence is "true," and you need to do that before you update. If you check out Conservation of Expected Evidence, linked above, you'll see why. You can't update just because you've thought of some facts that might bear on your hypothesis and guessed at their probability -- if your intuition is good enough, your guess about the probability of the facts that bear on the hypothesis should already be factored into your very first prior. What you need to do is go out and actually gather information about those facts, and then update on that new information.

For example: I feel hot. I bet I'm running a fever. I estimate my chance of having a bacterial infection that would show up on a microscope slide at 20%. I think: if my temperature were above 103 degrees, I would be twice as likely to have a bacterial infection, and if my temperature were below 103 degrees, I would only be half as likely to have a bacterial infection. Considering how hot I feel, I guess there's a 50-50 chance my temperature is above 103 degrees. I STILL estimate my chance of having a bacterial infection at 20%, because I already accounted for all of this. This is just a longhand way of guessing.

Now, I take my temperature with a thermometer. The readout says 104 degrees. Now I update on the evidence; now I think the odds that I have a bacterial infection are 40%. The math is fudged very heavily, but hopefully it clarifies the concepts. If you want accurate math, you can read Eliezer's post.
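(Mass_Driver says the math is fudged; purely for reference, here is a minimal sketch of the same update done in odds form, interpreting "twice as likely" as a likelihood ratio of 2 for a reading above 103 degrees. Under that reading the posterior lands nearer 33% than 40%; the point about only updating once the thermometer has actually been read is unchanged.)

```python
def bayes_update(prior_prob, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior: 20% chance of a bacterial infection (the example's starting guess).
# Evidence: the thermometer reads 104 degrees, assumed twice as likely
# given an infection as given no infection (likelihood ratio = 2).
print(bayes_update(0.20, 2))  # ~0.33
```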
2Matt_Simpson
The answer is... it's complicated, so you approximate. A good way of approximating is getting a dataset together and putting together a good model that helps explain that dataset. Doing the perfect Bayesian update in the real world is usually worse than nontrivial - it's basically impossible.
vinayak00

So 200:1 is your prior? Then where's the rest of the calculation? Also, how exactly did you come up with the prior? How did you decide that 200:1 is the right place to stop? Or in other words, can you claim that if a completely rational agent had the same information that you have right now, then that agent would also come up with a prior of 200:1? What you have described is just a way of measuring how much you believe in something. But what I am asking is how you decide how strong your belief should be.

2Jack
It's just the numerical expression of how likely I feel a nuclear attack is. (ETA: I didn't just pick it out of thin air. I can give reasons but they aren't mathematically exact. But we could work up to that by considering information about geopolitics, proliferation etc.) No, I absolutely can't claim that. By making a lot of predictions and hopefully getting good at it while paying attention to known biases and discussing the proposition with others to catch your errors and gather new information. If you were hoping there was a perfect method for relating information about extremely complex propositions to their probabilities... I don't have that. If anyone here does, please share. I have missed this! But theoretically, if we're even a little bit rational, the more updating we do the closer we should get to the right answer (though I'm not actually sure we're even this rational). So we pick priors and go from there.
vinayak30

I want to understand Bayesian reasoning in detail, in the sense that I want to take up a statement that is relevant to our daily life and then try to find exactly how much I should believe in it based on the beliefs that I already have. I think this might be a good exercise for the LW community? If yes, then let's take up a statement, for example, "The whole world is going to be nuked before 2020." And now, based on whatever you know right now, you should form some percentage of belief in this statement. Can someone please show me exactly how to do that?

0cupholder
Normally I would try and find systematic risk analyses by people who know more about this subject than me. However, Martin Hellman has written a preliminary risk analysis of nuclear deterrence as part of his Defusing the Nuclear Threat project, and he claims that there have been no formal studies of the failure rate of nuclear deterrence. Hellman himself estimates that failure rate as on the order of 1% a year, but I don't know how seriously to take that estimate.
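(Back-of-the-envelope only, taking Hellman's order-of-1%-per-year figure at face value and treating years as independent: the chance of at least one deterrence failure over a ten-year horizon comes out around

$$1 - (1 - 0.01)^{10} \approx 0.10,$$

so on the order of 10%. How seriously to take that inherits all the uncertainty in the 1%-per-year estimate itself.)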
4Morendil
The interesting question isn't so much "how do I convert a degree of belief into a number", but "how do I reconcile my degrees of belief in various propositions so that they are more consistent and make me less vulnerable to Dutch books".

One way to do that is to formalize what you take that statement to mean, so that its relationships to "other beliefs" become clearer. It's what, in the example you suggest, the Doomsday Clock scientists have done. So you can look at whatever data has been used by the Doomsday Clock people, and if you have reason to believe they got the data wrong (say, about international agreements), then your estimate would have to be different from theirs. Or you could figure out they forgot to include some evidence that is relevant (say, about peak uranium), or that they included evidence you disagree is relevant. In each of these cases Bayes' theorem would probably tell you at the very least in what direction you should update your degree of belief, if not the exact amount.

Or, finally, you could disagree with them about the structural relationships between bits of evidence. That case pretty much amounts to making up your own causal model of the situation. As other commenters have noted it's fantastically hard to apply Bayes rigorously to even a moderately sophisticated causal model, especially one that involves such an intricately interconnected system as human society. But you can always simplify, and end up with something you know is strictly wrong, but has enough correspondence with reality to be less wrong than a more naive model.

In practice, it's worth noting that only very seldom does science tackle a statement like this one head-on; as a reductionist approach science generally tries to explicate causal relationships in much smaller portions of the whole situation, treating each such portion as a "black box" module, and hoping that once this module's workings are formalized it can be plugged back into a more general model without
-3Daniel_Burfoot
The problem with your question is that the event you described has never happened. Normally you would take a dataset and count the number of times an event occurs vs. the number of times it does not occur, and that gives you the probability. So to get estimates here you need to be creative with the definition of events. You could count the number of times a global war started in a decade. Going back to, say, 1800 and counting the two world wars and the Napoleonic Wars, that would give about 3/21. If you wanted to make yourself feel safe, you could count the number of nukes used compared to the number that have been built. You could count the number of people killed due to particular historical events, and fit a power law to the distribution. But nothing is going to give you the exact answer. Probability is exact, but statistics (the inverse problem of probability) decidedly isn't.
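(A minimal sketch of the counting approach described above, in Python; the event definition and the three wars are taken from the comment, and the resulting frequency is only as meaningful as that definition.)

```python
# "Global wars" since 1800, per the comment: Napoleonic Wars, WWI, WWII.
global_wars = 3
decades = (2010 - 1800) // 10          # 21 complete decades
rate = global_wars / decades
print(f"A global war starts in roughly {rate:.0%} of decades")  # ~14%
```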
4Jack
Well, to begin with we need a prior. You can choose one of two wagers. In the first, 1,000,000 blue marbles and one red marble are put in a bag. You get to remove one marble; if it is the red one, you win a million dollars. Blue, you get nothing. In the second wager, you win a million dollars if a nuclear weapon is detonated under non-testing and non-accidental conditions before 2020. Otherwise, nothing. In both cases you don't get the money until January 1st, 2021. Which wager do you prefer? If you prefer the nuke bet, repeat with 100,000 blue marbles; if you prefer the marbles, try 100,000,000. Repeat until you get wagers that are approximately equal in their estimated value to you. Edit: Commenters other than vinayak should do this too so that he has someone to exchange information with. I think I stop at maybe 200:1 against nuking.
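(A minimal sketch, assuming your answers to the wagers are consistent: once you have found the number of blue marbles at which the two wagers feel equally attractive, the implied probability is one over the total number of marbles. At Jack's stopping point of roughly 200:1 against, that is about half a percent.)

```python
def implied_probability(blue_marbles):
    """If drawing the single red marble from a bag of blue_marbles + 1 marbles
    feels exactly as attractive as the nuke wager, this is your implied prior."""
    return 1 / (blue_marbles + 1)

print(implied_probability(1_000_000))  # ~1e-6, the opening wager
print(implied_probability(200))        # ~0.005, roughly 200:1 against
```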
vinayak70

Hello.

Now can I get some Karma score please?

Thanks.

vinayak190

The fact that students who are motivated to get good scores in exams very often get better scores than students who are genuinely interested in the subject is probably also an application of Goodhart's Law?

Partially; but a lot of what is being tested is actually skills correlated with being good in exams - working hard, memorisation, bending yourself to the rules, ability to learn skill sets even if you don't love them, gaming the system - rather than interest in the subject.

vinayak20

Yes, I should be more specific about 2.

So let's say the following are the first three questions you ask and their answers -

Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.

At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?

Q4. Do you believe in Modus Ponens?

or in other words,

Q4. Do you think that if A and A=>B are both true then B should also be true?

If you think you should ask this question before dec... (read more)
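(For what it's worth, the rule Q4 asks about is modus ponens, which is primitive in essentially every standard logic; a one-line Lean check, included only as an illustration.)

```lean
-- Modus ponens: from a proof of A and a proof of A → B, conclude B.
example (A B : Prop) (hA : A) (hAB : A → B) : B := hAB hA
```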

1prase
I think that belief in modus ponens is a part of the definition of "rational", at least practically. So Q1 is enough. However, there are not many tortoises among the general public, so this type of question probably isn't very helpful.
vinayak-10

I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -

  1. Strigli wins the game.
  2. It rains immediately after the match is over.
  3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.

Also, to me, Bayesianism sounds like an iterative way of form... (read more)
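(The constraint behind the example is the conjunction rule: the event in 3 is contained in each of the events in 1 and 2, so

$$P(1 \wedge 2) \le \min\bigl(P(1),\, P(2)\bigr),$$

with equality only if each of 1 and 2 almost surely implies the other, which is essentially the caveat orthonormal gives below.)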

1bogdanb
I just wanted to note that it is actually possible to do that, provided that the questions are asked in order (not simultaneously). That is, I might logically think that the answer to (1) and (2) is true with 50% probability after I'm asked each question. Then, when I'm asked (3), I might logically deduce that (3) is true with 50% probability — however, this only means that after I'm asked (3), the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It's a fine point that seems easy to miss.

On a somewhat related point, I've looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn't mean what the asker thinks it means. Take "For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli." The question is intended to prevent you from having any prior information about its subject. However, what it means is just that before you are asked the question, you don't have any information about it. (And I'm not even very sure about that.) But once you are asked the question, you received a huge amount of information: the very fact that you received that question is extremely improbable (in the class of “what could have happened instead”). Also note that it is vanishingly more improbable than, say, being asked by somebody on the street if you think his son will get an A today.

“Something extremely improbable happens” means “you just received information”; the more improbable it was the more information you received (though I think there are some logs in that relationship). So, the fact that you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one's lifetime, aliens exist, aliens have that travel technology,
0orthonormal
Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2). On the second part: Yep, you need to start with some prior probabilities, and if you don't have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.) In practice, it's not possible to estimate hypothesis complexity with much precision, but it doesn't take all that much precision to judge in cases like Thor vs. Maxwell's Equations; and anyway, as long as your priors aren't too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes. ETA: Good to keep in mind: When (Not) To Use Probabilities
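(Spelled out, the ignorance prior mentioned here gives each hypothesis h that can be written as a program of length $\ell(h)$ bits a weight

$$P(h) \propto 2^{-\ell(h)};$$

one standard way to make the total mass finite is to require the language to be prefix-free, so that Kraft's inequality gives $\sum_h 2^{-\ell(h)} \le 1$. Updating this prior on observations is Solomonoff induction.)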
vinayak60

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probabili

... (read more)
0[anonymous]
For number 1 you should weight "no" more highly. For the answer to be "yes" Strigli must be a team, a Doldun team, and it must win. Sure, maybe all teams win, but it is possible that all teams could lose, they could tie, or the game might be cancelled, so a "no" is significantly more likely to be right. 50% seems wrong to me.
1orthonormal
For #2, I don't see how you could ever be completely sure the other was a rationalist or a Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up. In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the "aspiring rationalist" cluster and/or the "Bayesian" cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.
1Jack
For two, can I just have an extended preface that describes a population, an infection rate for some disease, and a test with false positive and false negative rates, and see if the person gives me the right answer?
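(A minimal sketch of the kind of test Jack describes; all the numbers are invented for illustration. The "right answer" being tested for is the posterior, not the test's accuracy.)

```python
def p_infected_given_positive(prevalence, sensitivity, false_positive_rate):
    """P(infected | positive test), by Bayes' theorem."""
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# Hypothetical setup: 1% infection rate, 90% sensitivity, 9% false positive rate.
print(p_infected_given_positive(0.01, 0.90, 0.09))  # ~0.09: most positives are false
```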
9MrHen
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful. The consensus of the comments was that the correct answer is .5. Also of note is Bead Jar Guesses and its sequel.
0Kaj_Sotala
1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it's not obvious what things should be classified as the alternatives that should be considered equally plausible. Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?"

2: This depends a lot on how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain of the number of questions that'd need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren't probabilities, the strict answer to this question is "it can't be done", but I'm assuming you mean "before we know with such a certainty that in practice we can say it's for sure".)
7JGWeissman
If you truly have no clue, .5 yes and .5 no. Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
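(One way to make the team-count point concrete, assuming that once the number of teams n is fixed you are completely indifferent between them, so the named team wins with probability 1/n; the distribution over n below is purely illustrative.)

```python
# Hypothetical beliefs about how many teams a Doldun game has.
p_num_teams = {2: 0.5, 4: 0.3, 8: 0.2}

# With n teams and no other information, Strigli wins with probability 1/n.
p_yes = sum(p_n / n for n, p_n in p_num_teams.items())
print(p_yes)  # 0.5/2 + 0.3/4 + 0.2/8 = 0.35, already well below 0.5
```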
vinayak10

I think one thing that evolution could have easily done with our existing hardware is to at least allow us to use rational algorithms whenever it's not intractable to do so. This would have easily eliminated things such as Akrasia, where our rational thoughts do give a solution, but our instincts do not allow us to use them.

2wedrifid
It tried that with your great^x uncle. But he actually spent his time doing the things he said he wanted to do instead of what was best for him in his circumstances and had enough willpower to not cheat on his mate with the girls who were giving him looks.
vinayak10

There seem to exist certain measures of quality that are second level, in the sense that they measure quality in a kind of indirect way, mostly because the indirect way seems to be easier. One example is sex appeal. The "quality" of a potential mate should be measured just by the number of healthy offspring they can give birth to. However, that's difficult to find out and hence evolution has programmed our genes to refer to sex appeal instead, that is, the number of people who will find the person in question attractive. However, the only prob... (read more)

0wedrifid
Modified, naturally, by how sexy the healthy children are themselves.
vinayak20

I think there's a fundamental flaw in this post.

You're assuming that if we have unlimited willpower, we are actually going to use all of it. Willpower is the ability to do what you think is the most correct thing to do. If what you think is the correct thing to do is actually the correct thing to do, then doing it will, by the definition of correctness, be good. So if you do some "high level reasoning" and conclude that not sleeping for a week is the best thing for you to do and then you use your willpower to do it, it will be the best thing to d... (read more)

vinayak50

We realized that one of the very important things that rationalists need is a put-down artist community, as opposed to the pick-up artist community, which already exists but isn't of much use. This is because of the very large number of rationalists who get into relationships but then aren't able to figure out how to get out of them.

vinayak10

So we have three people now. I hope this happens.

vinayak10

Oops, I guess I'm late - I just saw this post. In any case, I will come too.

vinayak10

It would be nice to come up with a more precise definition of 'lowering the status'. For example, if some person treats me like a non-person, all he is doing is expressing his opinion that I am a non-person. This, being the opinion of just one person, should not affect my status in the whole society, and yet I feel offended. So the first question is whether this should be called lowering of my status.

Also, let us assume that one person treating me like a non-person does lower my status in some way. Even then, shouting back at him and informing him that ... (read more)

3BarbaraB
Actually, my experience is that when I protest against behaviour which I perceive as somewhat offensive, the other party gets the message and either stops or at least becomes less intensely offensive. I try to protest in a peaceful manner, not shouting, or offending more, etc. The idea is that sometimes people do not realize they are making someone feel threatened about their social status. By protesting, I am giving them feedback and a chance that the war will not be initiated. Funnily enough, my boyfriend often believes that by giving such feedback, he is showing he feels threatened, making himself more vulnerable, and that it is "not a good way of gaining back the lost status". Well, maybe there is a gender difference. My boyfriend also says that women live in a different reality than men, because they are known to be generally less aggressive. As a consequence, they are perceived as less threatening in social interactions, and people are just NICER to them. So there is still a possibility that the peaceful protest, which often works very well for me, will not work for men. However, I suggest giving it a try.
6tut
If somebody treats you like a nonperson, one of two things is true. Either 1) you are a mere possession, and thus have a very low status, or 2) the person treating you that way did something inappropriate. Being offended and making a scene is a way to show that 2 is the case, and thus defend yourself against the decreased status.
6Douglas_Knight
Shouting at him may not change his behavior, but it may be good for your status. It certainly tells onlookers something.
vinayak10

Something related happens to me every once in a while when someone makes a statement of the form A -> B and I say 'yes' or 'ok' in response. By saying 'ok' all I am doing is acknowledging the truth of the statement A -> B; however, in most cases, the person assumes that I am agreeing that A is true and hence ends up concluding that B is true as well.

One example is this -

I go to a stationery shop and ask for an envelope. The storekeeper hands me one and I start inspecting it. The storekeeper observes this and remarks, "If you want a bigger envelope, I can give you one." I say, "Alright." He hands me a bigger envelope.

lavalamp110

When someone asks you if you could pass the salt, do you pass the salt? Or just say "Yes"?

4thomblake
FYI, I think you were looking for "No, thanks"