Hello.
Now can I get some Karma score please?
Thanks.
The fact that students who are motivated to get good exam scores very often do better than students who are genuinely interested in the subject is probably also an application of Goodhart's Law?
1: If you have no information supporting either alternative over the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, since you have exactly two options. Things get much trickier once it's not obvious which alternatives should be counted as equally plausible.
Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?"
2: This depends a lot on how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain how many questions we'd need to ask before we could know for sure that someone was irrational. (Well, since 1 and 0 aren't probabilities, the strict answer is "it can't be done", but I'm assuming you mean "before we know with such certainty that in practice we can say it's for sure".)
Yes, I should be more specific about 2.
So let's say the following are the first three questions you ask and their answers -
Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.
At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?
Q4. Do you believe in Modus Ponens?
or in other words,
Q4. Do you think that if A and A=>B are both true then B should also be true?
If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.
Q5. Do you think that if you believe in Modus Ponens and if you also think that A and A=>B are true, then you should also believe that B is true as well?
And I can go on and on...
So the point is: if you think asking all these questions is necessary to decide whether the person is rational, then in effect any given person can hold any arbitrary set of beliefs and still claim to be rational, simply by adding a few extra beliefs to his belief system asserting the n-th level of "Modus Ponens is wrong" for some suitably chosen n.
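For what it's worth, the Q1-Q3 answers can be checked for consistency mechanically, without ever asking Q4 or Q5. A minimal sketch in Python (the dictionary of answers is just the example above):

```python
def implies(p: bool, q: bool) -> bool:
    # Material conditional: "p => q" is false only when p is true and q is false
    return (not p) or q

# The answers from Q1-Q3: A is true, A=>B is true, B is false
answers = {"A": True, "A=>B": True, "B": False}

# The three answers are jointly consistent only if the claimed truth value
# of "A=>B" matches what the claimed values of A and B actually entail
consistent = answers["A=>B"] == implies(answers["A"], answers["B"])
print(consistent)  # False: no assignment makes A true, B false, and A=>B true
```

Of course, this check only detects inconsistency relative to classical logic; someone who rejects Modus Ponens would dispute the `implies` function itself, which is exactly the regress being described.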
ETA: I think we just each acknowledged that the other has a point. On the Internet, no less!
Isn't it awesome when that happens? :D
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -
I am not sure if your scheme ensures that this does not happen.
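The original list of three events is missing here, but as a hypothetical illustration of the failure mode being described: if A and B are mutually exclusive, then assigning 1/2 to A, to B, and to "A or B" violates finite additivity, which requires P(A or B) = P(A) + P(B).

```python
# Hypothetical assignment: A and B mutually exclusive, all three events at 1/2
p_a, p_b, p_a_or_b = 0.5, 0.5, 0.5

# Finite additivity for disjoint events: P(A or B) must equal P(A) + P(B)
additive = abs((p_a + p_b) - p_a_or_b) < 1e-9
print(additive)  # False: 0.5 + 0.5 = 1.0, not 0.5
```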
Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?
One way I can imagine is to weight each hypothesis by 2^(-K), where K is its Kolmogorov complexity (the Solomonoff prior). The good thing about this weighting is that it can be normalized into something satisfying the axioms of probability. But I have only seen Kolmogorov complexity defined for strings and such; I don't know how to define it for complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
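As a toy version of this idea, one can use compressed length as a stand-in for Kolmogorov complexity (an assumption made purely for illustration; K itself is uncomputable, and compressed length is only a crude upper-bound proxy):

```python
import zlib

def complexity(s: str) -> int:
    # Compressed length in bytes: a crude, computable stand-in for
    # Kolmogorov complexity (which is itself uncomputable)
    return len(zlib.compress(s.encode("utf-8")))

def simplicity_prior(hypotheses):
    # Weight each hypothesis by 2^(-K), then normalize so the weights sum to 1
    weights = {h: 2.0 ** -complexity(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# The highly compressible (simpler) string gets much more prior mass
priors = simplicity_prior(["aaaaaaaaaaaaaaaa", "the quick brown fox jumps"])
```

This sidesteps rather than answers the hard question in the paragraph above: it only works once hypotheses have already been encoded as strings.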
I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What is the shortest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?
I think one thing that evolution could have easily done with our existing hardware is to let us follow rational algorithms whenever doing so is tractable. This would have eliminated things such as akrasia, where our rational thinking does produce a solution but our instincts don't let us act on it.
There seem to exist certain measures of quality that are second-level, in the sense that they measure quality indirectly, mostly because the indirect route is easier. One example is sex appeal. The "quality" of a potential mate would ideally be measured by the number of healthy offspring they can produce. However, that is difficult to observe directly, and so evolution has programmed us to use sex appeal instead, that is, how many people find the person in question attractive. The problem with such second-level measures is that they can be feigned: an impotent person can still look attractive.
Similarly, imagine a car manufacturing company that produces two models which differ only in looks and price. Assuming neither model's looks are clearly better than the other's, people who use prestige as their measure of quality will always buy the more expensive car.
I think there's a fundamental flaw in this post.
You're assuming that if we had unlimited willpower, we would actually use all of it. Willpower is the ability to do what you think is the most correct thing to do. If what you think is correct actually is correct, then doing it is, by the definition of correctness, good. So if you do some "high-level reasoning" and conclude that not sleeping for a week is the best thing for you to do, and then use your willpower to do it, it will indeed be the best thing to do, precisely because you've done the analysis correctly and taken all costs into account (including the cost to your health from sleep deprivation).
It's always good to be able to do the thing that's best for you. What's bad is being unable to decide what's best for you. So we shouldn't blame willpower; we should blame the inability to make correct decisions.
We realized that one of the very important things rationalists need is a put-down artist community, as opposed to the pick-up artist community, which already exists but isn't of much use. This is because of the very large number of rationalists who get into relationships but then can't figure out how to get out of them.
I want to understand Bayesian reasoning in detail, in the sense that I want to take a statement relevant to our daily life and work out exactly how much I should believe it, based on the beliefs I already have. Might this be a good exercise for the LW community? If so, let's take a statement, for example: "The whole world is going to be nuked before 2020." Based on whatever you know right now, you should form some degree of belief in this statement. Can someone please show me exactly how to do that?
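The mechanics of a single update step look like this (all the numbers below are made up purely for illustration; they are not actual estimates about nuclear war):

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)(1 - P(H))]
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Start from some prior degree of belief in the hypothesis H
belief = 0.01

# Observe evidence E that is four times likelier under H than under not-H
belief = posterior(belief, p_e_given_h=0.8, p_e_given_not_h=0.2)
# belief rises from 1% to about 3.9%
```

The hard part of the exercise, and the part this sketch doesn't answer, is where the prior and the likelihoods come from in the first place.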