Comment author: Watercressed 22 November 2013 06:59:30AM 37 points [-]

Survey Taken

Comment author: Watercressed 13 November 2013 11:11:55PM *  3 points [-]

I keep seeing probability referred to as an estimate of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn't necessarily follow. Does the above explanation differ from how other people use probability?

One can ground probability in Cox's Theorem, which uniquely derives probability from a few things we would like our reasoning system to do.
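As a rough sketch (following Jaynes's presentation of Cox's result): the desiderata are that plausibilities be real numbers, agree qualitatively with common sense, and be consistent (every valid route to a plausibility gives the same answer). Up to rescaling, these force the ordinary product and sum rules:

```latex
% Cox's desiderata, informally:
%   1. Degrees of plausibility are real numbers.
%   2. Qualitative correspondence with common sense (monotonicity).
%   3. Consistency: all valid ways of computing a plausibility agree.
%
% Any system satisfying them is isomorphic to probability theory:
\begin{align}
  p(AB \mid C) &= p(A \mid BC)\, p(B \mid C) \\
  p(A \mid C) + p(\lnot A \mid C) &= 1
\end{align}
```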

Comment author: Kaj_Sotala 05 November 2013 06:15:39AM *  5 points [-]

On this topic, I once wrote:

I used to be frustrated and annoyed by what I thought was short-sightedness and irrationality on the part of other people. But as I've learned more of the science of rationality, I've become far more understanding.

People having strong opinions on things they know nothing about? It doesn't show that they're stupid. It just shows that on issues of low personal relevance, it's often more useful to have opinions that are chosen to align with the opinions of those you wish to associate yourself with, and that this has been true so often in our evolutionary history that we do it without conscious notice. What right do I have to be annoyed at people who just do what has been the reasonable course of action for who knows how long, and aren't even aware of the fact that they're doing it?

Or being frustrated about people not responding to rational argument? Words are just sounds in the air: arbitrarily-chosen signals that correspond to certain meanings. Some kinds of algorithms (in the brain or in general) will respond to some kinds of input, others to other kinds. Why should anyone expect a specific kind of word input to be capable of persuading everyone? They're just words, not magic spells.

Comment author: Watercressed 05 November 2013 05:40:40PM 3 points [-]

Why should anyone expect a specific kind of word input to be capable of persuading everyone? They're just words, not magic spells.

The specific word sequence is evidence for something or other. It's still unreasonable to expect people to respond to evidence in every domain, but many people do respond to words, and calling them just sounds in air doesn't capture the reasons they do so.

Comment author: Jack 27 October 2013 12:11:56AM 3 points [-]

If not "totally backwards", surely "orthogonal". Why not a test that supplies its own evidence and asks the one being tested to come to a conclusion? Like the Amanda Knox case was for people here who hadn't previously heard of it.

Comment author: Watercressed 27 October 2013 12:27:24AM 1 point [-]

I wouldn't call it orthogonal either. Rationality is about having correct beliefs, and I would label a belief-based litmus test rational to the extent it's correct.

Writing a post about how $political_belief is a litmus test is probably a bad idea because of the reasons you mentioned.

Comment author: Jack 26 October 2013 10:13:14PM *  16 points [-]

The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.

Meanwhile, if a lot of people have a belief that isn't true it is almost necessarily politically salient. The existence of God isn't an issue that is debated in the halls of government: but it is still hugely about group identity which means that people can get mind-killed about it. The only reason it works as any kind of litmus test is that everyone here is/was already a part of the same group when it comes to theism.

I think the true objection to Stuart's post was less about climate change and more about branding Less Wrong with an issue that has ideological salience. And that seems totally fair to me. If you have a one-issue litmus test, it's sort of weird to make it one that isn't specific enough to screen out even the most irrational liberals. At the very least, add a sub-test asking whether a person thinks carbon emissions were responsible for the Hurricane Sandy disaster, how confident they are that climate change causes more hurricanes, and what (if any) existential risk they assign to it. Catch the folks who think the moon is made out of gold in the filter.

Comment author: Watercressed 27 October 2013 12:04:51AM 0 points [-]

I generally agree with this post, but since people's current beliefs are evidence about how they update beliefs in response to evidence, I would call a belief-based litmus test bias-inducing and usually tribal cheering rather than totally backwards.

Comment author: bcoburn 16 September 2013 10:58:14PM 1 point [-]

My first idea is to use something based on cryptography. For example, using the parity of the pre-image of a particular output from a hash function.

That is, the parity of x in this equation:

f(x) = n, where n is your index variable and f is some hash function assumed to be hard to invert.

This does require assuming that the hash function is actually hard to invert, but that seems reasonable and is at least something actual humans can't produce a counterexample for. It's also very fast to go from x to n, so the scheme is easy to verify.
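A minimal sketch of the verification side, assuming SHA-256 stands in for the hard-to-invert hash (the function names `f` and `verify` are illustrative, not from the original comment):

```python
import hashlib

def f(x: int) -> int:
    # Hypothetical index hash: SHA-256 of x's decimal string, read as an integer.
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big")

def verify(x: int, n: int) -> bool:
    # Verification is a single hash evaluation: cheap regardless of how hard
    # inverting f is assumed to be.
    return f(x) == n

x = 123456          # the claimed pre-image
n = f(x)            # the index it maps to
assert verify(x, n)
bit = x % 2         # the parity bit the scheme extracts
```

The asymmetry is the whole point: checking a claimed pre-image takes one hash call, while producing one for a given n is presumed infeasible.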

Comment author: Watercressed 21 September 2013 06:09:08AM -1 points [-]

Hash functions map multiple inputs to the same hash, so the parity of "the" pre-image is ill-defined; you would need to restrict the input set in some other way, and that makes verification harder.
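The ambiguity is easy to exhibit with a deliberately weakened hash (`tiny_hash` below truncates SHA-256 to 8 bits purely for illustration; a real hash has the same property, just with collisions that are infeasible to find):

```python
import hashlib

def tiny_hash(x: int) -> int:
    # Deliberately weak 8-bit "hash": collisions are plentiful, so two
    # pre-images of the same output can have opposite parities.
    return hashlib.sha256(str(x).encode()).digest()[0]

first_preimage = {}
collision = None
for x in range(10_000):
    n = tiny_hash(x)
    if n in first_preimage and first_preimage[n] % 2 != x % 2:
        collision = (n, first_preimage[n], x)  # same index, opposite parities
        break
    first_preimage.setdefault(n, x)
```

With two opposite-parity pre-images of the same n, "the parity of the pre-image of n" has no single answer, which is the objection.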

Comment author: calef 11 September 2013 03:26:21AM 1 point [-]

What's to prevent Omega from performing the simulation of you wherein a sign appears reading "INDEPENDENT SCENARIO BEGIN", and he tells you "This is considered an artificially independent experiment. Your algorithm for solving this problem will not be used in my simulations of your algorithm for my various other problems. In other words, you are allowed to two-box here but one-box in Newcomb's problem, or vice versa."?

Comment author: Watercressed 11 September 2013 03:35:15AM 4 points [-]

The usual formulation of Omega does not lie.

Comment author: Watercressed 10 September 2013 05:13:51AM *  7 points [-]

If Omega maintains a 99.9% accuracy rate against a strategy that changes its decision based on the lottery numbers, that means Omega can predict the lottery numbers. So if the lottery number is composite, Omega has a choice against an agent that one-boxes when the numbers differ and two-boxes when they match: it can pick the same composite number as the lottery, in which case the agent two-boxes and earns 2,001,000, or it can pick a different, prime number, in which case the agent one-boxes and earns 3,001,000. An agent that one-boxes all the time does better by eliminating the cases where Omega selects the same number as the lottery, so I would one-box.

Comment author: Metus 09 September 2013 01:46:35PM 0 points [-]

It seems like a winner-take-all problem in the comments to me. Since LW sorts submissions by date rather than by Reddit's algorithm, it's easy to spot valuable posts by their upvotes. In the comments, however, posts are sorted by votes, and here winner-take-all comes into play: most people only read the first few comments, so latecomers get drowned out.

Comment author: Watercressed 09 September 2013 02:45:37PM *  2 points [-]

Above the top-level comment box, there's an option to sort comments by date. Perhaps that should be the default.

In response to comment by Jiro on Why Eat Less Meat?
Comment author: aelephant 06 September 2013 11:05:56PM 0 points [-]

Right. Like I said, I find it hard to come up with a good argument. I don't like arguments that extend things into the future, because everything has to become probabilistic. Is it possible to prove that any particular child is going to grow into an adult? Nope.

Comment author: Watercressed 07 September 2013 02:17:01AM *  0 points [-]

But if we're 99.9% confident that a child is going to die (say, they have a terminal disease), is being cruel to the child 99.9% less bad?
