All of xv15's Comments + Replies

xv1550

The optimist fell ten stories, and at each window bar he shouted to the folks inside: 'Doing all right so far!'

Anonymous; quoted for instance in The Manager's Dilemma

xv1560

It's rather against the point of the article to start talking about the above examples of privileged questions...

Even so, it's worth noting that immigration policy is a rare, important question with first-order welfare effects. Relaxing border restrictions creates a free lunch in the same way that donating to the Against Malaria Foundation creates a free lunch. It costs on the order of $7 million to save an additional American life, but on the order of $2,500 to save a life if you're willing to consider non-Americans.

By contrast, most of politics consists of ... (read more)

8Qiaochu_Yuan
I think it's plausible that immigration policy is in fact an important question but less plausible that that's why people talk about it. (Similarly, a privileged hypothesis need not be wrong.)
xv1520

"Fairness" depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it's fair that the hare wins. But why does the hare get to be better at racing in the first place?

Debates about what is and isn't fair are best framed as debates over what to condition on, because that's where most of the disagreement lies. (As is the case here, I suppose).

xv1510

This is much better than my moral.

xv1540

I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at firs... (read more)

xv15170

"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I must run into."

"You only need to change your direction," said the cat, and ate it up.

-Kafka, A Little Fable

0Document
I briefly read the moral as something along the lines of "being exposed in the open was the worst thing the mouse could imagine, so it ran blindly away from it without asking what the alternatives were". I'm still not sure I actually get it. Tangentially, keeping mouse traps in a house with a cat seems hazardous (though I could be underestimating cats). And I assume "day" and "chamber" are used abstractly.
wedrifid350

"You only need to change your direction," said the cat, and ate it up.

Moral: Just because the superior agent knows what is best for you and could give you flawless advice, doesn't mean it will not prefer to consume you for your component atoms!

4xv15
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins. The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
xv15200

Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:

Pyne: I guess your long hair makes you a girl.

Zappa: I guess your wooden leg makes you a table.

Of course this would imply that Pyne is not a featherless biped.

Source: Robert Cialdini's Influence: The Psychology of Persuasion

xv15200

I've always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.

8A1987dM
http://abstrusegoose.com/494
xv1530

That is true. But there are also such things as holding another person at gunpoint and ordering them to do something. It doesn't make them the same person as you. Their preferences are different even if they seem to behave in your interest.

And in either case, you are technically not deciding the other person's behavior. You are merely realigning their incentives. They still choose for themselves what is the best response to their situation. There is no muscle now-you can flex to directly make tomorrow-you lift his finger, even if you can concoct some... (read more)

xv15100

We can't jettison hyperbolic discounting if it actually describes the relationship between today-me and tomorrow-me's preferences. If today-me and tomorrow-me do have different preferences, there is nothing in the theory to say which one is "right." They simply disagree. Yet each may be well-modeled as a rational agent.

The default fact of the universe is that you aren't the same agent today as tomorrow. An "agent" is a single entity with one set of preferences who makes unified decisions for himself, but today-you can't make decisio... (read more)
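The preference disagreement between today-me and tomorrow-me can be made concrete with a toy calculation. This is a sketch assuming the standard hyperbolic form V = A / (1 + kD); the dollar amounts, delays, and discount rate are illustrative, not from the comment:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Perceived value of `amount` received `delay` days from now,
    under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

# Viewed from today, $110 on day 11 looks better than $100 on day 10...
assert hyperbolic_value(110, 11) > hyperbolic_value(100, 10)

# ...but when day 10 arrives, the very same formula prefers the
# immediate $100 over waiting one more day for $110.
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)
```

Both evaluations are internally consistent; the two time-slices simply disagree, which is the preference reversal that makes each of them well-modeled as a separate rational agent.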

1A1987dM
There are such things as commitment devices.
xv1520

Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input:

* the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors)
* the false positive and false negative rates of a test

The program would spit out the probability of having the disorder given positive and negative test results.

Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold ... (read more)
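The update such a "Dr. Bayes" program would perform is a single application of Bayes' theorem. A minimal sketch (the function name and the example numbers are illustrative, not from the comment):

```python
def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """P(disorder | positive test), by Bayes' theorem."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Illustrative numbers: a 1% prior, a test with 90% sensitivity and a
# 9% false positive rate. A positive result raises the probability to
# only about 9%, far lower than untrained intuition tends to suggest.
p = posterior_given_positive(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(round(p, 3))  # 0.092
```

The interface design question is then just how to collect those three numbers from the doctor before the test result comes in.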

xv1500

Thanks, PPV is exactly what I'm after.

The alternative to giving a doctor positive & negative predictive values for each maternal age is to give false positive & negative rates for the test plus the prevalence rate for each maternal age. Not much difference in terms of the information load.

One concern I didn't consider before is that many doctors would probably resist reporting PPV's to their patients because they are currently recommending tests that, if they actually admitted the PPV's, would look ridiculous! (e.g. breast cancer screening).

xv1510

"False positive rate" and "False negative rate" have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates?

Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.

7[anonymous]
If I understand the following Wikipedia page correctly (http://en.wikipedia.org/wiki/Positive_predictive_value), the term you are requesting is "positive predictive value", and "negative predictive value" is the term for the probability of not having a disorder given a negative test result. It also points out that these are not solely dependent on the test, and also require a prevalence percentage. But that being said, you could require each test to be reported with multiple different prevalence percentages: for instance, using the above example of Down syndrome, you could report the results using the prevalence of Down syndrome at several different given maternal ages (since the prevalence of Down syndrome is significantly related to maternal age).
xv15200

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test

Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matt... (read more)

0prase
The incidence of the disease may be different for different populations while the test manufacturer may not know where and on which patients the test is going to be used. Also, serious diseases are often tested multiple times by different tests. What would a Bayes-ignorant doctor do with positives from tests A and B which are accompanied with information: "when test A is positive, the patient has 90% chance of having the syndrome" and "when test B is positive, the patient has 75% chance of having the syndrome"? I'd guess most statistically illiterate doctors would go with the estimate of the test done last.
2xv15
Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input: the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors) the false positive and false negative rates of a test. The program would spit out the probability of having the disorder given positive and negative test results. Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure. Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself. (Being sure to get the doctor's prior before any test results are in). Not every doctor would be cooperative...but come to think of it, refusal to give you a number is a good sign that maybe you shouldn't trust that particular doctor anyway.
2buybuydandavis
Because then they would be assuming they had all relevant prior information for that particular patient. They don't. For example, age of mother, age of father, their genes, where they've lived and when, what chemicals they've been exposed to, etc., are all factors the manufacturer has no knowledge of, but the doctor might. Naturally, it would be helpful for the company to make a diagnostic model of all known relevant factors available online, updated as new information comes in, but given the regulatory and legal climate (at least here in the US), something so sensible is likely completely infeasible.
7CCC
This stops working in the case where some of the people upstream can't be trusted. Consider the following statement: "The previous test, if you have a positive result, means that the baby has a 25% chance of having Down syndrome, according to the manufacturer. But my patented test will return a positive result in 99% of cases in which the baby has Down syndrome."
xv15220

Closeness in the experiment was reasonably literal but may also be interpreted in terms of identification with the torturer. If the church is doing the torturing then the especially religious may be more likely to think the tortured are guilty. If the state is doing the torturing then the especially patriotic (close to their country) may be more likely to think that the tortured/killed/jailed/abused are guilty. That part is fairly obvious but note the second less obvious implication–the worse the victim is treated the more the religious/patriotic will bel

... (read more)
0CCC
It seems to me that the same would apply to any in-group. The reasoning runs more-or-less as follows: It is us (not me personally, but a group with which I strongly identify) that is treating this person badly; since we are doing it, then he must deserve it. Since he deserves it, he must be guilty. This is because if he did not deserve it, then I would be horrified at the actions of people I have always tried to emulate; and that, in turn, would mean that I had already given some support to an evil group, and had indeed put some significant effort into being a part of that group, taking up the group norms. If the group is evil, or does evil actions, then I am evil by association. And a good person does not want to reach that conclusion; therefore, the person being punished must be guilty. And thus, good people do evil things by not acknowledging evil being done in their name as what it is.
8Eugine_Nier
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
xv15160

I dislike this quote because it obscures the true nature of the dilemma, namely the tension between individual and collective action. Being "not in one's right mind" is a red herring in this context. Each individual action can be perfectly sensible for the individual, while still leading to a socially terrible outcome.

The real problem is not that some genius invents nuclear weapons and then idiotically decides to incite global nuclear war, "shooting from the hip" to his own detriment. The real problem is that incentives can be alig... (read more)

xv15-10

This post, by its contents and tone, seems to really emphasize the downside of signaling. So let me play the other side.

Enabling signaling can add or subtract a huge amount of value from what would happen without signaling. You can tweak your initial example to get a "rat race" outcome where everyone, including the stupid people, sends a costly signal that ends up being completely uninformative (since everyone sends it). But you can also make it prohibitively mentally painful for stupid people to go to college, versus neutral or even enjoyable... (read more)

1NancyLebovitz
This relates to something I've wondered about-- why did ancient Greece leave a tremendous legacy while the slave-holding southern states and the Confederacy didn't?
xv15150

This sounds awesome. It would be really cool if you could configure it so that identifying biases actually helps you to win by some tangible measure. For example, if figuring out a bias just meant that person stopped playing with bias (instead of drawing a new bias), figuring out biases would be instrumental in winning. The parameters could be tweaked of course (if people typically figure out the biases quickly, you could make it so they redraw biases several times). Or you could link drawing additional biases with the drawing of epidemic cards?

I have ... (read more)

0chaosmosis
A version of this game where you identify the biases of others but don't announce it would result in this type of competition. You can manipulate the other players more easily when you're aware of their biases.
-1Kaj_Sotala
Possible Arkham Horror variant: each increase of the Terror Track or Doom Counters infects a player with a bias. Closing a gate gives each player a single opportunity to guess the bias of a freely chosen other player, with a correct guess removing the bias. Sealing a gate additionally allows for the removal of a single player’s bias, even if nobody guesses it right. Alternatively, just let everyone make a single guess in the Movement Phase, as per the normal Biased Pandemic rules.
4freyley
I think your terrifying vision sounds like a lot of fun.
xv15290

Luke, I thought this was a good post for the following reasons.

(1) Not everything needs to be an argument to persuade. Sometimes it's useful to invest your limited resources in better illuminating your position instead of illuminating how we ought to arrive at your position. Many LWers already respect your opinions, and it's sometimes useful to simply know what they are.

The charitable reading of this post is not that it's an attempted argument via cherry-picked examples that support your feeling of hopefulness. Instead I read it as an attempt to commu... (read more)

7robertzk
I agree so much I'm commenting.
xv1540

wedrifid, RIGHT. Sorry, got a little sloppy.

By "TDT reasoning" -- I know, I know -- I have meant Desrtopa's use of "TDT reasoning," which seems to be TDT + [the assumption that everyone else is using TDT].

I shouldn't say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa's invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.

xv1520

It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains).

"Isomorphic" is a strong word. Let me know if you have a better example.

Anyway let me go back to this from your previous comment:

Tragedies of commons are not universally unresolvable. ... Simply saying

... (read more)
0Desrtopa
Lack of knowledge of global warming isn't the tragedy of the commons I'm talking about; even if everyone were informed about global warming, it doesn't necessarily mean we'd resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant. The question a person starting from a position of ignorance about climate change has to answer is "should I expect that learning about this issue has benefits to me in excess of the effort I'll have to put in to learn about it?" An answer of "no" corresponds to a low general expectation of information value considering the high availability of the information. The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent's choices and another's. I didn't claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people's decisions did not outcompete decision theory which does not assume any correlation, we wouldn't have evolved cooperative tendencies in the first place.
5wedrifid
NO! It implies that you go ahead and use TDT reasoning - which tells you to defect in this case! TDT is not about cooperation!
xv1500

Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion.

I think the default is that people change specific opinions more in response to the tactful debate style you're identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one's wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My questi... (read more)

2prase
Since here on LW changing one's opinion is considered a supreme virtue, I would even suspect that the long-term users are confabulating that they have changed their opinion when actually they didn't. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
xv1500

I'd like to say yes, but I don't really know. Am I way off-base here?

Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it's not worth it. It's too bad there aren't more people weighing in on these comments because I'd like to know how the community thinks my priorities should be set. In any case you've been around for longer so you probably know better than I.

1prase
I think we are speaking about this scenario:

* Alice says: "X is true."
* Bob: "No, X is false, because of Z."
* Alice: "But Z is irrelevant with respect to X', which is what I actually mean."

Now, Bob agrees with X'. What will Bob say?

1. "Fine, we agree after all."
2. "Yes, but remember that X is problematic and not entirely equivalent to X'."
3. "You should openly admit that you were wrong with X."

If I were in place of Alice, (1) would cause me to abandon X and believe X' instead. For some time I would deny that they aren't equivalent, or think that my saying X was only a poor formulation on my part and that I have always believed X'. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob's opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard to find reasons why Z is a bad argument. Eventually I would (hopefully) move to X' too (I don't like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot. (The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)

Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do it easily?
xv1500

human decisionmaking is isomorphic to TDT in some domains

Maybe it would help if you gave me an example of what you have in mind here.

0Desrtopa
Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide "I'm going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process," you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.
xv1520

prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.

BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we're also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we're doing, and correct flaws when they ... (read more)

1prase
The key question is: would you believe it if it were your opponent in a heated debate who told you?
0TheOtherDave
There's a big difference between:

* "it's best if we notice and acknowledge when we're wrong, and therefore I will do my best to notice and acknowledge when I'm wrong"
* "it's best if we notice and acknowledge when we're wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them"

and

* "it's best if we notice and acknowledge when we're wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so."
xv1540

Unfortunately that response did not convince me that I'm misunderstanding your position.

If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don't know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.

No one is disputing that there is correlation between people's decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that T... (read more)

-1Desrtopa
People don't use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don't have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly. Tragedies of commons are not universally unresolvable. It's to everyone's advantage for everyone to pool their resources for some projects for the public good, but it's also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying "It's a tragedy of the commons problem" doesn't mean there's no chance of resolving it and therefore no use in knowing about it.
xv1540

Desrtopa, can we be careful about what it means to be "different" from other agents? Without being careful, we might reach for any old intuitive metric. But it's not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That's the metric that matters here.

Suppose we start out identical but NOT reasoning as per TDT -- we defect in the prisoner's dilemma, say -- but then you read some LW and modify your decision rule so that when deciding what to do, you ima... (read more)

-2Desrtopa
Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT. If we were both defecting in the Prisoner's dilemma, and then I read some of the sequences and thought that we were both similar decisionmakers and stopped defecting, it would be transparently stupid if you hadn't also been exposed to the same information that led me to make the decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own. I think you're assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias in favor of thinking I'm not engaging in sloppy reasoning, but your comment isn't addressing my actual position.) Do I think that if I engage in conservation efforts, this will be associated with a significant increase in likelihood that we won't experience catastrophic climate change? Absolutely not. Those conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say "other" but it's difficult to find anyone who identifies as an environmentalist who shares my outlook,) and I am completely aware of this. However, the utility cost of informing oneself about a potential tragedy of commons where the information is readily available and heavily promoted, at least to a basic level, is extremely low, and humans have a record of resolving some types of tragedies of commons (although certainly not all,) and the more people who're aware of and care about the issue, the greater the chance of the population resolving it (they practically never will of their own volition, but they will be more likely to support leaders who take it seriously and not defect from policies that address it and
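The calculation Desrtopa describes, weighing cooperation against defection given some probability that the other player's choice matches one's own, can be sketched in a few lines. The payoff numbers and function names below are illustrative, not from the thread:

```python
# Standard prisoner's dilemma payoffs to "me": (my_move, their_move) -> payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_value(my_move, p_match):
    """Expected payoff if the other player makes the same move as me
    with probability p_match, and the opposite move otherwise."""
    opposite = "D" if my_move == "C" else "C"
    return (p_match * PAYOFF[(my_move, my_move)]
            + (1 - p_match) * PAYOFF[(my_move, opposite)])

# With high decision correlation, cooperating has the higher expectation...
assert expected_value("C", 0.9) > expected_value("D", 0.9)
# ...with low correlation, defecting does.
assert expected_value("D", 0.1) > expected_value("C", 0.1)
```

The disagreement in this thread is then precisely over what value of `p_match` is realistic for actual human populations, not over the arithmetic.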
xv15-10

To me, this comment basically concedes that you're wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you've been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.

Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:

In general I think that a personal policy of not informing o

... (read more)
5prase
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
-2Desrtopa
Yes, you are misunderstanding my position. I don't think that it's optimal for most individuals to inform themselves about global warming to a "socially optimal" level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain and you can only expect so much association between your decisions and others; if you go that far, you're putting in too much buck for not enough bang, unless you're getting utility from the information in other ways. But at the point where you don't have even basic knowledge of global warming, anticipating a negative marginal utility on informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems. If there were no correlation between one person's decisions and another's, it would probably not be worth anyone's time to learn about any sort of societal problems at all, but then, we wouldn't have gotten to the point of being able to have societal problems in the first place.
xv1520

I agree. Desrtopa is taking Eliezer's barbarians post too far for a number of reasons.

1) Eliezer's decision theory is at the least controversial which means many people here may not agree with it.

2) Even if they agree with it, it doesn't mean they have attained rationality in Eliezer's sense.

3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.

Desrtopa: Just because it upholds an ideal of rationality that... (read more)

1wedrifid
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying "Rah Cooperation!".
xv15140

Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn't what makes decisions. Individuals make decisions, and it's not in the average individual's interest to expend valuable resources learning more about global warming if it's going to have no real effect on the quality of their own life.

Whether you think it's an individual's "job" or not to do what's socially optimal is completely beside the point here. The fact is they ... (read more)

-4Desrtopa
In a tragedy of the commons, it's in everybody's best interests for everybody to conserve resources. If you're running TDT in a population with similar agents, you want to conserve, and if you're in a population of insufficiently similar agents, you want an enforced policy of conservation. The rationalist in a war with the barbarians might not want to fight, but because they don't want to lose even more, they will fight if they think that enough other people are running a similar decision algorithm, and they will support a social policy that forces them and everyone else to fight. If they think that their side can beat the barbarians with a minimal commitment of their forces, they won't choose either of these things.
xv15

Wait a sec. Global warming can be important for everyday life without it being important that any given individual know about it for everyday life. In the same way that matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can't have any real effect on politics. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery sto... (read more)

Desrtopa
You don't have much influence on an election if you vote, but the system stops working if everyone acts only according to the expected value of their individual contribution. This is isomorphic to the tragedy of the commons, like the 'rationalists' who lose the war against the barbarians because none of them wants to fight.
xv15

I took the survey too. I would strongly recommend changing the Singularity question to read:

"If you don't think a Singularity will ever happen, write N for Never"

Or something like that. The fraction of people who think Never with high probability is really interesting! You don't want to lump them in with the people who don't have an opinion.

homunq
I would probably be an N, but I'd need a better definition of "singularity". In fact, I think the question would be generally more interesting if it were split into three: superhuman AI, AI which self-improves at Moore's-law speed or faster, and AI domination of the physical world at a level that would make the difference between chimpanzee technology and human technology small. All three of these could be expressed as the probability of it happening before 2100, because such a probability should still have enough information to let you mostly distinguish between a "not for a long time" and a "never".
xv15

If the goal is intellectual progress, those who disagree should aim not for name-calling but for honest counterargument.

and

DH7: Improve the Argument, then Refute Its Central Point...if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse.

I would add that the goal of intellectual progress sometimes extends beyond you-the-rationalist, to the (potentially less than rati... (read more)

a_gramsci
I find this is the most constructive way to resolve a debate between two people (see: http://lesswrong.com/lw/881/the_pleasures_of_rationality/). But in long-running debates, or ones with heated debaters, this is much harder. Firstly, because many debates are long-running precisely because this strategy cannot be applied to them. The issue with heated debaters is that this requires an open mindset of looking for truth versus looking to prove yourself right, which I find lacking in many debates.
xv15

As we evaluate predictions for accuracy, one thing we should all be hyper-aware of is that predictions can affect behavior. It is always at least a little bit suspect to evaluate a prediction simply for accuracy, when its goal might very well have been more than accuracy.

If I bet you $100 on even odds that I'm going to lose 10 lbs this month, it doesn't necessarily indicate that I think the probability of it happening is > 50%. Perhaps this bet increases the probability of it happening from 10% to 40%, and maybe that 30% increase in probability is w... (read more)
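The arithmetic behind this kind of commitment bet can be sketched as follows. This is hypothetical Python: the $100 even-odds stake and the 10%→40% probabilities come from the comment above, while the $500 subjective value placed on success is an invented assumption:

```python
def expected_outcome(p_success, value_of_success, stake=0.0):
    """Expected value counting both the even-odds bet's cash flow
    and the (assumed) subjective dollar value of losing the weight."""
    bet_cash = p_success * stake - (1 - p_success) * stake
    return bet_cash + p_success * value_of_success

no_bet = expected_outcome(0.10, value_of_success=500.0)
with_bet = expected_outcome(0.40, value_of_success=500.0, stake=100.0)

# The bet itself loses money in expectation (0.4*100 - 0.6*100 = -20),
# but the extra 30% chance of success more than pays for it.
print(no_bet)    # 50.0
print(with_bet)  # 180.0
```

So accepting an even-odds bet reveals nothing like "P > 50%" once the bet itself moves the probability.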

xv15

I really enjoyed this article. I took a few sittings to read it, but I liked the continuous format.

Let me just make a general comment on the tone here:

But really, shouldn't it have been obvious all along that humans are irrational? Perhaps it is, to everyone but neoclassical economists and Aristoteleans. (Okay, enough teasing...)

Teasing per se is fine, but this happens to reinforce a popular sentiment which I find misleading. Everyone likes to point out the differences between standard economic assumptions and actual human behavior. Pitted against ot... (read more)

xv15

There are commenters who note that the use of "ey" and other gender neutral pronouns hurts their head. You may understand this and still use "ey" as part of a larger attempt to accustom people to language that is ultimately more convenient, even if it's worse in the short run. Which is a perfect example of what I was going to say:

When you do your harm minimization calculation, you really need to include the entire path over time, and not just the snapshot. It is often true that hurting people today makes them stronger in the future, ... (read more)

xv15

Big agents can be more coherent than small agents, because they have more resources to spend on coherence.

Yes. Coherence, and persuasiveness.

The individual that argues against whatever political lobby is quick to point out that the lobby gets its way not because it is right, but rather because it has reason to scream louder than the dispersed masses who would oppose it. But indeed, the very arguments the lobby crafts are likely to be more compelling to the masses, because it has the resources to make them so.

The lobby screams louder and better than smaller agents, as far as convincing people goes.

xv15

Okay. I still suspect I disagree with whatever you mean by mere "figures of speech," but this rational truthseeker does not have infinite time or energy.

In any case, thank you for a productive and civil exchange.

xv15

Fair. Let me be precise too. I read your original statement as saying that numbers will never add meaning beyond what a vague figure of speech would, i.e. if you say "I strongly believe this" you cannot make your position more clear by attaching a number. That I disagree with. To me it seems clear that:

i) "Common-sense conclusions and beliefs" are held with varying levels of precision.

ii) Often even these beliefs are held with a level of precision that can be best described with a number. (Best=most succinctly, least misinterpret... (read more)

Vladimir_M
xv15: You have a very good point here. For example, a dialog like this could result in a real exchange of useful information:

A: "I think this project will probably fail."
B: "So, you mean you're, like, 90% sure it will fail?"
A: "Um... not really, more like 80%."

I can imagine a genuine meeting of minds here, where B now has a very good idea of how confident A feels about his prediction. The numbers are still used as mere figures of speech, but "vague" is not a correct way to describe them, since the information has been transmitted in a more precise way than if A had just used verbal qualifiers. So, I agree that "vague" should probably be removed from my original claim.
xv15

Again, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision.

In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning.

Now, maybe I will hold th... (read more)

Vladimir_M
xv15: To be precise, I wrote "meaningless, except perhaps as a vague figure of speech." I agree that the claim would be too strong without that qualification, but I do believe that "vague figure of speech" is a fair summary of the meaningfulness that is to be found there. (Note also that the claim specifically applies to "common-sense conclusions and beliefs," not things where there is a valid basis for employing mathematical models that yield numerical probabilities.)

You seem to be saying that since you perceive this number as meaningful, you will be willing to act on it, and this by itself renders it meaningful, since it serves as a guide for your actions. If we define "meaningful" to cover this case, then I agree with you, and this qualification should be added to my above statement. But the sense in which I used the term originally doesn't cover this case.
xv15

I tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what, that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error.

Meaningless is a very strong word.

In the face of such uncertainty, it could feel natural to take shelter in the idea of "inherent vagueness"...but this is reality, and we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.
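To illustrate how a stated probability pins down betting behavior, here is a minimal sketch; the 54% figure is from the comment above, and the stakes are made up:

```python
def take_bet(p, stake, payout):
    """Accept a bet on X iff its expected value at belief p is positive:
    win `payout` with probability p, lose `stake` otherwise."""
    return p * payout > (1 - p) * stake

p = 0.54
# At even odds, a 54% believer takes the bet...
print(take_bet(p, stake=100, payout=100))  # True
# ...but declines once the odds against exceed 54:46
# (break-even stake here is 0.54/0.46 * 100, about 117).
print(take_bet(p, stake=120, payout=100))  # False
```

However the 54% was generated, the set of bets accepted and refused is exactly what the number summarizes.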

Vladimir_M
So why stop there? If you can justify 54%, then why not go further and calculate a dozen or two more significant digits, and stand behind them all with unshaken resolve?
xv15

For people who are embedded in a social structure, it can be costly to step outside of it. Many people will justifiably choose monogamy simply because, given the equilibrium we're in, it is the best move for them...even IF they would prefer a world of polyamory or some other alternative.

To go off topic for a moment, the same could also be said of religious belief. I know the people here feel a special allegiance to the truth, and that's wonderful, but if we lived in 12th-century Europe it might not be worth rejecting religion even if we saw through it.... (read more)

Carinthium
Wouldn't there be some advantages in 12th-century Europe to being a secret atheist (especially a rationalist, if that were somehow possible), and simply not speaking about it to anyone? It would eliminate the chance of going on crusades or the psychological fear of excommunication (even if excommunication would be a horrible situation anyway) if a noble, and a lot of superstitions if a commoner.
xv15

You shouldn't take this post as a dismissal of intuition, just a reminder that intuition is not magically reliable. Generally, intuition is a way of saying, "I sense similarities between this problem and other ones I have worked on. Before I work on this problem, I have some expectation about the answer." And often your expectation will be right, so it's not something to throw away. You just need to have the right degree of confidence in it.

Often one has worked through the argument before and remembers the conclusion but not the actual steps ... (read more)