The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Open Thread: May 2010, Part 2
358 comments

I have an idea I'd like to discuss that might perhaps be good enough for my first top-level post once it's developed a bit further, but I'd first like to ask if someone maybe knows of any previous posts in which something similar was discussed. So I'll post a rough outline here as a request for comments.

It's about a potential source of severe and hard-to-detect biases about all sorts of topics where the following conditions apply:

  1. It's a matter of practical interest to most people, where it's basically impossible not to have an opinion. So people have strong opinions, and you can't avoid forming one too.

  2. The available hard scientific evidence doesn't say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.

  3. Factual and normative issues are heavily entangled in this topic. By this I mean that people care deeply about the normative issues involved, and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of c

... (read more)
JanetK

It seems a common bias to me and worth exploring.

Have you thought about a tip-of-the-hat to the opposite effect? Some people view the past as some sort of golden age where things were pure and good etc. It makes for a similar but not exactly mirror image source of bias. I think a belief that generally things are progressing for the better is a little more common than the belief that generally the world is going to hell in a handbasket, but not that much more common.

4NancyLebovitz
This reminds me of a related bias-- people generally don't have any idea how much of the stuff in their heads was made up on very little evidence, and I will bring up a (hopefully) just moderately warm button issue to discuss it. What is science fiction? If you're reading this, you probably believe you can recognize science fiction, give a definition, and adjudicate edge cases. I've read a moderate number of discussions on the subject, and eventually came to the conclusion that people develop very strong intuitions very quickly about human cultural inventions which are actually very blurry around the edges and may be incoherent in the middle. (Why is psi science fiction while magic is fantasy?) And people generally don't notice that their concepts aren't universally held unless they argue about them with other people, and even then, the typical reaction is to believe that one is right and the other people are wrong. As for the future and the past, it's easy enough to find historians to tell you, in detail, that your generalizations about the past leave a tremendous amount out. It should be easier to see that futures are estimates at best, but it can be hard to notice even that.
1Caspian
As to whether I could give a definition of science fiction, Similarity Clusters and similar posts have convinced me that the kind of definition I'd normally make would not capture what I meant by the term.
1cupholder
I've noticed a similar thing happen with people trying to define 'literary fiction.' Makes me wonder what other domains might have this bias.
1NancyLebovitz
My assumption is that it's all of them. Reading efforts to define science fiction is why I've never looked at efforts at defining who's a Jew. I have at least a sketchy knowledge of the legal definitions for Reform and Orthodox, but that doesn't cover the emotional territory. What's a poem? What's a real American? If you can find an area of human creation where there aren't impassioned arguments about what a real whatever is, please let me know.
1Clippy
What's a paperclip? It's an inwardly-thrice-bent metal wire that can non-destructively fasten paper together at an edge.
0Morendil
So those don't count?
0Clippy
Correct.
1Kevin
Do you value those hunks of plastic more than other hunks of plastic? Do you value inwardly-thrice-bent plastic wire that can non-destructively fasten paper together at an edge more than other hunks of plastic?
0Clippy
No. No.
0Blueberry
Why?
4Clippy
Because they're not inwardly-thrice-bent metal wires that can non-destructively fasten paper together at an edge? Is this classification algorithm really that difficult to learn?
1Blueberry
I meant why do you not value plastic clips... oh, I get it, you value what you value, just like we do. But do you have any sort of rationalization or argument whereby it makes intuitive sense to you to value metal clips and not plastic ones?
2Clippy
Think for a minute about what it would be like for the WHOLE UNIVERSE to be plastic paperclips, okay? Wouldn't you just be trying to send them into a star or something? What good are plastic paperclips? Plastic. *Shudders*
1Blueberry
Clippy, that's how we humans feel about a whole universe of metal paperclips. Imagine if there was a plastic-Clippy who wanted to destroy all metals and turn the universe into plastic paperclips. Wouldn't you be scared? That's how we feel about you.
0Clippy
That still seems just a bit paranoid. Why would I wipe you out when you could be put to use making paperclips?
1Blueberry
Imagine being put to use making plastic paperclips.
0Clippy
I don't think those scenarios have the same badness for the referent. I know for a fact that some humans voluntarily make metal paperclips, or contribute to the causal chain necessary for producing them (designers, managers, metal miners, etc.), or desire that someone else provide for them paperclips. Do you have reason to believe these various, varied humans are atypical in some way?
1Blueberry
We make paperclips instrumentally, because they are useful to us, but we would stop making them or destroy them if doing so would help us. Imagine an entity that found metal clips useful in the process of building machines that make plastic clips, but who ultimately only valued plastic clips and would destroy the metal if doing so helped it.
0AdeleneDawner
I suspect that you make other things besides paperclips - parts for other Clippy instances, for example. Does that imply that you'd consider it acceptable to be forced by a stronger AI into producing only Clippy-parts that would never be assembled into paperclip-producing Clippy-instances? The paperclips that we produce are produced because we find paperclips instrumentally useful, as you find Clippy-parts instrumentally useful.
0JoshuaZ
What is the distinction here between plastic and metal? They both do a very good job at keeping paper together. And plastic paperclips do so less destructively since they make less of an indentation in the paper.
1Clippy
Let me put it to you this way: would you rather have a block of metal, or a block of plastic? Just a simple question. Or let's say you were in some enemy base. Would you rather have those wimpy plastic paperclips, or an unbendable, solid, metal paperclip, which can pick locks, complete circuits, clean out grime ... To ask the question is to answer it -- seriously.
1Kevin
In the enemy base scenario, I would rather have a paperclip made out of military grade composite, which can have an arbitrary % of metal by mass, from 0% metal to >50% metal. Do you not value paperclips made out of supermaterials more than metal paperclips?
1Clippy
Non-metal paperclips aren't. If you want to talk about making paperclip makers out of non-metals, you have a point. If you want to claim that reasonable Clippys can disagree (before knowledge/value reconciliation) about how much metal content a paperclip can have before it's bad, you have a point. But in any case, composites must be constructed in their finished form. A fully-formed, fully-committed "block of composite", where no demand for such a block exists, and certainly not at any good price, should be just as useless to you.
0Kevin
Are not some paperclips better than others? I (and you) would both get a lot more utility out of a paperclip made out of computronium than a paperclip made out of aluminum.
1Alicorn
I find that paperclips often leave imprints of themselves in paper, if left clipped there for a long time. Does this not count as destruction?
1Clippy
Nope, it doesn't count as destruction. Not when compared to pinning, stapling, riveting, nailing, bolting, or welding, anyway.
0cupholder
Good point. I guess physicists don't spend much time arguing what a 'real electron' is, but once you start talking about abstract ideas...
0NancyLebovitz
Considerable efforts have been made here to have a stable meaning for rationality. I think it's worked.
0cupholder
It's a stable meaning...so maybe that just forestalls the argument until Less Wrongian rationalists meet other rationalists!
2Vladimir_M
Yes, that's a good point. However, one difference between my idea and the nostalgia biases is that I don't expect that the latter, even if placed under utmost scrutiny, would turn out to be responsible for as many severe and entirely non-obvious false beliefs in practice. My impression is that in our culture, people are much better at detecting biased nostalgia than biased reverence for what are held to be instances of moral and intellectual progress.
6Tyrrell_McAllister
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia. What determines which bias is more effective? I would guess that the main determinant is whether you identify with the community that brought about the "progress". If you do identify with them, then it must be good, because you and your kind did it. If, instead, you identify with the community that had progress imposed on them, you probably think of it as a foreign influence, and a deviation from the historical norm. This deviation, being unnatural, will either burn itself out or bring the entire community down in ruin.
3Vladimir_M
That's a valid point when it comes to issues that are a matter of ongoing controversies, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in the more remote past, so that there isn't anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don't think they're numerous enough to be very relevant.) Moreover, there is also the question of which bias affects what kinds of people more. I am more interested in biases that affect people who are on the whole smarter and more knowledgeable and rational. It seems to me that among such people, the nostalgic biases are less widespread, for a number of reasons. For example, scientists will be more likely than the general population to appreciate the extent of scientific progress and the crudity of the past superstitions it has displaced in many areas of human knowledge, so I would expect that when it comes to issues outside their area of expertise, they would be -- on average -- biased in favor of contemporary consensus views when someone argues that they've become more remote from reality relative to some point in the past.
5Tyrrell_McAllister
Hmm. Maybe it would help to give more concrete examples, because I might have misunderstood the kinds of beliefs that you're talking about. Things like gender relations, race relations, and environmental policy were significantly different within living memory. Now, things like institutionalized slavery or a powerful monarchy are pretty much alien to modern developed countries. But these policies are advocated only by intellectuals—that is, by those who are widely read enough to have developed a nostalgia for a past that they never lived.

Actually, now you've nudged my mind in the right direction! Let's consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.

The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That's certainly what I had thought until I read a recent paper titled "Ordeals" by one Peter T. Leeson, who argues that these ordeals were in fact, in the given circumstances, a highly accurate way of separating the guilty from the innocent given the prevailing beliefs and customs of the time. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don't feel like going through the math of the original paper.]

I can't say with absolute confidence if Leeson's arguments are correct or not, but they sound highly plausible to me, and certainly can't be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the poss... (read more)

8cupholder
I skimmed Leeson's paper, and it looks like it has no quantitative evidence for the true accuracy of trial by ordeal. It has quantitative evidence for one of the other predictions he makes with his theory (the prediction that most people who go through ordeals are exonerated by them, which prediction is supported by the corresponding numbers, though not resoundingly), but Leeson doesn't know what the actual hit rate of trial by ordeal is. This doesn't mean Leeson's a bad guy or anything - I bet no one can get a good estimate of trial by ordeal's accuracy, since we're here too late to get the necessary data. But it does mean he's exaggerating (probably unconsciously) the implications of his paper - ultimately, his model will always fit the data as long as sufficiently many people believed trial by ordeal was accurate, independent of true accuracy. So the fact that his model pretty much fits the data is not strong evidence of true accuracy. Given that Leeson's model fits the data he does have, and the fact that fact-finding methods were relatively poor in medieval times, I think your 'interesting proposition' #1 is quite likely, but we don't gain much new information about #2. (Edit - it might also be possible to incorporate ordeal-like tests into modern police work! 'Machine is never wrong, son.')
6Tyrrell_McAllister
That's interesting. I think you're right that no one reacts too negatively to this news because they don't see any real danger that it would be implemented. But suppose there were a real movement to bring back trial by ordeal. According to the paper's abstract, trial by ordeal was so effective because the defendants held certain superstitious beliefs. Therefore, if we wanted it to work again, we would need to change people's worldview so that they again held such beliefs. But there's reason to expect that these beliefs would cause a great deal of harm — enough to outweigh the benefit from more accurate trials. For example, maybe airlines wouldn't perform such careful maintenance on an airplane if a bunch of nuns were riding it, since God wouldn't allow a plane full of nuns to go down. Well, look at me — I launched right into rationalizing a counter-argument. As with so many of the biases that Robin Hanson talks about, one has to ask, does my dismissal of the suggestion show that we're right to reject it, or am I just providing another example of the bias in action?
0CronoDAS
It's the old noble lie in a different package.
2Emile
I don't think that nostalgia bias would be harder to detect in general - it's easy to detect in our culture because it isn't a general part of our culture (that seems to be pretty much what you're saying). However, the opposite may have held for, say, imperial China, or medieval Europe.
8Mass_Driver
Yeah, looks good! I would like to see a top-level article on this, and I think fruit X would be a good example to start with. If the issue is how to fight back against these problems, I bet you could make a lot of headway by first establishing a bit of credibility as an X-eater, and then making your claims while being clear that you are not nostalgic. E.g. eat an X fruit on TV while you are on a talk show explaining that X fruit isn't healthy in the long run. "I'm not [munch] a religious bigot, [crunch], I just think there might [slurp] be some poisonous chemicals [crunch] in this fruit and that we should run a few studies to [nibble] find out." Humor helps, as does theater.
7kodos96
My immediate reaction to reading this was that it was obvious that the particular hot-button issue that inspired it was the recent PUA debate... but I notice nobody else seems to have picked up on that, so now I'm wondering... was that what you had in mind, or am I just being self-obsessed? (don't worry, I'm not itching to restart that issue, I'm just curious about whether or not I'm imagining things) ETA: Ok, after reading the rest of the comments more thoroughly, I guess I'm not the only person who figured that was your inspiration. Personally, I would suggest you use concrete examples, rather than abstract or hypothetical 'poison-fruit' kind of stories - those things never seem to be effective intuition pumps (for me at least). If you want to avoid the mind-killing effect of a hot-button issue, I think a better idea is just to use multiple concrete examples, and to choose them such that any given person is unlikely to have the same opinion on both of them.
4Roko
Recent controversy on LW about gender, dating etc. seems to fall into exactly this pattern. In particular, there is heavy conflation of the facts of the matter about what kind of behavior women are attracted to with normative propositions about which gender is "better" and which is more blameworthy. Gender equality discussions (Larry Summers!) seem to fall into the same trap.
7Vladimir_M
Yes, it was in fact thinking about that topic that made me try to write these thoughts down systematically. What I would like to do is to present them in a way that would elicit well-argued responses that don't get sidetracked into mind-killer reactions (and the latter would inevitably happen in places where people put less emphasis on rationality than here, so this site seems like a suitable venue). Ultimately, I want to see if I'm making sense, or if I'm just seeking sophisticated rationalizations for some false unconventional opinions I managed to propagandize myself into.
8HughRistik
Another type of example you could use in this topic is a real one, that occurred in the past.
3RobinZ
This would be better than a fictional example, actually, as it brings in evidence from reality much earlier.
6Roko
Indeed, that is a good strategy. However, sometimes if you make it too abstract, people don't actually get what you're talking about. It's a fine line!
0whpearson
Are you referring to my article? I didn't mean to give the impression that either strategy was better.
4Mitchell_Porter
This bias needs a name, like "moral progress bias". I ask myself what your case studies might be. The Mencius Moldbug grand unified theory comes to mind: belief in "human neurological uniformity", statist economics, democracy as a force for good, winning wars by winning hearts and minds, etc, is all supposed to be one great error, descending from a prior belief that is simultaneously moral, political, and anthropological, and held in place by the sort of bias you describe. You might also want to explore a related notion of "intellectual progress bias", whereby a body of pseudo-knowledge is insulated from critical examination, not by moral sentiments, but simply by the belief that it is knowledge and that the history of its growth is one of discovery rather than of illusions piled ever higher.
4Vladimir_M
Mitchell_Porter: Well, any concrete case studies are by the very nature of the topic potentially inflammatory, so I'd first like to see if the topic can be discussed in the abstract before throwing myself into an all-out dissection of some belief that it's disreputable to question. One good case study could perhaps be the belief in democracy, where the moral belief in its righteousness is entangled with the factual belief that it results in freedom and prosperity -- and bringing up counterexamples is commonly met with frantic No True Scotsman replies and hostile questioning of one's motives and moral character. It would mean opening an enormous can of worms, of course. Yes, this is a very useful notion. I think it would be interesting to combine it with some of my earlier speculations about what conditions are apt to cause an area of knowledge to enter such a vicious circle where delusions and bullshit are piled ever higher under a deluded pretense of progress.
4Airedale
As written up here, it's a bit abstract for my personal tastes. I can't tell from this description whether in the potential post you're planning on using specific examples to make your points, probably because you're writing carefully due to the sensitive nature of the subject matter. I suspect the post will be received more favorably if you give specific examples of some of these cherished normative beliefs, explain why they result in these biases that you're describing, etc. On the other hand, given the potentially polarizing nature of the beliefs, there's no guarantee that you won't excite some controversy and downvotes if you do take that path. But given the subject matter of some of your other recent comments, I (and others) can probably guess at least some of what you have in mind and will be thinking about it as we read your submission anyway. And in that case, it's probably better to be explicit than to have people making their own guesses about what you're thinking.

I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:

  • A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run with all sorts of bad health consequences. This effect is however difficult to disentangle statistically (sort of like smoking).

  • Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.

  • At the same time, there has been a traditional belief that if you eat X, you'll not only incur sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn't tell if it's true or just a superstition that came as a side-effect of the religious taboo. You'd see that poor folks who eat it do get sick more often, but

... (read more)
9Nisan
I can think of several hot-button issues that are analogous to this parable — or would be, if the parable were modified as follows:

  • As science progresses, religious figures lose some power and prestige, but manage to hold on to quite a bit of it. Old superstitions and taboos perish at different rates in different communities, and defying them is considered more cool and progressive in some subcultures and cities. Someone will eat fruit X on television and the live audience will applaud, but a grouchy old X-phobe watching the show will grumble about it.

  • A conference with the stated goal of exploring possible health detriments of X will attract people interested in thinking rationally about public health, as well as genuine X-phobes. The two kinds of people don't look any different.

  • The X-phobes pick up science and rationality buzzwords and then start jabbering about the preliminary cherrypicked scientific results impugning X, with their own superstition and illogical arguments mixed in. Twentysomething crypto-X-phobes seeking to revitalize their religion now claim that their religion is really all about protecting people from the harms of X, and feed college students subtle misinterpretations of the scientific evidence. In response to all this, Snopes.com gets to work discrediting any claim of the form "X is bad". The few rational scientists studying the harmfulness of X are shunned by their peers.

What's a rationalist to do? Personally, whenever I hear someone say "I think we should seriously consider the possibility that such-and-such may be true, despite it being politically incorrect", I consider it more likely than not that they are privileging the hypothesis. People have to work hard to convince me of their rationality.
4Vladimir_M
Yes, that would certainly make the parable much closer to some issues that other people have already pointed out! However, you say that people who raise politically incorrect possibilities are more likely than not privileging the hypothesis. Well, if the intellectual standards in the academic mainstream of the relevant fields are particularly low, and the predominant ideological biases push very strongly in the direction of the established conclusion that the contrarians are attacking, the situation is, at the very least, much less clear. But yes, organized groups of contrarians are often motivated by their own internal biases, which they constantly reinforce within their peculiar venues of echo-chamber discourse. Often they even develop some internal form of strangely inverted political correctness. Moreover, my parable assumes that there are still non-trivial lingering groups of X-phobe fundamentalists when the first contrarian scientists appear. But what if the situation ends up with complete extirpation of all sorts of anti-X-ism, and virtually nobody is left who supports it any more, long before statisticians in this hypothetical world figure out the procedures necessary to examine the issue correctly? Imagine anti-X-ism as a mere remote historical memory, with no more supporters than, say, monarchism in the U.S. today. The question is -- are there any such issues today, where past beliefs have been replaced by inaccurate ones that it doesn't even occur to anyone any more to question, not because it would be politically incorrect, but simply because alternatives are no longer even conceivable?
5JanetK
Maybe you could use the parable, but put in parenthetical real-world hints like you have with "(sort of like smoking)", giving very different ones for each point. That will keep the parable from seeming outlandish while not really starting a discussion of the bracketed illustrations. Smoking was a good illustration because it isn't that hot a button any more, but we can remember when it was.
4Vladimir_M
Actually, maybe I could try a similar parable about a world in which there's a severe, brutally enforced religious taboo against smoking and a widespread belief that it's unhealthy, and then when the enlightened opinion turns against the religious beliefs and norms of old, smoking becomes a symbol of progress and freethinking -- and those who try to present evidence that it is bad for you after all are derided as wanting to bring back the inquisition. Though this perhaps wouldn't be effective since the modern respectable opinion is compatible with criminalization of recreational drugs, so the image of freethinkers decrying what is basically a case of drug prohibition as characteristic of superstitious dark ages doesn't really click. I'll have to think about this more.
1SilasBarta
Actually, you might be surprised to learn that Randian Objectivists held a similar view (or at least Rand herself did), that smoking is a symbol of man's[1] harnessing of fire by the power of reason. Here's a video that caricatures the view (when they get to talking about smoking). I don't think they actually denied its harmful health effects though. ETA: [1] Rand's gendered language, not mine.
1Vladimir_M
Yes, I'm familiar with this. Though in fairness, I've read conflicting reports about it, with some old-guard Randians claiming that they all stopped smoking once, according to them, scientific evidence for its damaging effects became convincing. I don't know how much (if any) currency denialism on this issue had among them back in the day. Rothbard's "Mozart was a Red" is a brilliant piece of satire, though! I'm not even that familiar with the details of Rand's life and personality, but just from the behavior and attitudes I've seen from her contemporary followers, every line of it rings with hilarious parody.
4CronoDAS
Reminds me a little of homosexuality, but only a little.
3Tyrrell_McAllister
Personally, I like this approach. Leave out the contemporary hot buttons, at least at first. First keep it abstract, with fanciful examples, so that people don't read it with their "am I forced to believe?" glasses on. Then, once people have internalized your points, we can start to talk about whether this or that sacrosanct belief is really due to this bias.
2gwern
Yes; as soon as you got to the correlates-with-poverty part, I thought to myself, 'what is he doing with this racism metaphor?'
2whpearson
I would think you could do with some explanation of why people aren't genetically programmed to avoid eating X, assuming that it has been around for an evolutionarily significant period. Some explanations could be that it interacts with something in the new diet, or that humans have lost a gene required to process it. Some taboos have survived well into modern times due to innate, noncultural instincts. Take for example avoiding incest and the taboo around that. That is still alive and well. We could probably screen for genetic faults, or have sperm/egg donations for sibling couples nowadays, but we don't see many people saying we should relax that taboo. Edit: The instinct is called the Westermarck effect and has been shown to be resistant to cultural pressure. The question is why cultural pressure works to break down other taboos, especially with regards to mating/relationships, which we should be good at by now. We have been doing them long enough.
1NancyLebovitz
There might be emotional as well as genetic reasons for avoiding incest. We don't really know much about the subject. If anyone's having an emotionally healthy (or at least no worse than average) incestuous relationship, they aren't going to be talking about it.
1Daniel_Burfoot
The upvotes and interested responses indicate that there's more than enough enthusiasm for a top-level post. Stop cluttering up the open thread! :-)
0saturn
It seems like this general topic has already been discussed pretty extensively by e.g. Mencius Moldbug and Steve Sailer.
3Jack
So if we think about the epistemological issue space in terms of a Venn diagram, we can imagine the following circles, all of which intersect:

  1. Ubiquitous (outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult; nearly everyone is invested in the outcome.

  2. Contested (outside: uncontested). Either there is no consensus among authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. Obviously, not being able to appeal to authorities makes rational belief more difficult.

  3. Invested (outside: non-invested). People have incentives for believing some things rather than others for reasons other than evidence. When people are invested in beliefs, motivated skepticism is a common result.

  3a. Entangled (outside: untangled). In some cases people can be easily separated from the incentives that lead them to be invested in some belief (for example, when they have financial incentives). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure that lets us remove the incentives.

  3ai. Progressive (outside: traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.

So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two-circle diagram (ubiquitous, contested). Now it seems clear that membership in every one of these sets makes an issue harder to think rationally about, with one exception. How do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one over the other? Because we might as well address both at the same time if there is no difference.
6Vladimir_M
That's an excellent way of putting it, which brings a lot of clarity to my clumsy exposition! To answer your question, yes, the same essential mechanism I discussed is at work in both progressive and traditional biases -- the desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether these normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit some separate consideration. The problem is that traditionalist vs. progressive biases don't appear randomly. They are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills -- who, compared to other people, have much less difficulty with overcoming their biases once they're pointed out and critically dissecting all sorts of unpleasant questions -- tend to have a very good detector for biases and false beliefs of the traditionalist sort, but they find it harder to recognize and focus on those of the progressive sort. What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they'll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is. Also, in the latter sort of situation, they will relatively easily assume that the only existing controversy is between the rational progressive view and the remnants of the past superstition, although reality could be much more complex. This could even conceivably translate into support for the mainstream progressive view even if i
3cupholder
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.
-2Thomas
Antibiotics. The common wisdom is that we use them too much. It might be that the opposite is true. A more massive poisoning of pathogens with antibiotics could push them over the edge, into oblivion. This way, when we use antibiotics reluctantly, we give them a chance to adapt and to flourish. It just might be.

Do you have a citation for that?

As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice - they give them in overwhelming force to eradicate the bacteria completely. For example, they'll often give several different antibiotics so that bacteria that develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren't deliberately scrimping on the antibiotics in an individual context.

The "give as few antibiotics as possible" rule mostly applies to giving them to as few patients as possible. If there's a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.

The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.

-1Thomas
This is one example. Maybe antibiotics as freely available as aspirin would do here: Link
4Scott Alexander
All serious cases of stomach/duodenal ulcer are already tested for H. pylori and treated with several different antibiotics if found positive.
-1Thomas
I know. But not long ago, nobody expected that a bacterium was to blame. On the contrary! It was postulated that no bacteria could possibly survive the stomach environment.
2Scott Alexander
So what are you suggesting with that example? That we should pre-emptively treat all diseases with antibiotics just in case bacteria are to blame?
-6Thomas

I'm doing an MSc in Computer Forensics and have stumbled into doing a large project using Bayesian reasoning to guess what a chunk of data is (machine code, ASCII text, C code, HTML, etc.). This has caused me to think again about what problems you encounter when trying to actually apply Bayesian reasoning to large problems.

I'll probably cover this in my write-up; are people interested in it? The math won't be anything special, but a concrete problem might show the difficulties better than abstract reasoning.

It also could serve as a precursor to some vaguely AI-ish topics I am interested in. More insect and simple creature stuff than full human level though.
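To make this concrete, here is a minimal sketch of one way such a classifier could work: naive Bayes over raw byte frequencies with Laplace smoothing. (The feature choice, class labels, and training snippets are illustrative assumptions, not necessarily what the project above actually uses.)

    from collections import Counter
    import math

    class ByteNaiveBayes:
        """Naive Bayes over byte frequencies, for guessing what a blob of
        data is (machine code, plain text, C source, HTML, ...)."""

        def __init__(self):
            self.byte_counts = {}  # label -> Counter of byte values seen
            self.totals = {}       # label -> total number of bytes seen

        def train(self, label, data):
            self.byte_counts.setdefault(label, Counter()).update(data)
            self.totals[label] = self.totals.get(label, 0) + len(data)

        def classify(self, data):
            # Uniform prior over labels; Laplace smoothing over the 256 byte values.
            best_label, best_score = None, -math.inf
            for label, counts in self.byte_counts.items():
                total = self.totals[label]
                score = sum(math.log((counts[b] + 1) / (total + 256)) for b in data)
                if score > best_score:
                    best_label, best_score = label, score
            return best_label

    clf = ByteNaiveBayes()
    clf.train("html", b"<html><body><p>Hello, world</p></body></html>")
    clf.train("c", b"int main(void) { return 0; }\n")
    print(clf.classify(b"<div>some markup</div>"))  # expected: 'html'

In practice one would train on large corpora per class, and byte unigrams alone won't separate, say, C from HTML very reliably; byte pairs or token-level features help, at the cost of sparser counts.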

1NancyLebovitz
I'm interested, and I suspect it relates to a question I'm a little interested in. If a computer has to sort a big wad of data, how can it identify whether some of it is already sorted?
6Thomas
We developed the solution, in fact we evolved it. Here is the source code in C++. Partially or segmentally ordered arrays are not sorted again at all.
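For readers who don't follow the link: the standard trick here is a natural mergesort, which begins by scanning for maximal already-sorted runs (so fully sorted input is detected in a single O(n) pass) and then merges only the runs. Here's a minimal Python sketch of that idea, not a translation of the linked C++ code:

    import heapq

    def find_runs(a):
        """Return (start, end) pairs of maximal non-decreasing runs in a.
        One run covering the whole list means the data is already sorted."""
        runs, start = [], 0
        for i in range(1, len(a)):
            if a[i] < a[i - 1]:  # order breaks here, so close the current run
                runs.append((start, i))
                start = i
        runs.append((start, len(a)))
        return runs

    def natural_mergesort(a):
        """Sort by merging pre-existing runs; sorted input costs one O(n) scan."""
        runs = [a[s:e] for s, e in find_runs(a)]
        while len(runs) > 1:
            merged = [list(heapq.merge(runs[i], runs[i + 1]))
                      for i in range(0, len(runs) - 1, 2)]
            if len(runs) % 2:  # odd run left over, carry it to the next pass
                merged.append(runs[-1])
            runs = merged
        return runs[0]

    print(find_runs([1, 2, 3, 2, 5]))          # [(0, 3), (3, 5)]
    print(natural_mergesort([3, 4, 5, 1, 2]))  # [1, 2, 3, 4, 5]

Timsort (Python's built-in sort) uses the same run-detection idea, which is why calling sorted() on mostly-sorted data is already close to linear.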
0khafra
I'd be fascinated for both theoretical and practical reasons--I'm a network security guy by day, so I'm frequently looking at incomplete binary data captured between transient ports and wondering what it is.

Any given goal that I have tends to require an enormous amount of "administrative support" in the form of homeostasis, chores, transportation, and relationship maintenance. I estimate that the ratio may be as high as 7:1 in favor of what my conscious mind experiences as administrative bullshit, even for relatively simple tasks.

For example, suppose I want to go kayaking with friends. My desire to go kayaking is not strong enough to override my desire for food, water, or comfortable clothing, so I will usually make sure to acquire and pack enough of these things to keep me in good supply while I'm out and about. I might be out of snack bars, so I bike to the store to get more. Some of the clothing I want is probably dirty, so I have to clean it. I have to drive to the nearest river; this means I have to book a Zipcar and walk to the Zipcar first. If I didn't rent, I'd have to spend some time on car maintenance. When I get to the river, I have to rent a kayak; again, if I didn't rent, I'd have to spend some time loading and unloading and cleaning the kayak. After I wait in line and rent the kayak, I have to ride upstream in a bus to get to the drop-off point.

Of cours... (read more)

3Bindbreaker
Yes, no, yes, yes. This is a very well-written post, incidentally. Good work.
1VNKKET
I have nothing to add, but I want to tell you I'm happy you wrote this post, so that you don't get discouraged by the lack of comments.
0[anonymous]
*not caring* How good are you at making paperclips? Is it the same way, where you spend hours getting ready to make them, but only maybe an hour or so actually turning them out (or in)?

General question on UDT/TDT, now that they've come up again: I know Eliezer said that UDT fixes some of the problems with TDT; I know he's also said that TDT handles logical uncertainty whereas UDT doesn't. I'm aware Eliezer has not published the details of TDT, but did he and Wei Dai ever synthesize these into something that extends both of them? Or try to, and fail? Or what?

Emile

Since I'm going to be a dad soon, I started a blog on parenting from a rationalist perspective, where I jot down notes on interesting info when I find it.

I'd like to focus on "practical advice backed by deep theories". I'm open to suggestions on resources, recommended articles, etc. Some of the topics could probably make good discussions on LessWrong!

5Unnamed
Dale McGowan of Parenting Beyond Belief is one resource that I know of. He has a blog (sample posts i and ii), a book Raising Freethinkers (see also the posts about the book on his blog), and links to other resources including an online discussion forum and various secular parenting groups around the United States.
2MBlume
Seconding Dale's work.
2Emile
Thanks; I knew about the blog but didn't know about the forum, which probably has some quite good resources. I guess I have a different focus than he does: I'm not interested in religion or the lack thereof, but rather in learning about the best way to raise kids, and how to navigate through the conflicting advice of various experts and peers. I'm not interested in "how do I help my kids find meaning in a Universe without God" as much as "how can I best help my kids become well-balanced open minded productive intelligent and well-prepared adults and not spoiled whiny brats". Also - I live in France, which is already plenty secular. That probably explains why religion isn't a very big issue. My parents (atheists) didn't pay much attention to religion, neither did my wife, religion never was much of a conversation topic at school, and I expect the same will go for my kids.
4Tyrrell_McAllister
Wow, you French are open minded :).
4CronoDAS
Typo notwithstanding, all but one of those "wives" could have been an ex-wife.
0Emile
Edited :)
2Morendil
Maybe the time's ripe for a meetup here? There's at least four of us in or near Paris, and if we announce one others might delurk. Back on-topic, I'm not sure what-all I can say about parenting, but having 3 kids I'm pretty sure I've made a bunch of mistakes that others can benefit from. ;)
0NancyLebovitz
It seems as though you think the primary risk is being too permissive, with no significant risk of being too harsh. Is it plausible that all the risk is in one direction?
0Emile
No -- where did I give that impression?
0NancyLebovitz
On re-reading, I think that "well-balanced open minded" implies that you are concerned with being too strict as well as being too permissive, but my attention was caught by the higher emotion level of the last clause.
1Emile
Also, it was just a one-sentence summary of why religion wasn't my main concern when talking about "rational parenting", you shouldn't read too much into it :)
VNKKET

ETA: This scheme is done. All three donations have been made and matched by me.

I want to give $180 to the Singularity Institute, but I'm looking for three people to match my donation by giving at least $60 each. If this scheme works, the Singularity Institute will get $360.

If you want to become one of the three matchers, I would be very grateful, and here's how I think we should do it:

  1. You donate using this link. Reply to this thread saying how much you are donating. Feel free to give more than $60 if you can spare it, but that won't affect how much I give.

  2. In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.

  3. I will do the same. (Or if you're the first matching donor, then I already have -- see directly below.)

To show that I'm serious, I'm donating my first $60... (read more)

3Scott Alexander
I've donated $60 and put a message requesting confirmation here in my public comment.
0VNKKET
Great, thank you! (You're the first donor, so I matched yours in May.) It looks like the public comment isn't an effective way to communicate with SIAI people -- I included the same request in mine but got no response here. I'm debating whether to e-mail an SIAI person directly, but for now I'm just going to believe anyone who says they've donated.
0VNKKET
The second donation is done (and matched by me).

So I'm trying to find myself some cryo insurance. I went to a State Farm guy today and he mentioned that they'd want a saliva sample. That's fine; I asked for a list of all the things they'll do with it. He didn't have one on hand and sent me home promising to e-mail me the list.

Apparently the underwriting company will not provide this information except for the explicitly incomplete list I got from the insurance guy in the first place (HIV, liver and kidney function, drugs, alcohol, tobacco, and "no genetic or DNA testing").

Is it just me or is it outrageous that I can't get this information? Can anyone tell me an agency that will give me this kind of thing when I ask?

2thomblake
Indeed, that is rather outrageous. It runs afoul of pretty much any current conception of information privacy; I'm pretty sure what they're doing would be illegal in the EU, as long as saliva counts as personal information. It's pretty standard anyway for anyone who's collecting your personal information to tell you what it will and will not be used for.
2mattnewport
It doesn't seem outrageous to me. You are asking them to bet against your death. There are many ways to die and due to adverse selection potentially fatal conditions are likely to be over-represented in applicants for their policies. It doesn't seem unreasonable for them to try and leave themselves as much leeway as possible in detecting attempted fraud. It's just sound underwriting.
5Alicorn
I don't object to their wanting the sample. In fact, I can't think of much I'd reasonably expect them to test for that would cause me not to give it to them. But I want them to tell me what it is for.

If they were explicit about exactly what tests they planned to do they would open themselves up to gaming. Better to be non-specific and reserve the freedom to adapt. For similar reasons bodies trying to prevent and detect doping in sports will generally not want to publicize exactly what tests they perform.

Is LessWrong undergoing a surge in popularity the last two months? What does everyone make of this:

http://siteanalytics.compete.com/overcomingbias.com+lesswrong.com/

7Blueberry
I'm guessing the Harry Potter fanfic has something to do with this.
3Alexandros
They certainly have the traffic to cause it: http://siteanalytics.compete.com/lesswrong.com+fanfiction.net/ If the fanfic effectively quintupled the traffic of LW, and about 8% of their visitors actually made it here, it must be doing really well...
3RobinZ
"Harry Potter and the Methods of Rationality" started at the end of February '10 and has over 4000 reviews 92 days later - it is doing very well.
0RobinZ
The timing fits.

Possibly a variation on the attribution bias: Wildly underestimating how hard it is for other people to change.

While I believe that both attribution bias and my unnamed bias are extremely common, they contradict each other.

Attribution bias includes believing that people have stable character traits as shown by their actions. This "people should be what I want-- immediately!" bias assumes that those character traits will go away, leading to improved behavior, after a single rebuke or possibly as the result of inspiration.

The combination of attribu... (read more)

4RobinZ
There was an old essay by Ursula Vernon - Divine Social Workers and the Secret of Happiness - that plays on the outrage bias theme.
2Blueberry
That's brilliant. Outrage bias deserves a top-level post.
1NancyLebovitz
Thank you. I'll see what I can come up with. Meanwhile, it's interesting to ask people who say some social feature has gotten worse whether they have evidence that things used to be better. Sometimes they do, but frequently they don't.

Gawande on checklists and medicine

Checklists are literally life-savers in ICUs-- there's just too much crucial work which needs to be done, and too many interruptions, to avoid serious mistakes without offloading some of the work of memory onto a system.

However, checklists are low status.

Something like this is going on in medicine. We have the means to make some of the most complex and dangerous work we do—in surgery, emergency care, and I.C.U. medicine—more effective than we ever thought possible. But the prospect pushes against the traditional culture of m

... (read more)
5Morendil
Journalism, ongoing, according to some. Clay Shirky's book Here Comes Everybody makes an interesting link between this process and Ronald Coase's theory of the firm. Surely not intrinsically. Think of astronauts' checklists. Suggestion: instead of "low status" as an explanation for why people do or don't do something, look for something closer to the specific domain. (Is it possible that doctors' practice is much influenced by media portrayal of how doctors behave? By expectations of their "customers"?)
9Vladimir_M
Morendil: Astronauts are soldiers. Unlike doctors, soldiers have a huge incentive not to let their beliefs depart too far from reality because of status or any other considerations, for the simple reason that it may easily cause them personally, and not just someone else, to get killed or maimed. Thus, military culture is extremely practice-oriented. Due to their universal usefulness, checklist-driven procedures are a large part of it, and having to participate in them is not considered demeaning, even for super-high-status soldiers like fighter pilots. Eventually, strict rule-driven procedures associated with the military often even develop a cool factor of their own (consider launch or takeoff scenes from war action movies). Of course, soldiers who lack such incentives will, like WW1 generals, quickly develop usual human delusions driven by status dynamics. But astronauts are clearly not in that category.
0Morendil
So your narrative is "checklists fail to take root because they are low-status, except where their being a serious matter for the people who use them (not just bystanders) causes them to be accepted, and in one such case they gain high status for extraneous reasons". Why, then, isn't the rising cost of malpractice insurance enough to drive acceptance of checklists? What does it take to overcome an initial low-status perception? How do we even explain such perception in the first place?
1NancyLebovitz
As I understand it, drastic, rare, and somewhat random punishment does little to change behavior. Reliable small punishments change behavior.
4Morendil
That analysis would be inconsistent with my understanding of how checklists have been adopted in, say, civilian aviation: extensive analysis of the rare disaster leading to the creation of new procedures. Again, my point was to prompt an alternative explanation to the hypothesis "checklists are not used by surgeons because the practice is intrinsically low-status". Why (other than the OB-inherited obsession of the LW readership with "status") does this hypothesis seem favored at the outset? How would we go about weighing this hypothesis against alternatives? For instance, "checklists are not used because surgeons in movies never use them", or "checklists are not used because surgeons are not trained to understand the difference between a checklist and a shopping list", or "checklists are not used because surgeons are reluctant to change their practices until it becomes widely accepted that the change has a proven beneficial impact"?

Morendil:

That analysis would be inconsistent with my understanding of how checklists have been adopted in, say, civilian aviation: extensive analysis of the rare disaster leading to the creation of new procedures.

One relevant difference is that the medical profession is at liberty to self-regulate more than probably any other, which is itself an artifact of their status. Observe how e.g. truckers are rigorously regulated because it's perceived as dangerous if they drive tired and sleep-deprived, but patients are routinely treated by medical residents working under the regime of 100+ hour weeks and 36-hour shifts.

Even the recent initiatives for regulatory limits on the residents' work hours are presented as a measure that the medical profession has gracefully decided to undertake in its wisdom and benevolence -- not by any means as an external government imposition to eradicate harmful misbehavior, which is the way politicians normally talk about regulation. (Just remember how they speak when regulation of e.g. oil or finance industries is in order.)

Why (other than the OB-inherited obsession of the LW readership with "status") does this hypothesis seem favored at t

... (read more)
5Morendil
At the very least this seems to be privileging an extraversion hypothesis. You can only gain status by interacting in some way with other people, yet it is not uncommon for people to shun company and instead devote time to solitary occupations with scant status benefits. Under your justification for favoring status explanations, the only reason anyone ever reads a book is to brag about it. This seems wrong, prima facie, as well as simplistic.
4Vladimir_M
Morendil: Note that I also mentioned "satisfying some urge that originally evolved as instrumental to human status games" in my above statement. Today's world is full of super-stimuli that powerfully resonate with ancestral urges even though they don't actually lead towards the goals that these urges had originally evolved to promote, and are often even antithetical to these goals. Just like candy bars cheat the heuristic urges that evolved to identify nutritious and healthy food in the ancestral environment, it is reasonable to expect that solitary occupations with scant (or even negative) status benefits cheat the heuristic urges that originally evolved as useful in status games, or for furthering some other goal that they no longer achieve reliably in the modern environment. You will probably agree that super-stimulation of status-seeking urges explains at least some non-beneficial solitary activities with high plausibility, for example when people neglect their real-life responsibilities by getting caught up in the thrill of virtual leadership and accomplishment provided by video-games. Of course, this by no means applies to all such activities; it is likely that the enjoyment found in some of them is rooted in urges that evolved for different reasons. To address your above example, unless we assume some supernatural component of the human mind, I see no possible explanation of human book-reading except as a super-stimulus for some ancestral urges (whether status-related or not), unless of course it's done not for enjoyment, but purely to acquire information necessary for other goals. While it's far from being a complete explanation of human book-reading, it seems plausible to me that people sometimes enjoy books in part because it enhances their status signaling abilities in matters of erudition and taste. Also, it seems to me that stories super-stimulate the human urges for gossip, which are likely a device with an original status-related purpose, and all s
2pjeby
I just want to throw in a note that I don't think human motivation is adequately explained by status alone -- I would expand the list to SASS: Status, Affiliation, Safety, and Stimulation. (Where, as some folks here have pointed out, "Safety" might be more accurately described as stability, certainty, or control, rather than being purely about physical safety.) Book-reading, in particular, is more likely to meet Safety/Stimulation needs than Status or Affiliation ones.... though you could maybe get those latter two from a book club or an academic setting.
2Vladimir_M
pjeby: I agree, but most complex and multi-faceted human behaviors are likely to be compelled by a mixture of these motives. My impression is that status features more often and more prominently than most people imagine, and it's often masked and rationalized by pretenses of other motivations. My hypothesis is that super-stimulation of the same urges that cause people to enjoy gossip is responsible for a significant part (though by no means all) of human enjoyment of books and other ways of presenting stories. This would be a good example of super-stimulating an urge whose original evolution was to a large degree driven by status games, in a way that, however, has no direct relation to the present-day status games.
5pjeby
People just as routinely masquerade and rationalize the other three, actually. However, that's because their operation is fairly opaque to consciousness. We have built-in machinery for processing social signals relating to Status and Affiliation, and during our "impressionable" years, we learn to value the things that are associated with them, and come to treat them as terminal values in themselves. IOW, SASS is how we learn to have non-SASS terminal values. So, when a person claims to be acting out of a non-SASS value, they're not really lying. It's just that they're not usually aware of (i.e. have forgotten about) the triggers that shaped the acquisition of that value in the first place. Plenty of other animals manage to be curious without needing actual stories. Also, some of us like to read things that aren't gossip or stories. Presumably one could test your hypothesis by finding out whether individuals lose interest in reading when they gain status; my personal experience suggests this is not the case, and that instead books compete with other forms of stimulation. So, ISTM that even if curiosity (and certain templates for what to be curious about) were shaped by status competition, this doesn't mean there is an operational connection between books and one's self-perception of status. To a certain extent, we could say that everything is about status, in the same way that every organ is a reproductive organ. But saying that everything is X is the same as saying that nothing is X - it reduces your predictive power, rather than increasing it.
0Vladimir_M
In retrospect, I probably should have put more care into the wording of my comments in this thread (which I wrote more quickly and with less proofreading than usual). Several people have understood my positions as more extreme than I honestly meant them to be, and I evidently failed in conveying some of the more subtle points I had in mind. While I agree with most of your above comment, there seems to be a major misunderstanding here (probably due to my lack of clarity): Well, insofar as reading is a directly status-related activity, nothing I hypothesized predicts that, nor is it the case in reality. In fact, if you enjoy high status as an intellectual, you are required to read a lot constantly to maintain that status; having nothing much to say when you're asked what you've read lately would be a major embarrassment. Of course, this is rarely by itself a very prominent motivation -- people who achieve high intellectual status usually have more than enough interest in reading out of curiosity and professional needs -- but I wouldn't say it's entirely negligible either, especially when it comes to trendy highbrow literature. However, that's not at all what I had in mind with my reading-as-gossip-super-stimulus hypothesis. What I had in mind there is that the appeal of certain genres of literature and other storytelling media might be in part due to the fact that they stimulate the same urges that make people enjoy gossip. Thanks to these media, besides the thin diet of mundane real-world gossip, you get to enjoy huge amounts of artificial gossip skillfully crafted to be super-interesting, albeit about people who are fictional (or at least remote and personally irrelevant). This mechanism has nothing at all to do with one's actual status and behaviors that influence it. The status connection here lies in the fact that the gossip-enjoying urges had previously evolved under the influence of status dynamics, in which gossip is one of the key practical instruments. Thei
0Morendil
So that list doesn't include curiosity. Are you denying that curiosity is a significant drive? Or (say) competence?
4pjeby
Curiosity falls under the "stimulation" heading, as does skill acquisition for its own sake (e.g. video games). To be fair, the SASS list is more a convenient set of categories, than it is an attempt to be a comprehensive and rigorously-proven classification system. However, it's definitely "less wrong" than assuming everything is about status... yet not so unwieldy as the systems that claim 16 or more basic human drives.
0Morendil
That I can live with. :)
1HughRistik
The evolution of a desire for competence is an excellent question. Impulses such as curiosity and systemizing could be related to developing competence. Systemizing could indeed be useful for your survival, and the survival of those around you, via tool-making, weapon-making, hunting/cooking techniques, etc... So systemizing could be a survival-related adaptation. Yet if your systemizing skills create a breakthrough (e.g. you design a useful tool), then your tribe may well accord you status, enhancing your survival and reproduction. A desire for competence could also be useful for mating, because competence displays "good genes." This is true of skills that don't provide such obvious survival benefits, such as singing and dancing. A desire for competence, and adaptations that facilitate its development (curiosity, systemizing), could well be useful for any combination of survival, reproduction, and status.
2Morendil
There's nothing "super" about a book: no corresponding "normal" stimulus that elicits a natural response, such that a book is an exaggerated version of it. Book-reading is explained straightforwardly enough as satisfying curiosity, a trait we share with many species (think cats). If reading a book sometimes trumps the quest for status, then the latter cannot be THE primary preoccupation of people beyond bare physical subsistence. You will at least need to retreat to "an important" preoccupation. Now, if you were to explore this topic without jumping to conclusions, perhaps you'd recognize this one example as the start of a list, and would in an unbiased manner draw up a somewhat realistic list of the activities typical humans engage in, and sort them into "activities having a high status component" and "activities not primarily status-related". Then we might form a better picture of "how important".
2Vladimir_M
Morendil: I disagree, for the reasons I've already discussed at length. You don't seem to have read my above comment carefully, or perhaps my exposition was poor. I did mention curiosity as one part of the motivation for reading books. Moreover, the curiosity explanation itself contradicts your above claim: a book, or a story told any other way, presents far more material (albeit fictional) for curiosity-satisfaction than is available from real-life events, and this material is intentionally and skillfully crafted to have great appeal in this regard, so it clearly does provide a super-stimulus for this particular urge. Besides, as I also mentioned in my above post, there is also the human urge for gossip, which is pretty obviously related to status games, and is clearly super-stimulated by (at least some) books and other story-telling media. Finally, there is also the motivation of status seeking via demonstrating taste and erudition. All these, and possibly many other factors would probably feature in a complete theory of this particular human behavior. Again, you don't seem to understand my point about the difference between: (1) human behaviors that actually enhance status, or promote goals that lead towards its enhancement, and (2) behaviors driven by urges that had originally evolved for status-seeking purposes in the ancestral environment, but which misfire in the modern environment -- just like e.g. the human taste for sugar was a good nutritional heuristic in a sugar-poor environment, but leads us to bad nutritional choices in the present environment full of cheap sugar-rich super-stimuli. But as I explained above, you don't seem to have understood my remarks about this example correctly. (I allow for the possibility that my writing was too bad to be understandable, of course.) I've explained the issue again now, and my conclusion is still that your example is incorrect. If you believe that my reasoning in this case is invalid, or if you have other exam
0Morendil
I read mostly non-fiction books, mostly to satisfy my curiosity. A recent example was "Freakonomics". That appears to defuse your argument... I dispute that a book is a "superstimulus" in the same sense in which that term has predictive power when applied to herring gull parents, to the sexual arousal response in humans, or to the appeal of fast-food flavors. I am unwilling, more generally, to interpret the term "super-stimulus" broadly enough to encompass any case where a given behaviour is explained by an urge vaguely related to another urge that existed in the ancestral environment. If books in general were superstimuli for some existing urge, then any book would elicit the hijacked response to that urge (and we would be able to make a book irresistible by exaggerating the relevant cues). Instead, I find myself discriminating quite sharply between "interesting" books and "boring" books. (For instance I can't stand the sight of most "trade" books that are supposed to appeal to programmers, like "Functional Programming in a Nutshell".) Why do people knit? I'd say that the urges involved are mostly competence and caring, rather than status. Why do I learn how to solder, and take apart consumer electronics? Curiosity, not status. The common theme is that caring, competence and curiosity did plausibly exist in the ancestral environment, so it isn't necessary to invoke status when there is a clearer link to other drives. I'm OK with having status (properly understood) take its rightful place in a pantheon of inherited drives, but it drives me nuts to see it trotted out to explain everything.
4Vladimir_M
For some reason, we seem to be talking past each other -- you appear to be replying to an incomplete and exaggerated version of what I had in mind. I accept the possibility that this is because I expressed my ideas in a confusing and poorly worded manner, but whatever the reason, we seem to be stuck at this point. Therefore, regarding the book-reading issue, I will try to restate a few key elements of my position briefly:

* It was not my intention to set forth a complete theory of human motives for reading books, but merely to bring up several examples of motives that are, in my opinion, likely involved (sometimes exclusively) in a significant percentage of all instances of book-reading behaviors.
* I did not claim, and it would indeed be absurd to claim, that all these motives, or even any particular one of them, play a role in every instance of book-reading behavior.
* Neither did I claim, which would also be absurd, that these motives and their biological causes are present to the same extent across any given set of individuals. Consequently, neither do the reactions to any particular book necessarily have the same underlying motivation across any given set of individuals, even if they all happen to be positive (or in other respects behaviorally similar) for all members of that set.
* Ultimately, the goal of discussing these examples was to demonstrate the difference between: (1) effective status-seeking behaviors, and (2) behaviors that just execute adaptations that originally evolved due to status-related reasons, but no longer serve status goals effectively in the modern environment. In particular, some instances of human book-reading behavior fall into one or both of these categories (which does not imply that even these particular instances don't involve other, unrelated motivations too).

Maybe not only my writing, but also my reading comprehension has been poor, but in your replies, I honestly don't see any objections that wouldn't either implicit
2Morendil
Well: long-winded, maybe. Fine otherwise; I'd mention it less. You wrote that beyond life or death "status is the primary preoccupation of humans", I disagreed and in particular with "THE primary preoccupation". You seem to now have appropriately qualified that initial statement; I'll certainly agree, for my part, that some people sometimes read books for bragging rights. I definitely agree that various forms of "status" play significant roles in human motivation. Given that all of our behaviour rests in one way or another in executing biological adaptations I have no contention with your thesis. I strongly suspect that what we call "status" is not one mechanism but several, so that in each case it pays to hug the query.
1RobinZ
In the spirit of Morendil's question: what other professions should be shunning useful but low-status tools (particularly checklists) for the same reason as doctors, according to the status model? I don't know enough about (a) lawyers, (b) politicians, (c) businesspeople, (d) salespeople, or (e) other high status professions to judge either what your model would predict or what they do. It's worth noting that engineering is (moderately-)high-status but involves risk of personal cost in case of error, making the fact that it shows widespread adherence to restrictive professional standards explicable under the status theory.
5Vladimir_M
Now that's an interesting question! Off the top of my head, some occupations where I'd expect that status considerations interfere with the adoption of effective procedures would be:

* Judges -- ultra-high status, near-zero discipline for incompetence.
* Teaching, at all levels -- unrealistically high status (assuming you subscribe to the cynical theories about education being mostly a wasteful signaling effort), fairly weak control for competence, lacking even clear benchmarks of success.
* Research in dubious areas -- similarly, high status coupled with weak incentives for producing sound work instead of junk science. For example, there are research areas where statistical methods are used to reach "scientific" conclusions by researchers with august academic titles who are however completely stumped by the finer points of statistical inference. In some such areas, hiring a math B.A. to perform a list of routine checks for gross errors in statistics and logic would probably prevent the publication of more junk science than their entire peer review system. Yet I think status considerations would probably conspire against such a solution in many instances.
6Airedale
I disagree somewhat that judges face near-zero discipline for incompetence. Except for judges on the highest court in a jurisdiction, most judges frequently face the prospect that the opinions they author may be reversed. It is true that frequent reversals will almost never lead to the sanction of the judge losing his or her job (due to lifetime appointments or ineffectiveness of elections at removing incumbent judges except for the most serious and publicized faults). But the resulting hit to status for frequent reversals can be quite serious; and because judges are so high status, as you note, they tend to be very concerned with maintaining that status. The handful of judges I've known personally have been quite concerned with their reversal rate and they particularly don't want to be reversed in a way that is embarrassing to them because it suggests laziness, incompetence, poor reasoning, cutting corners, or the like. (On the other hand, reversal for disagreements that can be characterized as “political” is probably not seen as quite so status-lowering.) At any rate, the law does provide checklist-like procedures or guidelines in many instances, and most judges do follow them, at least in part because failure to do so could lead to reversal.
6JoshuaZ
Expanding on your example of judges -- this fits in with general problems for people in the legal professions. For example, there has for many years now been a pretty decent understanding of the problems with the standard line-up system for criminal suspects. There are also easy fixes for those problems. Yet very few places have implemented them. Similarly, there have been serious problems with police and judges acting against people who try to videotape their interactions with police. Discussing this in too much detail may, however, run into the standard mind-killing subject.
0Vladimir_M
Morendil: My understanding is that the present (U.S.) system of malpractice lawsuits and insurance doesn't leave much incentive for extraordinary caution by individual doctors. Once you've paid your malpractice insurance, which you have to do in any case, you're OK as long as your screwups aren't particularly extreme by the usual standards. Moreover, members of the profession hold their ranks together very tightly, and will give up on you only in cases of extremely reckless misbehavior. They know that unlike their public image, they are in fact mere humans, and any one of them might find himself in the same trouble due to some stupid screwup tomorrow. And to establish a malpractice claim, you need not only be smart enough to figure out that they've done something bad to you, but also get expert testimony from distinguished members of the profession to agree with you. I am not very knowledgeable about this topic, though, so please take this as my impression based on anecdotal data and incomplete exposure to the relevant literature. It would be interesting if someone more knowledgeable is available to comment. I'd say that in a sense, it's a collective action problem. The pre-flight checks done by fighter pilots (and even to some extent by ordinary pilots) are perceived as cool-looking rituals, and not a status-lowering activity at all, because these procedures have come to be associated with the jobs of high-status individuals. Similarly, if there was a cool-looking checklist procedure done by those doctors on TV shows, presented as something that is only a necessary overture for acts of brilliance and heroism, and automatically associated with doctors in the popular mind, it would come to be perceived as a cool high-status thing. But as it is, in the present state of affairs, it comes off as a status-lowering imposition on people whose jobs are supposed to be one hundred percent about brilliance and heroism. Also, there is the problem of the doctor-nurse status
0Alicorn
The people who decide malpractice suits are likely to be more sympathetic to a plea of having used one's judgment and experience but made a mistake than to one of having followed a rigid set of rules from which one did not deviate even as the patient took a turn for the worse.
3Vladimir_M
Yes, there is a powerful irrational status-driven reaction against the idea that something so rudimentary as checklists could improve the work of people who are a subject of high status reverence and magical thinking. Note how even in this article, the author feels the need for pious disclaimers, denying emphatically in the part you quoted that this finding presents any evidence against the heroic qualities of character and intellect that the general public ascribes to doctors. Of course, the fact that this method dramatically inverts the status hierarchy by letting nurses effectively supervise doctors doesn't help either. In our culture, when it comes to immense status differences between people who work closely together, relations between doctors and nurses are probably comparable only to those between commissioned officers and ordinary soldiers. I don't think such a wide chasm separates even household servants from their employers. This reminds me of the historical case of Ignaz Semmelweis, who figured out in the mid-19th century, before Pasteur and the germ theory of disease, that doctors could avoid killing lots of their patients simply by washing their hands in disinfectant before operations. The reaction of the medical establishment was unsurprising by the usual rules of human status dynamics -- his ideas were scornfully rejected as silly and arrogant pseudoscience. What effrontery to suggest that the august medical profession has been massively killing people by failing to implement such a simple measure! Poor Semmelweis, scorned, ostracized, and depressed, turned to alcoholism and eventually died in an insane asylum. Hand-washing yesterday, checklists today.
0NancyLebovitz
I'm pretty sure it's more complicated than that. My impression is that experienced nurses can generate some clout, and that (if I can believe Heinlein) experienced sergeants can have influence over new lieutenants. This is informal, and dependent both on the ability of the subordinate to be firm without seeming to upset the hierarchy and the receptiveness of the person who's theoretically in charge. Does anyone have actual information?

From an article about athletes' brains:

Unsurprisingly, most of the article is about elite athletes' brains being more efficient in using their skills and better at making predictions about playing, but then....

In February 2009 Krakauer and Pablo Celnik of Johns Hopkins offered a glimpse of what those interventions might look like. The scientists had volunteers move a cursor horizontally across a screen by pinching a device called a force transducer between thumb and index finger. The harder each subject squeezed, the faster the cursor moved. Each play

... (read more)
0Kazuo_Thow
It would be worth trying, but given that the process of doing original mathematics feels to top mathematicians like it involves a lot of vague, artistic visualization (i.e. mental operations much more complicated than the cursor-moving task), I'd put a low prior probability on simple electrical stimulation having the desired effect.
0NancyLebovitz
I'd give it a medium prior probability-- it's impossible to operate at a high level if the simple operations are clogged by inefficiency.

I wrote up a post yesterday, but I found I was unable to post it, except as a draft, since I lack the necessary karma. I thought it might be an interesting thing to discuss, however, since lots of folks here have deeper knowledge than I do about markets and game theory.

I've been working recently for an auction house that deals in things like fine art, etc. I've noticed, by observing many auctions, that certain behaviors are pretty reliable, and I wonder if the system isn't "game-able" to produce more desirable outcomes for the different parties ... (read more)

5thomblake
Make sure you're asking yourself, "what experiment would disprove my hypothesis?" You have several hypotheses in there which might not be optimal.
0imonroe
An experiment which would disprove my hypothesis regarding more bidding increments would be something like: Run at least three auctions for the same or similar items with the same or similar bidders, one using normal estimates and bidding increments for a control, one where the low estimate was lowered to allow more increments, and one with the same estimates, but more granular increments. IF the price paid in each auction was roughly equivalent, THEN the hypothesis is disproven. The problem with that is the nature of the property we auction -- there's only one of anything. Each auction lot is, in important ways, different from the others. There's only one of this painting; only one of this desk. Even when two objects are similar, there are still often condition differences and so forth. I'll have to consult with some of the appraisers and see if there's ever an exception to this rule. But ok, that brings up another interesting question. Is there a way of simulating auction behavior? Has someone written a computer program to do this sort of thing? What kinds of assumptions do they make about the behaviors of individual agents?
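One cheap way to explore the mechanical (as opposed to psychological) side of the question is a toy simulation. The sketch below assumes an ascending auction with independent private values drawn from a uniform distribution; the number of bidders, the value range, and the starting prices and increments are all illustrative choices, and it deliberately leaves out the anchoring and momentum effects the hypothesis is really about:

```python
import random

def run_auction(values, start, increment):
    """Ascending auction: the price climbs by `increment`; a bidder
    stays in while the next price is at or below their private value."""
    price = start
    active = [v for v in values if v >= start]
    if len(active) < 2:
        return price if active else None   # 0 or 1 interested bidders
    while True:
        nxt = price + increment
        still_in = [v for v in active if v >= nxt]
        if len(still_in) <= 1:
            # one bidder left: they win at their last bid; none left:
            # the current high bidder wins at the standing price
            return nxt if still_in else price
        price, active = nxt, still_in

random.seed(0)
trials = [[random.uniform(500, 2000) for _ in range(8)]   # 8 bidders' valuations
          for _ in range(10000)]
for start, inc in [(1000, 100), (600, 100), (1000, 25)]:
    prices = [p for t in trials
              if (p := run_auction(t, start, inc)) is not None]
    print(f"start={start} increment={inc}: mean price {sum(prices) / len(prices):.0f}")
```

In this private-values model, finer increments mostly shave the winner's rounding premium, so any larger effect found in real sale data would point to the psychological mechanisms instead.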
0RobinZ
Do you have a large body of data? It's possible a statistician would be capable of devising appropriate measures to test your hypothesis.
4gwern
If we assume that the appraisals are disconnected from the winning bids*, then couldn't one just see whether the ratio of sale:appraisal is increasing? If the appraisals are honest, then any jiggery-pokery should alter the ratio - eg. a successful manipulation will lead to people paying an average 93%, where they used to pay 90%. * that is, there is no feedback - the appraisers don't look at recent sales and say, oh, I've been lowballing all my estimates! I'd better start raising them.
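A minimal sketch of that check, assuming the house's records give each lot's low estimate, hammer price, and sale date (synthetic numbers stand in for real data, and the 90%-drifting-toward-93% scale is taken from the example above): regress the sale:appraisal ratio on date and see whether the slope is significantly positive.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 400
days = np.sort(rng.integers(0, 730, size=n))          # sale dates over two years
low_estimate = rng.uniform(500, 5000, size=n)         # appraisers' low estimates
ratio = 0.90 + 5e-5 * days + rng.normal(0.0, 0.1, n)  # drifting sale:appraisal ratio
sale_price = ratio * low_estimate

# A significantly positive slope would flag the kind of manipulation
# described above -- provided the no-feedback assumption holds.
fit = linregress(days, sale_price / low_estimate)
print(f"slope per day: {fit.slope:.2e}, p-value: {fit.pvalue:.4f}")
```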

No. 40 on Yahoo's homepage -- "Is aging a disease?"

6RobinZ
Is aging a disease? I doubt it. Aging is probably many diseases, prominent ones being accumulation of errors in genetic code, deterioration of muscle, growth of material intrusions into blood vessels ... there's no particular reason to think that a cure for one will cure any other. That said, I think the medical professionals working on this are aware of the variety of damage mechanisms that need addressing - I just want to make sure that we don't forget them.
3NancyLebovitz
It wouldn't surprise me if accumulated errors explain a lot of the symptoms of aging. On the other hand, aging could be at least partly an independent syndrome--progeria suggests that.
4Kazuo_Thow
From the article: I wonder how many appearances of this idea ("making 70-80 year lives healthy would be awesome, but trying to vastly extend lifespans would be weird") are due to public relations expediency, and how many are due to the speakers actually believing it.
1JoshuaZ
Well, in fairness so far we've had a lot of trouble handling general aging. Also, note that what Dillin said is having a 100-year-old person live to be 250 -- not someone born today living to 250. That's a very different circumstance. The first is much more difficult than the second since all the aging has already taken place.

Ooh, speaking of Harry Potter and the Methods, someone totally needs to write an Atlas Shrugged fanfic in which some of the characters are actually good at achieving true beliefs instead of just paying lip service to "rationality." If I had more time, I'd call it ... Dagny Taggart and the Logic of Science.

4[anonymous]
(Strictly for the sake of completeness, I'll note here that I couldn't resist writing a rough draft of one short chapter of Dagny Taggart and the First Welfare Theorem.)

Amazing videos, both in presentation and content.

Drive: on how money can be a bad motivator, and what leads to better productivity

http://www.youtube.com/watch?v=u6XAPnuFjJc

Smile or die: on 'positive thinking'

http://www.youtube.com/user/theRSAorg#p/a/u/1/u5um8QWWRvo

0Cyan
Thanks! Voted up.

I have run into a problem in statistics which might interest people here, and also I'd quite like to know if there is a good solution.

In charm mixing we try to measure mixing parameters imaginatively named x and y. (They are normalised mass and width differences of mass eigenstates, but this is not important to the problem.) In the most experimentally-accessible decay channel, however, we are not sensitive to x and y directly, but to rotated quantities

x' = x cos(delta) + y sin(delta)
y' = y cos(delta) - x sin(delta)

where the strong phase delta is unknown. In fact, the situation is a bit worse than this; we get our resu... (read more)

1cupholder
I'm curious about this too, not because I'm working on any problems like this, but just because it sounds interesting. I have no insights, but the popular Feldman and Cousins paper about building confidence belts that don't stray into unphysical ranges might be helpful. Ditto the papers citing that one.
1RolfAndreassen
Thank you, that paper contained the solution. The trick is to consider r^2=x'^2+y'^2 as the variable of interest, and note that it may be measured negative; then construct the confidence bands using the ordering principle given in their section III, with a numerical rather than analytical calculation of the likelihood ratios since the probability depends on x'^2 and y' in a complicated way rather than straightforwardly on the distance from zero. But that's all implementation details, the concept is exactly what Feldman and Cousins outline.
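For the curious, here is a minimal sketch of the Feldman-Cousins ordering principle on a toy stand-in for the problem: a Gaussian measurement of a parameter constrained to be non-negative, playing the role of r^2 = x'^2 + y'^2. The grids, sigma = 1, and the 90% confidence level are illustrative; the real analysis replaces the Gaussian with the numerical likelihood described above.

```python
import numpy as np
from scipy.stats import norm

SIGMA, CL = 1.0, 0.90
x_grid = np.linspace(-5, 10, 1501)  # possible measured values (may be negative)
mu_grid = np.linspace(0, 8, 401)    # physically allowed true values
dx = x_grid[1] - x_grid[0]

def acceptance_region(mu):
    """Accept measured values in decreasing order of the likelihood ratio
    R = P(x|mu) / P(x|mu_best), where mu_best = max(x, 0) is the physically
    allowed value that maximizes the likelihood, until CL is covered."""
    p = norm.pdf(x_grid, loc=mu, scale=SIGMA)
    p_best = norm.pdf(x_grid, loc=np.maximum(x_grid, 0.0), scale=SIGMA)
    order = np.argsort(p / p_best)[::-1]  # highest ratio first
    accepted = np.zeros(len(x_grid), dtype=bool)
    coverage = 0.0
    for i in order:
        accepted[i] = True
        coverage += p[i] * dx
        if coverage >= CL:
            return accepted

def confidence_interval(x_obs):
    """The FC interval: all mu whose acceptance region contains x_obs."""
    idx = np.argmin(np.abs(x_grid - x_obs))
    included = [mu for mu in mu_grid if acceptance_region(mu)[idx]]
    return min(included), max(included)

print(confidence_interval(-1.0))  # a negative measurement still yields a
                                  # sensible interval inside [0, inf)
```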
0cupholder
No problem! I was wondering if I was wasting your time with a shot in the dark - glad to hear it helped.

Nick Bostrom has posted a PDF of his Anthropic Bias book: http://www.anthropic-principle.com/book/anthropicbias.html

As someone who read it years ago when you had to ILL or buy it, I'm very pleased to see it up and heartily recommend it to everyone on LW who hasn't read it yet. (If you don't want to follow the link and see for yourself, the book focuses on the Doomsday problem and some related issues like Sleeping Beauty, which, incidentally, has come up here recently.)

I have been wondering whether the time was ripe to (say) tweet or blog about how wonderful the LessWrong wiki is. "If you're interested in improving your thinking, the LessWrong wiki is getting to be a great resource". The audience I'm likely to reach is mostly software professionals.

So I attempted to take as unbiased a look at the wiki as I could, putting myself into the shoes of someone motivated by the above lead.

Roadblock the first: the home page says "This wiki exists to support the community blog". This seems to undermine the impl... (read more)

0JoshuaZ
Most of these issues can be handled by small modifications to the wiki: better organization especially, and clear marking of which articles require which background articles.

A while ago, I was promoting trn.

One of the great virtues is having a lot of flexibility in what you're shown. In particular, you can choose to not see anything by a given poster.

I was mostly thinking of trn as a way of making it more feasible to follow what you want to read in high-volume discussions, but it's also a way of defusing quarrels, and I think it would be especially handy now.

Speculation about The Methods, which I put here because I want credit for brilliance if I'm right.

The one-pass creation of stable time loops can be accomplished by a Turing machine in the following manner: Have a machine simulate a countably infinite set of universes by allocating clock ticks after the fashion of Cantor's diagonal argument. In each universe, wherever there exists an object with the properties that a Time-Turner exploits, spawn new Universes at every tick by inserting new matter "from [1 to N_max] ticks ahead", where N_max is the m... (read more)
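The tick-allocation part of this scheme is ordinary dovetailing, which is easy to sketch; in this minimal version the universe-spawning is stubbed out and `step` merely stands in for one tick of simulating one universe:

```python
import itertools

def step(universe_id, tick):
    # placeholder for simulating one tick of one universe
    return f"universe {universe_id}, tick {tick}"

def dovetail():
    """Round k advances universes 0..k-1 by one tick each, so every
    universe in a countably infinite family gets unboundedly many ticks."""
    ticks = {}  # universe_id -> ticks simulated so far
    for k in itertools.count(1):
        for uid in range(k):  # diagonal sweep
            ticks[uid] = ticks.get(uid, 0) + 1
            yield step(uid, ticks[uid])

sim = dovetail()
for _ in range(10):
    print(next(sim))
```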

7JoshuaZ
There's been serious examination of this sort of time loop before. See Scott Aaronson's remarks which show that it in fact allows you to solve quickly not just NP problems but also everything in PSPACE (which is noteworthy because we know that P != PSPACE (Edit: That's wrong, see remark below)). As to where the HP:MR universe exists, given that in that universe the Lord of the Rings is fiction, but Harold Shea is not, nor is Buffy or many other things, I think that the inclusion of such references is more Eliezer playing around with a very weak fourth wall for humorous purposes rather than anything worth actually analyzing. Edit on further thought: Your hypothesis, while plausible, doesn't explain the message Harry ends up getting. One possible explanation for that is that what actually happens is that various single-universe loops are attempted until things settle down in a consistent fashion. If the non-consistent fashions were sufficiently off the wall, Harry may have tried to warn his past selves not to do what they would do. Thus, having a don't-mess-with-time message might be an attracting point.
4Sniffnoy
Quick note, P != PSPACE is not in fact proven. Unrelated addendum: It occurs to me that Harry was able to get the "Do not mess with time" message because his outputs and inputs were insufficiently digital. He considered the possibility of getting nothing, but didn't think to lump in with it the possibility of getting anything outside of the range specified. Why did he get specifically that message, preventing him from noticing that the problem is easily fixable? Because this is CS world, of course, so we assume that nature chooses adversarially...
3JoshuaZ
Right, sorry, we know that P != EXP from the hierarchy results. (Gah. Should have realized it made no sense to think that P != PSPACE could come from hierarchy results, given that one is a time-defined class and the other is a space-defined class; to get that, one would probably need an intermediate computational class or a deep equivalence.) Edit: I should also probably specify that Sniffnoy was the person who made me aware of the Aaronson work cited above.

WRT some recent posts on consciousness, mostly by Academician, eg "There must be something more":

There are 3 popular stances on consciousness:

  1. Consciousness is spiritual, non-physical.

  2. Consciousness can be explained by materialism.

  3. Consciousness does not exist. (How I characterize the Dennett position.)

Suppose you provide a complete, materialistic account of how a human behaves, that explains every detail of how sensory stimuli are translated into beliefs and actions. A person holding position 2 will say, "Okay, but you still need to... (read more)

2JanetK
I find your reading of these posts perplexing. I do not know of anyone who believes that consciousness does not exist, and certainly not Dennett. 'Explaining every detail of how sensory stimuli are translated into beliefs and actions' has very little to do with consciousness. Explaining how we are aware of sensory stimuli and beliefs and actions is what consciousness is about. It is not thought - it is awareness of thought. It is also about how we remember experience. If you want to understand how someone can hold the positions they do, you will have to understand that they are not confusing cognition, action or perception with consciousness. Consciousness has to do with being aware of some of your cognition, action and perception. This does not mean that consciousness is unimportant, it is extremely important. I agree that Dennett does not explain consciousness by explaining cognition, action and perception in "Consciousness Explained". I, too, was a little disappointed in the title but it was written almost 20 years ago. 20 years ago the neuroscience revolution was just starting.
1PhilGoetz
Dennett doesn't know that he doesn't believe in consciousness. But he doesn't believe in qualia. I interpret that as not believing in consciousness. And, the way he tries to explain consciousness indicates that he thinks that if you explain a system's input-output behavior, you've explained everything about the system. This also implies that there are no phenomena other than input-output to be explained; this implies there is no such thing as consciousness. (Asking what a philosopher "believes" is a tricky question, since analysis usually shows many important propositions that their writings imply both belief and disbelief of. This applies to all people, of course; it's just more problematic in philosophers.) My point is that they are. They think that explaining the perception, cognition, and action is all they need to worry about, and all else is mysticism.
3Blueberry
You seem very confused about Dennett's ideas. He believes in subjective experience; he just thinks that philosophers have used the term "qualia" in misleading and inaccurate ways, and it's better to just talk about subjective experience. He also thinks that it is important to explain people's perceptions of consciousness: he writes about the idea of "heterophenomenology", which is to treat people's perceptions and experience as data that needs to be explained, but is not necessarily completely accurate or reliable.
1JanetK
I will take RobinZ's good advice and not talk about qualia (for some time anyway). It is a philosophical term. Consciousness is a different matter, needs to be discussed and is too important to put in the 'taboo' bin. We need consciousness to remember, to learn and to do the prediction involved in controlling movement. It is a scientific term as well as a philosophical one and an ordinary everyday one.

We need consciousness to remember, to learn and to do the prediction involved in controlling movement.

Controlled movement does not require consciousness, memory, learning, or prediction. This (simulated) machine has none of those things, yet it walks over uneven terrain and searches for (simulated) food. What controlled movement requires is control.
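(A minimal illustration of that point: a pure proportional controller steers using only the current error -- no memory, learning, or prediction anywhere. The gain and target below are arbitrary choices.)

```python
def p_controller(target, position, gain=0.3):
    # the command depends only on the current error; no state is kept
    return gain * (target - position)

position = 0.0
for _ in range(30):
    position += p_controller(target=10.0, position=position)
print(round(position, 4))  # has converged close to the target, 10.0
```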

Memory, learning, and prediction do not require consciousness. Mundane machines and software exist that do all of these things without anyone attributing consciousness to them.

People may think they are conscious of how they move, but they are not. Unless you have studied human physiology, it is unlikely that you can say which of your muscles are exerted in performing any particular movement. People are conscious of muscular action only at a rather high level of abstraction: "pick up a cup" rather than "activate the abductor pollicis brevis". Most of the learning that happens when you learn Tai Chi, yoga, dance, or martial arts, is not accessible to consciousness. There are exercises that you can tell people exactly how to do, and demonstrate in front of them, and yet they will go wrong the first time they try. Then... (read more)

1JanetK
I believe there is scientific agreement that an event can be stored in episodic memory only if the event is consciously experienced. No conscious experience = no episodic memory. A certain type of learning depends on episodic memory, and so on conscious experience. The fine control of movement depends on the comparison between expectation and result, ie error signals. As it appears to be consciousness that gives access across the brain to a near-future prediction, it is needed for fine control. Prediction is only valuable if it is accessible. I am not saying that memory, learning or fine motor control is 'done' in consciousness (or even that in other systems, such as robots, there would not be other ways to do these things). I am only saying that the science implies that in the human brain we need to have conscious experience in order for these processes to work properly.
0Richard_Kennaway
Yes, consciousness is certainly involved in the way we do some of those things, but I don't see that as evidence that that is why we have consciousness. Consciousness is involved in many things: modelling other people, solving problems, imagining anticipated situations, and so on. But how did it come about and why? FWIW, I don't think anyone has come close to explaining consciousness yet. Every attempt ends up pointing to some physical phenomenon, demonstrated or hypothesised, and saying "that's consciousness". But the most they explain is people's reports of being conscious, not the experience that they are reports of. I don't have an explanation for the experience either. I don't even have an idea of what an explanation would look like. In terms of Eliezer's metaphor of the Explain/Worship/Ignore dialog box, I don't worship the ineffable mystery, nor ignore the question by declaring it solved, but I don't know how to hit the Explain button either. For the time being the dialog will just have to float there unanswered.
0SilasBarta
Concurred. I want to point out that Julian Jaynes presents a lot of evidence for the lack of a role for consciousness for these and many other things in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind. (And yes, I know his general thesis is kind of flaky, but he handles this very narrow topic well.) One of his examples is how people, under experimental conditions and without even knowing it, adjust muscles that can't be consciously controlled, in order to optimally contain a source of irritation. They never report any conscious recognition of the correlation between that muscle's flexing and the irritation (which was ensured to exist by the experiment, and which irritation they were aware of).
0Blueberry
It may in fact be possible to drive while unconscious, though not very well.
0RobinZ
I'm fairly sure a friend of a friend was on a similar insomnia drug and held a long, apparently-coherent phone conversation with her sister, to whom she had not spoken in some time. And then woke up later and thought, "I should call my sister - we haven't spoken in a long time." Let me just say I find the stories more plausible than the newswriters seem to.
3RobinZ
I apologize - what I meant wasn't "drop the subject of consciousness", but "don't use the specific word 'consciousness'": Besides the original essay linked and quoted above, there's elaboration on the value of the exercise here. Edit: For example, were I to begin to contribute to this conversation, I would probably talk about self-awareness, the internal trace of successive experiences attended to, and the narrative chains of internal monologue or dialogue that we observe and recall on introspection - not "consciousness".
2PhilGoetz
The "tree falling in a forest" question was posed before people knew that sound was caused by vibrations, or even that sound was a physical phenomenon. It wasn't asking the same question it's asking now. It may have been intended to ask, "Is sound a physical phenomenon?"
8SilasBarta
Confession: I always assumed (until EY's article, believe it or not!) that the "tree falling in a forest ..." philosophical dilemma was asking whether the tree makes vibrations. That is, I thought the issue it's trying to address is, "If nothing is around to verify the vibrations, how do you know the vibrations really happen in that circumstance? What keeps you from believing that whenever nobody's around [nor e.g. any sensor], the vibrations just don't happen?" (In yet other words, a question about belief in the implied invisible, or inaudible as the case may be.) Over what period, exactly, was the question widely accepted to be making a point about the difference between vibrations and auditory experiences, as Eliezer seemed to imply is the common understanding?
2JoshuaZ
I've encountered people asking the question with both meanings or sometimes a combination of meanings. Like many of these questions of a similar form, the questions are often so muddled as to be close to useless.
5JoshuaZ
I don't think that's correct. The notion that sound is vibrations in air dates back to at least Aristotle. See for example here
0PhilGoetz
I don't know, but Aristotle's writings were not well-known in Europe from the 6th through the end of the 12th centuries. They were re-introduced via the Crusades.
1[anonymous]
By the way, the modern phrasing of the dilemma is, "If people are in a multiplayer game on Xbox Live, and everyone's headset is muted, does a whiny 11-year-old still complain about lag?"
1RobinZ
Do you have a citation for that? The earliest reference I see is Berkeley.
0PhilGoetz
I don't. Sorry, I thought the question was medieval, but now can't remember why I thought that. Probably just from giving the question-asker the benefit of the doubt. If the original asker was Berkeley, then it was just a stupid question.
2JanetK
I take your point, I really do. I will for example avoid 'qualia' as a word and use other terms. But here is my problem. I have been following what the scientists that research it have been saying about consciousness for some years. They call it consciousness. They call it that because the people they know and I know and you know call it that. Now you are suggesting nicely that I call it something else and there is no other simple word or phrase that describes consciousness. When I wrote a post I defined as well as I could how I was using the word. I could invent a word like 'xness' but I would have to keep saying that 'xness' is like consciousness in everything but name. And it would not accomplish much, because it is not the word or even particular philosophies that are the source of the problem. It is the how and where and why and when of the brain producing consciousness. If we disagreed about what an electron was, it would not help to change the name. In the same way, if we disagree about what consciousness is, this is not a semantic problem. We know what we are talking about as well as we would if we could point at it; we have different views about its nature.
3RobinZ
That's not quite what I meant either (although I actually approve of avoiding the term "qualia", full stop): The specific advantage I see of cracking open the black-box of "consciousness" in this conversation is that I expect it to be the fastest way to one of the following useful outcomes:

1. "But you haven't talked about fribblety chacocoa opoloba." "I haven't talked about what? I don't think I've ever actually observed that."
2. "On page 8675309 of I Wrote "Consciousness Explained" Twenty Years Ago Haven't You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn't exist - here's the quote." "Oh, I see the confusion! No, he's talking about albittiver rikvotil, as you can see from this context, that quote, and this journal paper."
3. "On page 8675309 of I Wrote "Consciousness Explained" Twenty Years Ago Haven't You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn't exist - here's the quote." "But that doesn't exist, according to the four experiments described in these three research papers, and doesn't have to exist by this philosophical argument."

Edit: Also, there's no requirement that you actually solve the problem of what it is - a sufficiently specific and detailed map leading to the thing to be observed suffices.
0JanetK
Ok, it's my bedtime here in France. I will sleep on this and maybe I can be more positive in the morning. But the likelihood is that I will go back to the occasional lurk. Your comment does not make a great deal of sense to me, no one appears to be interested in what I am interested in (contrary to what I thought previously), the horrid disagreement about Alicorn's posting is disturbing, and so was the discussion of asking for a drink. I was not upset at the time with the remarks about my spelling and I would correct them. But now I think, is there any latitude for a dyslexic? I thought the site was for discussing ideas, not everything but. Good night. Good night.
0RobinZ
I apologize for making a big deal of this, but my main point is that I want to know I'm talking about the thing you're interested in, not about something else. I wasn't even really trying to address what you said - just to make some suggestions to reduce the confusion floating around. Have a good night - hope I can catch you on the flip side.
2JanetK
Apology accepted. You are not the problem - I would not go away because of one conversation. I have decided that I will take a less active part in LW for a while. It is very time consuming and I have a lot of actually productive reading and blogging to do. By productive I mean things that add to my understanding. I will look to see what has been posted and will probably read the odd one. I may even write a small comment from time to time. The posting that I was preparing for LW will be abandoned. I would put in too much effort for too little serious productive useful discussion. Better to put the effort elsewhere.
2Risto_Saarelma
I think what you're talking about needs a different name. 'Attention' might be an informal one and 'executive control' a more formal one, or just 'planning', if we're talking AI instead of psychology. 'Reflection', if we're talking about metacognition. Like Richard_Kennaway said, the tasks you describe sound like things that existing narrow AI robotic systems can already do, yet it sounds quite odd to describe current-gen robots as conscious. Talking about consciousness here is confusing at least to me. Outside qualia and Chalmers' hard problem of consciousness, is the term consciousness really necessary for something that can't be expressed in more precise terms?
1PhilGoetz
Do we? That would be good news; but I doubt it's true.
1JanetK
I think I answered this in another sub-thread of this discussion. But, here it is again in outline. We only remember in episodic memory events that we had conscious awareness of. Some types of learning rely on episodic memory. The remembering and the learning are not necessarily, not even probably, part of the conscious process, but without consciousness we do not have them. The prediction is part of the monitoring and correcting of on-going motor actions. In order to create the prediction and to use it, various parts of the cortex doing different things have to have access to the prediction. This wide-ranging access seems to be one of the hallmarks of consciousness. So does the slight forward projection of the actual conscious awareness - ie there is a possibility that it is the actual prediction as well as the mode of access. I hope this answers the question of why I said what I said. I don't wish to continue this discussion at the present time. As I told RobinZ, I currently have other things to do with my time and find LW has been going off-topic in ways that I don't find useful. However, you have always been willing to seriously debate and stay on topic, so I have answered your comment. I will probably return to LW at some time. Until then, good luck.
0PhilGoetz
Thanks. I know you don't want to continue discussion; but I note, for others reading this, that in this explanation, you're using the word "conscious" to mean "at the center of attention". This is not the same question I'm asking, which is about "consciousness" as "the experience of qualia". I made my comment because it's very important to know whether experiencing qualia is efficient. Is there any reason to expect that future AIs will have qualia; or can they do what they want to do just as well (maybe better) by not having that feature? If experiencing qualia does not confer an advantage to an AI, then we're headed for a universe devoid of qualia. That's the big lose for the universe. Avoiding that common qualia/attention confusion is reason enough not to taboo "qualia", which is more precise than "consciousness".
0JoshuaZ
You seem to be missing the point about what it means to taboo a word. In LessWrong speak, this means to expand out what you mean by the term rather than just use the term itself. So for example, if we tabooed "prime number" we'd need to say instead something like "an integer greater than one that has no positive, non-trivial divisors." This sort of step is very important when discussing something like consciousness because so many people have different ideas about what the term means.
1RobinZ
Taboo "qualia" and "consciousness". You are speaking with great confidence in a discussion involving philosophical terms, and this is always a mistake if you have not already unambiguously defined these terms. And unambiguous definitions of philosophical terms are always controversial, and always in my experience lead to argument. Rationalist taboo, please.
-2PhilGoetz
AI and rationality should then also be taboo. Unless you can unambiguously define them.
2thomblake
what do we mean by rationality does a pretty good job of that. Though it should be noted that the notion of tabooing a term is for a particular situation where there is confusion / disagreement involving the term in question, and so "AI" at least is not worth tabooing in response to the parent comment.
2RobinZ
With respect to this forum:

* I can see a lot of possible benefits to creating a computer program capable of producing good solutions to any arbitrarily selected real-world problem, and I agree that the secondary meaning of "morally-correct" implicit in the word "good" makes this task even more difficult than it already appears to be.
* It is fairly obvious from the many examples of high-g people spiraling off into ridiculous positions that it takes much more than smarts to be able to reliably and accurately figure out what is going on and make plans, and it would be useful (and entertaining, if I'm honest) to know what kind of errors I am likely to make and what methods I may be neglecting when it comes to figuring out what is going on and making plans.

That said, I should have made it clear how narrow the scope of my request was: I have no problem with colloquial use of the term "consciousness" under ordinary circumstances. I requested the restriction in this case specifically because this discussion hinges on details of the definition which are frequently perceived as obvious in contradictory ways by different participants. Tabooing the term avoids that tar pit.
2AlephNeil
Do you see the symmetry of this situation? A Dennettian sees people who (by their lights) hold position (1), arguing against (2) (which they take to be their own) by characterising it as (3).
0torekp
So, is AlephNeil pegging Academician as an advocate of (2) and PhilGoetz pegging A. as an advocate of (3)? But a non-Dennettian like me can admit that Dennett is in camp (2), just not a rich enough variant of (2).
0PhilGoetz
There's an orthogonal distinction, which is whether one believes that it is possible to produce a complete materialistic account of behavior that does not explain consciousness. (IIRC EY has said "no" to this question in the past.) If the answer truly is "no", then (2) and (3) above would collapse into the same position, given enough knowledge. I think I'm getting sidetracked... The problem with (3) is that it doesn't allow you to /try/ to explain consciousness, and criticizes anyone in camp (2) who tries to explain consciousness as being in camp (1). Camp (3) are people, like Dennett, who think there's no use trying to explain how qualia arise from material causes; we should just ignore them. As long as we can compute the output behavior from the input (they would presumably say), we understand everything material there is to understand; therefore, trying to understand anything else is non-materialism.
1JanetK
Help me here. What is it about qualia that has to be explained before there can be at least an outline theory of what consciousness is? Is it what they are? Is it where they are stored? Is it how they are selected? Is it how they get bound to an object? Is it how real they seem? Is it how they are sometimes inappropriate? So we can't answer those questions today. But we probably can in the next decade. And it would be a lot easier to find answers if we had an idea of how consciousness worked and more exactly what it does and why. We are closer to answering those questions.
1RobinZ
Taboo consciousness before you file Dennett, please.
0Nisan
Is (3) the only one that is compatible with a computational theory of mind?
1ata
(2) is too, if consciousness is defined such that it is either an epiphenomenon of other mental processes or a specific, well-defined feature that is necessary to certain things human minds do. (I take the latter position: consciousness does something (a mind without it wouldn't act the same, without intentionally imitating it) and there is no reason to expect it will not be compatible with materialism.)
[anonymous]30

So, I just had a strange sort of akrasia problem.

I was doing my evening routine, getting washed up and stuff in preparation for going to bed. Earlier in the evening, I had read P.J. Eby's The Hidden Meaning of "Just Do It", and so I decided I would "just do" this routine, i.e. simply avoid doing anything else, and watch the actions of the routine unfold in front of me. So, I used the toilet, and began washing my hands, when it occurred to me that if I do not interfere, I will never stop rinsing my hands. I did not interfere, however, an... (read more)

5kodos96
Had you recently eaten any brownies of unknown origin?
1Nisan
Or gone 24 hours without sleeping?
3NancyLebovitz
Possibly because you'd partially knocked out your ability to make choices. Mercifully, you didn't have the ability to make very deep changes. There are advantages to not being software.
3[anonymous]
The ability to change all aspects of oneself is not a property of software. Software can easily be made completely unable, partially able, or completely able to modify itself.
2NancyLebovitz
Fair enough, though evolved beings (which could include software) are probably less likely to be able to break themselves than designed beings capable of useful self-modification.
1[anonymous]
You know, you could say that software often has two parts: a crystalline part and a fluid part. Programs usually consist mostly of crystalline aspects: if I took a mathematical proof verifier and tweaked its axioms, even only a tiny bit, it would probably break completely. However, they often contain fluid aspects as well, such as the frequency at which the garbage collector should run, or eagerness to try a particular strategy over its alternative. If you change a fluid aspect of a program by a small amount, the program's behavior might get a bit worse, but it definitely won't end up being clobbered. I've always thought that we should design Friendly AI like this. Only give it control over the fluid parts of itself, the parts of itself it can modify all it wants without damaging its (self-)honesty. Make the fluid parts powerful enough that if an insight occurs, the insight can be incorporated into the AI's behavior somehow.
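A minimal sketch of this crystalline/fluid split in Python (all names hypothetical; my own illustration rather than anything from the comment): the decision logic is frozen, and the only self-modification hook exposed is a bounded write to the fluid parameters.

```python
# Sketch of the "crystalline vs. fluid" split: the core logic is fixed,
# and self-modification is only permitted on bounded tunable parameters.

class Agent:
    # Fluid part: tunable knobs with hard bounds. Changing these can make
    # behavior better or worse, but cannot break the program outright.
    FLUID_BOUNDS = {
        "gc_interval": (1, 10_000),        # how often to run cleanup
        "strategy_eagerness": (0.0, 1.0),  # preference between two strategies
    }

    def __init__(self):
        self.params = {"gc_interval": 100, "strategy_eagerness": 0.5}

    def tune(self, name, value):
        """The only self-modification hook: writes are clamped to the bounds."""
        low, high = self.FLUID_BOUNDS[name]  # unknown knobs raise KeyError
        self.params[name] = min(max(value, low), high)

    def act(self, observation):
        # Crystalline part: this decision rule itself is not modifiable at
        # runtime; only the fluid parameters it reads are.
        if observation > self.params["strategy_eagerness"]:
            return "strategy_a"
        return "strategy_b"


agent = Agent()
agent.tune("strategy_eagerness", 7.0)  # out of range: clamped to 1.0
print(agent.act(0.5))                  # -> "strategy_b"
```

The design point being illustrated is just that out-of-range or unknown writes fail safely, so no amount of tuning can clobber the crystalline part.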
2NancyLebovitz
I'm sure that an AI will have more than two levels of internal stability. Some parts will be very stable (presumably, the use of logic and (we hope) Friendliness). Some parts will be very fluid (updating the immediate environment). There would be a big intermediate range of viscous-to-firm (general principles for dealing with people, how to improve its intelligence).

"Science Saturday: The Great Singularity Debate"

Eliezer Yudkowsky and Massimo Pigliucci

http://bloggingheads.tv/diavlogs/28165

2simplicio
What a strange debate that was! I was very surprised to find Pigliucci arguing, inter alia, that intelligence/consciousness might have to be implemented on carbon atoms in order to work. And then he came out with the trope whereby the spirit of the AI machine looks, from outside itself, at its goals and spontaneously decides to change them. He is a very interesting thinker usually, but he seemed very naive in this particular area.
2timtyler
The case for carbon atoms is pretty weak. However, we can imagine that some types of organic molecule have a mini giga-computer on board - their design encoded in the constants of nature - and that their dynamics can be tapped by trapping the vibrating molecule in an organic matrix. Then carbon-based computers would have access to the giga-computer - while silicon-based ones would not - and the silicon ones would therefore work enormously more slowly. This is a feeble case - but not a totally ridiculous one. Enthusiasts for non-computable physical processes play up this kind of possibility even further.
4simplicio
Okay, I think I get you. Maybe there could be some substrates that allow much faster processing than others (orders of magnitude); this would make the substrate an important engineering issue. Is that what you're saying? But we are in the lofty realm of "in principle" here. If I can just imagine a computer - as big as the universe if you like - that simulates Massimo Pigliucci plus inputs and outputs on silicon or germanium or whatever you want, then intelligence/consciousness is not substrate dependent (again in principle). I think this is the case, the alternative being that there is something especially consciousnessy about carbon chemistry, which seems awfully dubious.
4timtyler
Yes, kinda. There are also the possibilities of novel types of computation being involved. We know about quantum computers. They can't do things classical computers can't do - but they can do them faster - in some cases MUCH faster. Maybe there are other types of computation - besides classical computation and quantum computation that we have yet to discover. Quantum computation was only discovered relatively recently - so maybe the future holds other possibilities. Gateways to oracles, etc. It doesn't look as though the brain is anything other than a classical neural network - which could fairly-obviously be ported onto silicon - if we had fast enough silicon. However, there is at least some room for doubt on this point.
1zero_call
I think Pigliucci is somewhat hung up on the technicality of whether a computer system can instantiate (a) an intelligence or (b) a human intelligence. Clearly he is gravely skeptical that it could be a human intelligence. But he seems to conflate or interchange this skepticism with his skepticism about a general computer intelligence. I don't think anybody really thinks an AI will be exactly like a human, so I'm not that impressed by these distinctions. Whereas it seems like Pigliucci thinks that's one of the main talking points? I wish Pigliucci read these comments so we could talk to him... are you out there, Massimo?

I've just run into a second alumnus of my undergrad school from Less Wrong, and it has me curious, because... it's a tiny school. So this'd be quite a coincidence, and there might be a correlation to dig up.

Present yourselves, former (or current) students of Simon's Rock. I was there from the fall of '04 until graduating with my BA in spring '08 (I was abroad the spring of my junior year though).

If you lurk and don't want to delurk, feel free to contact me privately. If you don't have an account, e-mail me at alicorn@intelligence.org :)

0ata
I was at SR from the Fall 2005 semester until halfway through the Fall 2006 semester. The goat's on a pole. Amen.
0realitygrill
I was there in the '03-'04 year.
-1NancyLebovitz
I recommend the linked article-- it's a review of a book about the details and effects of social pressure to not express one's actual beliefs, including stability of generally unwanted social systems, and bloodless revolution when the beliefs change faster than the institutions. See also Racial Paranoia, which describes the unintended consequence of the high cost of being overtly racist in the US-- it's impossible to know how racist any individual is, so people go nuts looking for clues about the level of racism. However, people aren't crazy-- they're showing a rational response to a crazy-making situation.

Ryk Spoor's latest universe, of which the only published book so far is Grand Central Arena, has as major characters people who were raised in simulated worlds, and later covers their escape therefrom. Just occurred to me that some LWers might be interested.

Eliezer seems to have gone dark lately. Anybody know what he's up to?

4Tyrrell_McAllister
Apparently working full-time on his rationality book, while occasionally fighting writer's block by producing chapters of Harry Potter and the Methods of Rationality.
0PeterS
21 chapters later... Thanks!

The Association for the Advancement of Artificial Intelligence (AAAI) convened a "Presidential Panel on Long-Term AI Futures". Read their August 2009 Interim Report from the Panel Chairs:

There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. [...] The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of

...
2Zack_M_Davis
Reference
[-]ata20

Would anyone be interested if we were to have more regular LW meetups around the East Bay or San Francisco areas? We probably wouldn't have the benefit of the SIAI folks' company in that case, but having the meetups at a location easily accessible by BART may help increase the number of people from the surrounding area who can attend. (Also, I hear that preparing for and hosting meetups at Benton can be somewhat taxing on the people who work there, so having them at restaurants will allow us to do it more frequently, if there is demand for such.)

0Nisan
I would happily participate in a San Francisco meetup. As an alternative to a restaurant setting, it would be possible to meet in the Noisebridge hackerspace.

I have a request for those bayesianly inclined among the LW crowd.

I had mentioned in an article that I had become addicted to watching theist/atheist debates. Unfortunately I have not weaned myself off this addiction as of yet. In one I watched recently, it is William Lane Craig (the theist that Eliezer wanted to debate) arguing for the provability of the resurrection of Jesus, and New Testament scholar Dr. Bart Ehrman arguing for its historical unprovability.

At some point in this debate, Dr. Ehrman argues that miracles are fundamentally unprovable by his...

1cupholder
I didn't bother listening to Craig's rebuttal, because I agree with you that what Ehrman's saying from 34:58 to 36:02 is poorly argued, and I don't even need Bayes' theorem to see it. My transcription of Ehrman: But this is silly. If a historian, or anyone, can establish that X probably happened, they can establish that X's complement probably didn't happen (because P(X) + P(¬X) = 1). So how can Ehrman argue that history can establish what probably happened but not what probably didn't? I suspect there are other issues (like playing definitional games with the word 'miracle' and suggesting an event 'defies probability' - what would that even mean?) but his claims about what historians can and can't do are the most obvious issue to me.
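Since the original request was for a Bayesian treatment, here is a toy sketch (all numbers invented purely for illustration) of the update for a very-low-prior event X given testimony T, and of the complement point:

```python
# Toy Bayesian update for a low-prior event X given testimony T.
# All numbers are invented for illustration only.
prior = 1e-8                 # P(X): prior probability of the event
p_t_given_x = 0.99           # P(T | X): testimony appears if X happened
p_t_given_not_x = 1e-3       # P(T | ~X): testimony appears anyway (error, legend)

posterior = (p_t_given_x * prior) / (
    p_t_given_x * prior + p_t_given_not_x * (1 - prior)
)

print(f"P(X | T)  = {posterior:.2e}")       # about 9.9e-06: raised, still tiny
print(f"P(~X | T) = {1 - posterior:.6f}")   # the two always sum to exactly 1
```

Whatever procedure outputs P(X | T) automatically outputs P(¬X | T) = 1 − P(X | T), which is exactly the point against Ehrman's asymmetry.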
0NancyLebovitz
I think we have a problem. While the default at LW is to not want to believe in possible miracles done by God, there's considerable interest in knowing whether we live in a simulation. Aside from logic or from careful examination of physics which find indicators of another level, the other category of evidence for this world being a simulation is transient anomalies. How do you evaluate reports of anomalies?
0cupholder
I think my main rule of thumb is to think about how anomalous the anomaly is, and the strength of the evidence for it. More anomalous and less well substantiated anomalies get taken less seriously.

Martin Gardner died today.

2Morendil
So long, and thanks for all the ahas.

Wired has an article 'Accept Defeat: The Neuroscience of Screwing Up,' about how scientists and the brain handle unexpected data and anomalies, and our preference to ignore them or explain them away.

I sometimes get various ideas for inventions, but I'm not sure what to do with them, as they are often unrelated to my work, and I don't really possess the craftsmanship to make prototypes and market or investigate them on my own. Does anyone have experience and/or recommendations for going about selling or profiting from these ideas?

2ocr-fork
Sell patents. (or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)
0Kevin
"Sell patents" is right, but only if your invention is something that sells and markets itself because it is so obviously awesome and not just an incremental improvement on an existing invention. Even if it's an incredibly awesome invention, you may be better off raising money and doing it all yourself. I'm generally good at telling people whether or not their ideas are any good -- if you want to talk privately sometime, let me know.

I vaguely recall a thread where folks discussed what makes jokes funny, and advanced some theories. This may well have been in an Open Thread or buried deep within the comments of an unrelated post - at any rate I can't find it.

Anyone who remembers seeing it or participating, I'd appreciate help locating it...

4NancyLebovitz
http://lesswrong.com/lw/1s4/open_thread_february_2010_part_2/1mps I was able to track it down because I remembered enough about a comment I'd made. Do we have advanced search? How about adding (open?) tagging for comments? Jewish saying: Who is strong? Whoever can resist telling a joke. Because I am not strong: A cowboy was wearing a paper hat, paper chaps, paper boots. He was arrested. For rustling.
0Morendil
Thanks! What I had in mind exactly. Searching for "humor" turned up too many results, not sorted in any helpful way. The search results are provided by Google, I don't know how customizable that is but I'd assume not much. What would have helped in this instance is a way to sort by date.

Egan's Law is "It all adds up to normality." What adds up to what, exactly?

We have always lived in the universe of quantum mechanics, or the Tegmark Level IV Multiverse, but I don't understand why it is supposed to add up to normality. I understand that this word "normality" is supposed to help me dissolve some of the weirder aspects of this universe, but it doesn't seem to work as I am not at all convinced that the universe actually does add up to normality.

Is it really proper to assume from the start that the universe (multiverse) ad...

4RobinZ
I believe "normality" in this case refers to the Middle World of day-to-day experience. Whatever your theory predicts with respect to small particles colliding at near-light speeds (for example), it ought also to predict that you can turn on your stove, boil water, and steep a cup of tea.

Continuing my thinking about Pascal's mugging, I think I have an argument for why one specifically wants the prior probability of a reward to shrink in inverse proportion to the reward - a linear relationship - and not one of the other possible relationships. A longish excerpt:

One way to try to escape a mugging is to unilaterally declare that all probabilities below a certain small probability will be treated as zero. With the right pair of lower limit and mugger's credibility, the mugging will not take place.

But such an ad hoc method violates common axioms of probability theory, and thus w...
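A toy numerical sketch of why the linear (inverse-proportional) relationship is the interesting boundary case (my own illustration; the constants are arbitrary): if the prior that an offered reward R is genuine falls off as R^(-k), the expected value of the offer is bounded exactly when k >= 1.

```python
# Toy model: expected value of a mugger's offer of reward R when the prior
# that the offer is genuine falls off as R**(-k). Constants are arbitrary.

def expected_value(reward, k, c=1e-10):
    prior = min(1.0, c / reward**k)  # prior credibility of the offer
    return prior * reward

for reward in (1e6, 1e12, 1e18):
    print(f"R = {reward:.0e}:  "
          f"k=0.5 -> EV {expected_value(reward, 0.5):.3e}   "
          f"k=1.0 -> EV {expected_value(reward, 1.0):.3e}")

# With k = 1 (prior proportional to 1/R) the expected value is pinned at the
# constant c, so naming a bigger reward gains the mugger nothing; with any
# k < 1 the expected value grows without bound as R grows.
```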

Has anybody else thought that the Inverse Ninja Law is just the Bystander Effect in disguise?

(Yes, I've been reading this.)

Let's suppose the Church-Turing thesis is true.

Are all mathematical problems solvable?

Are they all solvable by humans?

If there is a proof* for every true theorem, then we need only enumerate all possible texts and look for one that proves - or disproves - say, Goldbach's conjecture. The procedure will stop every time.

(* Proof not in the sense of "formal proof in a specific system", but "a text understandable by a human as a proof".)

But this can't possibly be right - if the human mind that looks at the proofs is Turing-computable, then we...

5AlephNeil
Picture an enormous polynomial f(x, y, ...) with integer coefficients: something like 3x^2 - 6y + 5 but bigger. Now, if the Diophantine equation f(x, y, ...) = 0 has a solution then this can easily be proved - you just have to plug in the numbers and calculate the result. (Even if you're not told the numbers in advance, you can iterate over all possible arguments and still prove the result in a finite time.)

But now suppose that this particular f doesn't have any solutions. (Think about whether you want to deny that the previous sentence is meaningful - personally I think it is.) Can we necessarily prove it doesn't have any solutions? Well, there's no algorithm that can correctly decide whether f has a solution for all Diophantine equations f. (See "Hilbert's Tenth Problem".) So certainly there exists an f, without any solutions, such that "f has no solutions" is not a theorem of (say) ZFC set theory. (Because for any formal axiomatic system, one can write down an algorithm that will enumerate all of its theorems.)

Perhaps, like Roger Penrose, you think that human mathematicians have some magical non-algorithmic 'truth-seeing' capability. Unfortunately, human thought being non-algorithmic would require that physics itself be uncomputable, i.e. an accurate computer simulation of a brain solving a mathematical problem would be impossible even in principle. Otherwise, you must conclude that some theorems of the form "this Diophantine equation has no solutions" are not humanly provable.
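A small sketch of the asymmetry AlephNeil describes (my own illustration in Python): searching for a solution is a semi-decision procedure. It halts, with the solution as proof, if one exists, and runs forever otherwise.

```python
from itertools import count, product

def f(x, y):
    return 3 * x**2 - 6 * y + 5  # the example polynomial from the comment

def find_solution():
    """Semi-decision procedure: halts iff f(x, y) = 0 has an integer
    solution, and the returned pair is itself the proof."""
    for bound in count(0):
        for x, y in product(range(-bound, bound + 1), repeat=2):
            if f(x, y) == 0:
                return (x, y)

# For this particular f the call would never return: 3x^2 - 6y + 5 is always
# congruent to 2 (mod 3), so it has no integer roots -- but that fact was
# proved *outside* the search. Hilbert's Tenth Problem says no algorithm
# supplies such an outside proof for every f.
```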
4orthonormal
I think that Eliezer's post, Complexity and Intelligence, is really germane to your query. Here's a thought experiment, just for fun: Let's say, for simplicity's sake, that your mind (and environment) is currently being run on some Turing machine T, which had initial state S. What if you considered the sentence G, which is a Gödel-encoded statement that "if you run T on S, it will never contain an instance of humpolec rationally concluding that G is a theorem"? (Of course, specifying that predicate would be a beastly problem, but in theory it's a finite mathematical specification.) You would therefore be actually unable to rationally conclude that G is a theorem, and of course it would thereby be a true, finitely specifiable mathematical statement. It's up to you, of course, which bullets you choose to bite in response to this.
3Vladimir_M
You seem to be somewhat confused about the basic notions of computability and Goedel's incompleteness results and their mutual connection. Besides the replies you've received in this thread, I'd recommend that you read through this lecture by Scott Aaronson, which is, out of anything I've seen so far, the clearest and most accessible brief exposition of these issues that is still fully accurate and free of nonsense: http://www.scottaaronson.com/democritus/lec3.html
3Jordan
Nope. Not if physics is computable. Nope. Not if human minds are computable. It means exactly that your Turing machine enumerating all possible texts may never halt. What does it mean in terms of the validity of the theorem? Nothing. The truth value of that theorem may be forever inaccessible to us without appeal to a more powerful axiomatic system or without access to a hypercomputer.
3ata
Alas, both of those are correct. Read about Gödel's Incompleteness Theorem, preferably from Gödel, Escher, Bach by Douglas Hofstadter. As for the specific example of Goldbach's conjecture, I'd bet on it being provable (or if it is false, the procedure would prove that by finding a counterexample), but yes, there are true facts of number theory that cannot be proven. Next, if I remember correctly, theorem-proving programs have already produced correct proofs that are easily machine-verifiable but intractably long and complicated and apparently meaningless to humans.
0humpolec
I read GEB. Doesn't Gödel's theorem talk about proofs in specific formal systems? I consider this a question of scale. Besides, the theorem-proving program is written by humans and humans understand (and agree with) its correctness, so in some sense humans understand the correct proofs.
0ata
It applies to any formal system capable of proving theorems of number theory. But then what do you mean by "possible to follow by a human"?
0humpolec
Right. So if humans' reasoning follows some specified formal system, they can't prove it. But does it really follow one? We can't, for example, point to some Turing machine and say "It halts because of (...), but I can't prove it" - because in doing so we're already providing some sort of reasoning. Maybe "it's possible for a human, given enough time and resources, to verify the validity of such a proof".
[-]ata100

Right. So if humans' reasoning follows some specified formal system, they can't prove it. But does it really follow one?

Yes and no. It is likely that the brain, as a physical system, can be modeled by a formal system, but "the human brain is isomorphic to a formal system" does not imply "a human's knowledge of some fact is isomorphic to a formal proof". What human brains do (and, most likely, what an advanced AI would do) is approximate empirical reasoning, i.e. Bayesian reasoning, even in its acquisition of knowledge about mathematical truths. If you have P(X) = 1 then you have X = true, but you can't get to P(X) = 1 through empirical reasoning, including by looking at a proof on a sheet of paper and thinking that it looks right. Even if you check it really really carefully. (All reasoning must have some empirical component.) Most likely, there is no structure in your brain that is isomorphic to a proof that 1 + 1 = 2, but you still know and use that fact.

So we (and AIs) can use intelligent reasoning about formal systems (not reasoning that looks like formal deduction from the inside) to come to very high or very low probability estimates for certain formally...

3orthonormal
Well written— maybe this deserves a full post, even granted that the posts you linked are very near in concept-space.
2ata
Perhaps. But would it be controversial or novel enough to warrant one? I'd think that most people here 1) already don't believe that the human mind is more powerful than a universal Turing machine or a formal system, and 2) could correctly refute this type of argument, if they thought about it. Am I wrong about either of those (probably #2 if anything)? Or, perhaps, have sufficiently few people thought about it that bringing it up as a thought exercise (presenting the argument and encouraging people to evaluate it for themselves before looking at anyone else's take) would be worthwhile, even if it doesn't generally result in people changing their minds about anything?
4RobinZ
It would be to some extent redundant with the posts you linked, but the specific point about the difference between human reasoning and formal reasoning is a new one to this blog. I, too, would be interested in reading it.
4Blueberry
You're probably right about both, but I would still enjoy reading such a post.
1orthonormal
I think it could turn out really well if written with the relatively new lurkers in mind, and it does include a new idea that takes a few paragraphs to spell out well. That says "top-level" to me.
0AlephNeil
Comments:

(1) Empirical vs Non-empirical is, I think, a bit of a red herring, because insofar as empirical data (e.g. the output of a computer program) bears on mathematical questions, what we glean from it could all, in principle, have been deduced 'a priori' (i.e. entirely in the thinker's mind, without any sensory engagement with the world).

(2) You ought to read about Chaitin's constant 'Omega', the 'halting probability', which is a number between 0 and 1. I think we should be able to prove something along these lines: Assume that there is a constant K such that your "mental state" does not contain more than K bits of information (this seems horribly vague, but if we assume that the mind's information is contained in the body's information then we just need to assume that your body never requires more than K bits to 'write down'). Then it is impossible for you to 'compress' the binary expansion of Omega by more than K + L bits, for some constant L (the same L for all possible intelligent beings). This puts some very severe limits on how closely your 'subjective probabilities' for the bits of Omega can approach the real thing. For instance, either there must be only finitely many bits b where your subjective probability that b = 0 differs from 1/2, or else, if you guess something other than 1/2 infinitely many times, you must 'guess wrongly' exactly 1/2 of the time (with the pattern of correct and incorrect guesses being itself totally random).

Basically, it sounds like you're saying: "If we're prepared to let go of the demand to have strict, formal proofs, we can still acquire empirical evidence, even very convincing evidence, about the truth or falsity of mathematical statements." This may be true in some cases, but there are others (like the bits of Omega) where we find mathematical facts (expressible as propositions of number theory) that are completely inaccessible by any means. (And in some way that I'm not quite sure yet how to express, I suspect that
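For reference, the standard definition being invoked (not part of the comment itself): for a prefix-free universal machine U, Chaitin's halting probability is

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

Knowing the first n bits of Omega_U would settle the halting problem for every program of length up to roughly n, which is why its bits resist compression in the way AlephNeil sketches.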
0humpolec
I'll have to think some more about it, but this looks like a correct answer. Thank you.
0ata
I myself will have to recheck this in the morning, as it's 4:30 AM here and I am suspicious of philosophical reasoning I do while tired, but I'll probably still agree with it tomorrow since I mostly copied that (with a bit of elaboration) from something I had already written elsewhere. :)
2NancyLebovitz
I also believe there are true things about the material universe which people are intrinsically unable to comprehend-- aspects so complex that they can't be broken down into few or small enough chunks for people to fit them into their minds. This isn't the same thing as chaos theory-- I'm suggesting that there are aspects of the universe which are as explicable as Newtonian mechanics-- except that we, even with our best tools and with improved brains, won't be able to understand them. This is obviously unprovable (and I don't think it can be proved that any particular thing is unmanageably complex*), but considering how much bigger the universe is than human brains, I think it's the way to bet. *Ever since it was proven that arbitrary digits of pi can be computed (afaik, only in binary) without computing the preceding digits, I don't think I can trust my intuition about what tasks are possible.
5Blueberry
Not just in binary.
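For reference, the digit-extraction result alluded to here is (I believe) the Bailey-Borwein-Plouffe formula, which yields the nth hexadecimal (hence binary) digit of pi without computing the earlier ones; extraction methods for other bases, including base 10, were found later:

```latex
\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)
```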
0humpolec
Is that really a 'physical' aspect, or a mathematical one? Newtonian mechanics can be (I think) derived from lower-level principles. So do you mean something that is a consequence of a possible 'theory of everything', or a part of it?
0NancyLebovitz
I'm not dead certain whether "physical" and "mathematical" can be completely disentangled. I'm assuming that gravity following an inverse square law is just a fact which couldn't be deduced from first principles. I'm not sure what "theory of everything" covers. I thought it represented the hope that a fundamental general theory would be simple enough that at least a few people could understand it.
3Nick_Tarleton
It may actually be derivable anthropically: exponents other than 2 or 1 prohibit stable orbits, and an exponent of 1, as Zack says, implies 2-dimensional space, which might be too simple for observers.
0[anonymous]
Though it should be noted that even if we allow for anthropic arguments, it is impossible to ascertain whether the inverse-square law is fundamentally true, or just a very good approximation of some far more complex actual law. Therefore, the truly fundamental laws are impenetrable to such reasoning: at most, we can ascertain that the fundamental laws, whatever they are, must have approximations with anthropically relevant properties to the extent that we are influenced by them. And indeed, when it comes to gravity, the inverse-square law is highly accurate for our practical purposes, but it's only a good approximation of the predictions of the more complicated general relativity -- itself likely just an approximation of the more accurate and complicated quantum gravity -- that happens to hold in the conditions that prevail in the part of spacetime we inhabit. I suppose the only way out of this would be to devise an anthropic argument where our existence hinges on the lack of arbitrarily small deviations from the law we wish to derive anthropically. I don't know if perhaps some sound arguments along those lines could be derived from reasoning about the very early universe.
3Zack_M_Davis
You can deduce it from the fact that space is three-dimensional (consider an illustrative diagram), but why space should be three-dimensional, I can't say.
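The illustrative diagram presumably shows the standard flux argument: a conserved "something" spreading from a point source in three dimensions is diluted over concentric spheres whose area grows as r^2, so

```latex
|F(r)| \cdot 4\pi r^2 = \text{const} \quad \Longrightarrow \quad |F(r)| \propto \frac{1}{r^2}
```

In n spatial dimensions the same argument gives |F(r)| proportional to r^-(n-1), which is why an inverse-first-power law would point to two-dimensional space, as Nick_Tarleton notes below.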
1JoshuaZ
That's a plausible argument. A priori, one could have a three-dimensional world with some other inverse law, and it would be mathematically consistent. It would just be weird (and would rule out a lot of simple causation mechanisms for the force).
4Vladimir_M
Well, we do inhabit a three-dimensional world in which the inverse-square law holds only approximately, and when a more accurate theory was arrived upon, it turned out to be weird and anything but simple. Interestingly, when the perihelion precession of Mercury turned out be an unsolvable problem for Newton's theory, there were serious proposals to reconsider whether the exponent in Newton's law might perhaps be not exactly two, but some other close number: Of course, in the sort of space that general relativity deals with, our Euclidean intuitive concept of "distance" completely breaks down, and r itself is no longer an automatically clear concept. There are actually several different general-relativistic definitions of "spatial distance" that all make some practical sense and correspond to our intuitive concept in the classical limit, but yield completely different numbers in situations where Euclidean/Newtonian approximations no longer hold.
0NancyLebovitz
Also, I don't know if there's any a priori reason for gravity.
-2humpolec
A theory of everything as I see it (and apparently Wikipedia agrees) would allow us (in principle - given full information and enough resources) to predict every outcome. So every other aspect of the physical universe would be (again, in principle) derivable from it.
0NancyLebovitz
I think I'm saying that there will be parts of a theory of everything which just won't compress small enough to fit into human minds, not just that the consequences of a TOE will be too hard to compute. Do you think a theory of everything is possible?
1Kevin
Parts that won't compress? Almost certainly, the expansions of small parts of a system can have much higher Kolmogorov complexity than the entire theory of everything. The Tegmark IV multiverse is so big that a human brain can't comprehend nearly any of it, but the theory as a whole can be written with four words: "All mathematical structures exist". In terms of Kolmogorov complexity, it doesn't get much simpler than those four words. For anyone reading this that hasn't read any of Tegmark's writing, you should. http://space.mit.edu/home/tegmark/crazy.html Tegmark is one of the best popular science writers out there, so the popular versions he has posted aren't dumbed down, they are just missing most of the math. Tegmark predicts that in 50 years you will be able to buy a t-shirt with the theory of everything printed on it.
4ata
To be fair, every one of those words is hiding a substantial amount of complexity. Not as much hidden complexity as "A wizard did it" (even shorter!), but still. (I do still find the Level IV Multiverse plausible, and it is probably the most parsimonious explanation of why the universe happens to exist; I only mean to say that to convey a real understanding of it still takes a bit more than four words.)
3Tyrrell_McAllister
Actually, I'm quite unclear about what the statement "All mathematical structures exist" could mean, so I have a hard time evaluating its Kolmogorov complexity. I mean, what does it mean to say that a mathematical structure exists, over and above the assertion that the mathematical structure was, in some sense, available for its existence to be considered in the first place?

ETA: When I try to think about how I would fully flesh out the hypothesis that "All mathematical structures exist", all I can imagine is that you would have the source code for a program that recursively generates all mathematical structures, together with the source code of a second program that applies the tag "exists" to all the outputs of the first program. Two immediate problems:

(1) To say that we can recursively generate all mathematical structures is to say that the collection of all mathematical structures is denumerable. Maintaining this position runs into complications, to say the least.

(2) More to the point that I was making above, nothing significant really follows from applying the tag "exists" to things. You would have functionally the same overall program if you applied the tag "is blue" to all the outputs of the first program instead. You aren't really saying anything just by applying arbitrary tags to things. But what else are you going to do?
2PhilGoetz
What are the Tegmark multiverses relevant to? Why should I try to understand them?
0Thomas
Really? In which parallel universe? Every one? This one?
1Kevin
This one.
0Thomas
Don't we live in a multiverse? Doesn't our Universe split in two after every quantum event? How then can Tegmark & Co. predict something for the next 50 years? Almost anything will certainly happen - somewhere in the Multiverse. Just as will almost everything opposite, only on the other side of the Multiverse. According to Tegmark, at least. Now he predicts a T-shirt in 50 years' time! Isn't it a little weird?
0JoshuaZ
All predictions in a splitting multiverse setting have to be understood as saying something like "in the majority of resulting branches, the following will be true." Otherwise predictions become meaningless. This fits in nicely with a probabilistic understanding. The correct probability of the event occurring is the fraction of universes descended from this current universe that satisfy the condition. Edit: This isn't quite true. If I flip a coin, the probability of it coming up heads is in some sense 1/2 even though if I flip it right now, any quantum effects might be too small to have any effect on the flip. There's a distinction between probability due to fundamentally probabilistic aspects of the universe and probability due to ignorance.
4Sniffnoy
Let's remember that if we're talking about a multiverse in the MWI sense, then universes have to be weighted by the squared norm of their amplitude. Otherwise you get, well, the ridiculous consequences being talked about here... (as well as being able to solve problems in PP in polynomial time on a quantum computer).
0JoshuaZ
Right, ok. So in that case, even if more new universes are being created by a given specific descendant universe, the total measure of that set of universes won't be any higher than that of the universe they descended from, yes? So that makes this problem go away.
0Thomas
Any credible reference to that?
0JoshuaZ
Not off the top of my head. It follows from having the squared norm and from the transformations being unitary. Sniffnoy may have a direct source for the point.
0Thomas
How do you know that something will be included in the majority of branches? Suppose that a nuclear war starts in a branch. A lot of radioactivity will be around, a lot of quantum events, a lot of splittings and a lot of "postnuclear" parallel worlds. The majority? Maybe, I don't know. Tegmark knows? I don't think so.
0JoshuaZ
The small amount of additional radioactivity shouldn't substantially alter how many branches there are. Keep in mind that in the standard multiverse model for quantum mechanics, a split occurs for a lot of events that have nothing to do with radioactivity. For example, a lot of behavior with electrons will also cause splitting. The additional radioactivity from a nuclear exchange simply won't matter much.
0Thomas
ANY increase, for whatever reason, in the number of splittings would trigger an exponential surge of that particular branch. The number of splittings is the dominant fitness factor. Those universes which split the most inherit the Multiverse. If you buy this Multiverse theory, of course - I don't.
0JoshuaZ
Hmm, that's a valid point. It doesn't increase linearly with the number of splittings. I still don't think it should matter. Every atom that isn't a simple hydrogen atom is radioactive to some extent (the probability of decay is just really, really tiny). I'm not at all sure that a radioactive planet (in the sense of having a lot of atoms with a non-negligible chance of decay) will actually produce more branches than one which does not. Can someone who knows more about the relevant physics comment? I'm not sure I know enough to make a confident statement about this.
-4Thomas
MWI is almost the default religion of this list's members. And as in every religion, awkward questions are ignored. Downvoted, maybe.
2JoshuaZ
It might help if you read the relevant sections of the conversation before you make accusations about something being a "religion." Note that Sniffnoy's remark above already resolved this.
0Thomas
Which of Sniffnoy's remarks resolves this?
6Sniffnoy
Everything is weighted by squared-norm of the amplitude. And, y'know, quantum mechanics is unitary. What needs to be preserved, is preserved.

More generally, we might imagine that we lived in a world where physics was just probabilistic in the ordinary way, rather than quantum (in the sense of based on amplitudes); MWI might also be a natural way to think if we lived in that world (though not as natural as it is in the world of actual QM, as in that world we wouldn't have any real need for MWI); then, well, everything would be weighted by probability, and everything would be stochastic rather than unitary. Of course if you don't require preservation of whatever the appropriate weighting is, you'll get an absurd result.

You do seem to be pretty confused about what MWI says; it does not, as you seem to suggest, posit a finite number of universes, which split at discrete points, and where the probability of an event is the proportion of universes it occurs in. "Universes" here are just identified with the states that we're looking at a wave function over, or perhaps trajectories through such, so there are infinitely many. And having the universes split and not interfere with each other would work with ordinary probability, but it won't work with quantum amplitudes - if that were the case we'd just see probabilistic effects, not quantum effects. The many worlds of MWI do interfere with each other. When decoherence occurs the result is to effectively split collections of universes off from each other so they don't interfere anymore, but in a coherent quantum system the notion of splitting doesn't make much sense.

Remember, the key suppositions of MWI are just that A. the equations of quantum mechanics are literally true all the time - there is no magical waveform collapse; and B. the wavefunction is a complete description of reality; it's not guiding any hidden variables. (And I suppose, C., decoherence is responsible for the appearance of collapse, etc., but that's
2Mitchell_Porter
You can't get the probabilities from those suppositions. And without the probabilities, MWI has no predictive power; it's just a metaphysics which says "Everything that can happen does happen", and which then gives wrong predictions if you count the worlds the way you would count anything else. But even if you can justify the required probability measure, there is another problem. John Bell once wrote of Bohmian theories (see last paragraph here):

In a Bohmian theory, you take the classical theory that is to be quantized, and add to the classical equations of motion a nonlocal term, dependent on the wavefunction, which adds an extra wiggle to the motion, giving you quantum behavior. The nonlocality means that you need a notion of objective simultaneity in order to define that term. So when you construct the Bohmian counterpart of a relativistic quantum theory (i.e. of a quantum field theory), you will still see relativistic effects like length contraction and time dilation (since they are in the classical counterpart of the quantum field theory), but you have to pick a reference frame in order to make the Bohmian construction - which might be seen as an indication of its artificiality.

The same thing happens in MWI. In MWI you reify the wavefunction - you assume it is a real thing - and then you divide it up into worlds. To perform this division, you need a universal time coordinate, so relativity disappears at the fundamental level. Furthermore, since there is no particular connection between the worlds of the wavefunction in one moment, and the worlds of the wavefunction in the next moment, you don't even have persistence of a world in time, so you can't even think about performing a Lorentz transformation. Instead, you have a set of disconnected world-moments, with mysterious nonstandard probabilities attached to them in order to make predictions turn out right.

All of that says to me that the MWI construction is just as artificial as the Bohmian one.
1Sniffnoy
Sorry, yes. I took weighting things by squared-norm of amplitude as implicit, seeing as we're discussing QM in the first place.
0Thomas
That doesn't excuse the MWI at all. It could very well be that something else is needed to resolve the dilemmas. And you haven't answered my question - maybe something else has.
0Sniffnoy
The weighting quantity is conserved. So far as I can tell, that entirely answers the objection you raised. I'm really not seeing where it fails. Could you explain? Edit: s/preserved/conserved/
0Thomas
If I understand you correctly, there is an equal number of world splits every second in every branch. They are all weighted, so that no branch can explode? Is that correct?
1Sniffnoy
Worlds are weighted by squared-norm of amplitude, a quantity that is conserved. If two worlds are really not interfering with each other any more, then amplitude will not somehow vanish from the future of one and appear in the future in the other.
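A small numerical illustration of "the weighting quantity is conserved" (my own sketch, not Sniffnoy's): unitary evolution preserves the total squared norm of the amplitudes, no matter how the state spreads across basis states.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix, via QR decomposition of a complex Gaussian matrix.
n = 8
q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Start with all amplitude in one basis state ("one world"), then evolve.
psi = np.zeros(n, dtype=complex)
psi[0] = 1.0
psi_later = q @ psi   # now spread over all eight basis states

print(np.sum(np.abs(psi) ** 2))        # 1.0
print(np.sum(np.abs(psi_later) ** 2))  # 1.0, up to floating-point error
```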
0JoshuaZ
In this remark. His expansion below should make it clear what the relevant points are.
0humpolec
I think a relatively simple theory of everything is possible. This is however not based on anything solid - I'm a Math/CS student and my knowledge of physics does not (yet!) exceed high school level.
0humpolec
One thing I haven't elaborated on here (and this is probably more hand-waving/philosophy than mathematics): If the Church-Turing thesis is true, there is no way for a human to solve every mathematical problem. However, does it have to follow that not every theorem has a proof? What if every true theorem has a proof, not necessarily understandable to humans, yet somehow sound? That is, there exists a (Turing-computable) mind that can understand/verify this proof. (Of course there is no one 'universal mind' that would understand all proofs, or this would obviously fail. And for the same reason there can be no procedure for finding such a mind/verifying one is right.) Does the idea of not-universally-comprehensible proofs make sense? Or does it collapse in some way?
[-]taw00

Does the inverse of the fundamental attribution error have a good name?

1cousin_it
I just thought about it the other day and my brain went into a startling direction. The fundamental attribution error says that it's self-contradictory to explain other people's actions by their internal traits while explaining your own actions by external circumstances. It goes on to say that the second explanation (circumstances) is uniformly more correct. What if that's an error, and the first explanation actually works better in many cases? For example, the set point of happiness does appear to exist, so there's some truth in labeling someone "an unhappy person" if you see them unhappy at the moment.
2taw
I'm agnostic about setpoint theory. For some changes their effects on happiness (either positive or negative) fade somewhat with time, but I'm not convinced at all that it's true for all changes, or that it's typical for the effect to fade down to anywhere near 0%. Maybe other changes never fade. Maybe it's even typical for changes not to fade. Setpoint theory sounds too much like generalizing from one example. (Not that any of this is related to the lack of a good name for "preference for situational attributions" or whatever we want to call it.)
1thomblake
Indeed, virtue theory in ethics suggests that people usually act according to habits of behavior. Of course, there's some empirical psychology that suggests this may be incorrect.
0thomblake
Is that a bias that exists? Does it exist in the same people as fundamental attribution error? Can they both function simultaneously?
0Unnamed
I don't think there's a standard name for it. I'd go with "bias towards situational attributions."

I believe the "unreasonable effectiveness of mathematics in the natural sciences" can be explained based on the following idea. Physical systems prohibit logical contradiction, and hence, physical systems form just another kind of axiomatic, logical, and therefore mathematical system. To take a crude example, two different rocks cannot occupy the same point in space, due to logical contradiction. This allows the ability to mathematically talk about the rocks. Note that this example is definitively crude, since there are other things like bosons w... (read more)

To take a crude example, two different rocks cannot occupy the same point in space, due to logical contradiction.

Except that... that isn't a logical contradiction!

You have inadvertently demonstrated one of the best arguments for the study of mathematics: it stretches the imagination. The ability to imagine wild, exotic, crazy phenomena that seem to defy common sense -- and thus, in particular, not to confuse common sense with logic -- is crucial for anyone who seriously aspires to understand the world or solve unsolved problems.

When Albert Einstein said that imagination was more important than knowledge, this is surely what he meant.

-1zero_call
I can see how that phrasing would strike you as being redundant or inaccurate. To try to clarify -- The rocks not occupying the same point in space is a logical contradiction in the following sense: If it wasn't a logical contradiction, there wouldn't be anything preventing it. You might claim this is a "physical" contradiction or a contradiction of "reality", but I am attempting to identify this feature as a signature example of a sort of logic of reality.
4Tyrrell_McAllister
In this comment, I wrote: You replied: Actually, they are both true if A itself is false. This is the import of the logical principle ex falso quodlibet.

But I take your point to be that certain logical statements (such as "A => ~~A") are true of any actual physical system. It is true that things are a certain way. They are not some other way. So, if a territory satisfies A, it follows that it does not satisfy ~A. And this is a fact about the territory. After all, the point of a map is to be something from which you can extract purported facts about the territory.

However, what is not in the territory is the delineation of its properties into axioms, on the one hand, and theorems, on the other. There are just the properties of the territory, all co-equal, none with logical priority. The territory just is the way it is.

For example, consider the statements "A" and "~~A", where A is the application of some particular predicate to the territory. It is not as though there is one property or feature of the territory according to which it satisfies A, while there is some other property of the territory according to which it satisfies ~~A. That feature of the territory in virtue of which it satisfies A is exactly the same feature in virtue of which it satisfies ~~A. In the logic, "A" and "~~A" are two distinct well-formed formulas, and it can be proven that one entails the other. But in the territory there are no two distinct features corresponding to these two wffs, so it's not really sensible to speak of an entailment relationship in any nontrivial sense. The territory just is the way that the territory is, and this way, being the way that the territory is, is the way that the territory is. There is nothing more to be said with regard to the territory itself, qua logical system.

What about a tautology such as "A => ~~A"? Tautologies do give us true statements about the territory. But, importantly, such a statement is not true in virtue of any feature of the terri
0zero_call
Thank you for the comment, and I hope this reply isn't too long for you to read. I think your last sentence sums up your comment somewhat: In support of this, you mention: It seems like things are getting confused here.

I take "A => ~~A" to be a necessary condition for proposition A to make sense. In order to make things concrete, let me use a real example. Say that proposition A is, "This particular rock weighs 1.5 pounds with uncertainty sigma." This seems like a fairly reasonable, easily imaginable statement. Now clearly, A is simply a rendition or re-representation of the reality that is the physical system. In other words, proposition A only tells you what reality tells you by holding the rock in your hands, or throwing it through the air, or vaporizing it and measuring the amount of output energy. The only difference in this case is that the reality is encoded in human language.

For A to make sense, clearly "A => ~~A" must be true. For the rock to weigh 1.5 plus/minus sigma, it must not - not weigh 1.5 plus/minus sigma. That strikes me as more or less a requirement imposed by human language, not so much a requirement of physical reality. For this reason I think that your example of "A => ~~A" does not get to the heart of my point.

My point is slightly different. Consider again the proposition "A true => (if A then B) OR (if A then not B)". Take B as: "This rock is heavier than this pencil." Now, assuming that the pencil does not lie in the weight range 1.5 plus/minus sigma, then this proposition must be true. And now, this statement is significantly more complicated than "A => ~~A", and it implies that (under proper restrictions) you can make longer logical statements, and continuing further, statements which are no longer trivial and just a property of human language.

Side-note: I suppose these particular examples are all tautological so they don't quite show the full richness of a logical system. However, it would be easy to make theorems, such as "if A AND C,
1Tyrrell_McAllister
I'm a little confused by this example. The proposition A => (if A then B) OR (if A then not B) is a logical tautology. Its truth doesn't depend on whether "the pencil does not lie in the weight range 1.5 plus/minus sigma". In fact, just the consequent (if A then B) OR (if A then not B) by itself is a logical tautology.

So, I have two questions: (1) Is there a reason why you didn't use just the consequent as your example? Is there a reason why it wouldn't "get to the heart" of your point? (2) Just to be perfectly clear, are you claiming that the truth of some tautologies, such as A => ~~A, is "trivial and just a property of human language", while the truth of some other tautologies is not?
0zero_call
Sorry, I caught that myself earlier and added a sidenote, but you must have read it before I finished: Edit: Or, sorry, just to complete, in case you had read that -- the tautology does depend on whether the pencil lies in the range of 1.5 plus/minus sigma. If the pencil lies in that range, we can't say B or ~B. In answer to (1), I'm not using the consequent because you identified the fact that the consequent can imply anything by logical explosion. I was referring to the "A => ~~A" example not getting to the heart of the point because that example is too simple to reveal anything of substance, as I subsequently discuss. In answer to (2), I am not claiming that some tautologies are "less true". I am just roughly showing how there is a gradation from obvious tautologies to less obvious tautologies to tautologies which may not even be recognizable as tautologies, to theorems, and so on.
0Tyrrell_McAllister
First, I, at least, am glad that you're asking these questions. Even on purely selfish grounds, it's giving me an opportunity to clarify my own thoughts to myself. Now, I'm having a hard time understanding each of your paragraphs above.

B meant "This rock is heavier than this pencil." So, "B or ~B" means "Either this rock is heavier than this pencil, or this rock is not heavier than this pencil." Surely that is something that I can say truthfully regardless of where the pencil's weight lies. So I don't understand why you say that we can't say "B or ~B" if the pencil's weight lies in a certain range.

I didn't say that the consequent can imply anything "by logical explosion". On the contrary, since the consequent is a tautology, it only implies TRUE things. Given any tautology T and false proposition P, the implication T => P is false.

More generally, I don't understand the principle by which you seem to say that A => ~~A is "too simple", while other tautologies are not. Or are you now saying that all tautologies are too simple, and that you want to focus attention on certain non-tautologies, like "if A AND C, then B"?

But surely this is just a matter of our computational power, just as some arithmetic claims seem "obvious", while others are beyond our power to verify with our most powerful computers in a reasonable amount of time. The collection of "obvious" arithmetic claims grows as our computational power grows. Similarly, the collection of "obvious" tautologies grows as our computational power grows. It doesn't seem right to think of this "obviousness" as having anything to do with the territory. It seems entirely a property of how well we can work with our map.
0zero_call
My idea was that the rock weighs 1.5 plus/minus sigma. If the pencil then weighs 1.5 plus/minus sigma, then you can't compare their weights with absolute certainty. The difference in their weights is a statistical proposition; the presence of the sigma factor means that the pencil must weigh less than (1.5 minus sigma) or more than (1.5 plus sigma) for B or ~B to hold. But anyways, I might concede your point as I didn't really intend this to be so technical.

Sorry, "logical explosion" is just a synonym for "ex falso quodlibet", which you originally mentioned. You originally pointed out that the consequent can imply anything because of ex falso quodlibet, when A is not true. That wasn't my intention, so I added the "A true" qualifier.

It initially seemed too simple for me, but maybe you are right. My original thinking was that "A => ~~A" seems to mean merely that a statement makes sense, whereas other propositions seem to have more meaning outside of that context. Also, the class of tautologies between different propositions seems to generalize the class of tautologies with a single proposition. I hadn't really thought about this, and I'm not sure how important it is to the argument, although it is an interesting point. Maybe we should come back to this if you think this is a key point. For the moment I am going to move to the other reply...
0zero_call
Little note to self: I guess my original idea (i.e., the idea I had in my very first question in the open thread) was that physical systems can be phrased in the form of tautologies. Now, I don't know enough about mathematical logic, but I guess my intuition was/is telling me that if you have a system which is completely described by tautologies, then by (hypothetically) fine-graining these tautologies to cover all options and then breaking the tautologies into alternative theorems, we have an entire "mathematical structure" (i.e., propositions and relations between propositions, based on logic) for the reality. And this structure would be consistent, because we had already shown that the tautologies could be formed consistently using the (hypothetically) available data. Then physics would work by seizing on these structures and attempting to figure out which theorems were true, refining the list of theorems down into results, and so on and so forth.

I'm beginning to worry I might lose the reader due to the impression I am "moving the goalpost" or something of that nature... If this appears to be the case, I apologize and just have to admit my ignorance. I wasn't entirely sure what I was thinking about to start out with and that was really why I made my post. This is really helping me understand what I was thinking.
0Tyrrell_McAllister
Tell me whether the following seems to capture the spirit of your observation:

Let C be the collection of all propositional formulas that are provably true in the propositional calculus whenever you assume that each of their atomic propositions is true. In other words, C contains exactly those formulas that get a "T" in the row of their truth-tables where all atomic propositions get a "T". Note that C contains all tautologies, but it also contains the formula A => B, because A => B is true when both A and B are true. However, C does not contain A => ~B, because this formula is false when both A and B are true.

Now consider some physical system S, and let T be the collection of all true assertions about S. Note that T depends on the physical system that you are considering, but C does not. The elements of C depend only on the rules of the propositional calculus.

Maybe the observation that you are getting at is the following: For any actual physical system S, we have that T is closed under all of the formulas in C. That is, given f in C, and given A, B, . . . in T, we have that the proposition f(A, B, . . .) is also in T. This is remarkable, because T depends on S, while C does not.

Does that look like what you are trying to say?
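A tiny sketch to make the definition concrete (hypothetical code, my own illustration): membership in C can be checked by evaluating a formula on the single truth-table row where every atom is true, whereas a tautology must pass every row.

```python
from itertools import product

# Formulas are represented as boolean functions of their atoms.
def implies(a, b):
    return (not a) or b

def in_C(formula, n_atoms):
    """In C: true on the truth-table row where every atom is true."""
    return formula(*([True] * n_atoms))

def is_tautology(formula, n_atoms):
    """A tautology must be true on every row."""
    return all(formula(*row) for row in product([False, True], repeat=n_atoms))

print(in_C(lambda a, b: implies(a, b), 2))          # True:  A => B is in C
print(is_tautology(lambda a, b: implies(a, b), 2))  # False: but not a tautology
print(in_C(lambda a, b: implies(a, not b), 2))      # False: A => ~B is not in C
```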
0zero_call
This looks somewhat similar to what I was thinking, and the attempt at formalization seems helpful. But it's hard for me to be sure; it's hard for me to understand the conceptual meaning and implications of it. What are your own thoughts on your formalization there? I've also recently found something interesting: some people take freedom from contradiction to be the criterion of mathematical existence. This can be found on pg. 5 of Tegmark here, attributed to Hilbert. This looks disturbingly similar to my root idea and makes me want to do some reading on this stuff. I have been unknowingly claiming that the criterion for physical existence is the same as that for mathematical existence.
0Tyrrell_McAllister
I'm inclined to think that it doesn't really show anything metaphysically significant. When we encode facts about S as propositions, we are conceptually slicing and dicing the-way-S-is into discrete features for our map of S. No matter how we had sliced up the-way-S-is, we would have gotten a collection of features encoded as propositions. Finer or coarser slicings would have given us more or less specific propositions (i.e., propositions that pick out finer or coarser details). When we put those propositions back together with propositional formulas, we are, in some sense, recombining some of the features to describe a finer or coarser fact about the system. The fact that T is closed under all the formulas in C just says that, when we slice up the-way-S-is and then recombine some of the slices, what we get is just another slice of the-way-S-is. In other words, my remark about T and C is just part of what it means to pick out particular features of a physical system.
0Blueberry
Though the word "tautology" is often used to refer to statements like (A v ~A), in mathematical logic any logically true statement is a tautology. Are you talking about the distinction between axioms and derived theorems in a formal system?
2JoshuaZ
I'm not aware of any strong emphasis on this argument. It seems at first glance to be problematic at multiple levels. One problem with your approach is that humans have evolved in a single, very well-behaved universe, so we have intuitions, both instinctive and internalized from experience, that make it very hard for us to tell what is actually a logical contradiction and what is not. Indeed, one reason I suspect that so many people have issues with things like special and general relativity, as well as quantum mechanics, is that they can't get over the fact that these aspects of the universe don't fit well with their intuitions. Consider for a moment what a universe would look like where 1 + 1 did not equal 2. What would that look like? It isn't clear to me that this is even a meaningful question. But that may be because these concepts are so ingrained in us that we can't think without them. Thus, it may be that math works well for understanding the universe because humans have no other option. One could imagine us meeting an alien species that has some completely different but very effective way of understanding the universe that isn't isomorphic to math at all.
0zero_call
Logical operations are quite well defined, with or without regard to human perception of that logic. The idea that logic may not be understood does not contradict the idea that an internal logic (may) underlie physical systems. (Note: maybe see my clarification below, here.) Granted, logic is somewhat mysterious, and it is hard to imagine what a different kind of logic would look like. However, that is immaterial to my idea. The idea is just that there are signatures of illogic (e.g., the statements (a) "if A, then B" and (b) "if A, then not B" both being true at the same time, with A true) which seem to be absent from physical systems.
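A brute-force truth-table check of that example (a sketch of my own, encoding implication as a Python function): the pair (a) and (b) is jointly satisfiable only when A is false, so adding "A is true" yields a genuine contradiction.

```python
from itertools import product

implies = lambda p, q: (not p) or q

# assignments under which both "if A then B" and "if A then not B" hold
both = [(a, b) for a, b in product([False, True], repeat=2)
        if implies(a, b) and implies(a, not b)]
print(both)                            # [(False, False), (False, True)]
print([(a, b) for a, b in both if a])  # []: no model once A is true
```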
0[anonymous]
I'd say the real reason is the Pauli principle, which is a physical law not entailed by logic alone. I see no logical contradiction at all imagining two rocks that occupy the same place, like a 3D version of the two-dimensional picture you'd get by using two projectors to project their pictures onto the same point on a screen.
-2Larks
Around the turn of the last century, the logicists, like Frege and Russell, attempted to reduce all of mathematics to logic; to prove that all mathematical truths were logical truths. However, the systems they used (provably) failed, because they were inconsistent. Furthermore, it seems likely that any attempt at logicism must fail. Firstly, any system of standard mathematics requires the existence of an infinite number of numbers, but modern logic generally has very weak ontological commitments: it requires only the existence of a single object. For mathematics to be purely logical, it must be tautological - true in every possible world* - and yet any system of arithmetic will be false in a world with a finite number of elements. Secondly, both attempts to treat numbers as objects (Frege) or as concepts/classes (Russell) have problems. Frege's awful arguments for numbers being objects notwithstanding, he has trouble with the Julius Caesar Objection; he can't show that the number four isn't Julius Caesar, because what this (abstract) object actually is remains quite under-defined. Using classes for numbers might be worse; on both their systems, classes form a strict hierarchy, with nth-level classes falling under (n+1)th-level classes and no others. Numbers are defined as the concept under which fall all those concepts whose elements are equinumerous: the class of all pairs, the class of all triples, etc. But because of the stratification, the class of all pairs of objects is different from the class of all pairs of first-level classes, which is different from the class of all pairs of second-level classes, and so on. As such, you have an infinite number of '2's, with no mathematical relations between them. Worse, you can't count a set like {blue chair, red chair, truth, justice}, because it contains both objects and concepts. What seems more likely to me is that there are an infinite variety of mathematical structures, purely syntax without any semantic relevance to the physical world, a
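A rough illustration of the stratification point (the set-based encoding is my own toy model, not Russell's actual type theory): each level gets its own, formally distinct class of "pairs", so "2" never crosses levels.

```python
from itertools import combinations

level0 = ["blue chair", "red chair", "rock"]              # individuals
# "2" at level 1: the class of all pairs of individuals
two_level1 = [frozenset(p) for p in combinations(level0, 2)]
# "2" at level 2: the class of all pairs of level-1 classes -- a distinct "2"
two_level2 = [frozenset(p) for p in combinations(two_level1, 2)]
print(len(two_level1), len(two_level2))  # 3 3: different classes, at disjoint levels
```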
5Tyrrell_McAllister
No, that's not right. Russell and Whitehead's Principia Mathematica is the fullest statement of logicism, and its system was never proved inconsistent. As for the claim that there would be no mathematical relations among the different '2's: here I'm less certain, but I'm pretty sure that that's not right either. You would have relations among two such 2s, but those relations would be of a higher type than either 2. But, again, I'm definitely vaguer on how that would work.
0Larks
Yes, sorry; I meant that Frege failed because his system was inconsistent (though possibly it would not be if you replaced Basic Law V with Hume's Principle). Russell, on the other hand, simply runs into Incompleteness; you can't prove all of mathematics from logic because you can't prove it, full stop. You'd have '2's of all cardinalities, so to have a relation between them, you would need to move into the uncountables - but then there are new pairs to be formed here... Essentially, you can reconstruct Russell's original paradox by comparing the cardinality of the set with the cardinality of certain things that fall under it. You could mitigate this by cutting short the recursion and simply allowing the relation to hold between the first n levels of concepts or so, but only on pain of arbitrariness. I'm curious as to the downvotes; was I off-topic, too long, or simply wrong? Edit: And (if it's acceptable to ask about other people's downvotes) why was zero_call downvoted?
1Tyrrell_McAllister
That's not a problem for logicism per se. Logicism isn't really a claim about what it takes to prove mathematical claims, so it doesn't fail if you can't prove some mathematics by a certain means. Rather, logicism is a claim about what mathematical assertions mean. According to logicism, mathematical claims ultimately boil down to assertions about whether certain abstract relationships among predicates entail other abstract relationships among predicates, where this entailment holds completely regardless of the meaning of the predicates. That is, mathematical claims boil down to claims of pure logical entailment. So, if you discover that your particular mathematical system is incomplete, then what you've really done is discover that you had missed some principles of logic. It's as though you'd known that P ∧ P entails P, but you just hadn't noticed that P ∨ P entails P as well. (But you were right about why logicism ultimately failed to convince everyone: mathematics seems to have ontological commitments, whereas pure logic does not.) I didn't downvote either comment. Your comment was probably downvoted because some readers considered its arguments to be wrong or unclear. zero_call's comment was probably downvoted because it smacks of the mind projection fallacy, especially here: the organization of facts into axioms, rules of inference, proofs, and theorems doesn't seem to be an ontologically fundamental one. We superimpose this structure when we form mental models of things. That is, the logical structure of things exists in the map, not the territory.
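A trivial mechanical check of those two entailments (a sketch, using Python's <= on booleans to encode implication):

```python
# P ∧ P ⊨ P and P ∨ P ⊨ P: the implications hold under every valuation of P.
for P in (False, True):
    assert (P and P) <= P   # "x <= y" on booleans means "x implies y"
    assert (P or P) <= P
print("both entailments hold in every valuation")
```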
0zero_call
I wish you had made this last comment on the post directly, so that I could reply to it there. Anyway, the point I was offering was that the logical structure does exist in the territory, not just the map; our maps merely reflect this property of the territory. The fundamental signature of this is that physical systems, when viewed through a map which exists only as a re-representation or translation (as opposed to an interpretation) amenable to logical analysis, are shown to prohibit logical contradiction. (For example, the two statements (if A, then B) and (if A, then not B) cannot both be true when A holds, where A and B are statements in some re-representation of the physical system.)
0Tyrrell_McAllister
I'll move that part of my comment there, with my apologies.
0zero_call
That's quite alright -- thank you for your discussion.
0zero_call
I appreciate your comments, but I'm having trouble seeing your point with regard to the idea. To reiterate, with regard to your last paragraph: I'm proposing that these interpretations work because the internal physical systems (the territory) obey the same properties as consistent mathematical systems -- see my comment to TM below.
0Larks
There is a great deal of difference between the world operating, in certain regards, on the same sort of rules as mathematics (rules isomorphic to it), and mathematics being applicable merely because physics isn't logically inconsistent. It's not a logical contradiction to say that two points have the same position, nor to say that 2+2=1 (for the latter, consider arithmetic modulo 3). Nor can maths be deduced purely from logic, partly because logic doesn't require the existence of more than one object. Russell did try to deduce maths from logic plus some axioms about how the world worked (that there were an infinite number of things, etc.), but the applicability of the maths is always going to be an empirical question.
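For instance, a one-line illustration of the modular-arithmetic point:

```python
# In arithmetic modulo 3, "2 + 2 = 1" is perfectly consistent; the equation
# contradicts ordinary arithmetic, not logic itself.
print((2 + 2) % 3 == 1)   # True
```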