I have an idea I'd like to discuss that might be good enough for my first top-level post once it's developed a bit further, but first I'd like to ask whether anyone knows of previous posts in which something similar was discussed. So I'll post a rough outline here as a request for comments.
It's about a potential source of severe and hard to detect biases about all sorts of topics where the following conditions apply:
It's a matter of practical interest to most people, where it's basically impossible not to have an opinion. So people have strong opinions, and you can't avoid forming one too.
The available hard scientific evidence doesn't say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.
Factual and normative issues are heavily entangled in this topic. By this I mean that people care deeply about the normative issues involved, and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of c
It seems a common bias to me and worth exploring.
Have you thought about a tip-of-the-hat to the opposite effect? Some people view the past as some sort of golden age where things were pure and good etc. It makes for a similar but not exactly mirror image source of bias. I think a belief that generally things are progressing for the better is a little more common than the belief that generally the world is going to hell in a handbasket, but not that much more common.
Actually, now you've nudged my mind in the right direction! Let's consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That's certainly what I had thought until I read a recent paper titled "Ordeals" by one Peter T. Leeson, who argues that, given the prevailing beliefs and customs of the time, these ordeals were in fact a highly accurate way of separating the guilty from the innocent. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don't feel like going through the math of the original paper.]
I can't say with absolute confidence whether Leeson's arguments are correct, but they sound highly plausible to me, and certainly can't be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the poss...
I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:
A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run with all sorts of bad health consequences. This effect is however difficult to disentangle statistically (sort of like smoking).
Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.
At the same time, there has been a traditional belief that if you eat X, you'll not only incur sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn't tell if it's true or just a superstition that came as a side-effect of the religious taboo. You'd see that poor folks who eat it do get sick more often, but
Do you have a citation for that?
As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice - they give them in overwhelming force to eradicate the bacteria completely. For example, they'll often give several different antibiotics so that bacteria that develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren't deliberately scrimping on the antibiotics in an individual context.
The "give as few antibiotics as possible" rule mostly applies to giving them to as few patients as possible. If there's a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.
The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.
I'm doing an MSc in Computer Forensics and have stumbled into doing a large project using Bayesian reasoning to guess what data is (machine code, ASCII, C code, HTML, etc.). This has caused me to think again about what problems you encounter when trying to actually apply Bayesian reasoning to large problems.
I'll probably cover this in my write-up; are people interested in it? The math won't be anything special, but a concrete problem might show the problems better than abstract reasoning.
It also could serve as a precursor to some vaguely AI-ish topics I am interested in. More insect and simple creature stuff than full human level though.
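For a sense of what such a classifier might look like at its core, here is a toy sketch of naive-Bayes classification by byte frequency (this is purely illustrative, not the actual project code; the class names and training data are made up):

```python
import math
from collections import Counter

def train(samples):
    """samples: dict mapping class name -> list of byte strings.
    Returns per-class log-probabilities for each of the 256 byte
    values, with add-one smoothing so unseen bytes don't zero out
    the posterior."""
    models = {}
    for label, blobs in samples.items():
        counts = Counter(b for blob in blobs for b in blob)
        total = sum(counts.values()) + 256  # add-one smoothing
        models[label] = [math.log((counts.get(v, 0) + 1) / total)
                         for v in range(256)]
    return models

def classify(models, data):
    """Pick the class with the highest log-posterior, assuming a
    uniform prior over classes (naive Bayes over bytes)."""
    return max(models,
               key=lambda label: sum(models[label][b] for b in data))
```

The real difficulties, of course, start where this sketch ends: bytes in machine code or HTML are nowhere near independent, which is exactly the kind of problem a concrete write-up could explore.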
Any given goal that I have tends to require an enormous amount of "administrative support" in the form of homeostasis, chores, transportation, and relationship maintenance. I estimate that the ratio may be as high as 7:1 in favor of what my conscious mind experiences as administrative bullshit, even for relatively simple tasks.
For example, suppose I want to go kayaking with friends. My desire to go kayaking is not strong enough to override my desire for food, water, or comfortable clothing, so I will usually make sure to acquire and pack enough of these things to keep me in good supply while I'm out and about. I might be out of snack bars, so I bike to the store to get more. Some of the clothing I want is probably dirty, so I have to clean it. I have to drive to the nearest river; this means I have to book a Zipcar and walk to the Zipcar first. If I didn't rent, I'd have to spend some time on car maintenance. When I get to the river, I have to rent a kayak; again, if I didn't rent, I'd have to spend some time loading and unloading and cleaning the kayak. After I wait in line and rent the kayak, I have to ride upstream in a bus to get to the drop-off point.
Of cours...
General question on UDT/TDT, now that they've come up again: I know Eliezer said that UDT fixes some of the problems with TDT; I know he's also said that TDT also handles logical uncertainty whereas UDT doesn't. I'm aware Eliezer has not published the details of TDT, but did he and Wei Dai ever synthesize these into something that extends both of them? Or try to, and fail? Or what?
Since I'm going to be a dad soon, I started a blog on parenting from a rationalist perspective, where I jot down notes on interesting info when I find it.
I'd like to focus on "practical advice backed by deep theories". I'm open to suggestions on resources, recommended articles, etc. Some of the topics could probably make good discussions on LessWrong!
ETA: This scheme is done. All three donations have been made and matched by me.
I want to give $180 to the Singularity Institute, but I'm looking for three people to match my donation by giving at least $60 each. If this scheme works, the Singularity Institute will get $360.
If you want to become one of the three matchers, I would be very grateful, and here's how I think we should do it:
You donate using this link. Reply to this thread saying how much you are donating. Feel free to give more than $60 if you can spare it, but that won't affect how much I give.
In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.
I will do the same. (Or if you're the first matching donor, then I already have -- see directly below.)
To show that I'm serious, I'm donating my first $60...
So I'm trying to find myself some cryo insurance. I went to a State Farm guy today and he mentioned that they'd want a saliva sample. That's fine; I asked for a list of all the things they'll do with it. He didn't have one on hand and sent me home promising to e-mail me the list.
Apparently the underwriting company will not provide this information except for the explicitly incomplete list I got from the insurance guy in the first place (HIV, liver and kidney function, drugs, alcohol, tobacco, and "no genetic or DNA testing").
Is it just me or is it outrageous that I can't get this information? Can anyone tell me an agency that will give me this kind of thing when I ask?
If they were explicit about exactly what tests they planned to do they would open themselves up to gaming. Better to be non-specific and reserve the freedom to adapt. For similar reasons bodies trying to prevent and detect doping in sports will generally not want to publicize exactly what tests they perform.
Is LessWrong undergoing a surge in popularity the last two months? What does everyone make of this:
http://siteanalytics.compete.com/overcomingbias.com+lesswrong.com/
Possibly a variation on the attribution bias: Wildly underestimating how hard it is for other people to change.
While I believe that both attribution bias and my unnamed bias are extremely common, they contradict each other.
Attribution bias includes believing that people have stable character traits as shown by their actions. This "people should be what I want-- immediately!" bias assumes that those character traits will go away, leading to improved behavior, after a single rebuke or possibly as the result of inspiration.
The combination of attribu...
Gawande on checklists and medicine
Checklists are literally life-savers in ICUs -- there's just too much crucial work that needs to be done, and too many interruptions, to avoid serious mistakes without offloading some of the work of memory onto a system.
However, checklists are low status.
...Something like this is going on in medicine. We have the means to make some of the most complex and dangerous work we do—in surgery, emergency care, and I.C.U. medicine—more effective than we ever thought possible. But the prospect pushes against the traditional culture of m
Morendil:
That analysis would be inconsistent with my understanding of how checklists have been adopted in, say, civilian aviation: extensive analysis of the rare disaster leading to the creation of new procedures.
One relevant difference is that the medical profession is at liberty to self-regulate more than probably any other, which is itself an artifact of their status. Observe how e.g. truckers are rigorously regulated because it's perceived as dangerous if they drive tired and sleep-deprived, but patients are routinely treated by medical residents working under the regime of 100+ hour weeks and 36-hour shifts.
Even the recent initiatives for regulatory limits on the residents' work hours are presented as a measure that the medical profession has gracefully decided to undertake in its wisdom and benevolence -- not by any means as an external government imposition to eradicate harmful misbehavior, which is the way politicians normally talk about regulation. (Just remember how they speak when regulation of e.g. oil or finance industries is in order.)
...Why (other than the OB-inherited obsession of the LW readership with "status") does this hypothesis seem favored at t
From an article about the athletes' brains:
Unsurprisingly, most of the article is about elite athletes' brains being more efficient in using their skills and better at making predictions about playing, but then....
...n February 2009 Krakauer and Pablo Celnik of Johns Hopkins offered a glimpse of what those interventions might look like. The scientists had volunteers move a cursor horizontally across a screen by pinching a device called a force transducer between thumb and index finger. The harder each subject squeezed, the faster the cursor moved. Each play
Craig Venter et al. have succeeded in creating the first functional synthetic bacterial genome.
http://www.sciencemag.org/cgi/content/full/328/5981/958 http://www.sciencemag.org/cgi/content/abstract/science.1190719 http://arstechnica.com/science/news/2010/05/first-functional-synthetic-bacterial-genome-announced.ars http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/overview/
I wrote up a post yesterday, but I found I was unable to post it, except as a draft, since I lack the necessary karma. I thought it might be an interesting thing to discuss, however, since lots of folks here have deeper knowledge than I do about markets and game theory.
I've been working recently for an auction house that deals in things like fine art, etc. I've noticed, by observing many auctions, that certain behaviors are pretty reliable, and I wonder if the system isn't "game-able" to produce more desirable outcomes for the different parties ...
Ooh, speaking of Harry Potter and the Methods, someone totally needs to write an Atlas Shrugged fanfic in which some of the characters are actually good at achieving true beliefs instead of just paying lip service to "rationality." If I had more time, I'd call it ... Dagny Taggart and the Logic of Science.
Amazing videos, both in presentation and content.
Drive: on how money can be a bad motivator, and what leads to better productivity
http://www.youtube.com/watch?v=u6XAPnuFjJc
Smile or die: on 'positive thinking'
I have run into a problem in statistics which might interest people here, and also I'd quite like to know if there is a good solution.
In charm mixing we try to measure mixing parameters imaginatively named x and y. (They are normalised mass and width differences of mass eigenstates, but this is not important to the problem.) In the most experimentally-accessible decay channel, however, we are not sensitive to x and y directly, but to rotated quantities x' = x cos(delta) + y sin(delta) and y' = y cos(delta) - x sin(delta),
where the strong phase delta is unknown. In fact, the situation is a bit worse than this; we get our resu...
Nick Bostrom has posted a PDF of his Anthropic Bias book: http://www.anthropic-principle.com/book/anthropicbias.html
As someone who read it years ago when you had to ILL or buy it, I'm very pleased to see it up and heartily recommend it to everyone on LW who hasn't read it yet. (If you don't want to follow the link and see for yourself, the book focuses on the Doomsday problem and some related issues like Sleeping Beauty, which, incidentally, has come up here recently.)
I have been wondering whether the time was ripe to (say) tweet or blog about how wonderful the LessWrong wiki is. "If you're interested in improving your thinking, the LessWrong wiki is getting to be a great resource". The audience I'm likely to reach is mostly software professionals.
So I attempted to take as unbiased a look at the wiki as I could, putting myself into the shoes of someone motivated by the above lead.
Roadblock the first: the home page says "This wiki exists to support the community blog". This seems to undermine the impl...
A while ago, I was promoting trn.
One of its great virtues is having a lot of flexibility in what you're shown. In particular, you can choose not to see anything by a given poster.
I was mostly thinking of trn as a way of making it more feasible to follow what you want to read in high-volume discussions, but it's also a way of defusing quarrels, and I think it would be especially handy now.
Speculation about The Methods, which I put here because I want credit for brilliance if I'm right.
The one-pass creation of stable time loops can be accomplished by a Turing machine in the following manner: Have a machine simulate a countably infinite set of universes by allocating clock ticks after the fashion of Cantor's diagonal argument. In each universe, wherever there exists an object with the properties that a Time-Turner exploits, spawn new Universes at every tick by inserting new matter "from [1 to N_max] ticks ahead", where N_max is the m...
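The tick-allocation scheme being invoked here is standard dovetailing, which is easy to sketch concretely (a toy illustration where the "universes" are just step counters):

```python
def dovetail():
    """Interleave infinitely many infinite computations: on round n,
    spawn machine n and then advance machines 0..n by one step each
    (Cantor-style scheduling). Every machine receives infinitely many
    steps even though there are infinitely many machines."""
    machines = []  # machines[i] = steps taken so far by machine i
    n = 0
    while True:
        machines.append(0)      # spawn machine n
        for i in range(n + 1):  # one tick for each existing machine
            machines[i] += 1
            yield (i, machines[i])
        n += 1
```

The scheme in the comment would then replace each counter with a full universe simulation, with the Time-Turner rule spawning additional machines into the same rotation.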
WRT some recent posts on consciousness, mostly by Academician, eg "There must be something more":
There are 3 popular stances on consciousness:
Consciousness is spiritual, non-physical.
Consciousness can be explained by materialism.
Consciousness does not exist. (How I characterize the Dennett position.)
Suppose you provide a complete, materialistic account of how a human behaves, that explains every detail of how sensory stimuli are translated into beliefs and actions. A person holding position 2 will say, "Okay, but you still need to...
We need consciousness to remember, to learn and to do the prediction involved in controlling movement.
Controlled movement does not require consciousness, memory, learning, or prediction. This (simulated) machine has none of those things, yet it walks over uneven terrain and searches for (simulated) food. What controlled movement requires is control.
Memory, learning, and prediction do not require consciousness. Mundane machines and software exist that do all of these things without anyone attributing consciousness to them.
People may think they are conscious of how they move, but they are not. Unless you have studied human physiology, it is unlikely that you can say which of your muscles are exerted in performing any particular movement. People are conscious of muscular action only at a rather high level of abstraction: "pick up a cup" rather than "activate the abductor pollicis brevis". Most of the learning that happens when you learn Tai Chi, yoga, dance, or martial arts, is not accessible to consciousness. There are exercises that you can tell people exactly how to do, and demonstrate in front of them, and yet they will go wrong the first time they try. Then...
So, I just had a strange sort of akrasia problem.
I was doing my evening routine, getting washed up and stuff in preparation for going to bed. Earlier in the evening, I had read P.J. Eby's The Hidden Meaning of "Just Do It", and so I decided I would "just do" this routine, i.e. simply avoid doing anything else, and watch the actions of the routine unfold in front of me. So, I used the toilet, and began washing my hands, when it occurred to me that if I do not interfere, I will never stop rinsing my hands. I did not interfere, however, an...
"Science Saturday: The Great Singularity Debate"
Eliezer Yudkowsky and Massimo Pigliucci
I've just run into a second alumnus of my undergrad school from Less Wrong, and it has me curious, because... it's a tiny school. So this'd be quite a coincidence, and there might be a correlation to dig up.
Present yourselves, former (or current) students of Simon's Rock. I was there from the fall of '04 until graduating with my BA in spring '08 (I was abroad the spring of my junior year though).
If you lurk and don't want to delurk, feel free to contact me privately. If you don't have an account, e-mail me at alicorn@intelligence.org :)
Ryk Spoor's latest universe, of which the only published book so far is Grand Central Arena, has as major characters people who were raised in simulated worlds, and later covers their escape therefrom. Just occurred to me that some LWers might be interested.
The Association for Advancement of Artificial Intelligence (AAAI) convened a "Presidential Panel on Long-Term AI Futures". Read their August 2009 Interim Report from the Panel Chairs:
...There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. [...] The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of
Would anyone be interested if we were to have more regular LW meetups around the East Bay or San Francisco areas? We probably wouldn't have the benefit of the SIAI folks' company in that case, but having the meetups at a location easily accessible by BART may help increase the number of people from the surrounding area who can attend. (Also, I hear that preparing for and hosting meetups at Benton can be somewhat taxing on the people who work there, so having them at restaurants will allow us to do it more frequently, if there is demand for such.)
I have a request for those bayesianly inclined among the LW crowd.
I had mentioned in an article that I had become addicted to watching theist/atheist debates. Unfortunately I have not weaned myself off this addiction as of yet. In one I watched recently, William Lane Craig (the theist that Eliezer wanted to debate) argues for the provability of the resurrection of Jesus, and New Testament scholar Dr. Bart Ehrman argues for its historical unprovability.
At some point in this debate, Dr. Ehrman argues that miracles are fundamentally unprovable by his...
Wired has an article 'Accept Defeat: The Neuroscience of Screwing Up,' about how scientists and the brain handle unexpected data and anomalies, and our preference to ignore them or explain them away.
I sometimes get various ideas for inventions, but I'm not sure what to do with them, as they are often unrelated to my work, and I don't really possess the craftsmanship capabilities to make prototypes and market them or investigate them on my own. Does anyone have experience and/or recommendations for going about selling or profiting from these ideas?
I vaguely recall a thread where folks discussed what makes jokes funny, and advanced some theories. This may well have been in an Open Thread or buried deep within the comments of an unrelated post - at any rate I can't find it.
Anyone who remembers seeing it or participating, I'd appreciate help locating it...
Egan's Law is "It all adds up to normality." What adds up to what, exactly?
We have always lived in the universe of quantum mechanics, or the Tegmark Level IV Multiverse, but I don't understand why it is supposed to add up to normality. I understand that this word "normality" is supposed to help me dissolve some of the weirder aspects of this universe, but it doesn't seem to work as I am not at all convinced that the universe actually does add up to normality.
Is it really proper to assume from the start that the universe (multiverse) ad...
Continuing my thinking about Pascal's mugging, I think I've an argument for why one specifically wants the prior probability of a reward to be proportional/linear to the reward and not one of the other possible relationships. A longish excerpt:
One way to try to escape a mugging is to unilaterally declare that all probabilities below a certain small probability will be treated as zero. With the right pair of lower limit and mugger's credibility, the mugging will not take place.
But such an ad hoc method violates common axioms of probability theory, and thus w...
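The claim about linearity can be checked numerically (a toy illustration with made-up numbers): if the prior on a claimed reward R falls off exactly as 1/R, then every doubling of the claim contributes the same expected value, so arbitrarily large claims never dominate but also never vanish; any faster decay keeps the total expected value bounded.

```python
def expected_value_contributions(decay_exponent, max_power=30):
    """For claimed rewards R = 2, 4, 8, ..., with a prior
    P(R) proportional to R**(-decay_exponent), return each claim's
    expected-value contribution P(R) * R = R**(1 - decay_exponent)."""
    return [(2 ** p) ** (1 - decay_exponent) for p in range(1, max_power)]

linear = expected_value_contributions(1.0)  # P(R) ~ 1/R
faster = expected_value_contributions(2.0)  # P(R) ~ 1/R^2

# Exactly linear decay: every claim contributes the same amount,
# so no single astronomical claim can dominate the calculation.
assert all(abs(c - linear[0]) < 1e-9 for c in linear)
# Faster-than-linear decay: contributions shrink geometrically,
# and the total expected value stays bounded.
assert sum(faster) < 2
```

A prior decaying slower than 1/R would make the contributions grow without bound, which is exactly the opening a mugger exploits.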
Has anybody else thought that the Inverse Ninja Law is just the Bystander Effect in disguise?
(Yes, I've been reading this.)
Buddha: The quintessential rational mind
http://www.hindu.com/mag/2010/05/23/stories/2010052350210600.htm
Let's suppose Church-Turing thesis is true.
Are all mathematical problems solvable?
Are they all solvable to humans?
If there is a proof* for every true theorem, then we need only to enumerate all possible texts and look for one that proves - or disproves - say, Goldbach's conjecture. The procedure will stop every time.
(* Proof not in the sense of "formal proof in a specific system", but "a text understandable by a human as a proof".)
But this can't possibly be right - if the human mind that looks at the proofs is Turing-computable, then we...
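One half of this search is at least easy to mechanize: a counterexample to Goldbach's conjecture, if one exists, would eventually turn up by brute force (a toy sketch, purely for illustration):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """Check that the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p)
               for p in range(2, n // 2 + 1))

# Searching for a counterexample is semi-decidable: a loop over
# n = 4, 6, 8, ... halts iff the conjecture is false. The hard
# direction -- certifying that NO counterexample exists -- is the
# part that requires enumerating proofs rather than numbers.
assert all(goldbach_holds(n) for n in range(4, 200, 2))
```

So the proposed enumerate-all-texts procedure is only guaranteed to halt on the assumption being questioned, namely that every true statement has a humanly recognizable proof.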
Right. So if human reasoning follows some specified formal system, they can't prove it. But does it really follow one?
Yes and no. It is likely that the brain, as a physical system, can be modeled by a formal system, but "the human brain is isomorphic to a formal system" does not imply "a human's knowledge of some fact is isomorphic to a formal proof". What human brains do (and, most likely, what an advanced AI would do) is approximate empirical reasoning, i.e. Bayesian reasoning, even in its acquisition of knowledge about mathematical truths. If you have P(X) = 1 then you have X = true, but you can't get to P(X) = 1 through empirical reasoning, including by looking at a proof on a sheet of paper and thinking that it looks right. Even if you check it really really carefully. (All reasoning must have some empirical component.) Most likely, there is no structure in your brain that is isomorphic to a proof that 1 + 1 = 2, but you still know and use that fact.
So we (and AIs) can use intelligent reasoning about formal systems (not reasoning that looks like formal deduction from the inside) to come to very high or very low probability estimates for certain formally...
I believe the "unreasonable effectiveness of mathematics in the natural sciences" can be explained based on the following idea. Physical systems prohibit logical contradiction, and hence, physical systems form just another kind of axiomatic, logical, and therefore mathematical system. To take a crude example, two different rocks cannot occupy the same point in space, due to logical contradiction. This allows the ability to mathematically talk about the rocks. Note that this example is definitively crude, since there are other things like bosons w...
To take a crude example, two different rocks cannot occupy the same point in space, due to logical contradiction.
Except that....that isn't a logical contradiction!
You have inadvertently demonstrated one of the best arguments for the study of mathematics: it stretches the imagination. The ability to imagine wild, exotic, crazy phenomena that seem to defy common sense -- and thus, in particular, not to confuse common sense with logic -- is crucial for anyone who seriously aspires to understand the world or solve unsolved problems.
When Albert Einstein said that imagination was more important than knowledge, this is surely what he meant.
The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.