Open Thread: June 2010
To whom it may concern:
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)
Anyone here live in California? Specifically, San Diego county?
The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!
What's the deal with female nymphomaniacs? Their existence seems a priori unlikely.
Then your priors are wrong. Adjust accordingly.
"What's the deal with" means "What model would have generated a higher prior probability for". Noticing your confusion isn't the entire solution.
I thought it was pretty clear. Sexual dimorphism doesn't operate the way you think it does. Women with high sex drives aren't rare at all.
I have heard that, for most men and most women, the time of highest sex drive happens at very different times (much younger for men than women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.
If the existing model is sexual dimorphism, with high sexual desire a male trait, you could simply suppose that it's a "leaky" dimorphism, in which the sex-linked traits nonetheless show up in the other sex with some frequency. In humans this should especially be possible with male traits which depend not on the Y chromosome, but rather on having one X chromosome rather than two. That means that there is only one copy, rather than two, of the relevant gene, which means trait variance can be greater - in a woman, an unusual allele on one X chromosome may be diluted by a normal allele on the other X, whereas a man with an unusual X allele has no such counterbalance. But it would still be easy enough for a woman to end up with an unusual allele on both her Xs.
Also, regardless of the specific genetic mechanism, human dimorphism is just not very extreme or absolute (compared to many other species), and forms intermediate between stereotypical male and female extremes are quite common.
This question reads to me like it's out of the middle of some discussion I didn't hear the beginning of. Why were "nymphomaniacs" on your mind in the first place? What do you mean by the word? I don't think I've heard it in many years, and I associate it with the sexual superstitions of a former age.
What does the word "nymphomaniacs" mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with this word with such negative connotations.
Does the question "what is with women who want to have sex [five times a week*] and will undertake to get it?" resolve any of your confusion? You should expect women who have more sex to be more salient wrt people talking about them, so they would seem more prominent, even if they're only 2% of the population.
*not sure about this number, just picked one that seemed alright.
Picking a number for this seems like a really bad idea. For most modern clinical definitions of disorders, what matters is whether the behavior interferes with normal daily functioning. Even that is questionable, since what constitutes interference is very hard to tell.
Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.
Five times a week wouldn't be remotely enough to diagnose. It has to be problematic and clinically significant.
I think that's kinda my point. I was attempting to point out that he's probably confusing the term "nymphomaniac", with its negative connotations, with "likes to have [vaguely defined 'a lot'] of sex."
"Nymphomaniac" hasn't been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean "a woman who likes to have a lot of sex". Whether this has negative connotations depends on your attitude to sex, I suppose.
Why?
And they are accordingly rare, are they not?
No, women with a high sex drive are not rare.
In the same vein as Roko's investigation of LessWrong's neurotypicalness, I'd be interested to know the spread of Myers-Briggs personality types that we have here. I'd guess that we have a much higher proportion of INTPs than the general population.
An online Myers-Briggs test can be found here, though I'm not sure how accurate it is.
del
There are a lot of problems with Myers-Briggs. For example, the test doesn't account for people saying things because they are considered socially good traits. Claims that Myers-Briggs is accurate seem often to be connected to the Forer effect. A paper which discusses these issues is Boyle's "Myers-Briggs Type Indicator (MBTI): Some psychometric limitations", 1995 Australian Psychologist 30, 71–74.
(Wherein I seek advice on what may be a fairly important decision.)
Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. This would be a valuable educational opportunity for me in terms of learning about scientific computing and gaining general programming/design skill; as I hope to start contributing to FAI research within 5-10 years, this has potentially big instrumental value.
In "Why We Need Friendly AI", Eliezer discussed Moore's Law as a source of existential risk:
Due to the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate models of solar weather, successful completion of this summer project will probably result in a nontrivial increase in demand for GPUs. It seems that the next best use of my time this summer would be to work full time on the expression-simplification abilities of a computer algebra system.
Given all this information and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group, or not? (To avoid anchoring on other people's opinions, I'm hoping to get input from at least a couple of LW readers before mentioning the tentative conclusion I've reached.)
ETA: I finally got an e-mail response from the research group's point of contact and she said all their student slots have been taken up for this summer, so that basically takes care of the decision problem. But I might be faced with a similar choice next summer, so I'd still like to hear thoughts on this.
The amount you could slow down Moore's Law by any strategy is minuscule compared to the amount you can contribute to FAI progress if you choose. It's like feeling guilty over not recycling a paper cup, when you're planning to become a lobbyist for an environmentalist group later.
Do you mean that he actively seeks to encourage young people to try and slow Moore's Law, or that this is an unintentional consequence of his writings on AI risk topics?
I'm pretty sure that Roko means the second. If this idea were mentioned to Eliezer, he'd presumably point out the minimal impact that any single human can have on this, even before getting to whether or not it is a good idea.
I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. As long as it doesn't seem like the one you're working for has a truly revolutionary advantage compared to the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move.
(Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)
Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.
Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.
We don't find out until the end of the article that her degrees are in women's studies and religious studies.
There are much better ways to spend $100K. Twentysomethings like her are filling up the workforce. I'm worried about the future implications.
I thank my lucky stars I'm not in such a position (in the respects listed in the article -- Munna's probably better off in other respects). I didn't handle college planning as well as I could have, and I regret it to this day. But at least I didn't go deep into debt for a worthless degree.
Do you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?
Arnold Kling has some thoughts about the plight of the unskilled college grad.
1 2
Thanks for the links, I had missed those.
I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone or take them on as a grad student simply for meeting that, while I haven't noticed the demand for my labor (as someone well above and beyond that) being like what that kind of shortage would imply.
Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals!
[1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...
For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".
Someone who avoids carrying debt (e.g., paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payment with a maximal interest/principal ratio.
This is another one of those Hanson-esque "X is not about X-ing" things.
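The "expected profit" framing can be made concrete with a toy model. This is purely illustrative; the `expected_profit` function and all the numbers are my own assumptions, not how any actual credit scorer works:

```python
def expected_profit(p_repay, annual_interest_paid, principal):
    """Toy model of a lender's expected profit from one borrower.

    Revenue is the interest the borrower actually pays; the downside
    is losing the principal if the borrower defaults.
    """
    return p_repay * annual_interest_paid - (1 - p_repay) * principal

# A perfectly reliable borrower who never carries a balance pays no
# interest, so they generate zero profit despite zero default risk:
transactor = expected_profit(p_repay=1.0, annual_interest_paid=0, principal=5000)

# A mostly reliable borrower who revolves a balance and pays interest
# is the profitable case, even with a small chance of default:
revolver = expected_profit(p_repay=0.97, annual_interest_paid=1000, principal=5000)

print(transactor, revolver)  # 0.0 vs. roughly 820
```

Under this sketch, a no-debt-history applicant and a transactor both look like zero-revenue customers, which is one way the "not about ability to repay" point could cash out.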
Expected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").
I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that.
There may also be a weirdness factor if relatively few people have no debt history.
(1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.
Simplifying my behavior enough to keep track of me and control me is tyranny.
Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.
Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me.)
(Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.)
ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.
SilasBarta:
Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale.
(See my above comment for an elaboration on this topic.)
Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?
These are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.
Yes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment in this thread (see its second paragraph).
No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory.
Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.
(Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)
SilasBarta:
This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments.
Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be.
(I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.)
Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.
I actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is.
If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.
Fair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either.
I've seen financial gurus recommend getting a credit card and paying the balance.
And thanks for the ETA.
Ramit Sethi for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.
This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed!
All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy.
Notice how the citation you give is from a chapter-length treatment by a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards", a complex, niche strategy -- not standard, general advice from a household name.
That would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.
One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.
Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined categories of easily verifiable evidence. They will inevitably end up producing at least some decisions that common-sense prudence would recognize as silly, but that's the cost of scalability.
Also, it should be noted that these two reasons are not independent. Consistent adherence to formalized bureaucratic decision-making procedures is also a powerful defense against predatory plaintiffs and regulators. If a company can produce papers with clearly spelled out rules for micromanaging its business at each level, and these rules are per se consistent with the tangle of regulations that apply to it and don't give any grounds for lawsuits, it's much more likely to get off cheaply than if its employees are given broad latitude for common-sense decision-making.
As nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.
This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.
Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.
There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with: 1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks, a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the general public, or sometimes with anyone outside your team.
By the standards of Traditional Rationality, policy advice is often made without meeting a burden of proof. Best guesses and theoretical considerations are too weak to reach conclusions. A proper practitioner of Traditional Rationality wouldn't be able to make any kind of recommendation; one could identify some promising initial hypotheses, but that's it.
But just because you didn't have time to come up with a good answer doesn't mean that Ministers don't expect one. And a practitioner of Bayesian Rationality always has a best guess as to what is true; even if the evidence base is non-existent, you can fall back on your prior. You don't want to be overconfident in stating your position: assumptions must be outlined and sensitivities explored. But you still need to give an answer, and that's what attracts me to Bayesian approaches: you don't have to be officially agnostic until presented with a level of evidence that is unrealistically high for policy work.
It seems to me that if you have very good quality evidence, then Bayesian and Traditional Rationality are very similar. Good evidence either proves or disproves a proposition for a Traditional Rationalist; for a Bayesian Rationalist it will shift their probability estimate and increase their confidence a lot. The biggest difference seems to me to be that Bayesian Rationality is able to make use of weak evidence in a way Traditional Rationality can't.
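The weak-versus-strong-evidence point can be illustrated with a tiny odds-form Bayesian update. This is a sketch; the likelihood ratios (1.2 for "weak" evidence, 20 for "strong") are numbers I made up for illustration:

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability by a likelihood ratio, working in odds form."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.5  # start maximally uncertain

# Weak evidence (likelihood ratio 1.2) nudges the estimate only slightly...
weak = bayes_update(p, 1.2)    # ~0.545

# ...while strong evidence (likelihood ratio 20) moves it near certainty.
strong = bayes_update(p, 20)   # ~0.952
```

A Traditional Rationalist might discard the weak signal as failing a burden of proof; the odds form shows it still carries some information, which is exactly what a time-pressed policy adviser can use.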
I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I easily could forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else perhaps..?
It's an argument for dualism. Here is some background:
I've always been a monist: believing that everything should be coherent from within this reality. This is the idea that if things don't make sense, it is due to limited knowledge and a limited brain, not an incomplete universe. (Where the universe is the physical material world.)
While composing Less Wrong comments, I've often thought about what an incomplete universe would look like. (Since this is what dualists claim -- what do they mean by something existing differently or beyond material existence?)
I've written before that a simulation (a simulation is a reality S that is a subset of something larger) is just as good as (or the same as) "reality" if the simulation is complete within itself. That is, if an agent within the simulation would find that in principle everything within the simulation is coherent and can be understood from within the simulation. Importantly, there is no hint within the simulation of anything existing outside the simulation. (For example, in the multiple worlds theory, if the many worlds don't interact, each world is its own independent complete reality. The worlds are simulated within a larger entity of all the worlds.)
When physical materialists claim that the physical material world is our entire reality, they are claiming that the physical material world is a reality X, and you cannot deduce anything beyond X from within X. That is, there doesn't exist anything but X, as far as we're concerned. (We can speculate about many worlds, but unless the worlds interact, one world cannot deduce the others.) I've always found this to be obvious, because if you can deduce anything beyond X from within X, then what you've deduced is part of the physical material world (because you deduced it, through interaction) and it's part of X after all.
(end of background material)
It just occurred to me that we do have evidence that our physical material world X is incomplete. So I've stumbled on this argument for dualism. It's actually a very old one, but approached from a different angle. As I said, I stumbled upon it.
It's the problem of existence. Being a monist means believing that if things don't make sense, it is due to limited knowledge and a limited brain. But the problem of existence is such that no amount of knowledge will solve it: there's nothing we could ever learn (or even believe) within X that would solve this problem. Not a complete understanding of the physics of the beginning of the universe. Not even theism!
I cannot understand what the answer to the problem could possibly be, but I think that I can understand that there is no answer possible within X. So to the extent that I am correct that this problem is not in theory solvable in X means that X is incomplete.
I could be incorrect about whether this problem is in principle unsolvable in X. But I am relatively certain of it, on the same level as having confidence in logic. If I lose confidence in logic, I have nothing to reason with. So for now, I would find it more reasonable to guess that I'm in a simulation of some kind where this particular conundrum is embedded. X is a subset of a larger reality Y where existence is explained.
Given what we know about X, and the problem of existence, what can we deduce about the larger universe Y where existence is explained? Anything? What about deducing anything from the peculiar fact that X is missing information about existence?
I don't see where dualism comes in. Specifically what kind of dualism are you talking about?
A problem being unsolvable within some system does not imply that there is some outer system where it can be solved. Take the Halting Problem, for example: there are programs for which we cannot prove whether they halt, and this itself is provable. Yet in any given instance there is a right answer (a program either halts or it doesn't), but in some cases we can never know which.
That you say "I cannot understand what the answer to the problem could possibly be" suggests that it is a wrong question. Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that, but maybe you will come up with something interesting.
What is it?
In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.
Eliezer, do you have a full list of 37 things you would never do as a Dark Lord and what's on it?
All of the replies to this should be in the thread for discussing HP&tMoR.
This is a reference to the Evil Overlord List. That's why Harry starts snickering. Indeed, it is almost implied that Voldemort wrote the actual Evil Overlord List. For the most common version, see Peter's Evil Overlord List. Having such a list for Voldemort seems to be at least partially just Rule of Funny.
Did the Evil Overlord List exist publicly in 1991? I was actually a bit confused by Harry's laughter here. Eliezer seems to be working pretty hard to keep things actually in 1991 (Truth and Beauty, the Journal of Irreproducible Results, etc.)
That's a good point. I'm pretty sure the Evil Overlord List didn't exist that far back, at least not publicly. It seems like for references to other fictional or nerd-culture elements he's willing to monkey around with time. Thus for example, there was a Professor Summers for Defense Against the Dark Arts which wouldn't fit with the standard chronology for Buffy at all.
Checking wikipedia, it looks possible but not likely that Harry could have seen the list in 1991.
Well, he and his father are described as being huge science fiction fans, so it's not that unlikely that they heard about the list at conventions, or had someone show them an early version of the list printed from email discussions, even if they didn't have Internet access back then.
The reason I think it might actually be plot relevant is that most people can't resist making a list that is much longer than 37 rules. Plus most of the rules are just lampshades for tropes that show up again and again in fiction with evil overlords. They rarely are such basic, practical advice as "stop bragging so much."
Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want to pick a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a specific range of the form 1 to n, they are most likely to pick a number that is around 3n/4. The really clear examples are from 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact number, but I think around 30% pick 7), and then 1 to 50, where a very large percentage will pick 37. The upshot is that if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.
It just occurred to me that the odd/even bias applies only because we work in base ten. Humans working in a prime base (like base 11) would be much less biased. (in this respect)
Ouch. I am burned.
Well, that's ok. Because I just wrote a review of Chapter 23 criticizing Harry's rush to conclude that magic is a single-allele Mendellian trait and then read your chapter notes where you say the same thing. That should make us even.
What does 'consciousness' mean?
I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.
People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encountered an android that looked and behaved identically to a human, but inside its head had a very different physical implementation? What would saying it was "conscious" or "not conscious" mean?
And, what does this have to do with my personal subjective experience? It's the foundation (or medium) of everything I know or believe; but most definitions of what it is tend to be dualism-like in that, once again, saying someone else has or doesn't have subjective experience tells us nothing about the physical world.
Help appreciated!
I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.
Guided by Parasites: Toxoplasma Modified Humans
A ~20 minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma & its effects on humans. This is a must see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.
Thanks for the link.
If people's desires are influenced by parasites, what does that do to CEV?
If your desires are influenced by parasites, then the parasites are part of what makes you you. You may as well ask "If people's desires are influenced by their past experience, what does that do to CEV?" or "If people's desires are influenced by their brain chemistry, what does that do to CEV?"
So what if Dr. Evil releases a parasite that rewires humanity's brains in a predetermined manner? Should CEV take that into account or should it aim to become Coherent Extrapolated Disinfected Volition?
What if Dr. Evil publishes a book or makes a movie that rewires humanity's brains in a predetermined manner?
Yep, I made a reference to cultural influence here. That's why I suspect CEV should be applied uniformly to the identity-space of all possible humans rather than the subset of humans that happen to exist when it gets applied. In that case defining humanity becomes very, very important.
Of course, perhaps the current formulation of CEV covers the entire identity-space equally and treats the living population as a sample, and I have misunderstood. But if that is the case, Wei Dai's last article is also bunk, and I trust him to have better understanding of all things FAI than myself.
Heh - my first instinct is to bite the bullet and apply CEV to existing humans only. I couldn't give a strong argument for that, though; I just can't immediately think of a reason to exclude humans influenced non-culturally (e.g., by parasites) while including culturally influenced humans.
It's hard to tell what counts as an influence and what doesn't.
It would be interesting to see what would happen if the effects of parasites could be identified and reversed. The results wouldn't necessarily all be good, though.
You may as well ask: "What if Dr. Evil kills every other living organism? Should CEV take that into account or should it aim to become Coherent Extrapolated Resurrected Volition?"
Of course, if someone modifies or kills all the other humans, that will change the result of CEV. Garbage in, garbage out.
The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.
This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.
There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky for careers for scientists to take on projects that last that long), and some intriguing theory at the end that activities can be classified into exploitation (low risk, low reward) and exploration (high risk, high reward), and that people aren't apt to want to do exploration full time, so, if given a job that's full-time exploration (like institutional science), they'll turn most of it into exploitation.
Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:
(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)
This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.
My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?
David Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia":
http://consc.net/papers/qualia.html
He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.
I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?
If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.
But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.
Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?
Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked.
Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.
http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php
The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html
First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil
As a follow-up, I am curious whether anyone has attempted to mathematically model Ray's biggest and most disputed claim, the acceleration rate of technology. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim rested on a model rather than, essentially, a regression. I imagine a model that would represent all of human society (including our technology) as an information-processing machine and would argue that its processing capability improves by X% each (rather artificial) 'cycle', contributing to the next cycle.
Note that Kurzweil responded to the data dredging complaint by taking major lists compiled by other people, combining them, and showing that they fit a roughly exponential curve. (I don't have a citation for this, unfortunately.)
Edit: I'm not aware of anyone making a model of the sort you envision, but it seems to suffer from the same problem that Kurzweil has in general, which is a potential overemphasis on information processing ability.
I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:
Charlie Munger on the 24 Standard Causes of Human Misjudgment
http://freebsd.zaks.com/news/msg-1151459306-41182-0/
I couldn't post an article due to lack of karma, so I had to post here. :P
I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see whether anyone here is actually against MWI, and if so, why?
After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.
Are there any more?
Welcome!
Here is a comment by Mitchell Porter.
http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi
Seconding Mitchell Porter's friendly attitude toward the Transactional Interpretation, I recommend this paper by Ruth Kastner and John Cramer.
I have a theory: Super-smart people don't exist, it's all due to selection bias.
It's easy to think someone is extremely smart if you've only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they've written there.
So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people, who think and publish their thought a lot. Because the best thoughts get distributed, and average and worse thoughts don't, it's very easy from such small biased samples to believe some of them are far smarter than the rest, but their averages are pretty much the same.
(feel free to replace "smart" by "rational", the result is identical)
How would you describe the writing patterns of super-smart people? Similarly, what would meeting/talking/debating with them feel like?
I think my comment was rather vague, and people aren't sure what I meant.
This is all my impressions, as far as I can tell evidence of all that is rather underwhelming; I'm writing this more to explain my thought than to "prove" anything.
It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that keep them from even reaching the human normal, but let's ignore them entirely here.
Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise and that's about it. They often make the most basic logic mistakes etc.
Then there are "smart" people who are capable of original insight, and don't get too stupid too often. IQ tests aren't measuring exactly the same thing, but they can distinguish these people from normal people reasonably well. With smart people, both their top performance and their average performance are a lot better than with average people. In spite of that, all of them very often fail basic rationality in some particular domains they feel too strongly about.
Now I'm conflicted about whether people who are as far above "smart" as "smart" is above normal really exist. A canonical example of such a person would be Feynman - from my limited information he seems to be just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few other such people.
Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs by such people exist. And if such blogs existed, I would expect to have found a few by now.
And yet, every time it seemed to me that someone might just be that smart and I started reading their blog - it turned out very quickly that my estimate of their smartness suffered from rapid regression to the mean. All my super-smart candidates managed to say such horrible things, and be deaf to such obvious arguments that I doubt any of them really qualifies.
So here's an alternative theory. No human alive is much smarter than the "normally smart". Out of the population of normally smart people, thanks to domain expertise, wit and writing skill, compatibility with my beliefs (or at least happening to avoid my red flags), higher productivity, luck, etc., some people simply seem much smarter than that.
I'm not trolling here, but consider Eliezer - I've picked the example because it's well known here. For some time he was exactly such a candidate, however:
On the other hand, and this provides some counter-evidence to my theory - let's look at myself. I publish anything on my blog and in comments everywhere that seems to have expected public value higher than zero, and very often I'm in a hurry or sleep-deprived, or otherwise far below my top performance. I exaggerate to get the point across very often. I write outside my area of expertise a lot, not uncommonly making severe mistakes. I'm not that good at writing (not to mention that English is not my first language), so things I say may be very unclear.
Unfortunately a normally smart person with my behaviour patterns, and a super-smart person with my behaviour patterns, would probably both fail my super-smartness test.
As you can see, I'm not even terribly convinced that my "super-smart people don't exist" theory is true. I would love to see if other people have good evidence or insight one way or the other.
Another by-the-way: Very often blatantly wrong belief might still be the least-wrong belief given someone's web of beliefs. Often it's easier to believe some minor wrong than to rebuild your whole belief system risking far more damage just to make something small come out correct. So perhaps even my test for being really really wrong is not really all that useful.
Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.
I doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons.
I am also not sure about your definition of super-smart. Is an idiot savant (in math, say) super-smart? If you mean super-smart = consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there, as good ideas get more complex and require more processing power, but given how crazy this world is, I suspect Norm Smart the Rationalist can score surprisingly highly on a relative basis.
As a data point you might want to look at "Monster Minds" chapter of Feynman's "Surely you're joking". Since you mentioned Feynman. The chapter is about Einstein.
Finally, where is your blog? ;)
My blog is here.
You can set that in "preferences".
A few people who blog frequently and fit my criteria for "super-smart": Terence Tao, Cosma Shalizi, John Baez.
I was thinking of Tao as well. Also, Oleg Kiselyov for programming/computer science.
Yep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.
It doesn't seem to me that you have an accurate description of what a super-smart person would do or say, other than matching your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
I don't know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least not yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and it's a far easier concept than "super-smartness". This kind of speculation might end up producing something useful with some luck (or not).
Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than the "normally smart" people.
Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and background of normally smart people (it's pretty easy to get good data on those, even by generic proxies like education - so I'm least worried about that) in a way that would be robust against even large number of data errors. That's about the best I can think of.
Unfortunately it will be of no use as my sample will be not random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these.
So the project is most likely doomed. It was interesting to think about this anyway.
I think you're giving the "normal person" too little credit.
Agreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.
Reminds me of 'My Childhood Role Model'.
As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'
I'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion?
Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general if blogging is a good medium for actually finding this sort of thing. It is easy to see if a blogger isn't very smart; it isn't clear to me that it is a medium that allows one to easily tell if someone is very smart.
I'm not a psychologist but I thought I could improve on the vagueness of the original discussion.
There are a few factors which determine "smartness" (or potential for success):
Speed. Having faster hardware.
Pattern Recognition. Being better at "chunking".
Memory.
Creativity. (="divergent" thinking.)
Detail-awareness.
Experience. Having incorporated many routines into the subconscious thanks to extensive practice.
Knowledge. (Quality is more important than quantity.)
The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training: that is, roughly,
potential for success = talent * training
All this math, of course, is not remotely intended to be taken at face value, but it's merely the most efficient way to make my point.
The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:
person with 15 talent * 100 training => 1500 success
person with 17 talent * 110 training => 1870 success
Which is 25% more success for only 13% more talent.
There's probably some more formal work done along these lines, I'm not an economist either.
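The toy arithmetic above can be spelled out in a few lines. This is just a sketch of the comment's hypothetical multiplicative model; all the numbers are the made-up ones from the example, and the function name is mine:

```python
# Toy model from the comment above: potential for success = talent * training.
# All numbers are hypothetical illustrations, not measurements.

def success(talent, training):
    """Potential for success as a geometric (multiplicative) combination."""
    return talent * training

a = success(15, 100)  # person with 15 talent, 100 units of training
b = success(17, 110)  # person with 17 talent, 110 units of training

print(a, b)                        # 1500 1870
print(round(100 * (b / a - 1)))    # 25 -> ~25% more success...
print(round(100 * (17 / 15 - 1)))  # 13 -> ...for only ~13% more talent
```

The point of the multiplicative form is visible in the last two lines: a small edge in talent, compounded by the extra training it justifies, yields a disproportionate edge in output.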
If you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic bookish idea of super-smartness.
Also, I have no idea how good your judgment is about whether what you call brain-hurtful actually consists of ideas I'd think were egregiously wrong.
I think there are a lot of folks smart enough to be special people-- those who come up with worthwhile insights frequently.
And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.
I was thinking something similar just today:
Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.
In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.
I Am a Strange Loop by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.
You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.
After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.
It is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy.
But it would be nice to lay down some ground-rules.
It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)
My feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.
I don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.
I think a current policy debate has potential for better results, since it would offer the potential for betting, and avoid some of the self-identification and loyalty that's hard to avoid when applying a model as simple as a political philosophy to something as complex as human culture.
Since we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was originally a sort of spin-off from OB, maybe the addition of a karma-based prediction market of some sort would be suitable (and very interesting).
Maybe make bets of karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk averse individuals might be more willing to join the system.
I think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere.
Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.
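One possible mechanism for karma bets is a proper scoring rule. Below is a minimal sketch; everything in it (the function name, the 10-karma stake unit, the 50% baseline) is a hypothetical design choice, not anything the site implements:

```python
import math

def karma_payout(stated_probability, event_happened, stake=10):
    """Karma payout under a logarithmic scoring rule (hypothetical design).

    Payout is measured relative to a 50% "no information" baseline.
    The log score is a proper scoring rule: expected payout is
    maximized by stating your honest probability.
    """
    p = stated_probability if event_happened else 1 - stated_probability
    return stake * math.log2(p / 0.5)

print(round(karma_payout(0.9, True), 1))   # 8.5: confident and right
print(round(karma_payout(0.9, False), 1))  # -23.2: confident and wrong
```

The asymmetry is the feature: a confident wrong bet costs more karma than the same bet could have won, which discourages overclaiming while still letting well-calibrated predictors accumulate karma.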
Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.
I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.
Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice. I can hit all ten of them with a stick, or I can hit one of them nine times. "Hitting with a stick" has some constant negative utility for all the people. What do I do?
This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person's possible futures. Specifically, my intuition tells me to maximize, across people, the minimum expected utility across an individual's future.
So, is there a name for this position?
Do people think my example is equivalent to DSvsT?
Do people get the same or different answer with this question as they do with DSvsT?
I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
I'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit B) to have a 1/10th chance of getting hit 9 times.
Assuming rationality and constant disutility of getting hit, every one of them would choose B.
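The per-person expected-disutility comparison above is easy to check. A minimal sketch, assuming a constant disutility of 1 per hit, as the thought experiment stipulates:

```python
# Option A: every one of the 10 people gets hit once, with certainty.
# Option B: one person, chosen at random, gets hit 9 times.
# Constant disutility per hit (the thought experiment's assumption).

disutility_per_hit = 1.0

expected_a = disutility_per_hit * 1       # certain single hit
expected_b = disutility_per_hit * 9 / 10  # 1-in-10 chance of 9 hits

print(expected_a, expected_b)  # 1.0 0.9
# Behind the veil, each person minimizing expected disutility prefers B.
```

So from each individual's ex ante perspective option B dominates, even though intuition about the realized outcome often favors A.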
I don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.
DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.
The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.
I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a number as wacky as 3^^^3 it doesn't matter: .000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge.
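For a sense of just how huge 3^^^3 is, here's a sketch of Knuth's up-arrow notation (the recursion is the standard definition; the helper name is mine):

```python
def up(a, n, b):
    """Knuth up-arrow: up(a, 1, b) = a**b; up(a, n, b) = a arrow^n b.

    Defined by a^^b = a^(a^^(b-1)) and so on up the hierarchy,
    with the convention up(a, n, 0) = 1.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# up(3, 3, 3) would be 3^^(3^^3): a power tower of ~7.6 trillion 3s.
# Don't call it; no physically possible computer can evaluate it.
```

Even the second-to-last rung, 3^^3, is already about 7.6 trillion; 3^^^3 is so far beyond that that any constant per-speck disutility, however tiny, swamps any single bounded harm.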
For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.
I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
I think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow.
Marginal disutility could plausibly work in the opposite direction from marginal utility.
Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.
I was thinking more along the lines of scope failure: if someone said you were going to be hit 11 times, would you really expect it to feel exactly 110% as bad as being hit ten times?
But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.
Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.
Oh, and I'd love to hear what you mean about this.
There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.
Are there any rationalist psychologists?
Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?
As a start, http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy is a branch of psychotherapy with some respect around here because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no evidence.
Do they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?
Yes, this is essentially a post stating my incredulity. Would you mind quelling it?
It's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal.
CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.
Scientific medicine is difficult and expensive. I worry that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches.
I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects, it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.
It's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it.
http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5
I can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine had a recent issue prominently featuring discussion of how to get past the misuse of statistics discussed in this very LW open thread. And it's not the first time the magazine addressed the point.
Does cognitive rationalist therapy count as both rationalist and psychology for purposes of this question?
I think Learning Methods is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.
Interesting. I found the site to be not very helpful, until I hit this page, which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy:
The quote is from an article written by an LM student, and some insights from the learning process that helped her overcome her stage fright.
IOW, at least one aspect of LM sounds a bit like a "rationality dojo" to me (in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life).
(Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles or is readily translatable to aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)