
Comment author: Lumifer 01 May 2017 03:03:14PM 3 points [-]

effectively deal with Gleb-like people

Here on LW Gleb got laughed at almost as soon as he started posting. Did he actually manage to make any inroads into the EA/Bay Area communities? I know EA ended up writing a basically "You are not one of us, please go away" post/letter, but it took a while.

Comment author: pcm 02 May 2017 03:45:20PM 1 point [-]

I'd guess the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most, but not all). The difference was more that in an EA context people worried that he would shift money away from EA-aligned charities, whereas on LW he only wasted people's time.

Comment author: ThoughtSpeed 25 April 2017 11:07:30PM 3 points [-]
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions (a toy sketch of that kind of scoring rule follows this comment). Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I could do to prepare, and they said "you don't have to do anything". This doesn't make sense to me. I would be quite willing to spend hours learning marginal CFAR concepts, even if the pacing, information density, or quality were lower. I think the argument is something like 'you must empty your cup so you can learn the material', but I'm not sure.

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is that it lets them more readily indoctrinate attendees with AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be better served by developing scaffolding to help train rationality among a broader base of people online (and perhaps using that as a pipeline for the more in-depth workshop).
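For concreteness, here is a minimal sketch of the kind of scoring rule a credence-calibration exercise might use. This is an illustration under assumptions, not a description of CFAR's actual game: the choice of a logarithmic score and the function names are illustrative.

```python
import math

def log_score(confidence, correct):
    # Score one binary prediction: 'confidence' is the stated probability
    # (0 < p < 1) that the chosen answer is right; 'correct' says whether it was.
    # Higher (less negative) is better; confident wrong answers are punished hard,
    # which is what trains down-adjusting overconfident credences.
    p = confidence if correct else 1.0 - confidence
    return math.log(p)

def average_score(predictions):
    # Average score over a session of (confidence, correct) pairs.
    return sum(log_score(c, ok) for c, ok in predictions) / len(predictions)

# Illustrative session: three questions answered at various confidences.
session = [(0.9, True), (0.7, False), (0.6, True)]
print(average_score(session))
```

A version that bins answers by stated confidence and compares each bin's hit rate to its confidence would measure calibration more directly; the scoring-rule version is just the simplest thing that penalizes overconfidence.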

Comment author: pcm 26 April 2017 05:19:41PM 3 points [-]

Some of what a CFAR workshop does is convince our system 1s that it's socially safe to be honest about having some unflattering motives.

Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to solve at the workshop. That's different from "you don't have to do anything".

Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.

Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.

Comment author: Daniel_Burfoot 25 April 2017 10:52:21PM 2 points [-]

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of that new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe... and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.

It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative; China in the 1300s was not very innovative; India in the 1500s was not very innovative; and so on. France was innovative in the 1700s and 1800s, but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.

So the US is innovative, and that innovation is enormously beneficial to humanity, but it's naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America's ability to innovate.

That means there is an enormous ethical rationale for trying to help American society continue to prosper. There's a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.

Currently the most serious threat to the stability of American society is the culture war: the intense partisan hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.

Comment author: pcm 26 April 2017 03:07:32PM 1 point [-]

You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.

Comment author: Benquo 24 April 2017 04:03:49AM 5 points [-]

It was very much not obvious to me that GiveWell doubted its original VillageReach recommendation until I emailed. What published information made this obvious to you?

The main explanation I could find for taking VillageReach off the Top Charities list was that they no longer had room for more funding. At the time I figured this simply meant they'd finished scaling up inside the country and didn't have more work to do of the kind that earned the Top Charity recommendation.

Comment author: pcm 25 April 2017 02:21:47AM 5 points [-]

From http://blog.givewell.org/2012/03/26/villagereach-update/:

We are also more deeply examining the original evidence of effectiveness for VillageReach’s pilot project. Our standards for evidence continue to rise, and our re-examination has raised significant questions that we intend to pursue in the coming months.

I had donated to VillageReach due to GiveWell's endorsement, and I found it moderately easy to notice that they had changed more than just the room-for-funding conclusion.

Comment author: Viliam 30 March 2017 12:02:53PM *  10 points [-]

I feel sad that the project is gone before I even understood how it was supposed to work.

I was like: "I have absolutely no idea what this is supposed to be or to do, but smart people seem enthusiastic about it, so it's probably a smart thing, and maybe later when I have more time, I will examine it more closely."

Now, my question is... how much should I use this as an outside view for other activities of MIRI?

Comment author: pcm 30 March 2017 04:37:00PM 2 points [-]

how much should I use this as an outside view for other activities of MIRI?

I'm unsure whether you should think of it as a MIRI activity, but to the extent you should, it seems like moderate evidence that MIRI will try many uncertain approaches, and be somewhat sensible about abandoning the ones that reach a dead end.

Comment author: pcm 01 March 2017 06:34:57PM 0 points [-]

I think your conclusion might be roughly correct, but I'm confused by the way your argument seems to switch between claiming that an intelligence explosion will eventually reach limits, and claiming that recalcitrance will be high when AGI is at human levels of intelligence. Bostrom presumably believes there's more low-hanging fruit than you do.

Comment author: pcm 12 January 2017 06:34:45PM 1 point [-]

I have a relevant blog post on models of willpower.

Comment author: pcm 12 January 2017 06:27:38PM 2 points [-]

I reviewed the book here.

Comment author: pcm 25 December 2016 07:51:30PM 1 point [-]

I subsidized some InTrade contracts in 2008. See here, here and here.

Comment author: sarahconstantin 06 December 2016 05:40:02PM 4 points [-]

I wonder if there's any way to measure rationality in animals.

Bear with me for a second. The Cognitive Reflection Test is a measure of how well you can avoid the intuitive-but-wrong answer and instead make the more mentally laborious calculation. The Stroop test is also a measure of how well you can avoid making impulsive mistakes and instead force yourself to focus only on what matters (a toy version of that kind of measure is sketched after this comment). As I recall, the "restrain your impulses and focus your thinking" skill is a fairly "biological" one -- it's consistently associated with activity in particular parts of the brain, influenced by drugs, and impaired in conditions like ADHD.

Could we design -- or has someone already designed -- a variant of this, made out of mazes that rats could run through?

I might look into this more carefully myself, but does anyone know off the top of their heads?
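For concreteness, the Stroop-style measure described above usually comes down to the gap in response times between incongruent and congruent trials. A minimal sketch, with illustrative trial data and field names (not any standard library's API):

```python
def stroop_interference(trials):
    # trials: list of (condition, reaction_time_ms) pairs, where condition is
    # "congruent" (word and ink colour match) or "incongruent" (they clash).
    # A larger positive gap means a higher cost to overriding the automatic
    # reading response, i.e. weaker impulse control on this task.
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return sum(incongruent) / len(incongruent) - sum(congruent) / len(congruent)

# Illustrative data, times in milliseconds.
trials = [("congruent", 520), ("congruent", 540),
          ("incongruent", 690), ("incongruent", 720)]
print(stroop_interference(trials))  # 175.0
```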

Comment author: pcm 07 December 2016 05:10:17PM 1 point [-]

See Rosati et al., The Evolutionary Origins of Human Patience: Temporal Preferences in Chimpanzees, Bonobos, and Human Adults, Current Biology (2007). Similar to the marshmallow test.
