
Comment author: pcm 03 July 2017 10:46:04PM 1 point

Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence.

If I try to apply this to protein folding instead of intelligence, it sounds really strange.

Most people who make useful progress at protein folding appear to use a relatively tool-boxy approach. And they all appear to believe that quantum mechanics provides a very good theory of protein folding. Or at least it would, given unbounded computing power.

Why is something similar not true for intelligence?

Comment author: pcm 07 June 2017 05:00:44PM 0 points

I agree with most of what you said. But in addition to changing the community atmosphere, we can also change how guarded we feel in reaction to a given environment.

CFAR has helped me be more aware of when I'm feeling guarded (againstness), and has helped me understand that those feelings are often unnecessary and fixable.

Authentic relating events (e.g. Aletheia) have helped to train my subconscious to feel safer about being less guarded in contexts such as LW meetups.

There's probably some sense in which I've lowered my standards, but that's mostly been a fairly narrow sense of that term: some key parts of my system 1 have become more willing to bring ideas to my conscious attention. That has enabled me to be less guarded, with essentially no change in the intellectual standards that I use at a system 2 level.

Comment author: ChristianKl 02 June 2017 03:49:23PM 0 points

I'm looking for a book that lays out the orthodox mainstream view; is that the case for the book you recommend? (I generally don't have a problem with unorthodox views, but in this case I want to develop clear knowledge of the orthodox view.)

Comment author: pcm 02 June 2017 07:22:29PM 0 points

It isn't designed to describe the orthodox view. I think the ideas it describes are moderately popular among mainstream experts, but probably some experts dispute them.

Comment author: ChristianKl 01 June 2017 10:36:05PM 1 point

I want to learn more about the basics of psychopathology. I have read about different mental illnesses at various times and in different contexts, but I never really studied the standard concepts of the different mental illnesses systematically.

Which textbook or pop-science book gives a good overview?

Comment author: pcm 02 June 2017 03:32:27PM 0 points

I enjoyed Shadow Syndromes, which is moderately close to what you asked for.

Comment author: simbyotic 02 June 2017 11:30:27AM 0 points

This is literally the most useful thread there could possibly be for me, because there are times I think "I would really like to learn about X" but don't know what X is called in an academic setting.

So, top of my mind:

  • Neuroscience of art & art appreciation
  • Evolutionary basis for storytelling
  • Something about disorders like Cotard's and what they mean for our understanding of consciousness

Is this a monthly thread btw? If not, it should be!

Comment author: pcm 02 June 2017 03:30:02PM 0 points

Henrich's The Secret of Our Success isn't exactly about storytelling, but it provides a good enough understanding of human evolution that it would surprise me if humans didn't tell stories.

Comment author: Lumifer 01 May 2017 03:03:14PM 3 points

"effectively deal with Gleb-like people"

Here on LW, Gleb got laughed at almost as soon as he started posting. Did he actually manage to make any inroads into EA/Bay Area communities? I know EA ended up writing a basically "You are not one of us, please go away" post/letter, but it took a while.

Comment author: pcm 02 May 2017 03:45:20PM 1 point

I'd guess that the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most, but not all). The difference was more that in an EA context, people worried that he would shift money away from EA-aligned charities, whereas on LW he only wasted people's time.

Comment author: ThoughtSpeed 25 April 2017 11:07:30PM 3 points
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like it was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions (a rough sketch of the scoring idea appears at the end of this comment). Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (it seems like Arbital sorta had relevant plans there).

  2. Relatedly: why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and they said "you don't have to do anything". This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it were at a lower pacing/information-density/quality. I think the argument is something like "you must empty your cup so you can learn the material", but I'm not sure.

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps using that as a pipeline for the more in-depth workshop).
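(On the calibration point in item 1: here's a minimal Python sketch of the kind of feedback loop such a game provides. The bucketing scheme and the sample session below are made up for illustration; the real Credence Calibration game has its own question bank and scoring rule.)

    from collections import defaultdict

    def calibration_report(answers):
        """answers: (stated_confidence, was_correct) pairs, confidence in [0.5, 1.0]."""
        buckets = defaultdict(lambda: [0, 0])  # confidence bucket -> [right, total]
        for confidence, correct in answers:
            bucket = round(confidence, 1)      # group answers into 10%-wide buckets
            buckets[bucket][1] += 1
            buckets[bucket][0] += int(correct)
        for bucket in sorted(buckets):
            right, total = buckets[bucket]
            print(f"said {bucket:.0%}: right {right / total:.0%} ({total} answers)")

    # Hypothetical session: right only ~2/3 of the time on "90%" answers,
    # i.e. overconfident, so the fix is to down-adjust toward ~70%.
    session = [(0.9, True), (0.9, False), (0.9, True),
               (0.7, True), (0.7, True), (0.7, False),
               (0.5, True), (0.5, False)]
    calibration_report(session)

A real tool would pull questions from a question bank and update the report after every answer, but the core feedback loop really is this small, which is part of why it seems like it should scale.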

Comment author: pcm 26 April 2017 05:19:41PM 3 points

Some of what a CFAR workshop does is convince our system 1's that it's socially safe to be honest about having some unflattering motives.

Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1: minimize outside distractions, and have a list of problems with your life that you might want to solve at the workshop. That's different from "you don't have to do anything".

Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.

Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.

Comment author: Daniel_Burfoot 25 April 2017 10:52:21PM 2 points

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done a world of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe... and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.

It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative. China in the 1300s was not very innovative, India in the 1500s was not very innovative, and so on. France was innovative in the 1700s and 1800s, but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.

So the US is innovative, and that innovation is enormously beneficial to humanity, but it's naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large-scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America's ability to innovate.

That means there is an enormous ethical rationale for trying to help American society continue to prosper. There's a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.

Currently the most serious threat to the stability of American society is the culture war: the intense partisan political hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.

Comment author: pcm 26 April 2017 03:07:32PM 1 point

You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.

Comment author: Benquo 24 April 2017 04:03:49AM 5 points

It was very much not obvious to me that GiveWell doubted its original VillageReach recommendation until I emailed. What published information made this obvious to you?

The main explanation I could find for taking VillageReach off the Top Charities list was that they no longer had room for more funding. At the time I figured this simply meant they'd finished scaling up inside the country and didn't have more work to do of the kind that earned the Top Charity recommendation.

Comment author: pcm 25 April 2017 02:21:47AM 5 points

From http://blog.givewell.org/2012/03/26/villagereach-update/:

We are also more deeply examining the original evidence of effectiveness for VillageReach’s pilot project. Our standards for evidence continue to rise, and our re-examination has raised significant questions that we intend to pursue in the coming months.

I had donated to VillageReach due to GiveWell's endorsement, and I found it moderately easy to notice that they had changed more than just their conclusion about room for more funding.

Comment author: Viliam 30 March 2017 12:02:53PM 10 points

I feel sad that the project is gone before I even understood how it was supposed to work.

I was like: "I have absolutely no idea what this is supposed to be or to do, but smart people seem enthusiastic about it, so it's probably a smart thing, and maybe later when I have more time, I will examine it more closely."

Now, my question is... how much should I use this as an outside view for other activities of MIRI?

Comment author: pcm 30 March 2017 04:37:00PM 2 points

"how much should I use this as an outside view for other activities of MIRI?"

I'm unsure whether you should think of it as a MIRI activity, but to the extent you should, it seems like moderate evidence that MIRI will try many uncertain approaches, and be somewhat sensible about abandoning the ones that reach a dead end.
