
In response to Feedback on LW 2.0
Comment author: pcm 04 October 2017 08:27:37PM *  1 point [-]

There's something about reading the new style that makes me uncomfortable, and prompts me to skim some posts that I would have read more carefully on the old site. I'm not too clear on what causes that effect. I'm guessing that some of it is the excessive amount of white, causing modest sensory overload.

Some of it could be the fact that less of a post fits on a single screenful: I probably form initial guesses about a post's value based on the first screenful, and putting less substance on that first screenful leads me to guess that the post has less substance. Or maybe I associate large fonts with click-baity sites, and small fonts with more intellectual sites.

The editor used for writing comments is really annoying. E.g. links expand to include unrelated text, or unexpectedly stop being links.

I want a way to enter HTML and/or Markdown that I can write in an editor I'm more comfortable with, then cut and paste in. Without that, I'll probably give up on writing comments that are more thoughtful than Facebook comments.

I presume the new karma system will be an important improvement. I'm unhappy that it's bundled with such large changes to aspects of the UI that were working adequately.

Comment author: pcm 29 September 2017 06:29:15PM *  1 point [-]

Most of your post is good, but you're too eager to describe trends as mysterious.

Also, your link to "a previous post" is broken.

Moore's law appears to be a special case of Wright's Law. I.e. it seems well explained by experience curve effects (or possibly economies of scale).
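A toy numeric sketch of that claim (my own illustration, with made-up parameter values, not from the comment): Wright's Law says unit cost falls as a power law of cumulative production. If cumulative production grows exponentially in time, the cost then falls by a constant factor per period, which is exactly Moore's-law-shaped behavior.

```python
def wright_cost(cumulative_units, a=1.0, b=0.4):
    """Unit cost under Wright's Law (experience curve): cost = a * x^(-b).

    The values a=1.0 and b=0.4 are arbitrary, chosen only for illustration.
    """
    return a * cumulative_units ** (-b)

# Suppose cumulative production grows 10x per "year" for years 0..5.
costs = [wright_cost(10 ** year) for year in range(6)]

# Cost then shrinks by the same constant factor each year,
# i.e. an exponential decline in time, as in Moore's law.
ratios = [costs[i + 1] / costs[i] for i in range(5)]
print(ratios)  # each ratio is 10**-0.4, roughly 0.398
```

The same algebra works for any positive a and b: power-law-in-production plus exponential-in-time production yields exponential-in-time cost decline.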

Secondly, we have strong reasons to suspect that there won't be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These phenomena have decent local explanations that we already roughly understand.

I don't see these strong reasons.

Age of Em gives some hints (page 14) that the last three transitions may have been caused by changes in how innovation diffused, maybe related to population densities enabling better division of labor.

I think Henrich's The Secret of our Success gives a good theory of human evolution which supports Robin's intuitions there.

For the industrial revolution, there are too many theories, with inadequate evidence to test them. But it does seem possible that the printing press played a role that's pretty similar to Henrich's explanations for early human evolution.

I don't know much about the causes of the agricultural revolution.

Comment author: MaryCh 15 September 2017 06:19:51AM 0 points [-]

I think being very tired is a feeling separate from being simply tired and from being exhausted. One can be a bit very tired on Monday and a lot very tired on Friday - and still not exhausted.

Comment author: pcm 15 September 2017 02:50:05PM 1 point [-]

I'm sometimes able to distinguish different types of feeling tired, based on what my system 1 wants me to do differently: sleep more, use specific muscles less, exercise more slowly, do less of a specific type of work, etc.

Comment author: pcm 03 July 2017 10:46:04PM 1 point [-]

Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence.

If I try to apply this to protein folding instead of intelligence, it sounds really strange.

Most people who make useful progress at protein folding appear to use a relatively tool-boxy approach. And they all appear to believe that quantum mechanics provides a very good theory of protein folding. Or at least it would be, given unbounded computing power.

Why is something similar not true for intelligence?

Comment author: pcm 07 June 2017 05:00:44PM 0 points [-]

I agree with most of what you said. But in addition to changing the community atmosphere, we can also change how guarded we feel in reaction to a given environment.

CFAR has helped me be more aware of when I'm feeling guarded (againstness), and has helped me understand that those feelings are often unnecessary and fixable.

Authentic relating events (e.g. Aletheia) have helped to train my subconscious to feel more safe about feeling less guarded in contexts such as LW meetups.

There's probably some sense in which I've lowered my standards, but that's mostly been a fairly narrow sense of that term: some key parts of my system 1 have become more willing to bring ideas to my conscious attention. That has enabled me to be less guarded, with essentially no change in the intellectual standards that I use at a system 2 level.

Comment author: ChristianKl 02 June 2017 03:49:23PM 0 points [-]

I'm looking for a book that lays out the orthodox mainstream view; is that the case for the book you recommend? (I generally don't have a problem with unorthodox views, but in this case I seek to develop clear knowledge of the orthodox view.)

Comment author: pcm 02 June 2017 07:22:29PM 0 points [-]

It isn't designed to describe the orthodox view. I think the ideas it describes are moderately popular among mainstream experts, but probably some experts dispute them.

Comment author: ChristianKl 01 June 2017 10:36:05PM 1 point [-]

I want to learn more about the basics of pathopsychology. I have read about different mental illnesses at various times in different contexts but I never really studied the basics of the standard concepts of the different mental illnesses.

Which textbook or pop-science book gives a good overview?

Comment author: pcm 02 June 2017 03:32:27PM 0 points [-]

I enjoyed Shadow Syndromes, which is moderately close to what you asked for.

Comment author: simbyotic 02 June 2017 11:30:27AM *  0 points [-]

This is literally the most useful thread there could possibly be for me, because there are times I think "I would really like to learn about X" but don't know what X is called in an academic setting.

So, top of my mind:

  • Neuroscience of art & art appreciation
  • Evolutionary basis for storytelling
  • Something about disorders like Cotard's and what they mean for our understanding of consciousness

Is this a monthly thread btw? If not, it should be!

Comment author: pcm 02 June 2017 03:30:02PM 0 points [-]

Henrich's The Secret of our Success isn't exactly about storytelling, but it provides a good enough understanding of human evolution that it would feel surprising to me if humans didn't tell stories.

Comment author: Lumifer 01 May 2017 03:03:14PM 3 points [-]

effectively deal with Gleb-like people

Here on LW Gleb got laughed at almost immediately as he started posting. Did he actually manage to make any inroads into EA/Bay Area communities? I know EA ended up writing a basically "You are not one of us, please go away" post/letter, but it took a while.

Comment author: pcm 02 May 2017 03:45:20PM 1 point [-]

I'd guess the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most but not all). The difference was more that in an EA context, people worried that he would shift money away from EA-aligned charities, but on LW he only wasted people's time.

Comment author: ThoughtSpeed 25 April 2017 11:07:30PM 3 points [-]
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions. Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and they said "you don't have to do anything". This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it was at a lower pacing/information-density/quality. I think the argument is something like 'you must empty your cup so you can learn the material' but I'm not sure.

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps use that as a pipeline for the more in-depth workshop).

Comment author: pcm 26 April 2017 05:19:41PM 3 points [-]

Some of what a CFAR workshop does is convince our system 1's that it's socially safe to be honest about having some unflattering motives.

Most attempts at doing that in written form would at most convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to solve at the workshop. That's different from "you don't have to do anything".

Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.

Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.
