wedrifid comments on Self-skepticism: the first principle of rationality - Less Wrong

36 Post author: aaronsw 06 August 2012 12:51AM


Comment author: John_Maxwell_IV 06 August 2012 06:19:54AM * 36 points

You started with an intent to associate SIAI with self delusion

I see, he must be one of those innately evil enemies of ours, eh?

My current model of aaronsw is something like this: He's a fairly rational person who's a fan of GiveWell. He's read about SI and thinks the singularity is woo, but he's self-skeptical enough to start reading SI's website. He finds a question in their FAQ where they fail to address points made by those who disagree, reinforcing the woo impression. At this point he could just say "yeah, they're woo like I thought". But he's heard they run a blog on rationality, so he makes a post pointing out the self-skepticism failure in case there's something he's missing.

The FAQ on the website is not the place to signal humility and argue against your own conclusions.

Why not? I think it's an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.

Overall, I thought aaronsw's post had a much higher information-to-accusations ratio than your comment, for whatever that's worth. As criticism goes, his is pretty polite and intelligent.

Also, aaronsw is not the first person I've seen on the internet complaining about lack of self-skepticism on LW, and I agree with him that it's something we could stand to work on. Or at least signalling self-skepticism; it's possible that we're already plenty self-skeptical and all we need to do is project typical self-skeptical attitudes.

For example, Eliezer Yudkowsky seems to think that the rational virtue of "humility" is about "taking specific actions in anticipation of your own errors", not actually acting humble. (Presumably self-skepticism counts as humility by this definition.) But I suspect that observing how humble someone seems is a typical way to gauge the degree to which they take specific actions in anticipation of their own errors. If this is the case, it's best for signalling purposes to actually act humble as well.

(I also suspect that acting humble makes it easier to publicly change your mind, since the status loss for doing so becomes lower. So that's another reason to actually act humble.)

(Yes, I'm aware that I don't always act humble. Unfortunately, acting humble by always using words like "I suspect" everywhere makes my comments harder to read and write. I'm not sure what the best solution to this is.)

Comment author: wedrifid 06 August 2012 09:07:29AM * 12 points

I see, he must be one of those innately evil enemies of ours, eh?

I made no such claim. I do claim that the specific quote I was replying to is a transparent falsehood. Do you actually disagree?

Far from being innately evil, aaronsw appears to be acting just as any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric, and the use of arguments as soldiers without applying those same arguments to his own position.

Forget "innately evil". In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).

The FAQ on the website is not the place to signal humility and argue against your own conclusions.

Why not? I think it's an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.

If you sincerely believe that the optimal use of a FAQ on an organisation's website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world---and people---work. It would be an attempt at counter-signalling that would fail abysmally. I'd actually feel vicarious embarrassment just reading it.

Comment author: John_Maxwell_IV 06 August 2012 09:41:00PM * 3 points

Far from being innately evil, aaronsw appears to be acting just as any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric, and the use of arguments as soldiers without applying those same arguments to his own position.

Forget "innately evil". In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).

Hm. Maybe you're right that I'm giving him too much credit just because he's presenting a view unpopular on LW. (Although, come to think of it, having a double standard that favors unpopular conclusions might actually be a good idea.) In any case, it looks like he rewrote his post.

If you sincerely believe that the optimal use of a FAQ on an organisation's website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world---and people---work. It would be an attempt at counter-signalling that would fail abysmally. I'd actually feel vicarious embarrassment just reading it.

I think the optimal use of an FAQ is to give informed and persuasive answers to the questions it poses, and that an informed and persuasive answer will acknowledge, steel-man, and carefully refute opposing positions.

I'm not sure why everyone seems to think the answers to the questions in an FAQ should be short. FAQs are indexed by question, so it's easy for someone to click on just those questions that interest them and ignore the rest. As lukeprog put it:

the linear format is not ideal for analyzing such a complex thing as AI risk

...

What we need is a modular presentation of the evidence and the arguments, so that those who accept physicalism, near-term AI, and the orthogonality thesis can jump right to the sections on why various AI boxing methods may not work, while those who aren't sure what to think of AI timelines can jump to those articles, and those who accept most of the concern for AI risk but think there's no reason to assert humane values over arbitrary machine values can jump to the article on that subject.

I even suggested creating a question-and-answer site as a supplement to lukeprog's proposed wiki.

I don't fault SI much for having short answers in the current FAQ, but it seems to me that FAQs are better suited than other media to presenting longer answers.

One option is for each question in the FAQ to have a page dedicated to answering it in depth. Then the main FAQ page could give a one-paragraph summary of SI's response along with a link to the longer answer. Maybe this would achieve the benefits of both a long and a short FAQ?