multifoliaterose comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM




Comment author: Eliezer_Yudkowsky 20 August 2010 03:46:39AM 20 points

Unknown reminds me that Multifoliaterose said this:

The modern world is sufficiently complicated so that no human no matter how talented can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

This makes explicit something I thought I was going to have to tease out of multi, so my response would roughly go as follows:

  • If no one can occupy this epistemic state, that implies something about the state of the world - i.e., that it should not lead people into this sort of epistemic state.
  • Therefore you are deducing information about the state of the world by arguing about which sorts of thoughts remind you of your youthful delusions of messianity.
  • Reversed stupidity is not intelligence. In general, if you want to know something about how to develop Friendly AI, you have to reason about Friendly AI, rather than reasoning about something else.
  • Which is why I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me. In other words, I am reluctant to argue on this level not just for the obvious political reasons (it's a sure loss once the argument starts), but because you're trying to extract information about the real world from a class of arguments that can't possibly yield information about the real world.
  • That said, as far as I can tell, the world currently occupies a ridiculous state of practically nobody working on problems like "develop a reflective decision theory that lets you talk about self-modification". I agree that this is ridiculous, but seriously, blame the world, not me. Multi's principle would be reasonable only if the world occupied a much higher level of competence than it in fact does, a point which you can further appreciate by, e.g., reading the QM sequence, or counting cryonics signups, showing massive failure on simpler issues.
  • That reflective decision theory actually is key to Friendly AI is something I can only get information about by thinking about Friendly AI. If I try to get information about it any other way, I'm producing noise in my brain.
  • We can directly apply multi's stated principle to conclude that reflective decision theory cannot be known to be critical to Friendly AI. We were mistaken to start working on it; if no one else is working on it, it must not be knowably critical; because if it were knowably critical, we would occupy a forbidden epistemic state.
  • Therefore we have derived knowledge about which problems are critical in Friendly AI by arguing about personal psychology.
  • This constitutes a reductio of the original principle. QEA. (As was to be argued.)
Comment author: multifoliaterose 20 August 2010 06:39:52PM 0 points

I agree with khafra. Your response to my post is distortionary. The statement which you quote was a statement about the reference class of people who believe themselves to be the most important person in the world. The statement which you quote was not a statement about FAI.

Any adequate response to the statement which you quote requires that you engage with the last point that khafra made:

Whether this likelihood ratio is large enough to overcome the evidence on AI-related existential risk and the paucity of serious effort dedicated to combating it is an open question.

You have not satisfactorily addressed this matter.

Comment author: Furcas 21 August 2010 03:36:59PM 4 points

It looks to me like Eliezer gave your post the most generous interpretation possible, i.e. that it actually contained an argument attempting to show that he's deluding himself, rather than just defining a reference class and pointing out that Eliezer fits into it. Since you've now clarified that your post did nothing more than that, there's not much left to do except suggest you read all of Eliezer's posts tagged 'FAI', and this.