
lessdazed comments on Connecting Your Beliefs (a call for help)

Post author: lukeprog | 20 November 2011 05:18AM




Comment author: XiXiDu | 20 November 2011 10:35AM | 10 points

> I hadn’t noticed that my worldview already implied intelligence explosion.

I'd like to see a post on that worldview. The possibility of an intelligence explosion seems an extraordinary belief. What evidence justified a prior strong enough that a single paragraph, written in natural language, could update it to the extent that you would afterwards devote your whole life to that possibility?

I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences.

How do you anticipate your beliefs will pay rent? What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?

Comment author: lessdazed | 21 November 2011 12:21:11AM | 2 points

> What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?

There is no reason to believe that intelligence stops being useful for problem-solving as one gets more of it, but I can easily imagine evidence that would suggest it does.

A non-AI intelligence above the human level (a human with computer processors integrated into his or her brain, a biologically enhanced human, etc.) might prove no more usefully intelligent than Newton or Von Neumann, despite being an order of magnitude smarter by practical measures.

> to the extent that you would afterwards devote your whole life to that possibility?

Hedging makes little sense under many reasonable utility functions. If one is guessing the color of random cards, and 70% of the cards are red and 30% are blue, and red and blue pay out equally, one should guess red every turn rather than matching the frequencies.
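To make the card arithmetic concrete, here is a minimal simulation sketch (the setup and names are illustrative, not from the comment) comparing the committed strategy against probability matching:

```python
import random

P_RED = 0.7       # fraction of red cards, from the example above
TRIALS = 100_000  # hypothetical sample size

def always_red() -> str:
    """Always guess the majority color."""
    return "red"

def probability_match() -> str:
    """Guess red 70% of the time and blue 30% of the time."""
    return "red" if random.random() < P_RED else "blue"

def deal() -> str:
    """Draw a random card: red with probability P_RED."""
    return "red" if random.random() < P_RED else "blue"

for strategy in (always_red, probability_match):
    hits = sum(strategy() == deal() for _ in range(TRIALS))
    print(f"{strategy.__name__}: {hits / TRIALS:.3f}")

# always_red converges to 0.70, while probability matching converges to
# 0.7*0.7 + 0.3*0.3 = 0.58, so the fully committed strategy dominates.
```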

Where utility goes as the logarithm of money, it makes sense to diversify; but that is different from a life-impact sort of case, where one seeks to maximize the utility of others.
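The diversification point can be made concrete with a Kelly-style calculation (my framing, not the commenter's): for an even-odds bet that wins with probability 0.7, log utility is maximized by staking the Kelly fraction 2p - 1 = 0.4 of one's wealth, whereas linear utility prefers staking everything. A minimal sketch:

```python
import math

P_WIN = 0.7  # probability the favored outcome pays off, as above

def expected_log_wealth(f: float) -> float:
    """E[ln W] after staking fraction f of wealth at even odds."""
    return P_WIN * math.log(1 + f) + (1 - P_WIN) * math.log(1 - f)

# Scan stake fractions: log utility peaks at the Kelly fraction 2p - 1 = 0.4.
fractions = [i / 100 for i in range(100)]  # 0.00 .. 0.99 (f = 1 gives ln 0)
best = max(fractions, key=expected_log_wealth)
print(f"log-utility optimum: stake {best:.2f} of wealth")

# Linear utility is E[W] = 1 + f*(2p - 1), which increases with f whenever
# p > 0.5, so a risk-neutral maximizer (e.g. one maximizing others'
# utility) stakes f = 1: total commitment.
```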

The outside view says to avoid total commitment, lest one be sucked into a happy death spiral and suffer from the sunk cost fallacy; but if those and similar fallacies can be avoided, total commitment makes sense.