lessdazed comments on Connecting Your Beliefs (a call for help) - Less Wrong Discussion
I'd like to see a post on that worldview. The possibility of an intelligence explosion seems to be an extraordinary belief. What evidence justified a prior so strong that a single paragraph, written in natural language, could update it to the point where you would afterwards devote your whole life to that possibility?
How do you expect your beliefs to pay rent? What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?
There is no reason to believe intelligence stops being useful for problem solving as one gets more of it, but I can easily imagine evidence that would suggest that.
A non-AI intelligence above the human level, such as a human with computer processors integrated into his or her brain, a biologically enhanced human, etc., might prove to be no more usefully intelligent than Newton, von Neumann, etc., despite being an order of magnitude smarter by practical measures.
Leveraging makes little sense according to many reasonable utility functions. If one is guessing the colors of random cards, where 70% of the cards are red and 30% are blue, and red and blue pay out equally, one should guess red every turn rather than matching the frequencies.
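The arithmetic behind "always guess red" can be checked directly. A minimal sketch (the 70/30 split is from the example above; the "probability matching" strategy is the standard alternative people actually tend to use):

```python
# Expected accuracy when 70% of cards are red and 30% are blue,
# and a correct guess of either color pays out equally.
p_red = 0.7

# Strategy 1: always guess red -- correct exactly when the card is red.
always_red = p_red

# Strategy 2: "probability matching" -- guess red 70% of the time,
# blue 30% of the time, independently of the card.
matching = p_red * p_red + (1 - p_red) * (1 - p_red)

print(round(always_red, 2))  # 0.7
print(round(matching, 2))    # 0.58
```

Always guessing red wins on 70% of turns, while matching the frequencies wins on only 58%, which is the point: when payouts are equal, the expected-value-maximizing move is the same every turn.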
Where utility is proportional to the ln of money, it makes sense to diversify, but that is different from a life-impact sort of case in which one seeks to maximize the utility of others.
The outside view says to avoid total commitment, lest one be sucked into a happy death spiral or fall prey to the sunk cost fallacy; but if those and similar fallacies can be avoided, total commitment makes sense.