JoshuaZ comments on Open Thread: July 2010, Part 2 - Less Wrong
I am tentatively interpreting your remark about "not wanting to leave out those I have *plonked*" as an indication that you might read comments by such individuals. Therefore, I am going to reply to this remark. I estimate a small probability (< 5%) that you will actually consider what I have to say in this comment, but I also estimate that explicitly stating that estimate increases the probability, rendering it possibly as high as 10%. I estimate a much higher chance that this remark will be of some benefit to readers here, especially if they haven't seen your earlier comments.
The post you are replying to made no mention of AI at all. You seem to be focusing on the word "dumb" and assuming a very narrow definition. This is interesting in that, reading the remark, I interpreted "dumb" as almost exactly what you think it is not talking about, that is, lacking knowledge and technology. Incidentally, I'm not sure how AI would not fall into the technology category.
That's an interesting claim. As one of the posters here who seems to annoy you the most, I find it interesting that I a) estimate a very tiny probability for a Singularity-type event involving AI, b) am not signed up for cryonics (although I am considering it), and c) estimate a very small chance that technology in the next fifty years will allow indefinite extensions of life spans. While I am a sample size of one, I don't think I'm that far off from the usual LW contributor (although obviously this could be due to the standard bias of humans assuming that others are similar to them).
I can't speak directly for the individuals who want to create a strong, very powerful singleton AI, but your claim that they wish to do so in order to be as immoral as they like seems false. Indeed, much of the discussion about such AIs centers around human morality and how one would get an AI to obey general human moral and ethical norms. So where you get the idea that they want to be as immoral as they like is not at all clear.
I also don't understand how trying to create an AI that's powerful is the same as believing in an all powerful deity that exists independently of humans.
Curiously, most of the pro-cryonics individuals here estimate low probabilities of successful cryonics. I haven't seen anyone here make an explicit estimate that was more than 25%. (If anyone here does estimate a higher chance, I'd be curious to hear it and see what their logic is.) I've seen multiple people here who put the estimate at <10%. I'm pretty sure that very few religious individuals who believe in an afterlife would put that low a probability estimate.
I'm also not sure why you choose to focus so much on Christianity as the comparison religion. Many religions have aspects very similar to what you laid out in your comparison. Zoroastrianism has many elements that pre-dated Christianity, and various Jewish sects also had similar beliefs. Moreover, if any religion gets to be Christianity v2.0, it would be Islam, with possibly Mormonism or the Baha'i Faith being 3.0.
Now, it is true that many transhumanists and Singularitarians (note that these are not necessarily the same thing) do have attitudes that come across as intensely religious in form. These issues have been discussed here before. (Note how those comments were voted up, which shows that such criticism, when properly targeted and well thought out, is considered worth discussing here. This provides an interesting contrast to your remarks. It is also difficult to reconcile such upvoting with your model of LW as full of fanatical transhumanist Singularitarians.)
You also seem to be again trying to score some sort of rhetorical points with name-calling and labeling. I don't think that almost anyone here, either posters or readers, is going to be more persuaded by your opinions if you use that term. Frankly, as someone who finds a lot of the more borderline religious aspects of transhumanism and Singularitarianism to be pretty disturbing, reading your remarks makes me feel more sympathetic to those viewpoints simply out of an emotional reaction against your poor arguments.
Three questions: First, what aspects of LW's approach to rationalism do you think are seriously warped? Second, do you think the community is monolithic in its attitude towards rationality? (I, for example, am not an epistemological Bayesian and also think that LW frequently downplays, to its own detriment, the complicated history of science and scientific discoveries. But I don't think I'd label things as so warped.) Third, if you think that LW's rationalism is so warped, what do you think you are gaining by posting here?
The problem with religious beliefs is not that they are false (they don't have to be), but that they are believed for the purpose of signaling belonging to a group, rather than because they are true. This does cause them to often be wrong or not even wrong, but the wrongness is not the problem; the epistemic practices that lead to them are. Correspondingly, the reasons for a given religious belief turning out to be wrong are a different kind of story from the reasons for a given factual belief turning out to be wrong. The comparison of factual mistakes in religious beliefs with factual mistakes made by people who try to figure things out is a shallow analogy that glosses over the substance of the processes.