ciphergoth comments on Against Cryonics & For Cost-Effective Charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Given my current educational background, I am not able to judge the following claims (among others), and I therefore perceive it as unreasonable to put all my eggs in one basket:
What do you expect me to do? Just believe you? Like I believed so much in the past that made sense but turned out to be wrong? Besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI without ever going to the movies or the like. The thought makes me reluctant to give anything at all.
ETA:
Do you have an explanation for the circumstance that you are the only semi-popular person who has figured all this out? The only person who's aware of something that might shatter the utility of the universe, if not the multiverse? Why is it that people like Vernor Vinge, Charles Stross, or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?
I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all this for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site rather than other near-term risks that might very well wipe us out.
Why aren't Eric Drexler, Gary Drescher, or other AI researchers like Marvin Minsky worried to the extent that they signal their support for your movement?
You may be forced to make a judgement under uncertainty.
My judgement of, and attitude towards, a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. I therefore perceive it as unreasonable to put all my eggs in one basket.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is not sufficiently clear to me to give it top priority.
Many of the arguments on this site involve a few propositions and the use of probability to legitimize action should those propositions prove accurate. Here so much is uncertain that I'm not able to judge any nested probability estimations. I'm already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to the likelihood that we are living in a simulated reality. And even if you tell me, am I to believe the data on which you base those estimations?
Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.
You ask a lot of good questions in these two comments. Some of them are still open questions in my mind.