JenniferRM comments on A Rationalist's Tale - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (305)
Very interesting story. Since I was born into an atheist family and never believed in God, I lack any similar experience, and somehow I regret it, because that experience must be a great help in changing your mind about other topics. The closest experience I have is the Santa Claus thing, but I was such a young child that I have only confused memories of how I started to doubt. The process looks similar, though: there is a nice Santa Claus person who gives me presents; I start to doubt he's real and feel bad, because I don't want the "magic of Christmas" to go away; and then I realize that there's something even more "magical" than elves and a flying Santa Claus going faster than light: the love of my parents, who spent days going from shop to shop to find the silly present I asked for in the letter to Santa Claus that my teacher gave them. It has the same three phases: belief in something supernatural that makes you happy, doubt and feeling sad, and then realizing that reality makes you even happier. But it's so lost in the mists of early childhood that it doesn't have the potency you describe.
Oh, on another topic, I'm still doubtful about the "Singularity". "An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion'" sounds like a logical jump with no foundation to me. Let me try to explain: assume we can measure intelligence as a single real number, I(M). An intelligent machine can design a better version of itself, so we have I(M_{n+1}) > I(M_n). That's a strictly monotonically increasing sequence, and that's all we know. A strictly monotonically increasing sequence can have a finite limit (like 1 + 1/2 + 1/4 + 1/8 + ..., which has a limit of 2), or can grow towards infinity very slowly (like log(n)). How do we know that designing a better intelligence is not an exponentially difficult task? How do we know that above a given level the formula doesn't look like I(M_{n+1}) = I(M_n) + 1/n, because every increase in intelligence is that much harder to make? I guess there is an answer to that, but I couldn't find it in the SingInst FAQ... does any of you have a pointer to one?
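The two possibilities above are easy to check numerically. Here is a minimal sketch (the `iterate` helper and the specific step functions are illustrative assumptions, not anything from the Singularity literature) contrasting a strictly increasing sequence that converges to a finite limit with one that grows without bound but only logarithmically:

```python
# Sketch: two strictly monotonically increasing "intelligence" sequences,
# neither of which explodes.
def iterate(step, n_steps, start=1.0):
    """Apply I(n+1) = I(n) + step(n) for n = 1..n_steps and return I(n_steps)."""
    value = start
    for n in range(1, n_steps + 1):
        value += step(n)
    return value

# Geometric increments: I(n+1) = I(n) + 1/2^n.
# The increments sum to 1, so the sequence converges to start + 1 = 2.
geometric = iterate(lambda n: 0.5 ** n, 1000)

# Harmonic increments: I(n+1) = I(n) + 1/n.
# Unbounded, but the growth is only about log(n): after 1000 steps
# the value is still under 9.
harmonic = iterate(lambda n: 1.0 / n, 1000)

print(geometric)  # approaches 2.0
print(harmonic)   # roughly 1 + ln(1000) ≈ 8.5
```

Both sequences satisfy "each machine is strictly better than the last", yet neither produces anything like an explosion, which is exactly why the monotonicity premise alone can't settle the question.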
There are lots of words on the subject in the FOOM debate but that's (1) full of lots of "intuition, examples, and hand waving" on both sides, (2) ended with neither side convincing the other, and (3) produced no formal coherent treatise on the subject where evidence could be dropped into place to give an unambiguous answer that a third party could see was obviously true. It is worth a read if you're looking for an intuition pump, not if you want a summary answer.
If you want to examine it from another angle to think about timing and details and so on, you might try using The Uncertain Future modeling tool. If you have the time to feed it input, I'm curious to know what output you get :-)
It seems to me that I'm both pessimistic and optimistic (or anyway, not well calibrated). I got:
Catastrophe by 2070: 65.75%
AI by 2070: 98.3%
I would have given much lower figures for both (around 25-33% for catastrophe, and around 50-75% for AI) if you had asked me directly... so I'm badly calibrated, either in how I answered the individual questions or in my final estimate (most likely both...). I'll have to read the FOOM debate and think more about the issue. Thanks for the pointers anyway.
(Btw, it's painful that the applet doesn't support copy/paste...)