Comment author: JoshuaZ 18 May 2011 03:42:06AM *  18 points [-]

There's a related problem that often isn't appreciated. In general, in the natural environment, if the average lifespan is around L, evolution will have no problem evolving all sorts of tricks that maximize what it gets out of organs even if they cause those organs to fail around, or sometime after, L. That means that if evolution can gain an advantage by making things fail late in the process, it will. This is consistent with the Gompertz curve, and it also suggests that optimists like Aubrey de Grey may be massively underestimating the difficulty of extending lifespan. As the population of the very elderly grows, we're likely to run into diseases and problems we've never even seen before. To reach actuarial escape velocity, we will likely need to anticipate such diseases, and develop effective treatments, before we ever encounter them. That requires a degree of understanding of the human body that is well beyond our current level.
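The Gompertz curve mentioned above models adult mortality as increasing exponentially with age. A minimal sketch, with illustrative parameters (not fitted to any dataset; the value b ≈ 0.085/yr is only chosen because it gives a mortality doubling time of roughly 8 years, in the ballpark reported for human adults):

```python
import math

def gompertz_mortality(age, a=1e-4, b=0.085):
    """Instantaneous mortality rate mu(t) = a * exp(b * t) under the Gompertz law.

    a (baseline rate) and b (rate of aging) are illustrative assumptions,
    not fitted values.
    """
    return a * math.exp(b * age)

# Mortality doubling time implied by b: ln(2) / b (about 8.2 years here).
doubling_time = math.log(2) / 0.085
```

By construction, the mortality rate doubles every `doubling_time` years, which is the exponential acceleration of risk late in life that the comment is pointing at.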

Comment author: Sergej_Shegurin 20 April 2016 01:39:37PM *  2 points [-]

If we can 3D-print or grow organs, then the problem you mention is effectively solved for everything but our brains. That's why I like the organ-engineering approach much better than other approaches.

As for the brain, CRISPR/Cas9 engineering is a really promising approach. It potentially gives us many degrees of freedom.

Comment author: brazil84 13 December 2015 08:39:43PM 17 points [-]

How many women would it take to carry a human baby from conception to viable birth in 1 month?

Comment author: Sergej_Shegurin 21 December 2015 03:14:48PM 1 point [-]

We all know that human pregnancy doesn't scale. We all know that some other problems do scale. So I really don't understand those 18 points for the comment. One can always think up many different analogies leading to different conclusions. Even if we ignore the scaling issue, the standard deviation of pregnancy duration is perhaps something like a week, while other processes, like creative thinking or inventing new ideas, might have a standard deviation comparable to their mean.

Comment author: Sergej_Shegurin 21 December 2015 03:04:45PM 0 points [-]

I'm deeply sure that this cost is far less than one trillion dollars if we put the money into Cas9/CRISPR, tissue engineering, acerebral clone growing, etc. I think my website http://sciencevsdeath.com/index.html might be interesting to you.

Also, I'm glad to see people asking such great questions :)

Comment author: Sergej_Shegurin 09 July 2015 08:24:03PM *  0 points [-]

Everyone must agree that the first task we want our AI to solve is FAI (even if we are "100%" sure that our plan has no holes, we would still want the AI to check it while we are able to shut it down). It's easy to imagine one AI lying about its own safety, but many AIs lying about their safety (including the safety of other AIs!) is much harder to imagine (still possible, certainly, but less probable). Only when we are extremely confident in our FAI solution can we ask the AI to solve other questions for us. Those AIs would also constantly try to find bad consequences of our main AI's proposals (both because they don't want to risk their own lives, and because we ask them to give us this information). And certainly we don't give the AI internet access, and we take precautions concerning the people who interact with it, etc. (all of which is well described elsewhere).

Certainly, this overall solution still has its drawbacks (I think every solution will), and we have to improve it in many ways. In my opinion, it would be good if we didn't launch AI for the next 1000 years :-) but the problem is the terrorist organizations and mad people who would be able to launch it despite our intentions... so we have to launch AI more or less soon anyway (or get rid of all terrorists and mad clever people, which is nearly impossible). So we have to formulate a combination of tricks that is as safe as we can make it. I find it counter-productive to throw away everything that is not "100%" safe while searching for some magic "100%" super-solution.

Comment author: Sergej_Shegurin 02 March 2015 04:23:27PM -3 points [-]

I would execute a magical script programmed in advance. You think of the script's number, and it performs many magical actions, for example paralyzing everyone except Harry faster than anyone can make a move or even understand what is happening.

Comment author: Sergej_Shegurin 19 February 2015 07:07:28PM -1 points [-]

In my opinion, the best of the proposed solutions to the AI safety problem is to build AI number 1, tell it that we are going to create another AI (number 2), and ask AI number 1 to tell us how to ensure the friendliness and safety of AI number 2, and how to ensure that no unsafe AI is created. This solution has its chances of failing, but in my opinion it's still much better than any other proposed solution. What do you think?