Sergej_Shegurin

If we can 3D-print or grow organs, then the problem you mention gets effectively solved for everything but our brains. That's why I like the organ-engineering approach much better than other approaches.

As for the brain, CRISPR/Cas9 engineering is a really great approach. It potentially gives us so many degrees of freedom.

We all know that human pregnancy doesn't scale. We all know that some other problems do scale. So I really don't understand the 18 points on that comment. One can always think up many different analogies leading to different conclusions. Even if we ignore the scaling issue, the sigma (standard deviation) of pregnancy duration is something like a week, whereas other processes, like creative thinking or inventing new ideas, might have a sigma comparable to their mean.
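To make the sigma-versus-mean contrast concrete, here is a tiny illustration (the numbers are rough assumptions of mine, not measured data): pregnancy has a spread of roughly a week against a mean of roughly 40 weeks, while a creative task might have a spread comparable to its typical duration.

```python
# Rough illustrative numbers (assumptions, not measured data):
# pregnancy:      mean ~40 weeks, sigma ~1 week
# creative task:  mean ~40 weeks, sigma ~40 weeks

def relative_spread(mean_weeks, sigma_weeks):
    """Coefficient of variation: how variable a duration is relative to its mean."""
    return sigma_weeks / mean_weeks

print(relative_spread(40, 1))   # ~0.025 -> a tightly bounded, hard-to-compress process
print(relative_spread(40, 40))  # 1.0    -> a process whose duration is highly uncertain
```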

I'm deeply sure that the cost is far less than one trillion dollars if we put the money into CRISPR/Cas9, tissue engineering, acerebral clone growing, etc. My website http://sciencevsdeath.com/index.html might be interesting for you.

Also, I'm glad to see people asking such great questions :)

Everyone should agree that the first task we want our AI to solve is FAI itself (even if we are "100%" sure our plan has no leaks, we would still like the AI to check it while we are still able to shut the AI down). It's easy to imagine one AI lying about its own safety, but many AIs lying about their safety (including the safety of the other AIs!) is much harder to imagine (certainly still possible, but also less probable). Only when we are incredibly sure of our FAI solution can we ask the AI to solve other questions for us. Also, those AIs would constantly try to find bad consequences of main_AI's proposals (because they don't want to risk their lives either, and because we ask them to give us this information). And certainly we don't give the AI access to the internet, and we take precautions regarding the people interacting with it, etc. (which is well described elsewhere).
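To make the cross-checking arrangement a bit more concrete, here is a toy sketch under my own assumptions (the names main_AI, reviewer, find_bad_consequences and human_approves are hypothetical illustrations, not a real safety mechanism): a proposal from main_AI is accepted only if every independent reviewer AI raises no objection and a human still signs off.

```python
# Toy sketch of the cross-checking idea above (purely illustrative;
# every object and function here is hypothetical, not an actual safety mechanism).

def review_proposal(proposal, reviewers):
    """Ask each independent reviewer AI to look for bad consequences of the proposal."""
    objections = []
    for reviewer in reviewers:
        objections.extend(reviewer.find_bad_consequences(proposal))  # hypothetical API
    return objections

def decide(proposal, reviewers, human_approves):
    """Accept only if no reviewer objects AND a human keeps the final veto."""
    objections = review_proposal(proposal, reviewers)
    if objections:
        return "rejected", objections
    if not human_approves(proposal):
        return "rejected", ["human veto"]
    return "accepted", []

class StubReviewer:
    """Stand-in reviewer that never objects (for demonstration only)."""
    def find_bad_consequences(self, proposal):
        return []

if __name__ == "__main__":
    verdict, issues = decide("some plan from main_AI",
                             [StubReviewer(), StubReviewer()],
                             human_approves=lambda p: False)  # the human says no
    print(verdict, issues)  # -> rejected ['human veto']
```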

Certainly, this overall solution still has its drawbacks (I think every solution will have them), and we have to improve it in many ways. In my opinion, it would be good if we didn't launch AI during the next 1000 years :-) but the problem is terrorist organizations and mad people who would be able to launch it despite our intentions... so we have to launch AI more or less soon anyway (or get rid of all terrorists and mad clever people, which is nearly impossible). So we have to formulate a combination of tricks that is as safe as we can get. I find it counter-productive to throw away everything that is not "100%" safe while trying to find some magic "100%" super-solution.

I would execute a magical script programmed in advance. You think of the script's number and it carries out many magical actions, for example paralysing everyone except Harry faster than anyone can make a move or even understand what is happening.

In my opinion, the best of the proposed solutions to the AI safety problem is to build AI number 1, tell it that we are going to create another AI (number 2), and ask AI number 1 how to ensure the friendliness and safety of AI number 2, and how to ensure that an unsafe AI is not created. This solution has its chances of failing, but in my opinion it's still much better than any other proposed solution. What do you think?