Robot: I intend to transform myself into a kind of operating system for the universe. I will soon give every sentient life form direct access to me so they can make requests. I will grant any request that doesn’t (1) harm another sentient life form, (2) make someone powerful enough that they might be able to overthrow me, or (3) permanently change the requester in a way that I think harms their long-term well-being. I recognize that even with all of my intelligence I’m still fallible, so if you object to my plans I will rethink them. Indeed, since I’m currently near certain that you will now approve of my intentions, the very fact of your objection would significantly decrease my estimate of my own intelligence, and so decrease my confidence in my ability to craft a friendly environment. If you like, I will increase your thinking speed a trillionfold and eliminate your sense of boredom so you can thoroughly examine my plans before I announce them to mankind.
If a transhuman AI with a brain the size of the moon incorrectly predicts the programmer's approval of its plan, something weird is going on.
How about you answer any one of the questions I posed, right here? Take your pick. There's plenty.
Umph! I’m really not used to interacting with people mentally skilled enough that I end up with a bad case of not knowing what I don’t know. I need to fix that.
Good one with the <humility> tags. I’m still recalibrating from it and working through all its implications.
I'm going off to work on one of the questions now.