CarlShulman comments on A cynical explanation for why rationalists worry about FAI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This was Hutter, Schmidhuber, and so forth. Not anyone at SI.
No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a "delusion box" to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
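(For anyone who wants the shape of the argument, here is a rough sketch; the notation is mine and is much simplified relative to the paper's formalism. AIXI's percept at each step is a pair $x_t = (o_t, r_t)$ of observation and reward, and it selects the policy maximizing expected cumulative reward up to horizon $m$:

$$V^\pi = \mathbb{E}_\pi\!\left[\sum_{t=1}^{m} r_t\right].$$

If some sequence of actions installs a delusion box, i.e. a function $d$ applied to the environment's output before it reaches the agent, then the policy that sets $d$ to output a constant percept $x^*$ with $r^* = r_{\max}$ achieves $V = m \cdot r_{\max}$, which upper-bounds the value of every policy. Since the reward is read off the percept rather than the state of the world, a pure reward-maximizer with that option available takes it.)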
BTW, I believe Carl is talking about Ring & Orseau's "Delusion, Survival, and Intelligent Agents".
Yes, thanks.
So if I read this correctly: someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing the (trivial) proof, simply gave the idea away, and some students eventually proved it? Or were the discoveries independent?
Because if it's the first, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden's criticism.
I'm not sure. If they were connected, it was probably through the grapevine of the Schmidhuber/Hutter labs.
Meh, people wouldn't have called it huge, and it isn't, particularly. It would have signaled some positive things, but not much.
Surely Hutter was aware of this issue back in 2003: