lukeprog comments on Reply to Holden on The Singularity Institute - Less Wrong
I think the issue is that we need a successful SPARC and an "Open Problems in Friendly AI" sequence more urgently than we need an HPMOR finale.
A sudden, confusing vision just occurred, of the two being somehow combined. Aaagh.
Spoiler: Voldemort is a uFAI.
For the record:
(And later in the thread, when asked about "so far": "And I have no intention at this time to do it later, but don't want to make it a blanket prohibition.")
In the earlier chapters, it seemed to me that the Hogwarts faculty dealing with Harry was something like being faced with an AI of uncertain Friendliness.
Correction: It was more like the faculty dealing with an AI that's trying to get itself out of its box.
I think our values are best maximized by delaying the HPMOR finale as long as possible; my post was more out of curiosity to see what would be most helpful to Eliezer.