Viliam_Bur comments on Open thread, 30 June 2014- 6 July 2014 - Less Wrong

4 Post author: DanielDeRossi 30 June 2014 10:58AM


Comments (246)


Comment author: Viliam_Bur 02 July 2014 09:12:58AM 2 points

By using Solomonoff Induction on all possible universes, and updating on the existing chapters. :D
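For context on the joke: Solomonoff induction weights every candidate program by a prior of 2^-length and conditions on the observed data. The real construction ranges over all programs and is uncomputable; the following is only a toy sketch over a small, hand-picked hypothesis set (the hypotheses and their bit-lengths are invented for illustration):

```python
# Toy Solomonoff-style induction: 2^-length prior plus Bayesian updating,
# restricted to a tiny hand-picked hypothesis set. The real thing ranges
# over all programs and is uncomputable.

hypotheses = {
    # name: (description length in bits, predicted continuation of the sequence)
    "repeat 01": (4, "0101"),
    "all zeros after": (6, "0000"),
    "random-looking": (16, "0110"),
}

def posterior(observed_continuation):
    """Weight each hypothesis by 2^-length, keep those matching the data,
    and renormalize to get a posterior distribution."""
    weights = {name: 2.0 ** -length
               for name, (length, pred) in hypotheses.items()
               if pred.startswith(observed_continuation)}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior("01"))  # shorter consistent hypotheses dominate
```

The point the joke trades on: the posterior concentrates on the shortest hypothesis consistent with the data, which is exactly the sense in which an idealized inductor could "predict the ending" — and exactly the part that is uncomputable in practice.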

Or it could simply say that it understands human psychology well (we are speaking about a superhuman AI), understands all the clues in the existing chapters, and can copy Eliezer's writing style... so while it cannot print an identical copy of Eliezer's planned ending, with high probability it can write an ending that concludes the story logically, in a way compatible with Eliezer's thinking, and that would feel as if Eliezer had written it.

Oh, and where did it get the original HPMoR chapters? From the (imaginary) previous gatekeeper.

Comment author: [deleted] 02 July 2014 03:31:45PM 0 points

So, two issues:

1) You don't get to assume that "because superhuman!" the AI can know X, for any X. EY is an immensely complex human being, and no machine learning algorithm can simply digest a realistically finite sample of his written work and know with any certainty how he thinks or what surprises he has planned. It would be able to, e.g., finish sentences correctly, do other such tricks, and, given a range of possible endings, predict which ones are likely. But this shouldn't be too surprising: it's a trick we humans can do too. The AI's predictions may be more accurate, but they are not qualitatively different from any of the many HPMOR prediction threads.
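The kind of "finish sentences" trick described above can be illustrated with a toy next-word predictor (a minimal sketch using invented sample text; real systems are far more sophisticated, but the principle is the same: predicting likely continuations from observed frequencies, not recovering the author's actual intent):

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the sample."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Hypothetical training sample; any corpus of the author's text would do.
sample = "the boy who lived had survived the killing curse"
model = train_bigram_model(sample)
print(predict_next(model, "the"))  # a guess from observed frequencies
```

No matter how much text goes in, the model only ranks continuations by what it has seen; a genuinely surprising planned twist is, by construction, not in the training data.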

2) OK, maybe -- maybe! -- in principle it might be possible that a perfect, non-heuristic Bayesian with omniscient access to the inner lives and external writings of every other human being in existence would have a data set large enough to make reliable extrapolations from as low-bandwidth a medium as EY's published fanfics. Maybe; this is not a logical consequence. Even so, we're talking about a boxed AI, remember? If it is everywhere and omniscient, then it's already out of the box.

Comment author: lmm 04 July 2014 10:50:54PM 0 points

I'm happy to assume the AI is omniscient, just impotent. I think such an AI could still be boxed.