One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.
Whatever reasoning technique is available to a super-intelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper.
I suspect the anecdote about Eliezer only sidetracks your readers.
Hence the problem with sneer in actual criticism. Not that I'm opposed to sneering. Far from it. But you'd better be making solid points while you sneer. If you make a bunch of half-assed points just to sneer, don't expect people to dig your one diamond out of that pile of crap. They will look elsewhere for criticism, if they're interested in it at all. And quite reasonably so.
EY writes:
Yep. Don't expect to find diamonds in a pile of crap. Expect to find more crap.
I suspect how readers respond to my anecdote about Eliezer will fall along party lines, so to speak.
Which is kind of the point of the whole post. How one responds to the criticism shouldn't be a function of one's loyalty to Eliezer. Especially since su3su2u1 explicitly isn't just "making up most of" his criticism. Yes, his series of review-posts is snarky, but he does point out legitimate science errors. That he chooses to enjoy HPMOR via (c) rather than (a) shouldn't have any bearing on whether his criticism is true or false.
I've read su3su2u1's reviews. I agree with them. I also really enjoyed HPMOR. This doesn't actually require cognitive dissonance.
(I do agree, though, that snarkiness isn't really useful in getting people to listen to criticism, and often just backfires.)