That it hasn't been radically triumphant isn't strong evidence against its world-beating potential, though. Pragmatism is weird and confusing; perhaps it just hasn't been exposited or argued for clearly and convincingly enough. Perhaps it has historically been rejected for cultural reasons ("we're doing physicalism so nyah"). I think there is value in presenting it clearly to the LW/MIRI crowd. There are unresolved problems with a naturalistic philosophy that should be pointed out, and it seems that pragmatism solves them.
As for originality, I'm not sure how to think about this. Pretty much everything has already been thought of, but it is hard to read all of the literature and become familiar with it. So how do you write? Acknowledge that there is probably some similar exposition, but we don't know where it is? What if you've come up with most of these ideas yourself? What if every fragment of your idea has been thought of, but it has never been put together in this particular way (which I suspect is going to be the case with us)? The only reason to disclaim originality is to avoid seeming arrogant to people like you who've read these arguments before.
Do you have direct, object-level criticisms of our version of pragmatism? Because that would be great. We've been having a hard time finding ones that we haven't already fixed, and it seems really unlikely that there aren't any. (I've been working on this with OP)
Albert is able to predict with absolute certainty that we would make a decision we would regret, but is unable to communicate the justification for that certainty? That is wildly inconsistent.
I agree that an AI with such amazing knowledge should be unusually good at communicating its justifications effectively (because it can anticipate responses, etc.). I'm of the opinion that this is one of the numerous minor reasons for being skeptical of traditional religions: their supposedly all-knowing gods seem surprisingly bad at conveying messages clearly to humans. But to return to VAuroch's point, for the scenario to be "wildly inconsistent," the AI would have to be perfect at communicating such justifications, not merely unusually good. Even such amazing predictive ability does not seem to me sufficient to guarantee perfection.