I'd like to comment on your notation:

" Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space. If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it."

You seem to be saying that X(q) takes a mind q, specified as a string of bits, and returns "true" if a bit in a certain place is 1 and "false" otherwise. Is this a standard notation in current philosophy of AI? Back when I took it, we didn't use notations like this. Can I find it in any Science-Citation-Index-rated journals?
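To make concrete the reading I'm questioning, here is a minimal sketch, assuming X really is just a predicate on bit-string minds (the specific predicate and the 10-bit size are my own illustrative choices, scaled down from the article's trillion bits):

```python
# Hypothetical reading of the notation: a "mind" is a bit string, and
# X is a predicate on such strings. Toy scale: 10-bit minds.

def X(q: str) -> bool:
    """Toy predicate: true iff the bit at position 3 is 1."""
    return q[3] == "1"

n = 10  # bits per "mind"; the article uses a trillion
all_minds = [format(i, f"0{n}b") for i in range(2 ** n)]

# "All minds m: X(m)" has 2**n chances to be false:
universal = all(X(m) for m in all_minds)
# "Exists mind m: X(m)" has 2**n chances to be true:
existential = any(X(m) for m in all_minds)

print(universal, existential)  # False True
```

On this reading the quoted counting argument is just that the universal claim must survive every one of the 2^n cases, while the existential claim needs only one.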

As for the intended point of the article, I'm really not sure I understand. A proof's soundness and validity can be defined in formal terms, and its "convincing" property can be formalized as well. This therefore looks like a problem in the foundations of mathematics, or in the metamathematics of logic. Is there a semantic element that I'm missing?