PhilGoetz comments on Deleting paradoxes with fuzzy logic - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Fuzzy logic is just sloppy probability, although Lotfi Zadeh doesn't realize it. (I heard him give a talk on it at NIH, and my summary of his talk is: he invented fuzzy logic because he didn't understand how to use probabilities. He actually said: "What if you ask 10 people whether Bill is tall, and 4 of them say yes, but 6 of them say no? Probabilities have no way of representing this.")
You can select your "fuzzy logic" functions (the set of functions used to specify a fuzzy logic, which say what value to assign A and B, A or B, and not A, as a function of the values of A and B) to be consistent with probability theory, and then you'll always get the same answer as probability theory.
The rules of standard probability theory are correct. But "sloppy" fuzzy-logic functions, like "A or B = max(A,B); A and B = min(A,B); not(A) = 1-A", have advantages in situations where Bayesian reasoning gives lousy results. Here are two situations where fuzzy logic outperforms Bayes' law:
You have incomplete or inaccurate information. Say you are told that A and B have a correlation of 1: P(A|B) = P(B|A) = 1. By Bayes' law, P(A^B) = P(AvB) = P(A) = P(B). Then you're told that P(A) and P(B) are different, and you're asked to compute P(A^B). Bayes' law fails you, because the facts you've been given are inconsistent. Fuzzy logic is a heuristic that lets you plow through the inconsistency: it still enforces p(AvB) >= p(A^B), where Bayes' law just blows up.
You are a robot, making a plan. For every action you take, you have a probability of success that you always associate with that action. You assume that the probability of success for each step in a plan is independent of the other steps. But in reality, sometimes they are highly correlated. Because you assume probabilities are independent, you strongly favor short plans over long plans. Using fuzzy logic allows you to construct longer plans.
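The plan-length point can be sketched in code. This is my own illustration (the robot, the steps, and the 0.9 success probability are hypothetical), using the min/max/1-A connectives named above:

```python
from functools import reduce

# Fuzzy connectives, as described above.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# A hypothetical plan in which every step succeeds with probability 0.9.
short_plan = [0.9] * 2
long_plan = [0.9] * 10

# Treating steps as independent multiplies the probabilities, so long
# plans are strongly penalized: 0.9^10 is about 0.35.
independent_short = reduce(lambda x, y: x * y, short_plan)
independent_long = reduce(lambda x, y: x * y, long_plan)

# Under min (the right answer when step failures are perfectly
# correlated), plan length doesn't matter: both plans score 0.9.
fuzzy_short = reduce(fuzzy_and, short_plan)
fuzzy_long = reduce(fuzzy_and, long_plan)

print(independent_short, independent_long, fuzzy_short, fuzzy_long)
```

The multiplicative score punishes the 10-step plan fivefold relative to the 2-step plan; the min score treats them identically, which is why a planner using it is willing to consider longer plans.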
Fuzzy logic is just a pragmatic computational tool. It isn't going to help you get around a paradox, except in the sense that it lets you construct a model inaccurate enough that the paradox disappears from sight.
When you switch to using these numbers to differentiate between "short" and "extremely short", that's not probability. But then you're no longer talking about truth values. You're just measuring things. The number 17 is no more true than the number 3.
All that said, the approach you just described is interesting. I'm missing something, but it's very late, so I'll have to try to figure it out tomorrow.
I think I've figured it out.
You have a set of equations for p(X1), p(X2), etc., where
p(X1) = f1(p(X2), p(X3), ... p(Xn))
p(X2) = f2(p(X1), p(X3), ... p(Xn))
...
Warrigal is saying: This is a system of n equations in n unknowns. Solve it.
But this has nothing to do with whether you're using fuzzy logic!
If you define the functions f1, f2, ... so that each corresponds to something like
f1(p(X2), p(X3), ...) = p(X2 and (X3 or X4) ...)
using standard probability theory, then you're not using fuzzy logic. If you define them some other way, you're using fuzzy logic. The approach described lets us find a consistent assignment of probabilities (or truth-values, if you prefer) either way.
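A minimal sketch of the "solve it" step, assuming the min/max/1-A connectives (the solver and the example system are my own illustration, not from the post). Damped fixed-point iteration finds a consistent assignment:

```python
def solve(fs, n, iters=2000):
    """Damped fixed-point iteration v_i <- (v_i + f_i(v)) / 2,
    starting from the neutral assignment 0.5 everywhere."""
    v = [0.5] * n
    for _ in range(iters):
        v = [0.5 * (vi + f(v)) for vi, f in zip(v, fs)]
    return v

# The Liar sentence, L = not(L), settles at truth value 0.5.
liar = [lambda v: 1.0 - v[0]]

# A hypothetical two-sentence system:
#   X1 = X2 and not(X1)
#   X2 = X1 or X3, with X3 held fixed at truth value 0.8
system = [
    lambda v: min(v[1], 1.0 - v[0]),
    lambda v: max(v[0], 0.8),
]

print(solve(liar, 1))    # settles at 0.5
print(solve(system, 2))  # settles near [0.5, 0.8]
```

With truth-functional connectives every f_i is well-defined, so this is just root-finding; a general nonlinear solver would do as well as iteration.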
Is this really the case?
In fuzzy logic, one requires that the real-numbered truth value of a sentence be a function of the truth values of its constituents (truth-functionality). This is what licenses the "solve it" reply.
If we swap that for probability theory, we don't have that anymore... instead, we've got the constraints imposed by probability theory. The real-numbered value of "A & B" is no longer a definite function F(val(A), val(B)).
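To illustrate (with a made-up pair of distributions): two joint distributions can agree on P(A) and P(B) yet disagree on P(A & B), so no function F(val(A), val(B)) can recover it.

```python
# Distribution 1: A and B independent; marginals P(A) = P(B) = 0.5.
indep = {(True, True): 0.25, (True, False): 0.25,
         (False, True): 0.25, (False, False): 0.25}

# Distribution 2: A and B perfectly correlated; same marginals.
corr = {(True, True): 0.5, (True, False): 0.0,
        (False, True): 0.0, (False, False): 0.5}

def p_a(joint):
    return sum(pr for (a, _), pr in joint.items() if a)

def p_b(joint):
    return sum(pr for (_, b), pr in joint.items() if b)

def p_and(joint):
    return joint[(True, True)]

# Same inputs to any would-be F(val(A), val(B))...
assert p_a(indep) == p_a(corr) == 0.5
assert p_b(indep) == p_b(corr) == 0.5

# ...but different outputs: 0.25 versus 0.5.
print(p_and(indep), p_and(corr))
```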
Maybe this is only a trivial complication... but, I am not sure yet.