All of joracine's Comments + Replies

In other words, I think it's more useful to think of those definitions as an algorithm (perhaps ML): certainty ~ f(risk, uncertainty), with the provided definitions of the driving factors serving as initial values. Users can then refine their thresholds to improve the model's predictive power over time, and also as a function of the class of problem (e.g., climate vs software).
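A minimal sketch of that framing (purely illustrative; the feature encoding, weights, and update rule below are my assumptions, not anything stated in the comment): treat "certainty" as the output of a small model over risk/uncertainty features, seeded with hand-picked initial values and nudged as feedback about outcomes comes in.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    risk: float         # how well-characterised the probabilities are (0..1) - assumed encoding
    uncertainty: float  # how poorly-characterised the underlying model is (0..1) - assumed encoding

class CertaintyModel:
    def __init__(self, w_risk=0.7, w_uncertainty=-0.5, threshold=0.5):
        # Initial values play the role of the textbook definitions.
        self.w_risk = w_risk
        self.w_uncertainty = w_uncertainty
        self.threshold = threshold

    def certainty(self, e: Estimate) -> float:
        # certainty ~ f(risk, uncertainty); here f is just a clamped linear combination.
        raw = 0.5 + self.w_risk * e.risk + self.w_uncertainty * e.uncertainty
        return max(0.0, min(1.0, raw))

    def trust(self, e: Estimate) -> bool:
        return self.certainty(e) >= self.threshold

    def refine(self, e: Estimate, was_reliable: bool, lr: float = 0.05) -> None:
        # Users refine the weights as outcomes come in, possibly keeping a
        # separate model per problem class (e.g. climate vs software).
        error = (1.0 if was_reliable else 0.0) - self.certainty(e)
        self.w_risk += lr * error * e.risk
        self.w_uncertainty += lr * error * e.uncertainty

model = CertaintyModel()
estimate = Estimate(risk=0.8, uncertainty=0.3)
print(model.certainty(estimate), model.trust(estimate))
model.refine(estimate, was_reliable=True)
```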

MichaelA
I think I agree with substantial parts of both the spirit and specifics of what you say. Your comments have definitely furthered my thinking, and it's quite possible I'd now write this quite differently, were I to do it again. But I also think you're perhaps underestimating the extent to which risk vs uncertainty very often is treated as an absolute dichotomy, with substantial consequences. I'll now attempt to lay out my thinking in response to your comments, but I should note that my goal isn't really to convince you of "my side", and I'd consider it a win to be convinced of why my thinking is wrong (because then I've learned something, and because that which can be destroyed by the truth should be, and all that).

From memory, I think I agreed with basically everything in Eliezer's sequence A Human's Guide to Words. One core point from that sequence seems to closely match what you're saying: it's useful to have words to point to clusters in thingspace, because it'd be far too hard to try to describe, for example, a car on the level of fundamental physics. So instead we use labels and abstractions, and accept there'll be some fuzzy boundaries and edge cases (e.g., some things that are sort of like cars and sort of like trucks).

One difference worth noting between that example and the labels "risk" and "uncertainty" is that risk and uncertainty are like two different "ends" or "directions" of a single dimension in thingspace. (At least, I'd argue they are, and it's possible that that has to be based on a Bayesian interpretation of probability.) So here it seems to me it'd actually be very easy to dispense with having two different labels. Instead, we can just have one label for the dimension as a whole (e.g., "trustworthy", "well-grounded", "resilient"; see here), and then use that in combination with "more", "less", "extremely", "hardly at all", etc., and we're done. We can then very clearly communicate the part that's real (that reflects the territory) from when we tr
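One way to picture that single-dimension point (my own hedged sketch, not something MichaelA wrote; the field names and thresholds are invented): instead of tagging an estimate with a risk/uncertainty dichotomy, attach one continuous "resilience" score and express everything else with qualifiers.

```python
from dataclasses import dataclass

@dataclass
class ProbabilityEstimate:
    value: float       # the probability itself
    resilience: float  # 0.0 = barely grounded (the "Knightian uncertainty" end)
                       # 1.0 = very well grounded (the "risk" end)

def describe(e: ProbabilityEstimate) -> str:
    # Qualifiers replace the dichotomy: "more", "less", "extremely", "hardly at all".
    if e.resilience > 0.8:
        qualifier = "highly resilient"
    elif e.resilience > 0.5:
        qualifier = "fairly resilient"
    elif e.resilience > 0.2:
        qualifier = "only weakly grounded"
    else:
        qualifier = "hardly grounded at all"
    return f"an estimate of {e.value:.2f} that is {qualifier}"

print(describe(ProbabilityEstimate(value=0.17, resilience=0.9)))
print(describe(ProbabilityEstimate(value=0.17, resilience=0.1)))
```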

For the most part, you seem to spend a lot of time trying to discover whether terms like "unknown probability" and "known probability" make sense. Yet those are language artifacts which, like everything in language, are merely the output of a classification algorithm used as a means to communicate abstractions. Each class primarily represents its dominating modes, but becomes increasingly useless at the margins. As such, you yourself make a false dichotomy by trying to discuss whether these terms are useful or not by showing that at the border they might fail: they fail, and
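A toy illustration of the "dominating modes vs margins" point (my sketch, with made-up numbers): a two-class setup is confident near the class centres and close to useless near the boundary, which is exactly where arguing over the labels stops paying off.

```python
import math

# Toy two-class setup: each "class" (e.g. risk vs uncertainty) is modelled as a Gaussian mode.
# Near a mode the label is informative; near the boundary it tells you little.
def gaussian(x: float, mean: float, std: float = 1.0) -> float:
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2))

def label_confidence(x: float, mean_a: float = 0.0, mean_b: float = 4.0) -> float:
    a, b = gaussian(x, mean_a), gaussian(x, mean_b)
    return max(a, b) / (a + b)  # 1.0 = clearly one class, 0.5 = the margin

for x in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"x={x:.1f}  confidence in the dominant label: {label_confidence(x):.2f}")
```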
