Hi
Sorry if diving in with my question is a breach of your etiquette, but I have a kind of burning question I was hoping some of you guys could help me with. I've been reading the core texts and clicking around but can't quite figure out if this has been covered before.
Does anyone know of any previous attempts at building a model for ranking the quality of statements? By which I mean ranking things like epistemic claims, claims about causation, and that kind of thing: something that aims to distill the complexity of the degrees of certainty and doubt we should have into something simple like a number. Really importantly, I mean something that would be universally applicable and objective (or something like it), not just based on an estimate of one's own subjective certainty (my understanding of Bayesian reasoning and Alvin Goldman-style social epistemology).
I've been working on something like that for a couple of years as a kind of hobby. I've read a lot on adjacent subjects (probability, epistemology, social psychology) but have never found anything that seems like an attempt to do that.
I think that means I'm either a unique genius, a crazy person, or bad at describing/searching for what I'm looking for. Option 1 seems unlikely, option 2 is definitely possible, but I suspect option 3 is the real one. Does anyone know of any work in this area they can point me towards?
Cheers - M
Definitely the "framework or rubrik" option. More like a rubrik than anything else, but with some fun nuance here and there. Work would be done by humans but all following the same rules.
There are a number of ways I would like to use it in the future, but in the most immediate, practical sense, what I'm working on is a plan to create internet content that answers people's questions (via Google, Siri, Alexa, etc.) but makes declarative statements about the quality of the information used to create those answers.
So for example, right now (02/08/20), if somebody asks Google "does the MMR vaccine cause autism?" you get this page:
https://www.google.com/search?q=does+the+MMR+vaccine+cause+autism%3F&oq=does+the+MMR+vaccine+cause+autism%3F&aqs=chrome..69i57j0.9592j1j8&sourceid=chrome&ie=UTF-8
Which is a series of articles from various sites, all pointing you in the direction of the right answer but ultimately dancing around it and really just inviting you to make up your own mind.
What I would want to do is create content that directly answers even difficult questions, offering the satisfaction of a direct answer in exchange for the intellectual work of thinking about the quality rating we give it.
Creating a series of rules that gets to the heart of how the quality of evidence varies for different types of claims is obviously quite difficult. I think I've found a way to do it, but I would really like to know if it's been tried before and failed for some reason, or if someone has a better or faster way than mine.
I think my way around the problems mentioned in the above replies is just conceding from the start that my model is not, and can never be, a perfect representation of the world. However, if it's done well enough, it could bring a lot of clarity to a lot of problems.
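To make the "rubric with rules" idea a bit more concrete, here's a toy sketch of what a human-applied rubric could look like once you force it into code. To be clear, everything in it is invented for illustration: the criteria, the weights, the 0-5 scale, and the rate_claim function are placeholders, not the actual model I've been working on.

```python
# Toy sketch only: criteria, weights, and scores are invented placeholders,
# not the rubric described above. A human fills in each score by following
# fixed rules; the code just collapses those scores into one number.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str      # e.g. "independent replication"
    weight: float  # relative importance within the rubric
    score: int     # human-assigned score on a fixed 0-5 scale

def rate_claim(criteria: list[Criterion]) -> float:
    """Collapse per-criterion scores into a single 0-1 quality rating."""
    max_score = 5
    total_weight = sum(c.weight for c in criteria)
    weighted = sum(c.weight * (c.score / max_score) for c in criteria)
    return weighted / total_weight

# Hypothetical example: rating "the MMR vaccine causes autism"
claim_criteria = [
    Criterion("study design of supporting evidence", weight=3, score=0),
    Criterion("independent replication", weight=2, score=0),
    Criterion("expert consensus", weight=2, score=0),
]
print(rate_claim(claim_criteria))  # -> 0.0, i.e. lowest possible quality
```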