I'm not sure this is an exact match to your question but it sounds like maybe what you're looking for is something like Solomonoff induction.
In Bayes, the subjectivity comes from choosing priors. Solomonoff induction includes an objective way to calculate the priors (see also Kolmogorov complexity). Unfortunately it isn't actually computable - I asked a somewhat similar question last year which has some answers about this.
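To make the idea concrete, here's a toy sketch (not real Solomonoff induction, which is uncomputable): each hypothesis gets prior weight 2^(-L), where L is the length in bits of its shortest description, so simpler hypotheses start with more prior mass. The hypothesis names and bit lengths below are made up for illustration.

```python
def solomonoff_style_prior(hypotheses):
    """hypotheses: dict mapping hypothesis name -> description length in bits.
    Returns normalized prior probabilities favoring shorter descriptions."""
    weights = {name: 2.0 ** -length for name, length in hypotheses.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical description lengths for three coin-flip hypotheses:
priors = solomonoff_style_prior({
    "always heads": 5,      # very short program
    "fair coin": 8,         # a bit longer
    "complex pattern": 20,  # long program
})
# Shorter descriptions get larger priors:
assert priors["always heads"] > priors["fair coin"] > priors["complex pattern"]
```

The hard (uncomputable) part that this sketch skips is finding the shortest program for each hypothesis - that's Kolmogorov complexity, and there is no general algorithm for it.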
I asked a follow-up question regarding complexity whose answers were super useful to my understanding of these kinds of things - particularly the sequence which johnswentworth wrote.
That was really interesting. Some of it was a little too technical for me, but hopefully I can spend some time learning some of the parts that threw me and see if I can figure out exactly how close that is.
My first impression is that that would be the microscopic view of one part of the whole model. I actually had in mind something much more basic, but where that level of complexity could be added slowly as the overall model is built. It's a kind of never-ending project that improves its accuracy as more is added to it.
In one imaginary iteration of this, I just hire people to do that level of work for me and tell me what the answer is.
Anyway, thanks.
Your first problem is that you need a theory for just how do statements relate to the state of the world. Have you read Wittgenstein's Philosophical Investigations?
Overall, this basically sounds like analytic philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what's really going on, you end up in the same morass where AI research and modern philosophy are stuck.
Thanks for the reply
I haven't read anything besides overviews of (or takes on) Wittgenstein, but if you think it's worthwhile I'll definitely give it a shot.
I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?
I really am mostly just anxious not to waste my time on things that have been done before and failed.
I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?
You might want to take a look at the A Human's Guide to Words sequence. (Or, for a summary, see just the last post in that sequence: 37 Ways That Words Can Be Wrong.)
I read "37 ways...". Thanks. I think I understand what you mean now.
I think those would definitely be the sorts of problems I would run into if I were to do this via a philosophy PhD (something I've thought about, but don't think I'm very likely to pursue) or by building an AI algorithm.
They are problems I would need to be cognizant of, but I think I have a workaround that still lets me create something valuable - maybe just not something that would satisfy philosophers.
The problem is that we think statements have a somewhat straightforward relation to reality because we can generally make sense of them quite easily. In reality, that ease comes from a lot of hidden work our brain does, being smart on the spot every time it needs to fit a given sentence to a given state of reality. Nobody really appreciated this until people started trying to build AIs that do anything similar, and repeatedly ended up with systems unable to distinguish between the realistically plausible and incoherent nonsense.
I'm not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onwards despite thousands of extremely clever people putting their minds to it. The sequences ESrogs suggests in the sibling reply also deal with stuff like this.
I think that means I'm [...] bad at describing/ searching for what I'm looking for.
One thing that might help, in terms of understanding what you're looking for, is -- how do you expect to be able to use this "model of ranking"?
It's not quite clear to me whether you're looking for something like an algorithm - where somebody could code it up as a computer program, you could feed in sentences, and it would spit out scores - or something more like a framework or rubric - where the work of understanding and evaluating sentences would still be done by people, but they could use the framework/rubric as a guide to decide how to rate the sentences - or something else.
Definitely the "framework or rubric" option. More like a rubric than anything else, but with some fun nuance here and there. Work would be done by humans, but all following the same rules.
There are a number of ways I would like to use it in the future, but in the most immediate, practical sense, what I'm working on is a plan to create internet content that answers people's questions (via Google, Siri, Alexa, etc.) but makes declarative statements about the quality of the information used to create those answers.
So for example, right now (02/08/20), if somebody asks Google "does the MMR vaccine cause autism?" you get this page:
Which is a series of articles from various sites, all pointing you in the direction of the right answer, but ultimately dancing around it and really just inviting you to make up your own mind.
What I would want to do is create content that directly answers even difficult questions, and trades the satisfaction of a direct answer for the intellectual work of making you think about the quality rating we give it.
Creating a series of rules that gets to the heart of how the quality of evidence varies for different types of claims is obviously quite difficult. I think I've found a way to do it, but I would really like to know if it's been tried before and failed for some reason, or if someone has a better or faster way than mine.
I think that my way around the problems mentioned in the above replies is just conceding from the start that my model is not and can never be a perfect representation of the world. However, if it's done well enough it could bring a lot of clarity to a lot of problems.
Ah! It's much clearer to me now what you're looking for.
Two things that come to mind as vaguely similar:
1) The habit of some rationalist bloggers of flagging claims with "epistemic status". (E.g. here or here)
2) Wikipedia's guidelines for verifiability (and various other guidelines that they have)
Of course, neither is exactly what you're talking about, but perhaps they could serve as inspiration.
I'm glad I managed to finally be understandable. Part of the problem is that my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet. The other problem is that I'm frequently straying into academic territories I don't know very well, so I think I tend to describe it with words that are probably not the correct ones.
Thanks for those. It was interesting to see how some other people have approached the problem, and if nothing else it tells me that others are trying to take the epistemology of everyday discourse seriously, so hopefully there will be an appetite for my version.
my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet
FWIW, it may be worth keeping in mind the Silicon Valley maxim that ideas are cheap, and execution is what matters. In most cases you're far more likely to make progress on the idea if you get it out into the open, especially if execution at all depends on having collaborators or other supporters. (Also helpful to get feedback on the idea.) The probability that someone else successfully executes on an idea that you came up with is low.
I've heard similar things and agree completely. It's just difficult to fight the impulse to bury away the details!
Hi
Sorry if diving in with my question is a breach of your etiquette, but I have a kind of burning question I was hoping some of you guys could help me with. I've been reading the core texts and clicking around but can't quite figure out if this has been covered before.
Does anyone know of any previous attempts at building a model for ranking the quality of statements? By which I mean ranking things like epistemic claims, claims about causation, and that kind of thing - something that aims to distill the complexity of the degrees of certainty and doubt we should have into something simple, like a number? Really importantly, I mean something that would be universally applicable and objective (or something like it), not just based on an estimate of one's own subjective certainty (my understanding of Bayesian reasoning and Alvin Goldman-style social epistemology).
I've been working on something like that for a couple of years as a kind of hobby. I've read a lot on adjacent subjects (probability, epistemology, social psychology) but never found anything that seems like an attempt to do that.
I think that means I'm either a unique genius, a crazy person, or bad at describing/searching for what I'm looking for. Option 1 seems unlikely, option 2 is definitely possible, but I suspect option 3 is the real one. Does anyone know of any work in this area they can point me towards?
Cheers - M