I've been looking for something like this for a long time now. I hope Arbital can be the platform that does it well.
I think the Semantic Web solves a lot of this, and could solve all of it.
Some argument-structuring tools using AIF - I haven't evaluated their UIs yet.
The Provenance ontology (W3C PROV-O) lets us assert how a piece of information was produced and how it has been evaluated.
The goal should be the ability to filter conclusions based on arbitrary epistemic standards.
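To make the filtering idea concrete, here's a rough sketch with Python's rdflib. The prov: terms (wasGeneratedBy, wasDerivedFrom, wasAssociatedWith) are real W3C PROV-O vocabulary; everything under ex: - the claims, the activities, and the "meta-analysis with an accountable agent" standard - is a placeholder I made up for illustration.

```python
# Sketch: assert provenance for two conclusions, then filter by an
# epistemic standard expressed as a SPARQL query. (pip install rdflib)
from rdflib import Graph, Literal, Namespace, RDF, RDFS

PROV = Namespace("http://www.w3.org/ns/prov#")      # real W3C PROV-O
EX = Namespace("http://example.org/arbital/")        # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# A conclusion, the activity that produced it, and who is accountable.
g.add((EX.conclusion1, RDF.type, EX.Claim))
g.add((EX.conclusion1, RDFS.label, Literal("Deworming improves school attendance")))
g.add((EX.conclusion1, PROV.wasGeneratedBy, EX.metaAnalysis1))
g.add((EX.metaAnalysis1, RDF.type, PROV.Activity))
g.add((EX.metaAnalysis1, RDF.type, EX.MetaAnalysis))
g.add((EX.metaAnalysis1, PROV.wasAssociatedWith, EX.reviewerAlice))

# A second conclusion with weaker provenance: derived from a blog post.
g.add((EX.conclusion2, RDF.type, EX.Claim))
g.add((EX.conclusion2, RDFS.label, Literal("Deworming has no lasting effect")))
g.add((EX.conclusion2, PROV.wasDerivedFrom, EX.blogPost7))

# One possible epistemic standard: only show claims generated by a
# meta-analysis that some agent is accountable for.
results = g.query("""
    PREFIX prov: <http://www.w3.org/ns/prov#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/arbital/>
    SELECT ?claim ?label WHERE {
        ?claim a ex:Claim ;
               rdfs:label ?label ;
               prov:wasGeneratedBy ?activity .
        ?activity a ex:MetaAnalysis ;
                  prov:wasAssociatedWith ?agent .
    }
""")
for claim, label in results:
    print(claim, label)   # only ex:conclusion1 passes this filter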
In Jimrandomh's scenario, one could create one's own argument structure, linking to the appropriate concepts, and linking to the existing argument only where appropriate - possibly only at the conclusion. I think AIF would also allow citing any post or comment as an argument, so freeform material can still be connected even if it isn't automatically filterable. Other people could do the structuring and validation.
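For the "cite any post or comment" case, the sketch below models AIF-style information nodes (I-nodes) and inference nodes (RA-nodes). One caveat: I haven't checked the canonical AIF ontology IRIs, so the aif-sketch namespace and its class and property names are stand-ins I invented, not the published vocabulary.

```python
# Sketch: citing a freeform comment as a premise in an AIF-style graph.
from rdflib import Graph, Namespace, RDF, URIRef

AIFX = Namespace("http://example.org/aif-sketch#")  # stand-in, not the published AIF IRI
EX = Namespace("http://example.org/arbital/")

g = Graph()

# The freeform comment becomes an information node as-is; none of its
# prose needs restructuring for it to participate in the graph.
comment = URIRef("http://example.org/comments/4083")  # hypothetical comment URL
g.add((comment, RDF.type, AIFX.InformationNode))

# An inference step uses the comment as a premise for a structured
# conclusion; someone else can refine or validate this link later.
g.add((EX.inference1, RDF.type, AIFX.RuleApplicationNode))
g.add((EX.inference1, AIFX.hasPremise, comment))
g.add((EX.inference1, AIFX.hasConclusion, EX.conclusion1))
```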
The novel David's Sling by Marc Stiegler features a hyper-rationality organization (kind of like a CFAR CEV), and this organization has a truth/argument-mapping process. Notably, it involves both a software component and a skill/form that people are trained in. It could be seen as the thing Double Crux wants to evolve into.
If you're thinking about this problem and you like to read novels, I'd recommend it. It doesn't go into enough detail for me to call it a necessary source, though.
I've come across a number of argument-structuring tools in the past. I think doing this right is much harder than people give it credit for.
The core problem is that most of the action in a quality discussion doesn't actually consist of claims and counterclaims. For example, I went to TruthSift, and the first claim I found on its randomized front page was "vaccines are safe" (http://truthsift.com/search_view?topic=Are-Vaccines-Safe-?&id=406&nid=4083). The response I wanted to make is that "vaccine" is too broad a category; a well-managed discussion would first clarify that it was limited to the sort of FDA-approved vaccines one would normally be prescribed, not random research chemicals, then break into sections about specific vaccines and vaccine additives, the trustworthiness of the FDA approval process, and so on. But TruthSift wouldn't let me say that: first because I'd have to somehow mash it into a pro-and-con structure, and second because I'd have to merge it into an incompatible structure someone else had set up. This is entirely representative of my experiences with argument-structuring sites: I show up, find an elaborate structure that partitions the question in a way I think is wrong, and bounce.
Arbital is currently better because it doesn't try to structure everything; it leaves space for all the hard-to-tag irregular stuff to happen in comment threads. I think there's a large valley between comment threads and a structure that can incorporate all the irregular things that happen in discussions, and TruthSift's mistake was trying to squeeze out the irregular stuff.
After reading the Doc(tm), I think there is still design space to explore. For most readers, and many authors, keeping track of the points in an argument is actually pretty hard mental work. Just a little help would make them better at it.
The trick would be to create "just enough" structure that is easy to fold into the process of authoring a long post or writing a quick comment. I don't think we need to reproduce all the elements of formal analysis here (and if we tried, it would be unusable).
In the end, I might argue myself back into the model where there are just "claims", not "claims" and "evidence".
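If it helps to see how small that structure could be, here's a toy sketch in Python. All the names, relation types, and example claims are made up for illustration, not a proposed Arbital schema.

```python
# Sketch of the "just claims" model: every node is a claim, and "evidence"
# is nothing more than a typed link between claims.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    links: list["Link"] = field(default_factory=list)

@dataclass
class Link:
    relation: str    # e.g. "supported-by", "opposed-by", "scoped-by"
    target: "Claim"

root = Claim("FDA-approved vaccines, as normally prescribed, are safe")
study = Claim("Study X found no elevated risk in cohort Y")  # hypothetical study
scope = Claim("'Vaccines' is too broad a category to judge as one claim")

# Evidence is just a supporting claim; a scope objection fits the same shape.
root.links.append(Link("supported-by", study))
root.links.append(Link("scoped-by", scope))
```

The appeal is that the writer only ever answers one question - what claim is this, and how does it relate to the claim it's attached to? - which feels closer to the "just enough" target than a full claims/evidence/warrant taxonomy.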
Very likely, the fact that we haven't seen a solution to this problem means the solutions that do exist are not low-effort. This is part of Eliezer's argument (as you'll probably gather from the doc).
Great analysis of the problems with TruthSift. Perhaps we should start a list of irregular-but-natural behaviors that Arbital needs to handle gracefully. Hmm, I wonder what kind of page that should go on?
Can you point to other argument-structuring tools?
I see the point about this being a hard problem - it increases the likely investment needed to get the return, so my "high ROI for improving discussion quality" claim is at risk. But if we keep our eyes out for a low-effort way to solve the problem, the return still feels high.
I basically agree with everything here.