SforSingularity comments on Formalizing informal logic - Less Wrong

Post author: Johnicholas 10 September 2009 08:16PM




Comment author: SforSingularity 12 September 2009 12:05:04AM 2 points

Reading this comment makes me think that the problem of building formal tools to aid inference is, um, not exactly a million miles away from the object-level problem of building AGI: hierarchical models, Bayes nets, meta-arguments about the reliability of approximations. Perhaps the next thing we'll be asking for is some way to do interventions in the real world to support causal reasoning?
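As an illustration of the kind of Bayes-net-style reasoning mentioned here, a minimal sketch with made-up probabilities (the variables and numbers are purely hypothetical, not from the thread): how much should a passing spot-check raise our confidence that a long argument is sound?

```python
# Two-node Bayes net sketch (hypothetical numbers).
# H = "the argument is sound", E = "a spot-check passes".
p_h = 0.5               # prior P(H)
p_e_given_h = 0.95      # P(E | H): sound arguments usually pass spot-checks
p_e_given_not_h = 0.40  # P(E | not H): unsound ones sometimes pass too

# Posterior P(H | E) by Bayes' rule.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # ~0.704: the spot-check helps, but only somewhat
```

Even with these generous assumptions, one passing spot-check moves a 50% prior to only about 70%, which is one way to make precise the worry about accepting giant proofs on spot-checks alone.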

Comment author: Johnicholas 12 September 2009 01:44:58AM 2 points

What evidence would convince you personally that some purported Seed AGI was in fact Friendly?

Sound arguments are central to Friendly AGI research. The arguments cannot be too long, either. If someone hands you a giant proof and you accept it as evidence without understanding it, then your implicit argument is something like "from expert testimony" or "long proofs that look valid to spot-checks are likely to be sound".

Comment author: Steve_Rayhawk 12 September 2009 02:01:36AM 0 points

Perhaps the next thing we'll be asking for is some way to do interventions in the real world to support causal reasoning?

Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making? These would be a special case of formal arguments that someone should do something.

Do you mean that if someone automated the experiments there might be an AI danger? I know a lot of arguments for ways there could be dangers similar to AI dangers from these kinds of tools, but they are mostly about automation of the reasoning.

Comment author: SforSingularity 12 September 2009 02:54:44PM 0 points

Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making?

Yes, for example.

I was just pointing out that the two are similar problems: improving human group decision-making with formal tools versus making a thinking machine. The implications of this are complex, and could include both dangers and benefits. One benefit is that as AI gets more advanced, better decision-making tools will become available. For example, the notion of a "cognitive bias" probably depends causally on our having a notion of normative rationality to compare humans against.
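The "experiment whose results would improve future decision-making" idea discussed above can be made concrete as a value-of-information calculation. A minimal sketch, with a hypothetical two-action, two-state decision problem (all payoffs and probabilities invented for illustration):

```python
# Value-of-information sketch (hypothetical numbers): is it worth running
# an experiment before choosing between actions A and B?
p_world = 0.6  # prior P(world is in state S)

# Utilities: u[(action, state)] where state is True (S) or False (not-S).
u = {("A", True): 10, ("A", False): 0,
     ("B", True): 2,  ("B", False): 8}

def expected_utility(action, p):
    """Expected utility of an action given P(S) = p."""
    return u[(action, True)] * p + u[(action, False)] * (1 - p)

# Without the experiment: commit now to the action with higher expectation.
best_without = max(expected_utility("A", p_world),
                   expected_utility("B", p_world))

# With a (perfectly informative) experiment: learn the state first,
# then pick the best action in each state.
best_with = (p_world * max(u[("A", True)], u[("B", True)])
             + (1 - p_world) * max(u[("A", False)], u[("B", False)]))

value_of_information = best_with - best_without
print(value_of_information)  # positive => the experiment is worth running
```

A formal argument that "someone should do this experiment" then reduces to showing the value of information exceeds the experiment's cost, which is exactly the special case of "formal arguments that someone should do something" described above.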