Just a thought I had today. I'm sure that it's trivial to the extent that it's correct, but it's a slow work day and I've been lurking here for too long.

Superintelligent AI (or other post-human intelligence) is unlikely to use the concept of "evidence" in the same way we do. It's very hard for neural networks (including human brains) to explain what they "know". The human brain is a set of information-gathering tools plugged into various levels of pattern-recognition systems. When we say we know something, that's an entirely intuitive process: there's no manual tallying going on - the tallying happens deep in our subconscious, upstream of even System 1 thinking.

The idea of scientific thinking and evidence is not gathering more information - it's throwing out all the rest of the information we've gathered. It's saying "I will rely only on these controlled variables to come to a conclusion, because I think that's more trustworthy than my intuition." We do that because our intuitions are optimized for winning tribal social dynamics and escaping tigers.

In fact, it's so hard for neural networks to explain why they know what they know that one suggestion has been a subordinate neural network with read access to the main network, optimized solely for explaining the main network's decisions to humans.
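
As a rough sketch of what that kind of explainer sub-network could look like (a hypothetical toy, not any specific published architecture; all names below are made up): a small head reads the frozen main network's hidden activations and is trained separately to emit something human-legible, such as per-feature importance scores.

```python
# Hypothetical sketch: a main (opaque) network plus a small "explainer"
# network with read-only access to its hidden activations.
import torch
import torch.nn as nn

class MainNet(nn.Module):
    """The opaque model that actually makes the decision."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.body(x)            # hidden activations the explainer may read
        return self.head(h), h

class ExplainerNet(nn.Module):
    """Reads the main net's activations and outputs per-feature importance scores."""
    def __init__(self, n_features, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_features))

    def forward(self, h):
        return self.net(h)

main = MainNet(n_features=10, n_classes=3)
explainer = ExplainerNet(n_features=10)

x = torch.randn(4, 10)
with torch.no_grad():               # read access only: the main net stays frozen
    logits, hidden = main(x)
scores = explainer(hidden)          # trained separately to describe what drove the decision
```

The key design choice is that the explainer has read access only: it can't change what the main network decides, it can only try to describe it.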

The nature of reality is such that diseases are diagnosable (or will be very soon) by neural networks with the help of a ton of uninteresting, uncompelling micro-bits of evidence, such as "people wearing this color shirt/having this color eyes/of this age-gender-race combination have a slightly higher prior for having these diseases". These things, while true in a statistical sense, don't make a compelling narrative that you could encode as Solid Diagnostic Rules (to say nothing of how one could game the system if they were encoded that way).
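
The statistical machinery behind "lots of uncompelling micro-bits of evidence" is just accumulating many small log-odds updates. A minimal sketch, with made-up features and likelihood ratios purely for illustration:

```python
import math

# Toy naive-Bayes-style accumulation of many weak bits of evidence.
# All likelihood ratios below are invented for illustration; none is
# individually compelling, but together they shift the posterior.
prior = 0.01  # baseline rate of the disease

# feature -> likelihood ratio P(feature | disease) / P(feature | no disease)
weak_evidence = {
    "age_band_matches":       1.3,
    "sex_matches":            1.1,
    "eye_colour_correlation": 1.05,
    "region_of_residence":    1.2,
    "reported_sleep_pattern": 1.15,
    "shirt_colour_proxy":     1.02,   # absurd on its own, tiny effect
}

log_odds = math.log(prior / (1 - prior))
for name, lr in weak_evidence.items():
    log_odds += math.log(lr)          # each bit nudges the log-odds slightly

posterior = 1 / (1 + math.exp(-log_odds))
print(f"prior {prior:.3f} -> posterior {posterior:.3f}")
```

No single factor here would be worth a diagnostic rule on its own, yet stacking six of them roughly doubles the posterior in this toy example - and a real model stacks thousands.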

As an example, OpenAI Five is able to outperform top humans at Dota 2, but the programmers have no idea 'why'. They make statements like 'we had OpenAI run a probability analysis based only on the starting hero selection screen, and OpenAI gave itself a 96% chance of winning, so it evidently thinks this composition is very strong.' The actual reason doesn't boil down to human-compatible narratives like "well, they've got a lot of poke and they match up well in lane", which is close to the limit of narrative complexity the human concept of 'evidence' can support.
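
OpenAI Five's internals aren't public at this level of detail, but the "probability from the hero-select screen" number is, in effect, a trained value function queried on the draft state alone. A hypothetical sketch of that interface (the hero count, encoding, and untrained network below are all stand-ins, not OpenAI's actual system):

```python
import torch
import torch.nn as nn

N_HEROES = 117  # approximate Dota 2 hero pool size; the exact number varies by patch

class DraftValueNet(nn.Module):
    """Hypothetical value head: maps a draft (5 heroes per side) to P(win)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_HEROES, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, draft_vec):
        return self.net(draft_vec)

def encode_draft(our_heroes, their_heroes):
    """Two multi-hot vectors concatenated: which heroes each side picked."""
    vec = torch.zeros(2 * N_HEROES)
    for h in our_heroes:
        vec[h] = 1.0
    for h in their_heroes:
        vec[N_HEROES + h] = 1.0
    return vec

model = DraftValueNet()  # in reality this would be a trained network
draft = encode_draft(our_heroes=[4, 17, 42, 88, 101],
                     their_heroes=[7, 23, 55, 60, 99])
win_prob = model(draft.unsqueeze(0)).item()
print(f"estimated win probability from draft alone: {win_prob:.2%}")
```

The point is that the estimate comes out of the weights as a single scalar; nothing inside corresponds to a narrative like "lots of poke, good lanes".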

5 comments:

I wonder if the current state of the art corresponds to the pre-conscious level of evolution, before the internal narrator and self-awareness. Maybe neural networks will soon develop the skill of explaining (or rationalizing) their decisions.

This seems pretty likely. An AI that does internal reasoning will find it useful to have its own opinions on why it thinks things, and those opinions need bear only about as much relationship to its microscopic internal workings as human opinions about thinking do to human neurons.

The concept of evidence that we have in Anglo-American discourse isn't universal across humanity as a whole.

If you go to a good humanities department, people can tell you about knowledges that are structured quite differently.

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as with adding reflection, after which you might excise the cruft.

Being able to describe what you're doing, i.e. usefully compress your existing strategies-in-use, is probably going to be helpful regardless of level of intelligence, because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.

TAG:

There's no general agreement among humans about what constitutes evidence, which is why Aumann's agreement theorem has so little to do with reality. How can two agents be exposed to the same evidence when they don't agree on what constitutes evidence?