
# Capla comments on Mysterious Answers to Mysterious Questions - Less Wrong

72 25 August 2007 10:27PM


Comment author: 17 October 2014 04:33:57PM *  0 points [-]

> If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself.

I completely accept and (I think) understand this; however, there are some phenomena that cannot, by their nature, be known.

A typical example is Cantor's proof that it is impossible to prove that there are "mid-sized infinities". More generally, Godel's incompleteness theorems prove that some things are forever unknowable. (If I'm misunderstanding or misrepresenting, enlighten me. I'm no mathematician.)

More controversially, I suspect that consciousnesses may present a similar problem (for different reasons).

These might be described as inherently mysterious phenomena.

Comment author: 11 November 2014 06:59:23AM *  1 point [-]

Hi Capla - no, that is not what Godel's theorem says (actually there are two incompleteness theorems):

1) Godel's theorems don't talk about what is knowable - only about what is (formally) provable in a mathematical or logical sense

2) The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by any sort of algorithm is capable of proving all truths about the relations of the natural numbers. In other words, for any such system there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.

3) This doesn't mean that some things can never be proven - although it provides some challenges. It does mean that we cannot create a system that is consistent within itself and can demonstrate or prove (algorithmically) all things that are true for that system
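For readers who want the precise shape of the claims in 2), the two theorems are often written along these lines (a standard textbook formulation, not a quote from Godel; T stands for any consistent, effectively axiomatized theory containing enough arithmetic, and Con(T) for its consistency statement):

```latex
% First incompleteness theorem: there is a sentence G_T that T can
% neither prove nor refute, even though G_T is true of the natural numbers.
\mathrm{Con}(T) \implies \exists\, G_T :\; T \nvdash G_T \;\text{ and }\; T \nvdash \neg G_T

% Second incompleteness theorem: such a T cannot prove its own consistency.
\mathrm{Con}(T) \implies T \nvdash \mathrm{Con}(T)
```

Note that both statements are conditional on consistency and on effective axiomatizability - dropping either hypothesis makes the theorems fail, which is why they constrain formal provability rather than knowability in general.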

This creates some significant challenges for AI and consciousness - but perhaps not insurmountable ones.

For example - as far as I know - Godel's theorems rest on classical logic. Quantum logic, where something can be both "true" and "not true" at the same time, may provide some different outcomes

Regarding consciousness - I think I would agree with the thrust of this post: that we cannot yet fully explain or reproduce consciousness (hell, we have trouble defining it) does not mean that it will forever be beyond reach. Consciousness is only mysterious because of our lack of knowledge of it

And we are learning more all the time

We are starting to unravel some of the mechanisms by which consciousness emerges from the brain - since consciousness appears to be a process phenomenon rather than a physical property

Comment author: 11 November 2014 04:34:36PM 0 points [-]

Thank you. I'm a little bit more informed.

My issue with consciousness involves p-zombies. Any experiment that wanted to understand consciousness would have to be able to detect it, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see whether consciousness is present or not, depending on the manipulated variable. We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment in which we are investigating consciousness.

As Eliezer points out, that an individual says he's conscious is a pretty good signal of consciousness, but we can't necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure (humans have a survival advantage in the social sharing of internal realities). On the flip side, a savvy but non-conscious AI may talk about its "internal states" because it is guessing the teacher's password in the realest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping them will accomplish its goals. I don't know how we could possibly tell whether the AI is aping consciousness for its own ends or actually is conscious. If consciousness is thus undetectable, I can't see how science can investigate it.

That said, I am very well aware that “Throughout history, every mystery ever solved has turned out to be not magic,” and that every single time something has seemed inscrutable to science, a reductionist explanation has eventually surfaced. Knowing this, I have to seriously downgrade my confidence that "No, really, this time it is different. Science really can't pierce this veil." I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.