Comments

dmfdmf · 10

Of course my motives are irrelevant here, but for the record, I am trying to understand epistemology and its application to myself and, ultimately, to AI. How about you, what are your motives?

Not knowing the exact details of where the primacy-of-consciousness (PoC) flaw lies in QM is not a devastating criticism of my point, though your tone seems to suggest you think it is. Why does the USPTO no longer accept applications for perpetual motion machines? Because such a machine would violate the first and/or second laws of thermodynamics; there is no need to dig further into the details. This is just how principles work: once a fundamental error is identified, that's it, end of discussion... unless I were a physicist and wanted to dig in and take a crack at resolving the QM quandaries, which I do not. Jaynes left us a pretty large clue that the PoC error probably lies in the misuse of probability theory, as he described. As a non-physicist, that's all I need to know (and more).

dmfdmf · -10

It's not an explicit form of the Primacy of Consciousness (PoC), like prayer or wishing; it's implicit in QM and its basic premises. One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality and treat them as metaphysical rather than epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or in the subject. This went round and round for a few hundred years until someone pointed out that it's both (the form/object distinction).

Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists, who treat probabilities as a property of things (a metaphysical concept) instead of a measure of our lack of knowledge (an epistemological, Bayesian concept). Moreover, committing the PoC error will eventually lead you to supernaturalism, so MWI is just a logical outcome of that error.
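
To make that distinction concrete, here is a minimal sketch (not from Jaynes' text; the helper and the uniform prior are illustrative assumptions) of probability as a description of an observer's knowledge: two observers who have seen different flips of the same coin assign different probabilities to the same next flip.

```python
# A toy illustration (not from Jaynes' text) of probability as a measure of
# an observer's knowledge rather than a property of the coin: two observers
# with different data assign different probabilities to the *same* next flip.
from fractions import Fraction

def posterior_prob_heads(flips):
    """P(next flip = heads) under a uniform prior on the coin's bias,
    via Laplace's rule of succession: (heads + 1) / (n + 2)."""
    heads = sum(1 for f in flips if f == "H")
    return Fraction(heads + 1, len(flips) + 2)

print(posterior_prob_heads(["H", "H", "T", "H"]))  # observer A: 2/3
print(posterior_prob_heads([]))                    # observer B: 1/2
```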

dmfdmf · -20

Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule of thumb, quantum computing, Big Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Big Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
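
As a rough illustration of what "going down all the possible branches" means, here is a minimal sketch of exhaustive game-tree search; the `Node` class and the hand-built tree are hypothetical, invented only for the example. The program mechanically visits every branch and backs up scores, with no theory of knowledge involved.

```python
# A toy, exhaustive minimax search: visit every branch of a small game tree
# and back up the scores. The Node class and the hand-built tree below are
# hypothetical; they only illustrate brute-force branch enumeration.
class Node:
    def __init__(self, score=None, children=()):
        self._score, self._children = score, list(children)
    def is_terminal(self):
        return not self._children
    def score(self):
        return self._score
    def children(self):
        return self._children

def minimax(node, maximizing=True):
    """Return the value the maximizing player can force by exhaustive search."""
    if node.is_terminal():
        return node.score()
    values = [minimax(child, not maximizing) for child in node.children()]
    return max(values) if maximizing else min(values)

# Leaves score +1 (win), 0 (draw), -1 (loss) for the maximizer.
tree = Node(children=[Node(children=[Node(1), Node(-1)]),
                      Node(children=[Node(0), Node(1)])])
print(minimax(tree))  # 0: the best the maximizer can force on this tree
```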

I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bathwater. It seems to me that MWI commits the mind projection fallacy, or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrasing) that all these interpretations of QM just differ on where they hide the contradictions... they are all unsatisfactory, and it will take a genius to figure it out.