Probably mostly to learn things - though you would have to consult my shrink for more details. Of course I'm not doing that in this thread - I guess that here I'm trying to help you out on this issue while showing that I know what I'm talking about. Maybe someday, someone can return the favour - if they see me talking nonsense.
Or maybe it's just a case of:
http://mohel.dk/grafik/andet/Someone_Is_Wrong_On_The_Internet.jpg
Jaynes' criticism doesn't apply to the MWI. The MWI doesn't involve probabilities - it's a deterministic theory:
Shouldn't this cartoon be revised to "Someone is more wrong on the Internet"?
BTW, got slammed with work but as soon as I get the chance I am going to reply to comments. Thank you for your patience.
So: you know all about the mind projection fallacy - but don't seem to be able to find a coherent way to link it to the MWI, even though you seem to want to do that. I don't know what your motives are - and so don't see the point.
Of course my motives are irrelevant here, but for the record, I am trying to understand epistemology and its application to myself and, ultimately, to AI. How about you, what are your motives?
Not knowing the exact details of where the PoC flaw is in QM is not a devastating criticism of my point, though your tone seems to suggest that you think it is. Why does the USPTO no longer accept applications for perpetual motion machines? Because they violate the first and/or second laws of thermodynamics; there is no need to dig further into the details. This is just how principles work: once a fundamental error is identified, that's it, end of discussion... unless I were a physicist and wanted to dig in and take a crack at resolving the QM quandaries, which I do not. Jaynes left us a pretty large clue that the PoC error probably lies in the misuse of probability theory, as he described. As a non-physicist, that's all (and more) than I need to know.
Neither consciousness nor mind are primary in the MWI - so I can't see where you are getting that from.
It's not an explicit form of the Primacy of Consciousness like prayer or wishing. It's implicit in QM and its basic premises. One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality, treating them as metaphysical rather than epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or the subject. This went round and round for a few hundred years until someone pointed out that it's both (the form/object distinction).
Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists who hold probabilities to be a property of things (a metaphysical concept) instead of a measure of our lack of knowledge (an epistemological, Bayesian concept). Moreover, committing the PoC error will lead you to supernaturalism eventually, so the MWI is just a logical outcome of that error.
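Jaynes' distinction can be made concrete with a toy calculation (a sketch of my own; the two-hypothesis coin model and the 0.8 bias figure are illustrative assumptions, not anything from the thread). The coin's physical makeup never changes, yet the probability we assign to it changes with every observation, because that probability describes our state of knowledge, not the coin:

```python
def posterior_biased(prior_biased, flips):
    """Update P(coin is biased) after observing a sequence of flips.

    Hypothetical two-hypothesis model: the coin is either fair
    (P(H) = 0.5) or biased toward heads (P(H) = 0.8).
    """
    p = prior_biased  # our belief before seeing any data
    for flip in flips:
        like_biased = 0.8 if flip == "H" else 0.2
        like_fair = 0.5
        # Bayes' rule: posterior is proportional to likelihood times prior
        p = (like_biased * p) / (like_biased * p + like_fair * (1 - p))
    return p

# Same coin, different knowledge, different probability:
print(posterior_biased(0.5, []))          # no evidence yet: 0.5
print(posterior_biased(0.5, ["H"] * 5))   # five heads shift belief upward
print(posterior_biased(0.5, ["T"] * 5))   # five tails shift it downward
```

On the Bayesian reading, nothing metaphysical happened to the coin between these three lines; only the evidence available to the reasoner changed.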
Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Big Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Big Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bath water. It seems to me that the MWI is committing the mind projection fallacy or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrased) that all these interpretations of QM just differ on where they hide the contradictions... they are all unsatisfactory and it will take a genius to figure it out.
- Name: David
- Space: SF Bay Area
- Time: 46
- Education: MS Econ, MS Mechanical Engr.
- Occupation: IT Consultant
I am interested in reason, how it works, and how I can improve my own abilities. I have been an AI/Singularity skeptic but am reconsidering these ideas after reading Jaynes over the past year. I am working on integrating the work of Rand, Aristotle, Jaynes, Turing, Gödel, and Shannon, because I think all the essentials are covered in these authors' work. I love the blog, especially the commitment to clear understanding, but also to clearly identifying that which we don't understand. Unfortunately many of the topics are too technical for me, but I enjoy the discussion anyway.
When some field is afflicted with deep and persistent philosophical conflicts, this isn't necessarily a sign that one of the sides is right and the other is just being silly. It might be a sign that some crucial unifying insight is waiting several steps ahead.
I agree with this. Such ongoing disputes in a field are often signs of a shared false premise leading to a false alternative, as with frequentist -v- Bayesian. The "unifying insight" comes from identifying and correcting the false premise. Finding it requires examining the field at a more fundamental level. This was my point in my first post to LW in the "Unspeakable Morality" thread, when I wrote...
And hopefully the whole frequentist -v- bayesian dichotomy-debate will turn out not to have a false premise behind it. Of this I am not sure.
I am just learning Bayesian ideas, but I am learning them with the caveat that they might rest on a false premise that is also behind frequentist ideas. Good fun!
Demands for moral justification have their Charybdis and their Scylla:
A rather fancy way of saying the horns of a dilemma. If I were a Bayesian, I might say that my prior is to believe that this is a sure sign of a false premise hidden in there somewhere, leading to the false alternative. If I were a frequentist, I might say that 999 times out of 1000 such dilemmas are a sure sign of the same. Ethics is full of such horns and dilemmas, handed out like poisoned candy to the kiddies on Halloween by the very professors who are supposed to find the error and resolve them. In any case, a Bayesian or a frequentist should be motivated to prove the hypothesis or rule it out. Throwing up one's hands and creating ad hoc rules for moral issues seems... more wrong.
I'm driving down from San Mateo. If anyone north of me needs a lift, I can pick up at the Caltrain station in San Mateo. Anyone south of me, I can swing by on my way. Contact me at yhfin at yahoo dott kom to coordinate.