MockTurtle
MockTurtle has not written any posts yet.

Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from my understanding of the maths and my admittedly more shoddy understanding of the MWI.... (read 534 more words →)
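For reference, the core of the maths I'd be leaning on is just the standard two-slit superposition (my own shorthand notation, and nothing here is specific to any one interpretation):

$$
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|L\rangle + |R\rangle\bigr), \qquad
P(x) = |\psi_L(x) + \psi_R(x)|^2 = |\psi_L(x)|^2 + |\psi_R(x)|^2 + 2\,\mathrm{Re}\bigl[\psi_L^*(x)\,\psi_R(x)\bigr]
$$

The cross term is the interference pattern. Marking which-path information entangles the particle with a detector state,

$$
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|L\rangle|d_L\rangle + |R\rangle|d_R\rangle\bigr),
$$

and if $\langle d_L | d_R \rangle = 0$ the cross term vanishes and the pattern disappears; 'erasing' amounts to measuring the detector in the $(|d_L\rangle \pm |d_R\rangle)/\sqrt{2}$ basis and conditioning on the result, which recovers interference within each conditioned subset.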
I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.
The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is that it's very easy to believe that some people, no matter how an AI acted or for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.
I really enjoyed this, it was very well written! Lots of fun new concepts, and plenty of fun old ones being used well.
Looking forward to reading more! Even if there aren't too many new weird things in whatever follows, I really want to see where the story goes.
I very much like bringing these concepts of unambiguous past and ambiguous future to this problem.
As a pattern theorist, I agree that only memory (and the other parts of my brain's patterns which establish my values, personality, etc) matters when it comes to who I am. If I were to wake up tomorrow with Britney Spears's memories, values, and personality, 'I' would have ceased to exist in any important sense, even if that brain still had the same 'consciousness' that Usul describes at the bottom of his post.
Once one links personal identity to one's memories, values and personality, the same kind of thinking about uploading/copying can be applied to future Everett... (read more)
Surely there is a difference in kind here. Deleting a copy of a person because it is no longer useful is very different from deleting the LAST existing copy of a person for any reason.
Does the fact that naive neural nets almost always fail when applied to out-of-sample data constitute a strong general argument against the anti-universalizing approach?
I think this demonstrates the problem rather well. In the end, the phenomenon you are trying to model has a level of complexity N. You want your model (neural network or theory or whatever) to have the same level of complexity - no more, no less. So the fact that naive neural nets fail on out-of-sample data for a given problem shows that the neural network did not reach sufficient complexity. That most naive neural networks fail shows that most problems have at least a... (read more)
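To illustrate what I mean by matching complexity, here's a toy sketch (the 'phenomenon' and the polynomial stand-ins for networks of different complexity are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# The 'phenomenon': a cubic plus noise. (Made up for the example.)
def phenomenon(x):
    return 0.5 * x**3 - x + rng.normal(scale=0.3, size=x.shape)

x_train = np.linspace(-2, 2, 30)
y_train = phenomenon(x_train)
x_test = np.linspace(-2, 2, 200)
y_test = phenomenon(x_test)

# 'Models' of increasing complexity: polynomials of higher degree.
for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: out-of-sample MSE = {mse:.3f}")

# Typically degree 3 (matching the phenomenon) does best: degree 1 is
# too simple to capture the cubic, and degree 15 fits the training
# noise and pays for it on the new data.
```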
The first paper he mentions in the machine learning section can be found here, if you'd like to take a look: Murphy and Pazzani 1994. I had more trouble finding the others which he briefly mentions, and so relied on his summary for those.
As for the 'complexity of phenomena rather than theories' bit I was talking about, your reminder of Solomonoff induction has made me change my mind, and perhaps we can talk about 'complexity' when it comes to the phenomena themselves after all.
My initial mindset (reworded with Solomonoff induction in mind) was this: Given an algorithm (phenomenon) and the data it generates (observations), we are trying to come up with algorithms (theories) that create... (read more)
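To make that weighting concrete, here's a cartoon version of what I have in mind (the program lengths and output sequences are invented for the example; real Solomonoff induction sums over all programs for a universal machine):

```python
from fractions import Fraction

# Toy Solomonoff-style scoring: candidate 'programs' are just
# (description_length_in_bits, predicted_sequence) pairs made up for
# this example. Each program gets prior weight 2**(-length); we keep
# only the programs whose output matches the observations so far,
# then renormalise.
candidates = [
    (3,  [0, 1, 0, 1, 0, 1]),   # short program: "alternate 0 and 1"
    (10, [0, 1, 0, 1, 1, 1]),   # longer program that agrees so far
    (12, [0, 1, 0, 0, 0, 0]),   # longer program that already disagrees
]

observed = [0, 1, 0, 1]

consistent = [(l, seq) for l, seq in candidates if seq[:len(observed)] == observed]
total = sum(Fraction(1, 2**l) for l, _ in consistent)

for length, seq in consistent:
    posterior = Fraction(1, 2**length) / total
    print(f"length {length:2d} bits -> posterior {float(posterior):.4f}, "
          f"predicts next symbol {seq[len(observed)]}")
```

The short program that fits the data ends up carrying almost all of the weight, which is the sense in which the 'simplest theory that fits' dominates the prediction.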
Looking at the machine learning section of the essay, and the paper it mentions, I believe the author to be making a bit too strong a claim based on the data. When he says:
"In some cases the simpler hypotheses were not the best predictors of the out-of-sample data. This is evidence that on real world data series and formal models simplicity is not necessarily truth-indicative."
... he fails to take into account that many more of the complex hypotheses get high error rates than the simpler ones do (even though a few of the more complex hypotheses achieve the smallest error rates in some cases), which still says that when you have a whole range... (read more)
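Numerically, the shape of the point I'm making looks something like this (the error distributions below are invented to illustrate it, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented out-of-sample error-rate distributions. A few complex
# hypotheses beat every simple one, but the complex pool is much worse
# on average, which is what matters if you have to pick a hypothesis
# without knowing in advance which one gets lucky.
simple_errors = rng.normal(loc=0.20, scale=0.03, size=1000)
complex_errors = np.clip(rng.normal(loc=0.30, scale=0.15, size=1000), 0.0, 1.0)

print(f"best simple  error: {simple_errors.min():.3f}")
print(f"best complex error: {complex_errors.min():.3f}")
print(f"mean simple  error: {simple_errors.mean():.3f}")
print(f"mean complex error: {complex_errors.mean():.3f}")
```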
If the many-worlds interpretation is truly how the world is, and if having multiple copies of myself as an upload is more valuable than just having one copy on more powerful (or distributed) hardware, then...
I could bid for a job, asking a price that would be adequate if I were working by myself. I could create N copies of myself to help complete the job. Then, assuming there's no easy way to meld my copies back into one, I could create a simple quantum lottery that deletes all but one copy.
Each copy is guaranteed to live on in its own Everett branch, able to enjoy the full reward from completing the job.
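The back-of-envelope accounting I have in mind (numbers made up, and resting on the branch-counting intuition above being a legitimate move):

```python
# Numbers invented for illustration.
N = 4            # copies created to complete the job
reward = 1000.0  # price bid, adequate for one person working alone

# Ordinary lottery accounting: each copy survives with probability 1/N,
# so a copy's expected payoff is reward / N.
classical_expectation = reward / N

# The MWI accounting I'm gesturing at: the quantum lottery leaves each
# copy as the sole survivor of its own Everett branch, so every copy
# subjectively continues and collects the full reward in its branch.
per_branch_payoff = reward

print(f"expected payoff per copy (ordinary lottery): {classical_expectation:.2f}")
print(f"payoff for the surviving copy in each branch: {per_branch_payoff:.2f}")
```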
Interesting questions to think about. Seeing if everyone independently describes the clothes the same way (as suggested by others) might work, unless the information is leaked. Personally, my mind went straight to the physics of the thing, 'going all science on it' as you say - as emperor, I'd claim that the clothes should have some minimum strength, lest I rip them the moment I put them on. If a piece of the fabric, stretched by the two tailors, can at least support the weight of my hand (or some other light object if you're not too paranoid about the tailors' abilities as illusionists), then it should be suitable.
Then, when your... (read more)