Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from ...
I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.
The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is that it's very easy to believe that some people, no matter how an AI acted and for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.
I really enjoyed this, it was very well written! Lots of fun new concepts, and plenty of fun old ones being used well.
Looking forward to reading more! Even if there aren't too many new weird things in whatever follows, I really want to see where the story goes.
I very much like bringing these concepts of unambiguous past and ambiguous future to this problem.
As a pattern theorist, I agree that only memory (and the other parts of my brain's patterns which establish my values, personality, etc.) matters when it comes to who I am. If I were to wake up tomorrow with Britney Spears's memories, values, and personality, 'I' would have ceased to exist in any important sense, even if that brain still had the same 'consciousness' that Usul describes at the bottom of his post.
Once one links personal identity to one's memories, ...
Surely there is a difference in kind here. Deleting a copy of a person because it is no longer useful is very different from deleting the LAST existing copy of a person for any reason.
Does the fact that naive neural nets almost always fail when applied to out-of-sample data constitute a strong general argument against the anti-universalizing approach?
I think this demonstrates the problem rather well. In the end, the phenomenon you are trying to model has a level of complexity N. You want your model (neural network or theory or whatever) to have the same level of complexity - no more, no less. So the fact that naive neural nets fail on out-of-sample data for a given problem shows that the neural network did not reach sufficient comple...
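To make that concrete, here's a toy sketch (in Python, with made-up data, not tied to any of the actual experiments in the paper): the underlying 'phenomenon' is linear plus noise, so a degree-1 fit matches its complexity, while a much higher-degree fit chases the noise and does worse on data it hasn't seen.

```python
# Toy illustration of complexity mismatch (hypothetical example, not the paper's setup):
# the 'phenomenon' is linear plus noise, so a degree-1 model matches its complexity,
# while a degree-12 model overfits the noise and does worse out of sample.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = 2.0 * x + 0.5 + rng.normal(0, 0.3, n)  # true process: linear + noise
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(1000)

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on the training sample only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, out-of-sample MSE {test_mse:.3f}")
```

The specific numbers don't matter; the point is just that extra model complexity only pays off when the phenomenon itself contains that complexity.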
The first paper he mentions in the machine learning section can be found here, if you'd like to take a look: Murphy and Pazzani 1994. I had more trouble finding the others he briefly mentions, and so relied on his summary for those.
As for the 'complexity of phenomena rather than theories' bit I was talking about, your reminder of Solomonoff induction has made me change my mind, and perhaps we can talk about 'complexity' when it comes to the phenomena themselves after all.
My initial mindset (reworded with Solomonoff induction in mind) was this: Given a...
Looking at the machine learning section of the essay, and the paper it mentions, I believe the author to be making a bit too strong a claim based on the data. When he says:
"In some cases the simpler hypotheses were not the best predictors of the out-of-sample data. This is evidence that on real world data series and formal models simplicity is not necessarily truth-indicative."
... he fails to take into account that many more of the complex hypotheses get high error rates than the simpler hypotheses (despite a few of the more complex hypotheses ge...
If the many-worlds interpretation is truly how the world is, and if having multiple copies of myself as an upload is more valuable than just having one copy on more powerful (or distributed) hardware, then...
I could bid for a job, asking a price that would be adequate if I were working by myself. I could create N copies of myself to help complete the job. Then, assuming there's no easy way to meld my copies back into one, I could create a simple quantum lottery that deletes all but one copy.
Each copy is guaranteed to live on in its own Everett branch, able to enjoy the full reward from completing the job.
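To spell out the accounting that makes this tempting (this is my own rough framing, resting on the quantum-immortality style of reasoning, with R the payment and N the number of copies):

```latex
% Rough accounting, under assumptions of my own: the job pays R, there are N copies,
% and the quantum lottery leaves exactly one surviving copy per branch.
\[
  \text{Single-world reading:}\quad
  \Pr[\text{a given copy survives}] = \frac{1}{N},
  \qquad
  E[\text{reward per copy}] = \frac{R}{N}.
\]
\[
  \text{MWI reading:}\quad
  \text{each copy survives in exactly one branch (Born weight } \tfrac{1}{N}\text{)}
  \text{ and collects the full } R \text{ there.}
\]
```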
Firstly, thank you for creating this well-written and thoughtful post. I have a question, but I would like to start by summarising the article. My initial draft of the summary was too verbose for a comment, so I condensed it further - I hope I have still captured the main essence of the text, despite this rather extreme summarisation. Please let me know if I have misinterpreted anything.
People who predict doomsday scenarios are making one main assumption: that the AI will, once it reaches a conclusion or plan, EVEN if there is a measure of probability assi...
From these examples, I might guess that these mistakes fall into a variety of already existing categories, unlike something like the typical mind fallacy which tends to come down to just forgetting that other people may have different information, aims and thought patterns.
Assuming you're different from others, and making systematic mistakes caused by this misconception, could be attributed to anything from low self-esteem (which is more to do with judgments of one's own mind, not necessarily a difference between one's mind and other people's), to the Fund...
I would say that it has to do with the consequences of each mistake. When you subconsciously assume that others think the way you do, you might see someone's action and immediately assume they have done it for the reason you would have done it (or, if you can't conceive of a reason you would do it, you might assume they are stupid or insane).
On the other hand, assuming people's minds differ from yours may not lead to particular assumptions in the same way. When you see someone do something, it doesn't push you into thinking that there's no way the person did...
I think I may be a little confused about your exact reason to reject the correspondence theory of truth. From my reading, it seems that you reject it because it cannot justify any truth claim: any attempt to do so is simply comparing one model to another, given that we have no unmediated access to 'reality'. Instead, you seem to claim that pragmatism is more justified when calling something true, using something along the lines of "it's true if it works in helping me achieve my goals".
There are two things that confuse me: 1) I do...
Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).
Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since...
I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about persists to exist/potentially exist even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop due to inattention/unconsciousness'?
I think that seems like a pretty solid thing ...
I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and y...
How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of this is morbid rationalisation on my part, just because signing up to cryonics in the UK seems not quite as reliable or easy as in the US, but it still seems like a real issue to me.
Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in betwee...
Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops - not just stops getting input from the outside world, but no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?
Say we can run people on computers, and we can start and stop them at any moment, but available power fluctuates. So we come up with a system where, when power drops, we pause some of the people and restore them once there's power again. Once we've stopped someone, is there a moral reason to start...
This is a really brilliant idea. Somehow I feel that using the Bayesian network system on simple trivial things at first (like the student encounter and the monster fight) is great for getting the player into the spirit of using evidence to update on particular beliefs, but I can imagine that as you go further with the game, the system would be applied to more and more 'big picture' mysteries of the story itself, such as where the main character's brother is.
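As a toy sketch of what I imagine such a system doing under the hood (entirely my own guess, with made-up numbers rather than anything from the game): you hold a prior on a hypothesis like 'the student is lying', each observed clue has some likelihood under each hypothesis, and the game nudges your belief by Bayes' rule.

```python
# Toy sketch of evidence-based belief updating, in the spirit of the game's
# Bayesian network system (my own made-up example, not the game's actual model).

def bayes_update(prior: float, p_clue_if_true: float, p_clue_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing one clue."""
    numerator = p_clue_if_true * prior
    return numerator / (numerator + p_clue_if_false * (1 - prior))

# Hypothesis: "the student is lying about where they were last night".
belief = 0.30  # prior

# Each clue: (description, P(clue | lying), P(clue | not lying)) - all invented numbers.
clues = [
    ("avoids eye contact when asked", 0.70, 0.40),
    ("story contradicts the janitor's account", 0.80, 0.10),
    ("has mud on their boots", 0.50, 0.45),
]

for description, p_if_lying, p_if_honest in clues:
    belief = bayes_update(belief, p_if_lying, p_if_honest)
    print(f"after '{description}': P(lying) = {belief:.2f}")
```

Scaling the same loop up from a single student encounter to the brother mystery would presumably just mean more hypotheses and more clues feeding the same kind of update.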
Whenever I play conversation based adventure games or mystery-solving games such as Phoenix Wright,...
I know this post is a little old now, but I found myself wondering the same thing (and a little disappointed that I am the only one to comment) and found this. I must say that it's hard to find anyone around my social groups who has heard of LessWrong or even just cares about rationality, so it'd be great to meet up with other LWers! I'm currently attending the University of Birmingham, and live near the university.
Interesting questions to think about. Seeing if everyone independently describes the clothes the same way (as suggested by others) might work, unless the information is leaked. Personally, my mind went straight to the physics of the thing, 'going all science on it' as you say - as emperor, I'd claim that the clothes should have some minimum strength, lest I rip them the moment I put them on. If a piece of the fabric, stretched by the two tailors, can at least support the weight of my hand (or some other light object if you're not too paranoid about the tai...