All of MockTurtle's Comments + Replies

Interesting questions to think about. Seeing if everyone independently describes the clothes the same way (as suggested by others) might work, unless the information is leaked. Personally, my mind went straight to the physics of the thing, 'going all science on it' as you say - as emperor, I'd claim that the clothes should have some minimum strength, lest I rip them the moment I put them on. If a piece of the fabric, stretched by the two tailors, can at least support the weight of my hand (or some other light object if you're not too paranoid about the tai... (read more)

Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from ... (read more)

0torekp
Belated thanks to you and MrMind, these answers were very helpful.

I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.

The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is because it's very easy to believe that some people, no matter how an AI acted and for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.

5kilobug
Another, more directly worrying, question is why (or whether) p-zombie philosophers postulate that other persons have consciousness. After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who isn't a zombie, and therefore let go of all forms of caring for others and all morality? The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to them having belief-in-belief, more than actual belief, in the possibility of zombieness.

I really enjoyed this, it was very well written! Lots of fun new concepts, and plenty of fun old ones being used well.

Looking forward to reading more! Even if there aren't too many new weird things in whatever follows, I really want to see where the story goes.

I very much like bringing these concepts of unambiguous past and ambiguous future to this problem.

As a pattern theorist, I agree that only memory (and the other parts of my brain's patterns which establish my values, personality, etc.) matters when it comes to who I am. If I were to wake up tomorrow with Britney Spears's memories, values, and personality, 'I' would have ceased to exist in any important sense, even if that brain still had the same 'consciousness' that Usul describes at the bottom of his post.

Once one links personal identity to one's memories, ... (read more)

Surely there is a difference in kind here. Deleting a copy of a person because it is no longer useful is very different from deleting the LAST existing copy of a person for any reason.

0[anonymous]
I see no such distinction. Murder is murder.

Does the fact that naive neural nets almost always fail when applied to out of sample data constitute a strong general argument against the anti-universalizing approach?

I think this demonstrates the problem rather well. In the end, the phenomenon you are trying to model has a level of complexity N. You want your model (neural network or theory or whatever) to have the same level of complexity - no more, no less. So the fact that naive neural nets fail on out of sample data for a given problem shows that the neural network did not reach sufficient comple... (read more)

3Lumifer
This is one possibility. Another, MUCH more common in practice, is that your NN overfitted the in-sample data and so trivially failed at out-of-sample forecasting. To figure out the complexity of the process you're trying to model, you first need to be able to separate features of that process from noise and this is far from a trivial exercise.
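
(To make the overfitting point concrete, here is a minimal sketch of my own - not from the thread - in which an over-flexible model fits in-sample noise and then does worse out of sample. The quadratic 'true process', the sample sizes, and the polynomial degrees are all arbitrary choices for illustration.)

import numpy as np
rng = np.random.default_rng(0)
def sample(n):
    # True process: a simple quadratic plus noise.
    x = rng.uniform(-1, 1, n)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, n)
    return x, y
x_train, y_train = sample(20)    # small in-sample set
x_test, y_test = sample(1000)    # out-of-sample set
for degree in (2, 15):           # adequate vs over-flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_in = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_out = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: in-sample MSE {mse_in:.3f}, out-of-sample MSE {mse_out:.3f}")

The high-degree fit typically shows a much lower in-sample error but a higher out-of-sample error than the low-degree fit: it has overfitted the noise rather than matched the actual complexity of the process.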

The first paper he mentions in the machine learning section can be found here, if you'd like to take a look: Murphy and Pazzani 1994. I had more trouble finding the others he briefly mentions, and so relied on his summaries for those.

As for the 'complexity of phenomena rather than theories' bit I was talking about, your reminder of Solomonoff induction has made me change my mind, and perhaps we can talk about 'complexity' when it comes to the phenomena themselves after all.

My initial mindset (reworded with Solomonoff induction in mind) was this: Given a... (read more)

127chaos
It might be a difference of starting points, then. We can either start with a universal approach, a broad prior, and use general heuristics like Occam's Razor, then move towards the specifics of a situation, or we can start with a narrow prior and a view informed by local context, to see how Nature typically operates in such domains according to the evidence of our intuitions, then try to zoom out. Of course both approaches have advantages in some cases, so what's actually being debated is their relative frequency. I'm not sure of any good way to survey the problem space in an unbiased way to assess whether or not this assertion is typically true (maybe Monte Carlo simulations over random algorithms or something ridiculous like that?), but the point that adding unnecessary additional assumptions to a theory is flawed practice seems like a good heuristic argument suggesting we should generally assume simplicity. Does the fact that naive neural nets almost always fail when applied to out of sample data constitute a strong general argument against the anti-universalizing approach? Or am I just mixing metaphors recklessly here, with this whole "localism" thing? Simplicity and generalizability are more or less the same thing, right? Or is that question assuming the conclusion once again?

Looking at the machine learning section of the essay, and the paper it mentions, I believe the author to be making a bit too strong a claim based on the data. When he says:

"In some cases the simpler hypotheses were not the best predictors of the out-of-sample data. This is evidence that on real world data series and formal models simplicity is not necessarily truth-indicative."

... he fails to take into account that many more of the complex hypotheses get high error rates than the simpler hypotheses (despite a few of the more complex hypotheses ge... (read more)
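
(A toy simulation of my own, not drawn from the paper, showing the asymmetry I mean: the occasional complex hypothesis can predict well, but a far larger share of the complex hypotheses end up with high out-of-sample error than of the simple ones. The sine 'true process', the polynomial degrees, and the error threshold are all invented for illustration.)

import numpy as np
rng = np.random.default_rng(1)
THRESHOLD = 0.5  # what counts as a "high" out-of-sample MSE here
def trial(degree):
    # Fit one hypothesis of the given complexity to a fresh noisy sample of the
    # true process (a sine curve) and return its out-of-sample mean squared error.
    x_tr = rng.uniform(-1, 1, 15)
    y_tr = np.sin(2 * x_tr) + rng.normal(0, 0.2, 15)
    x_te = rng.uniform(-1, 1, 500)
    y_te = np.sin(2 * x_te) + rng.normal(0, 0.2, 500)
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
for degree, label in [(3, "simple"), (12, "complex")]:
    errors = np.array([trial(degree) for _ in range(200)])
    print(f"{label}: best MSE {errors.min():.3f}, median {np.median(errors):.3f}, "
          f"share with MSE > {THRESHOLD}: {np.mean(errors > THRESHOLD):.0%}")

In runs like this the single best complex fit can occasionally look competitive, but a much larger fraction of the complex fits blow up out of sample - which is the distributional comparison I think the essay's summary glosses over.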

127chaos
Did you look up the papers he referenced, then? Or are you speaking just based on your impression of his summaries? I too thought that his summaries were potentially misleading, but I failed to track down the papers he mentioned to verify that for certain. This perspective is new to me. What are your thoughts on things like Solomonoff induction? It seems to me like that's sufficiently abstract that it requires that simplicity be a meaningful idea even outside the human psyche. I cannot really imagine any thinking-like process that doesn't involve notions of simplicity.

If the many-worlds interpretation is truly how the world is, and if having multiple copies of myself as an upload is more valuable than just having one copy on more powerful (or distributed) hardware, then...

I could bid for a job asking for a price which could be adequate if I were working by myself. I could create N copies of myself to help complete the job. Then, assuming there's no easy way to meld my copies back into one, I could create a simple quantum lottery that deletes all but one copy.

Each copy is guaranteed to live on in its own Everett branch, able to enjoy the full reward from completing the job.

Firstly, thank you for creating this well-written and thoughtful post. I have a question, but I would like to start by summarising the article. My initial draft of the summary was too verbose for a comment, so I condensed it further - I hope I have still captured the main essence of the text, despite this rather extreme summarisation. Please let me know if I have misinterpreted anything.

People who predict doomsday scenarios are making one main assumption: that the AI will, once it reaches a conclusion or plan, EVEN if there is a measure of probability assi... (read more)

From these examples, I might guess that these mistakes fall into a variety of already existing categories, unlike something like the typical mind fallacy which tends to come down to just forgetting that other people may have different information, aims and thought patterns.

Assuming you're different from others, and making systematic mistakes caused by this misconception, could be attributed to anything from low-self esteem (which is more to do with judgments of one's own mind, not necessarily a difference between one's mind and other people's), to the Fund... (read more)

I would say that it has to do with the consequences of each mistake. When you subconsciously assume that others think the way you do, you might see someone's action and immediately assume they have done it for the reason you would have done it (or, if you can't conceive of a reason you would do it, you might assume they are stupid or insane).

On the other hand, assuming people's minds differ from your own may not lead to particular assumptions in the same way. When you see someone do something, it doesn't push you into thinking that there's no way the person did... (read more)

0[anonymous]
For example, I often think I am unusually cowardly or clumsy. Then I am totally surprised to find that after about 3 months of martial arts practice I am already better on both counts than maybe 20-30% of the new starters - I was sure I would never get better at it, yet that result roughly predicts average ability, so why does it feel so unusually low? I tend to think others are far more social than me. Then I start wondering: the fact that we have been living in the same flat for 3 years now and never had a chat with a neighbor cannot be 100% my fault; it is 50% mine for not initiating such a conversation, but also 50% theirs, as they didn't either. So it may actually be that they are not that much more social than me.

I think I may be a little confused about your exact reason to reject the correspondence theory of truth. From my reading, it seems to me that you reject it because it cannot justify any truth claim, since any attempt to do so is simply comparing one model to another - since we have no unmediated access to 'reality'. Instead, you seem to claim that pragmatism is more justified when claiming that something is true, using something along the lines of "it's true if it works in helping me achieve my goals".

There are two things that confuse me: 1) I do... (read more)

0eternal_neophyte
Rather than saying "I believe snow is white" we should be saying "that whiteness is snow", since we layer models around percepts rather than vice versa. If discovering the truth means fitting a percept to a model, it seems obvious that you need access to a complete model to begin with, and from this follows the OP's complaint about needing unmediated access to reality. This, I think, is responsible for the confusion surrounding this topic.

Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).

Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since... (read more)

I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about continues to exist (or potentially exist) even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop caring due to inattention/unconsciousness'?

I think that seems like a pretty solid thing ... (read more)

I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and y... (read more)

0SeekingEternity
The first is a short story that is basically a "garden path" toward this whole idea, and was a real jolt for me; you wonder why the narrator would be worried about this experiment going wrong, because she won't be harmed regardless. That world-view gets turned on its ear at the end of the story. The second is longer, but still a pretty short story; I didn't see a version of it online independent of the novel-length collection it's published in. It explores the Star Trek transporter idea, in greater detail and more rationally than Star Trek ever dared to do. The third is a huuuuuuge comic archive (totally worth reading anyhow, but it's been updating every single day for almost 15 years); the story arc in question is The Teraport Wars ( http://www.schlockmercenary.com/2002-04-15 ), and the specific part starts about here: http://www.schlockmercenary.com/2002-06-20 . Less "thinky" but funnier / more approachable than the others.

How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up to cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in betwee... (read more)

-9advancedatheist
9SeekingEternity
Short version: I adjusted my sense of "self" until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day. It didn't actually take much for me to take that leap when it came to cryonics. The trigger for me was "you don't die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die". I'm not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: "Yes, of course I want you to perform the last-ditch operation to save my life!" If you're curious: My default self-view for a long time was basically "the continuity that led to me is me, and any forks or future copies/simulations aren't me", which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it "CBH Alpha") as myself. If a copy of me was created; "I" was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance or doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how much sense that didn't make; I always am the continuity that led to me, and am therefore whatever instance of CBH that you can hypothesize, and therefore I can't pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves. This doesn't mean I'm not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn't need to be optimized for. On the other hand, this does mean I'm not OK with the idea of a mac
jefftk220

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start... (read more)

2Richard_Kennaway
Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time. If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics? Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word "want". (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily "dying" on an operating table, or, if cryonics works, being frozen.

1pengvado
I think your answer is in The Domain of Your Utility Function. That post isn't specifically about cryonics, but is about how you can care about possible futures in which you will be dead. If you understand both of the perspectives therein and are still confused, then I can elaborate.

This is a really brilliant idea. Somehow I feel that using the Bayesian network system on simple trivial things at first (like the student encounter and the monster fight) is great for getting the player into the spirit of using evidence to update on particular beliefs, but I can imagine that as you go further with the game, the system would be applied to more and more 'big picture' mysteries of the story itself, such as where the main character's brother is.

Whenever I play conversation-based adventure games or mystery-solving games such as Phoenix Wright,... (read more)
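
(For anyone curious what an updating mechanic like this boils down to, here is a minimal sketch of my own - the hypotheses, clues, and likelihood numbers are invented for illustration, not taken from the game - of Bayes'-rule updating over a small set of beliefs about where the brother might be.)

priors = {"left town": 0.5, "hiding nearby": 0.3, "kidnapped": 0.2}
# P(clue | hypothesis) for each clue the player might uncover.
likelihoods = {
    "packed bag missing": {"left town": 0.8, "hiding nearby": 0.4, "kidnapped": 0.1},
    "signs of a struggle": {"left town": 0.05, "hiding nearby": 0.2, "kidnapped": 0.7},
}
def update(beliefs, clue):
    # One step of Bayes' rule: weight each hypothesis by how likely the clue
    # is under it, then renormalise so the beliefs sum to one again.
    unnormalised = {h: p * likelihoods[clue][h] for h, p in beliefs.items()}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}
beliefs = dict(priors)
for clue in ["packed bag missing", "signs of a struggle"]:
    beliefs = update(beliefs, clue)
    print(clue, "->", {h: round(p, 2) for h, p in beliefs.items()})

Each clue multiplies the current belief in every hypothesis by how likely that clue would be under it, and renormalising keeps the probabilities summing to one - that is the whole engine behind 'using evidence to update on particular beliefs'.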

I know this post is a little old now, but I found myself wondering the same thing (and a little disappointed that I am the only one to comment) and found this. I must say that it's hard to find anyone around my social groups who has heard of LessWrong or even just cares about rationality, so it'd be great to meet up with other LWers! I'm currently attending the University of Birmingham, and live near the university.