Comment author: [deleted] 05 May 2015 10:27:33AM * 0 points

For example, I often think I am unusually cowardly or clumsy. Then I am totally surprised to find that after about 3 months of martial arts practice I am already better on both counts than maybe 20-30% of the new starters. I was sure I would never ever get better at it, yet the result suggests roughly average ability - but then why does it feel so unusually low?

I tend to think others are far more social than me. Then I start wondering: the fact that we have lived in the same building for 3 years now and never had a chat with a neighbor cannot be 100% my fault. It is 50% mine for not initiating such a conversation, but also 50% theirs, as they didn't either. So it may actually be that they are not that much more social than me.

In response to comment by [deleted] on Stupid Questions May 2015
Comment author: MockTurtle 05 May 2015 01:49:43PM * 0 points

From these examples, I might guess that these mistakes fall into a variety of already-existing categories, unlike something like the typical mind fallacy, which tends to come down to just forgetting that other people may have different information, aims and thought patterns.

Assuming you're different from others, and making systematic mistakes caused by this misconception, could be attributed to anything from low self-esteem (which is more to do with judgments of one's own mind, not necessarily a difference between one's mind and other people's), to the Fundamental Attribution Error (which could lead you to think people are different from you by failing to realise that you might behave the same way if you were in the same situation as they are, due to your current ignorance of what that situation is).

Also, I don't know if there is a fallacy name for this, but regarding your second example, it sounds like the kind of mistake one makes when one forgets that other people are agents too. When all you can observe is your own mind, and the internal causes from your side which contribute to something in the outside world, it can be easy to forget to consider the other brains contributing to it. So, again, I'm not sure I would really put it down to something as precise as 'assuming one's mind is different from that of other people'.

(Edit: The top comment in this post by Yvain seems to expand a little on what you're talking about.)

Comment author: [deleted] 04 May 2015 10:39:09AM 5 points

Why do we discuss the typical mind fallacy more than the atypical mind fallacy (the latter is not even an accepted term; I came up with it)?

I am far more likely to assume that "I am such a special snowflake" than to assume everybody is like me. Basically, this is what the ego, the pride, the vanity in me wants to do.

In response to comment by [deleted] on Stupid Questions May 2015
Comment author: MockTurtle 05 May 2015 10:09:16AM 0 points

I would say that it has to do with the consequences of each mistake. When you subconsciously assume that others think the way you do, you might see someone's action and immediately assume they have done it for the reason you would have done it (or, if you can't conceive of a reason you would do it, you might assume they are stupid or insane).

On the other hand, assuming people's minds differ from yours may not lead to particular assumptions in the same way. When you see someone do something, it doesn't push you into thinking that there's no way the person did it for any reason you might have had yourself. I don't think it will have that same kind of effect on your subconscious assumptions. I might be missing something, though. How do you see the atypical mind fallacy affecting your behaviour/thoughts in general?

Comment author: MockTurtle 24 March 2015 04:48:51PM 5 points

I think I may be a little confused about your exact reason to reject the correspondence theory of truth. From my reading, it seems to me that you reject it because it cannot justify any truth claim, since any attempt to do so is simply comparing one model to another - since we have no unmediated access to 'reality'. Instead, you seem to claim that pragmatism is more justified when claiming that something is true, using something along the lines of "it's true if it works in helping me achieve my goals".

There are two things that confuse me.

1) I don't see why the inability to justify a truth statement based on the correspondence theory would cause you to reject that theory as a valid definition of truth. In your post, you seem to accept that there IS a world which exists independently of us, in some way or other. If I say, "I believe that 'this snow is white' is true, which is to say, I believe that there exists a reality independent from my model of it where such objects in some way exist and are behaving in a manner corresponding to my statement"... That is what I understand by the correspondence theory of truth, so even if I cannot ever prove it (this could all be hallucinations for all I know), it still is a meaningful statement, surely? At least philosophically speaking? To me, there is a difference between 'if the statement that snow is white is true, it is because I am successful in my actions if I act as if snow is white' and 'if the statement that snow is white is true, it is because there exists an actual reality (which I have no unmediated access to), independent of my thoughts and senses, which has corresponding objects acting in corresponding ways to the statement, which somehow affect my observations'. When people argue about what truth really means, I don't see how it is only meaningful to advocate for the former definition over the latter, even if the latter is admittedly not particularly useful in a non-philosophical way.

2) Isn't acting on the world to achieve your goals a type of experiment, establishing correspondence between one model (if I do this, I will achieve my goal) and another model (my model of reality as relayed by the success or failure of my actions)? I don't see how, just because there is a goal other than finding out about an underlying reality, it would be any more correct or meaningful to say that this experiment reveals more truth than experiments whose only goal is to try to get the least mediated view of reality possible.

As far as I can see, if we assume even the smallest probability that our actions (whether they be pragmatic-goal-achieving or pure observation) are affected by some underlying, unmediated reality which we have no direct access to, then the more such actions we take, the more is revealed about this thing which actually affects our model.

Comment author: jkaufman 01 December 2014 12:05:28PM * 17 points

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?

My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.

(But I'm not signed up for cryonics because I don't think the information would be preserved.)

Comment author: MockTurtle 02 December 2014 10:31:29AM -1 points

Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).

Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time...

Thanks for the help!

Comment author: RichardKennaway 01 December 2014 12:00:40PM 2 points

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life.

Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time.

If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics?

Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word "want". (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily "dying" on an operating table, or, if cryonics works, being frozen.

Comment author: MockTurtle 02 December 2014 10:22:57AM 0 points

I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?', since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about continues to exist (or potentially exist) even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop doing so due to inattention/unconsciousness'?

I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn't feel quite right. For now I'll meditate on it and see if I can internalise that message. Thanks for the help!

Comment author: CBHacking 01 December 2014 12:09:59PM * 6 points

Short version: I adjusted my sense of "self" until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day.

It didn't actually take much for me to take that leap when it came to cryonics. The trigger for me was "you don't die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die". I'm not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: "Yes, of course I want you to perform the last-ditch operation to save my life!"

If you're curious: My default self-view for a long time was basically "the continuity that led to me is me, and any forks or future copies/simulations aren't me", which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it "CBH Alpha") as myself. If a copy of me was created, "I" was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance, or the one doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how much sense that didn't make; I always am the continuity that led to me, and am therefore whatever instance of CBH that you can hypothesize, and therefore I can't pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves.

This doesn't mean I'm not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn't need to be optimized for. On the other hand, this does mean I'm not OK with the idea of a machine that duplicates myself for the purpose of the duplicate dying, unless it's literally a matter of saving any instance of myself; I would optimize for the benefit of all of me, not just for the one who pushed the button.

I'm not yet sure how I'd feel about a "transporter" which offered the option of destroying the original, but didn't have to. The utility of such a thing is obviously so high I would use it, and I'd probably default to destroying the original just because I don't feel like I'm such a wonderful benefit to the world that there needs to be more of me (so long as there's at least one). But when I reframe the question from "why would I want to not be transported (i.e. to go on experiencing life here instead of wherever I was being sent)" to "why would I want to have fewer experiences than I could (i.e. only experience the destination of the transporter, instead of simultaneously experiencing both)", I feel like I'd want to keep the original. If we alter the scenario just slightly, such that the duplicate is created as a fork and the fork is then optionally destroyed, I don't think I would ever choose destruction except in a scenario along the lines of "painless disintegration or death by torture", where the torture wasn't going to last long (no rescue opportunity) but I'd still experience a lot of pain.

These ideas largely came about from various fiction I've read in the last few years. Some examples that come to mind:

Comment author: MockTurtle 02 December 2014 10:12:30AM -1 points

I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and you don't get recreated at your destination? Is that a bad thing?' is almost without meaning, as there would no longer be a 'me' to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious.

As pointed out by Richard, this is probably even more absurd than I realise, as I am not 'conscious' of all my desires at all times, and thus I cannot go on this road of 'if I do not currently care about something, does it matter?'. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when.

Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those 'click' moments...

Comment author: MockTurtle 01 December 2014 11:09:03AM 6 points

How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up to cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset - I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.

I'm definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?

I'm pretty sure I'd be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?
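
The comparison in the last paragraph can be written out as a toy expected-value calculation. All the numbers below (the utility of living, the probabilities, the costs) are invented purely for illustration; the point is only that the two payments have the same structure:

```python
def expected_value(p_alive_later, value_of_living, cost_now):
    """Toy expected value of paying now to change one's odds of being alive later."""
    return p_alive_later * value_of_living - cost_now

VALUE = 1000.0  # arbitrary utility assigned to going on living

# Paying 5 units each evening to cut an artificial nightly death risk
# from 1% down to 0.1%:
gain_sleep = expected_value(0.999, VALUE, 5.0) - expected_value(0.990, VALUE, 0.0)

# Paying 5 units to raise the chance of waking from the Dreamless Sleep
# from 0% to 5%:
gain_cryo = expected_value(0.05, VALUE, 5.0) - expected_value(0.0, VALUE, 0.0)

print(gain_sleep > 0, gain_cryo > 0)  # True True
```

On these made-up numbers both payments come out positive in expectation, which is exactly why the asymmetry in intuition between them is puzzling.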

Comment author: MockTurtle 20 November 2014 10:33:06AM 3 points

This is a really brilliant idea. Somehow I feel that using the Bayesian network system on simple trivial things at first (like the student encounter and the monster fight) is great for getting the player into the spirit of using evidence to update on particular beliefs, but I can imagine that as you go further with the game, the system would be applied to more and more 'big picture' mysteries of the story itself, such as where the main character's brother is.

Whenever I play conversation-based adventure games or mystery-solving games such as Phoenix Wright, I can see how the player is intended to guess certain things from clues, and ask the right questions to gain more crucial information. Having the Bayesian network be explicitly represented in the game makes it a lot simpler in some ways (you don't have to do all the updating in your head) but also introduces a different kind of challenge (the player can be shot down if ve tries to guess the answer to the mystery right away with too little data, and it becomes much more about which pieces of evidence could provide the most information). A growing vision in my mind of what a game like this would look like is making me quite excited to play it!
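
To make the mechanic concrete, here is a minimal sketch of the kind of explicit update such a system could show the player. The hypothesis, the witness evidence, and every probability below are invented for illustration and do not come from any actual game:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

# Player's prior that the brother is hiding in the city: 20%.
belief = 0.20
# New evidence: a witness claims to have seen him there. Suppose such a
# report arrives 70% of the time when he really is there, and 10% of the
# time when he is not (false reports, mistaken identity).
belief = bayes_update(belief, 0.70, 0.10)
print(round(belief, 3))  # 0.636
```

A game could surface exactly this kind of jump (20% to roughly 64%) in its interface, and block the player from "calling" the answer until the posterior clears some threshold.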

But I think I'm getting a little carried away. The game as an educational tool would probably be quite different from a game which tries to make mystery-solving a challenge. Getting the balance right, to make it fun, might still be pretty challenging, I think.

As a side note, it'd be pretty awesome to use this system to show particular logical fallacies that people (either other characters, or the main character before applying proper probability theory) in the game could make.

Comment author: MockTurtle 19 January 2012 01:24:27PM 0 points

I know this post is a little old now, but I found myself wondering the same thing (and a little disappointed that I am the only one to comment) and found this. I must say that it's hard to find anyone around my social groups who has heard of LessWrong or even just cares about rationality, so it'd be great to meet up with other LWers! I'm currently attending the University of Birmingham, and live near the university.
