P(X = exact value) = 0: Is it really counterintuitive?

8 lucidfox 29 July 2011 12:45PM

I'm probably not going to say anything new here. Someone must have pondered over this already. However, hopefully it will invite discussion and clear things up.

Let X be a random variable with a continuous distribution over the interval [0, 10]. Then, by the definition of probability over continuous domains, P(X = 1) = 0. The same is true for P(X = 10), P(X = sqrt(2)), P(X = π), and in general, the probability that X is equal to any exact number is always zero, as an integral over a single point.
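To make the point concrete, here is a minimal sketch for the simplest case -- X uniform on [0, 10], so the probability of an interval is proportional to its width (the function name is just illustrative):

```python
# X uniform on [0, 10]: P(a <= X <= b) = (b - a) / 10 for [a, b] inside [0, 10].
def prob_interval(a, b):
    """Probability that a uniform [0, 10] variable lands in [a, b]."""
    lo, hi = max(a, 0.0), min(b, 10.0)
    return max(0.0, (hi - lo) / 10.0)

# Shrinking an interval around x = 1 sends its probability toward 0:
for eps in (1.0, 0.1, 0.001):
    print(prob_interval(1 - eps, 1 + eps))

print(prob_interval(1, 1))  # the single point: exactly 0.0
```

The limit of these interval probabilities is precisely the integral over a single point.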

This is sometimes described as counterintuitive: surely, in any measurement, X must be equal to something, so the probability of that something cannot be zero, since it clearly happened. It can, of course, be argued that mathematical probability is an abstract function that does not exactly map to our intuitive understanding of probability, but in this case, I would argue that it does.

What if X is the x-coordinate of a physical object? If classical physics is in question - for example, we pointed a needle at a random point on a 10 cm ruler - then the needle cannot be a point object, and must have a nonzero size. Thus, we can measure the probability of the 1 cm point lying within the space the end of the needle occupies, a probability that is clearly defined and nonzero.

But even if we're talking about a point object, while it may well occupy a definite and exact coordinate in classical physics, we'll never know exactly what that coordinate is. For one, our measuring tools are not that precise. But even if they had infinite precision, statements like "X equals exactly 2.(0)" or "X equals exactly π" contain infinite information, since they specify all the decimal digits of the coordinate into infinity. We would need an infinite number of measurements to confirm them. So while X may objectively equal exactly 2 or π - again, under classical physics - measurers would never know it. At any given point, to the measurers, X would lie in an interval.

Then of course there is quantum physics, where it is literally impossible for any physical object, including point objects, to have a definite coordinate with arbitrary precision. In this case, the purely mathematical notion that any exact value is an impossible event turns out (by coincidence?) to match how the universe actually works.

LW systemic bias: US centrism

10 lucidfox 19 July 2011 07:21PM

Recently, I have noticed a cultural bias for the United States running through LW threads. It is perhaps to be expected of an English-language website, but for one that is about, among other things, overcoming bias, it is important to recognize one's own. 

Aspects of the bias I have observed include:

  • Using Imperial units over the SI system, which is the standard for scientific literature and discussion.
  • Presuming the US by default when it is assumed that no country name needs to be given.
  • Expecting reader familiarity with US-specific cultural concepts.
  • A tendency to focus on the US first and foremost when talking about worldwide problems and scenarios.

I'm not the first to raise such concerns, either.

By comparison, the English Wikipedia strikes me as an example of an international English-language project that's relatively successful at recognizing and fighting systemic bias; it even has a whole set of template messages to mark articles with identified problems.

To quote Wikipedia itself:

The average Wikipedian on the English Wikipedia is (1) a male, (2) technically inclined, (3) formally educated, (4) an English speaker (native or non-native), (5) of European descent, (6) aged 15–49, (7) from a majority-Christian country, (8) from a developed nation, (9) from the Northern Hemisphere, and (10) likely employed as a white-collar worker or enrolled as a student rather than employed as a labourer.

The reason I haven't mentioned other obvious biases, such as gender, age, education, or First World biases, is because those (in my experience) tend to be more subtle here on LW and because I'm myself subject to some of them. However, I might cook something up on them later.

Well, that does it, I suppose

2 lucidfox 17 July 2011 10:51AM

My first post here on LW related to gender identity, based on my own introspection, generated some interesting discussion that I enjoyed reading and commenting on. While there were disagreements on the origin of transsexuality, there was an agreement that it was a condition genuinely in need of treatment.

Fast forward to now, and what do we have? People throwing accusations all over the place, calling transsexuality a "delusion", comparing it with religious belief, or referring to the discredited autogynephilia (sexual fetish) theory.

How could this have happened? Either:

1) the audience of LW changed significantly in the half-year interim;

or 2) the lack of personal input in the second post caused people to more freely voice their true opinions, rather than those they suspected I would take offense at.

I don't know which possibility to lean towards, but if previously I only suspected LW was the wrong community for me (what with the singularity-worship that I don't share), now I'm almost convinced of it.

Transsexuals and otherkin

11 lucidfox 15 July 2011 07:10AM

After reflecting on the "Gender Identity and Rationality" post, there is something that continues to bug me, a shred of doubt burning through my brain.

What is it about gender identity that separates it from fringe subcultures like otherkin, soulbonders, and whatever else? Why is one considered socially acceptable (however grudgingly, and however rocky the history of that recognition has been), and the other isn't? Is such a distinction justified in the first place?

What's so substantially different between "I'm really another gender on the inside" and "I'm really another species on the inside"? Muddying the waters is the fact that I know some transsexuals who also are or used to be otherkin.

I have seen two different points of view on this subject:

1. Well, who are we to claim that otherkin are wrong? Perhaps their condition deserves legitimate recognition and sympathy.

2. The difference is between identifying with something that verifiably exists (and exists within the psychological unity of humankind), and identifying with a species that is either non-sapient (and thus unable to be targeted by human empathy to the same extent that humans are), or flat-out doesn't exist (dragons, fae, and other fantasy creatures).

While I'm myself leaning towards the second point of view, I find the argument rather weak. It implies that in a hypothetical setting with multiple intelligent species, "species identity" may be a socially valid characteristic, and a human citizen of the Federation claiming to be mentally a Klingon would be worth paying attention to. And I find that... counterintuitive.

Thoughts?

Hype Aversion/Backlash as an Immune Response?

7 lucidfox 04 July 2011 05:58AM

"Check out this book/movie/show! It's got everything! Everyone and their mother is talking about it, and it seems just your type!"

Sound familiar? Chances are, more than once such hype made you, if anything, more reluctant to approach the work in question; and if you did approach it, you may have been in for bitter disappointment. The work may not even be bad per se, and might be fairly enjoyable if you had simply heard about it on your own, but in your mind it fails to live up to the massive hype that positions it as the best thing to happen to the universe since the Big Bang.

I know that in such situations, my mental energy is often channeled in the opposite direction: into venting bitter disappointment, into arguing that it's not as great as everyone seems to think it is, into looking for just about anything critical anyone has to say about it, anywhere. Finding refuge in knowing that at least I'm not lonely in my dissent. Except when I apparently am.

Typically, I expect any work of fiction, community, or social movement to have its share of praisers and critics. When a healthy balance of positive and negative opinions is preserved, I'm calm about it, regardless of my personal opinion on the subject. When something is universally critically panned, it sometimes sparks my curiosity. ("Come on, it can't possibly be that bad!" Except when it occasionally is.) But when something is unanimously liked, and criticism is next to nonexistent, and I just plain "don't get it"... then things get ugly.

"What in the blazes did everyone find in it? Why am I not affected by this outbreak of unanimous praise?" I've had this feeling before about Neon Genesis Evangelion (which by now has got its own share of skepticism and criticism), about the Haruhi Suzumiya franchise (which I now actually like, although its obsessive fandom still rubs me the wrong way); the current contenders are Twitter, Steven Moffat's grip over Doctor Who, and My Little Pony.

I suspect that in my case, the backlash is an automatic response that is a part of my "defense mechanism", so to speak, against hostile memes. I can usually detect not-so-subtle attempts at mind manipulation, such as loaded questions, biased presentation and dodging inconvenient subjects; this is why, for example, I don't watch TV news and feel uncomfortable when someone else does, as if I can feel it trying to invade my brain.

I suspect, thus, that such an "allergic" reaction to hype is my attempt to balance the equation. The more universal the praise, and the less criticism there is (and the more quickly critics are shunned), the more my mind treats it as some kind of infection of the collective consciousness, a malignant meme that needs to be repelled. I don't, of course, seriously believe there is some kind of mind control at work, but it feels like it, and so I subconsciously try to distance myself from the phenomenon, trying to maintain integrity even in the face of apparently the entire world going mad. Hoping, perhaps, to slow down the spread, even if it might seem as hopeless as trying to survive in the middle of an ocean in a trough during a thunderstorm.

Perhaps it represents a bias, perhaps it's not a big deal, and I should just learn not to let such things bother me, even when I feel like a lonely dissenter?

Iterated Sleeping Beauty and Copied Minds

2 lucidfox 21 December 2010 07:21AM

Before I move on to a summation post listing the various thought experiments and paradoxes related to mind copying that have been raised, I would like to draw attention to a particular point regarding the notion of "subjective probability".

In my earlier discussion post on the subjective experience of a forked person, I compared the scenario where one copy is awakened in the future to the Sleeping Beauty thought experiment. And really, it describes any such process, because there will inevitably be a time gap, however short, between the time of fork and the copy's subjective awakening: no copy mechanism can be instant.

In the traditional Sleeping Beauty scenario, there are two parties: Beauty and the Experimenter. The Experimenter has access to a sleep-inducing drug that also resets Beauty's memory to the state at t=0. Suppose Beauty is put to sleep at t=0, and then a fair coin is tossed. If the coin comes heads, Beauty is woken up at t=1, permanently. If the coin comes tails, Beauty is woken up at t=1, questioned, memory-wiped, and then woken up again at t=2, this time permanently.

In this experiment, intuitively, Beauty's subjective anticipation of the coin coming tails, without access to any information other than the conditions of the experiment, should be 2/3. I won't be arguing here whether this particular answer is right or wrong: the discussion has been raised many times before, and on Less Wrong as well. I'd like to point out one property of the experiment that differentiates it from other probability-related tasks: erasure of information, which renders the whole experiment a non-experiment.

In Bayesian theory, the (prior) probability of an outcome is the measure of our anticipation of it to the best of our knowledge. Bayesians think of experiments as a way to get new information, and update their probabilities based on the information gained. However, in the Sleeping Beauty experiment, Beauty gains no new information from waking up at any time, in any outcome. She has the exact same mind-state at any point of awakening that she had at t=0, and is for all intents and purposes the exact same person at any such point. As such, we can ask Beauty, "If we perform the experiment, what is your anticipation of waking up in the branch where the coin landed tails?", and she can give the same answer without actually performing the experiment.
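As a side note, the "thirder" answer can be reproduced with a quick Monte Carlo sketch (this merely counts awakenings across many runs; it does not settle the philosophical debate):

```python
import random

def tails_fraction(trials=100_000, seed=0):
    """Fraction of all awakenings that occur in runs where the coin came tails."""
    rng = random.Random(seed)
    total = tails = 0
    for _ in range(trials):
        if rng.random() < 0.5:  # tails: Beauty is woken at t=1 and t=2
            tails += 2
            total += 2
        else:                   # heads: Beauty is woken once, at t=1
            total += 1
    return tails / total

print(tails_fraction())  # close to 2/3
```

Two thirds of awakenings occur in the tails branch, which is exactly the intuition described above.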

So how does it map to the mind-copying problem? In a very straightforward way.

Let's modify the experiment this way: at t=0, Beauty's state is backed up. Let's suppose that she is then allowed to live her normal life, but the time-slices are large enough that she dies within the course of a single round. (Say, she has a normal human lifespan and the time between successive iterations is 200 years.) However, at t=1, a copy of Beauty is created in the state at which the original was at t=0, a coin is tossed, and if and only if it comes tails, another copy is created at t=2.

If Beauty knows the condition of this experiment, no matter what answer she would give in the classic formulation of the problem, I don't expect it to change here. The two formulations are, as far as I can see, equivalent.

However, in both cases, from the Experimenter's point of view, the branching points are independent events, which allows us to construct scenarios that question the straightforward interpretation of "subjective probability". And for this, I refer to the last experiment in my earlier post.

Imagine you have an indestructible machine that restores one copy of you from backup every 200 years. In this scenario, it seems you should anticipate waking up with equal probability at any point between now and the end of time. But this is inconsistent with the formulation of probability for discrete outcomes: any constant per-outcome probability would sum to a diverging series, so as the length of the experiment approaches infinity (ignoring real-world cosmology for the moment), the subjective probability of every individual outcome (finding yourself at t=1, finding yourself at t=2, etc.) must approach 0. The equivalent classic formulation is a setup where the Experimenter is programmed to wake Beauty after every time-slice and unconditionally put her back to sleep.

This is not the only possible "diverging Sleeping Beauty" problem. Suppose that at t=1, Beauty is put back to sleep with probability 1/2 (as in the classic experiment), at t=2 with probability 2/3, then 3/4, and so on. The probability that she is still in the experiment after n stages is then 1/2 * 2/3 * ... * n/(n+1) = 1/(n+1), which tends to 0, so it is almost certain that she will eventually wake up permanently (in the same sense that it is "almost certain" that a fair random number generator will eventually output any given value). Yet the expected time of her permanent awakening is still infinite: the probability of waking permanently at stage n is 1/(n(n+1)), and the series Σ n * 1/(n(n+1)) = Σ 1/(n+1) diverges.

In the case of a converging series -- for example, if a fresh coin toss at each iteration decides whether Beauty is put back to sleep, so that the probability of permanent awakening at stage n is (1/2)^n and the series is 1/2 + 1/4 + 1/8 + ... = 1 -- Beauty can give a subjective expected value: the average time at which she expects to be woken up permanently.
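To check the arithmetic of the converging coin-toss case (a back-of-the-envelope sketch assuming a fair, independent coin at each iteration): the probability of permanent awakening at stage n is (1/2)^n, the probabilities sum to 1, and the expected stage of permanent awakening comes out to 2.

```python
from fractions import Fraction

N = 60  # enough terms that the truncated tail is negligible
# P(permanent awakening at stage n) = (1/2)^n
p = [Fraction(1, 2**n) for n in range(1, N + 1)]

total_prob = sum(p)                                            # approaches 1
expected_t = sum(n * pn for n, pn in zip(range(1, N + 1), p))  # approaches 2

print(float(total_prob), float(expected_t))
```

Exact rational arithmetic via `Fraction` makes it obvious that the discrepancy from 1 and 2 is only the truncated tail, not rounding error.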

In the general case, let E_i be the event "the experiment continues at stage i" (that is, Beauty is not permanently awakened at stage i, or in the alternate formulation, more copies are created beyond that point). Then, if we extrapolate the notion of "subjective probability" that leads us to the answer 2/3 in the classic formulation, the definition is meaningful if and only if the series of objective probabilities Σ_{i=1..∞} P(E_i) converges -- it doesn't have to converge to 1; we just need to renormalize the calculations otherwise. And given that the randomizing events are independent, that convergence simply doesn't have to happen.

Even if we reformulate the experiment in terms of decision theory, it's not clear how that helps us. If the bet is "win 1 utilon if you get your iteration number right", the probability of winning it in a divergent case is 0 at any given iteration. And yet, if all cases are perfectly symmetric information-wise, so that you make the same decision over and over again, you'll eventually get the answer right, with exactly one of you winning the bet, no matter what your "decision function" is - even if it's simply something like "return 42;". Even a stopped clock is right sometimes; in this case, exactly once.

It would be tempting, seeing this, to discard the notion of "subjective anticipation" altogether as ill-defined. But that seems to me like tossing out the Born probabilities just because we go from Copenhagen to MWI. If I'm forked, I expect to continue my experience as either the original or the copy with a probability of 1/2 -- whatever that means. If I'm asked to participate in the classic Sleeping Beauty experiment, and to observe the once-flipped coin at every point I wake up, I will expect to see tails with a probability of 2/3 -- again, whatever that means.

The situations described here have a very specific set of conditions. We're dealing with complete information erasure, which prevents any kind of Bayesian update and in fact makes the situation completely symmetric from the decision agent's perspective. We're also dealing with an anticipation all the way into infinity, which cannot occur in practice due to the finite lifespan of the universe. And yet, I'm not sure what to do with the apparent need to update my anticipations for times arbitrarily far into the future, for an arbitrarily large number of copies, for outcomes with an arbitrarily high degree of causal removal from my current state, which may fail to occur, before the sequence of events that can lead to them is even put into motion.

Copying and Subjective Experience

5 lucidfox 20 December 2010 12:14PM

The subject of copying people and its effect on personal identity and probability anticipation has been raised and, I think, addressed adequately on Less Wrong.

Still, I'd like to bring up some more thought experiments.

Recently I had a dispute on an IRC channel. I argued that if some hypothetical machine made an exact copy of me, then I would anticipate a 50% probability of jumping into the new body. (I admit that it still feels a little counterintuitive to me, even though this is what I would rationally expect.) My opponents disagreed: after all, they said, the mere fact that the copy was created doesn't affect the original.

However, from an outside perspective, Maia1 would see Maia2 being created in front of her eyes, and Maia2 would see the same scene up to the moment of forking, at which point the field of view in front of her eyes would abruptly change to reflect the new location.

Here, it is obvious from both an inside and outside perspective which version has continuity of experience, and thus from a legal standpoint, I think, it would make sense to regard Maia1 as having the same legal identity as the original, and recognize the need to create new documents and records for Maia2 -- even if there is no physical difference.

Suppose, however, that this information was erased. For example, suppose a robot sedated and copied the original me, then dragged Maia1 and Maia2 to randomly chosen rooms, and erased its own memory. At this point, neither version of me, nor anyone else, would be able to distinguish between the two. What would you do here from a legal standpoint? (I suppose if it actually came to this, the two of me would agree to arbitrarily designate one as the original by tossing an ordinary coin...)

And one more point. What is this probability of subjective body-jump actually a probability of? We could set up various Sleeping Beauty-like thought experiments here. Supposing for the sake of argument that I'll live at most a natural human lifespan no matter which year I find myself in, imagine that I make a backup of my current state and ask a machine to restore a copy of me every 200 years. Does this imply that the moment the backup is made -- before I even issue the order, and from an outside perspective, way before any of this copying happens -- I should anticipate subjectively jumping to any given time in the future, with the probability of finding myself as any particular copy, including the original, tending towards zero the longer the copying machine survives?

 

Medieval Ballistics and Experiment

8 lucidfox 20 December 2010 10:13AM

I'm currently reading a popular science encyclopedia, particularly its chapters about the history of physics. One chapter traces the development of the concept of kinetic energy, starting with Aristotle's (grossly incorrect) explanation that a flying arrow is kept in motion by the air behind it, and continuing to medieval impetus theory. Added: the picture below illustrates the trajectory of a flying cannonball as described by Albert of Saxony.

What struck me immediately was how drastically its predictions differed from observation. The earliest impetus theory predicted that a cannonball's trajectory was an angle: first a slanted straight line until the impetus runs out, then a vertical line of freefall. A later development added an intermediate stage, as seen in the picture. At first the impetus was at full force and would carry the cannonball in a straight line; then it would gradually give way to freefall, and the trajectory would curve until the ball was falling in a straight vertical line.

While this model is closer to reality than the original prediction, I still cannot help but think... How could they deviate from observations so strongly?

Yes, yes, hindsight bias.

But if you launch a stream of water out of a slanted tube or sleeve, even if you know nothing about parabolas, you can observe that the curve it follows in the air is symmetrical. Balls such as those used in games would visibly not produce curves like those depicted.
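The symmetry claim is easy to verify for idealized, drag-free motion (a sketch with arbitrary launch numbers; `height_at` is just an illustrative helper):

```python
import math

def height_at(x, v, angle_deg, g=9.81):
    """Height of a drag-free projectile at horizontal distance x from launch."""
    th = math.radians(angle_deg)
    return x * math.tan(th) - g * x**2 / (2 * (v * math.cos(th)) ** 2)

v, angle = 20.0, 45.0
rng = v**2 * math.sin(math.radians(2 * angle)) / 9.81  # total horizontal range

# The trajectory is a parabola, symmetric about its apex: the height a given
# distance past the launch point equals the height the same distance short
# of the landing point.
print(height_at(5.0, v, angle), height_at(rng - 5.0, v, angle))
```

Nothing about this requires knowing the word "parabola"; the same symmetry is directly visible in a stream of water.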

Perhaps the idea of verifying theories with experiments was only beginning to coalesce at the time, but what kind of thought process could lead one to publish theories so grossly out of touch with everyday observations, even ones you can see without performing any deliberate experiment? Did the authors think something along the lines of "Well, reality should behave this way, and if it doesn't, that's its own fault"?

Social Presuppositions

11 lucidfox 02 December 2010 01:25PM

During the discussion in my previous post, when we touched on the subject of human statistical majorities, I had a side thought. Taking the Less Wrong audience as an example, the statistics say that any given participant is strongly likely to be white, male, atheist, and, just going by general human statistics, probably heterosexual.

But in my actual interactions, I've made it a rule not to make any assumptions about the other person. Does this mean, I wondered, that I reset my prior probabilities and consciously choose to discard information? Not relying on implicit assumptions seems the socially right thing to do; but is it rational?

When I discussed it on IRC, this quote by sh struck me as insightful:

I.e. making the guess incorrectly probably causes far more friction than deliberately not making a correct guess you could make.

I came up with the following payoff matrix:

                                       | Bob has trait X (p = 0.95) | Bob doesn't have trait X (p = 0.05)
  Alice acts as if Bob has trait X     | +1                         | -100
  Alice acts without assumptions       | 0                          | 0

In this case, the second option is preferable in expectation: acting on the guess yields 0.95 * 1 + 0.05 * (-100) = -4.05, while acting without assumptions yields 0. In other words, I don't discard the information; rather, the repercussions to our social interaction in case of an incorrect guess outweigh the benefit of guessing correctly. It also matters whether Alice and Bob are Askers or Guessers.

One consequence I can think of is that with a sufficiently high p, or if Bob wouldn't be particularly offended by Alice's incorrect guess, taking the guess would be preferable. Now I wonder if we do that a lot in daily life with issues we don't consider controversial ("hmm, are you from my country/state too?"), and if all the "you're overreacting/too sensitive" complaints come from Alice assigning too small an absolute value to the negative payoff in the guess-gone-wrong cell.
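The expected values behind this are one-liners (a sketch using the payoffs from the matrix above; `expected_guess_payoff` is just an illustrative name):

```python
def expected_guess_payoff(p, gain=1.0, loss=-100.0):
    """Alice's expected payoff if she acts as though Bob has trait X."""
    return p * gain + (1 - p) * loss

print(expected_guess_payoff(0.95))  # about -4.05, worse than the 0 of not guessing

# Break-even: p * gain + (1 - p) * loss = 0  =>  p = -loss / (gain - loss)
print(100.0 / 101.0)  # about 0.9901: guessing only pays above this p
```

With these payoffs, the guess only becomes worthwhile when Alice is more than about 99% sure, or when the social cost of a wrong guess is much smaller than -100.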

Gender Identity and Rationality

35 lucidfox 01 December 2010 04:32PM

Not sure if I would be better off posting this on the main page instead, but since it's almost entirely about my personal experiences, here goes.

Two years ago, I underwent a radical change in my worldview. A series of events caused me to completely re-evaluate my beliefs in everything related to gender, sexuality, tolerance, and diversity -- which in turn caused a cascade that made me rethink my stance on many other topics.

Coincidentally, the same events caused me to also rethink the way I thought of myself -- which, as it turned out, was not in a very good state. This still makes it difficult for me to untangle the various consequences, correlated but potentially not directly bound by a cause-effect relation.

To be more blunt: being biologically male, I confessed to someone online about things that "men weren't supposed to do": my dissatisfaction with my body, my wish to have a female body, persistent fantasies of a sex change, desires to shave my body, grow long hair and wear women's clothes, and so on and so forth. She listened, and then asked, "Maybe you're transsexual?"

Back then, it would never even have occurred to me to think of that -- and my first gut response, which I'm not proud of, was denying association with "those freaks". As I understand now, I was relying on a cached thought, and it limited the scope of my reasoning. She used simple intuitive reasoning to arrive at the hypothesis from what I revealed to her; I didn't know the hypothesis was even there, as I knew nothing about gender identity.

In the events that unfolded, I integrated myself into some LGBT communities and learned about all kinds of people, including those who didn't fit into notions of the gender binary at all. I've learned to view gender as a multidimensional space with two big clusters, rather than as a boolean flag. It felt incredibly heartwarming to be able to mentally call myself by a female name, to go by it on the Internet, to talk to like-minded people who had similar experiences and feelings, and to be referred to by the pronoun "she". At first that bugged me, because I somehow felt I had "no moral right" to it, or had to "earn that privilege"; but I quickly got at ease with it, and soon it just felt ordinary, like the only acceptable thing to do, the only way of presentation that felt right.

(I'm compressing and simplifying here for the sake of readability -- I'm skipping over the brief period after that conversation when I thought of myself as genderless, not yet ready to accept a fully female gender identity, and carried out thought experiments with imaginary conversations between my "male" and "female selves", before deciding that there was no male self to begin with after all.)

Nowadays, gender-wise, I address people the way they wish to be addressed. I also have some pretty strong opinions on the legal concept of gender, which I won't voice here. And I've learned a lot, and have been able to drive my introspection deeper than I ever managed before... But that's not really relevant.

And yet... And yet.

As gleefully as I embraced a female role, feeling I was on the way to fulfilling my dream, I couldn't shake the nagging feeling of being somehow "fake". I kept thinking that I don't always "think like a real woman would", and I've had days of odd apathy when I didn't care about anything, including my gender presentation. Some such episodes happened even before my gender "awakening", and on those days, I felt empty and genderless, a drained shell of a person.

How, in all honesty, can I know if I'm "really a woman on the inside"? What does that even mean? I can speak in terms of desired behavior, in terms of the way I'm seen socially, from the outside. But how can I compare my subjective experience to those of different men and women, without getting into their heads? All I have is empathic inference, which works by building crude, approximate models of other people inside my head, and is so full of ill-defined biases that I suspect I shouldn't rely on it at all, and shouldn't say things like "well, a man's subjective experience is way off for me, but a woman's subjective experience only weakly fits".

And yet... transpeople report "feeling like" their claimed gender. I prefer to work with less ambiguous subjective feelings -- like feeling I have the wrong body -- but I have caught myself thinking at different times, "This day I felt like a woman, and that day I didn't feel like a woman, but more like... nothing at all. And that other day my mind was occupied with completely different matters, like writing a Less Wrong post." It helps sometimes to visualize my brain as a system of connected logical components, with an "introspection center" as a separate component, but that doesn't bring me close to solving the mystery.

I want to be seen as a woman, and nothing else. I take steps to ensure that it happens. If I could start from a clean slate, magically get an unambiguously female body, and live somewhere where nobody would know about my past male life, perhaps that would be the end of it -- there would be no need for me to worry about it anymore. But as things stand, my introspection center keeps generating those nagging thoughts: "What if I'm merely a pretender, a man who merely thinks he's a woman, but isn't?" One friend of mine postulated that "wanting to be a gender is the same as being it"; but is it really that simple?

The sheer number of converging testimonies between myself and transpeople I've met and talked to would seem to rule that out. "If I'm fake, then they're fake too, and surely that sounds extremely unlikely." But while discovering similarities makes me generically happy, every deviation from the mean -- for example, I consciously discovered my gender identity at 21, a relatively late age -- stings painfully and brings up the uncertainty again. Could this be a case of failing to properly assign Bayesian weights, of giving evidence less significance than counterevidence? But every time I discovered a piece of counterevidence, my mind interpreted it as a breach of my mental defenses and tried to route around it, in other words, rationalize it away.

Maybe I could just tell myself, "Shut up and live the way you want to."

And yet...

I caught myself thinking that I really, deeply didn't want to go back, to the point that I didn't want to accept the conclusion "I'm really a man and an impostor", even that time when it looked like the evidence weighed that way. (It's no longer the case now that I've learned more facts, but the point still stands.) It was an unthinkable thought, and still is. Even now, I fail to apply the Litany of Tarski. "If I'm really a man, then I desire to bel--" Wait, doesn't compute. It feels like stating an incoherent statement, like "If sexism is morally and scientifically justified, then..." It feels like accepting it would cause my entire system of values to collapse, and I can't bring myself to think it -- but isn't that the danger of "already knowing the answer", of rationalizing?

It also bugs me, I guess, that despite relying on rational reasoning in so many aspects of my daily life, with this one case, about an aspect of myself, I'm relying on some subjective, vague "gut feeling". Granted, I try to approach it in a rational way: someone used my revelations to locate a hypothesis, I found it likely based on the evidence and accepted it, then started updating... or did I? Would I really be able to change my belief even in principle? And even then, the root cause, the very root cause, comes from feelings of uneasiness with my assigned gender role that I cannot rationally explain -- they're just there, in the same way that my consciousness is "just there".

So...

When I heard about p-zombies, I immediately drew parallels. I asked myself if "fake transpeople" were even a coherent concept. Would it be possible to imagine two people who behave identically (and true to themselves, not acting), except one has "real" subjective feelings of gender and the other doesn't? After applying an appropriately tweaked anti-zombie argument, it seems to me that the answer is no, but it's also possible that the question is too ill-defined for any answer to make sense.

As things stand now, so-called gender identity disorder isn't really something that can be objectively diagnosed, because it's based on self-reporting; you cannot look into someone's head and say "you're definitely transsexual" without their conscious understanding of themselves and their consent. So it seems to me outside the domain of psychiatry in the first place. I've heard some transpeople voice hope that there could be a device that could scan the part of the brain responsible for gender identity and say "yes, this one is definitely trans" or "no, this one definitely isn't". But the prospect of such a device horrifies me even in principle. What if the device conflicts with someone's self-reporting? (I suspect I'm anxious about the possibility of it filtering me, specifically.) What should we consider more reliable -- the machine or the self-reporting? On one hand, we know how filled human brains are with cognitive biases; on the other hand, it seems to me a truism that "you are the final authority in your own self-identification."

Maybe it's a question of definitions, like the question about a tree making a sound, and the final answer depends on how exactly we define "gender identity". Or maybe -- this thought occurred to me right now -- my decision agent has a gender identity while my introspection center (which operates entirely on abstract knowledge rather than social conventions) doesn't, and that's the cause of the confusion that I get from looking at things in both a gendered and genderless way, in the same way as if I would be able to switch at will between a timed view from inside the timeline and a timeless view of the entire 4D spacetime at once. In any case, so far, for those two years since the realization I've stuck with the identity and role that I at least believe is the only one I won't regret assuming.
