
I am switching to biomedical engineering and am looking for feedback on my strategy and assumptions

4 Arkanj3l 16 November 2013 03:42AM

I wrote this post up and circulated it among my rationalist friends. I've copied it verbatim. I figure the more rationally inclined people who can critique my plan, the better.

--

TL;DR:

* I'm going to commit to biomedical engineering for a very specific set of reasons related to career flexibility and intrinsic interest.
* I still want to have computer science and design arts skills, but biomedical engineering seems like a better university investment.
* I would like to have my cake and eat it too by doing biomedical engineering, while practicing computer science and design on the side.
* There are potential tradeoffs, weaknesses and assumptions in this decision that are relevant and possibly critical. These include time management, ease of learning, development of problem-solving abilities, and working conditions.

I am posting this here because everyone is pretty clever and likes decisions. I am looking for feedback on my reasoning and the facts in my assumptions so that I can do what's best. This was me mostly thinking out loud, and given the timeframe I'm on I couldn't learn and apply any real formal method other than just thinking it through. So it's long, but I hope that everyone can benefit by me putting this here.

--
So currently I'm weighing going into biomedical engineering as my major over a major in computer science, or the [human-computer interaction/media studies/gaming/industrial design grab bag] major, at Simon Fraser University. Other than the fact that engineering biology is so damn cool, the relevant decision factors include reasons like:

  1. medical science is booming with opportunities at all levels in the system, meaning that there might be a lot of financial opportunity in more exploratory economies like Silicon Valley's;
  2. the interdisciplinary nature of biomedical engineering means that I have skills with greater transferability as well as insight into a wide range of technologies and processes instead of a narrow few;
  3. aside from molecular biology, biomedical engineering is the field that appears closest to cognitive enhancement and making cyborgs for a living;
  4. compared to most kinds of engineering, it is easier to self-teach computer science and other forms of digital value-making (web design or graphical modelling) due to the availability of educational resources; the approaching-free cost of computing power; established communities based around development; and clear measures of feedback. By contrast, learning the biological principles behind biomedical engineering may require lab access, which is increasingly available to hobbyists but still scarce; basic science textbooks vary strongly in quality; and there isn't an equivalent of GitHub for biology, making non-school collaborative learning difficult.

The implication here is that even though I am still interested in computer science, and even though biomedical engineering is less upwind than programming and math, it makes more sense to spend a lot of money on a more specialized education to get domain knowledge, while doing computer science on the side, than to spend that money on an option I could pursue almost for free through self-study. This conjecture, and the assumptions behind it, is critical to my strategy.

So the best option combination that I figure that I should take is this:

  1. To get the value from Biomedical Engineering, I will do the biomedical engineering curriculum formally at SFU for the rest of my time there as my main focus.
  2. To get the value from computer science, I will make like a hacker and educate myself with available textbooks and look for working gigs in my spare time.
  3. To get the value from the media and design major, I will talk to the faculty directly about what I can do to take their courses on human-computer interaction and industrial design, and otherwise be mentored. As a result I could seize all the really interesting knowledge while ignoring the crap.

Tradeoffs exist, of course. These are a few that I can think of:

  • I don't expect to be making as much as an entry-level biomedical engineer as I would as a programmer in Silicon Valley, if that were ever possible; nor do I believe that my income would grow at the same rate. As a counterpoint, my range of potential competencies will be greater than the typical programmer's, due to exposure to physical, chemical, and biological systems, their experimentation, and product development. I feel that this greater flexibility could help with companies or startups oriented towards health or technological forecasting, but this is just a guess. In any case, having that broader knowledge makes me feel more comfortable, but one could argue that programming being so popular and upwind makes it the more stable choice anyway. Don't know.
  • It's difficult to make money as an undergraduate with any of the skills I would pick up in biomedical engineering for at least a few years. This is important to me because I want to have more-than-minimum-wage jobs as a way of completing my education on a debit. While web and graphic designers can start forming their own employment almost immediately, and while programmers can walk into a business or a bank and hustle, doing so with physics, chemistry or biology seems a bit more difficult. This is somewhat countered by co-op and work placement, and the fact that it doesn't seem to take too much programming or web design theory and practice before being able to start selling your skills (i.e. on the order of months).
  • Biomedical Engineering has few aesthetic and artistic aspects, the two of which I value. This is what attracted me to the media and design program in the first place. Instead I get to work with technologies which I know will have measurable and practical use, improving the quality of life for the sick and dying. Expressing myself with art and more free-wheeling design is not super urgent, so I'm willing to make this trade. I still hope to be able to orient myself for developing beautiful and useful data visualizations in practical applications, like this guy, and to experiment with maker hacking.

There is still the issue of assuring more-than-dilettante expertise in computer science and design stuff (see Expert Beginner syndrome: http://www.daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner). I am semi-confident in my ability to network myself into mentorships with members of faculty [at SFU] that are not my own, and if I'm not good at it now I still believe that it's possible. In addition, my dad has recently become a software consultant and is willing to apprentice me, giving a direct education about software engineering (although not necessarily a good one, at least it's somewhat real).

There are potential weaknesses in my analysis and strategy.

  • The time investment in the biomedical engineering faculty at SFU is very high. The requirements are similar to those of being a grad student, complete with a 3.00 minimum GPA and a research project. The faculty does everything in its power to allay the burden while still maintaining the standard. However, this crowding out of time reduces the amount of potential time spent learning computer science, which makes the probability of efficient self-teaching go down. (That GPA standard might lead to scholarship access, which is good, but that is more of an externality in this case.)
  • While we're on the conscientiousness load: conscientiousness is considered to be an invariant personality trait, but I'm not buying it. The typical person may experience on average no change in their conscientiousness, but typical people don't commit to interventions that affect the workload they can take on, whether by strengthening willpower, increasing energy, changing thought patterns (see "The Motivation Hacker") or improving organization through external aids. Still, my baseline level of conscientiousness has historically been quite low. This raises the up-front cost of learning novel material I'm not familiar with, unlike computing, with which I have a stronger familiarity due to lifelong exposure; that familiarity lets me cruise by in computing courses, though not necessarily ace them. Nevertheless, that's a lower downside risk.
  • Although medical problems are interesting and I have a lot of intrinsic interest in the domain knowledge, some components of the research interest me while others I don't currently enjoy as much, as evidenced by my exposure so far. I can see myself getting into the data processing and visualization, drafting ergonomic wearable tech, and circuit design, especially wrt EEGs. Brute-force labwork would be less engaging and takes more out of me, whereas systems biology principles are tough but engaging. So there's the possibility that I would only enjoy a limited scope of biomedical engineering work, making the major not worth it or unpleasant.
  • Due to the less steep learning curve and more coherent structure of the computer science field, it seems easier to approach the "career satisfaction" or "work passion" threshold with CS than with BME. Feeling satisfied with your career depends on many factors, but Cal Newport argues that the largest factor is essentially mastery, which leads to involvement. Mastery seems more difficult to gauge with the noisy and prolonged feedback of the engineering sciences, so the motivations with the greatest relative importance might be the satisfaction of turning out product, satisfying factual curiosity or curiosity about established/canon models (as opposed to curiosity which is more local to your own circumstances, or you figuring things out), and in the case of biomed, saving lives by design. With mathematics and programming the problem space is such that you can do math and programming for their own sakes.
  • Around the world, biomedical engineering is mostly offered as a graduate program. The most often reported experience is that someone getting a PhD in biomedical engineering does so on top of an undergraduate degree in mechanical engineering, electrical engineering or computer science. The story goes that these problem-solving skills are applied to the biology after being developed - once again a case of some fields being more upwind than others. By contrast, an undergraduate in bioengineering would be taking courses where they are not developing these skills, as our current understanding of biology is not strongly predictive. I talked to one of the faculty heads, the person who designed the program, and he is very much aware of problems such as these in engineers as they are currently educated. This includes overdoing specialization and under-emphasizing the entire product development process, or a principle of "first, do no harm". He has been working on the curriculum for thirty years as opposed to the seven years of cases like MIT - I consider this moderate evidence that I will not be missing out on the necessary mental toolkit relative to other engineers.
  • In the case where biomedical engineering is less flexible than I believed, I would essentially have a "jack of all trades" education meaning engineering firms in general would pass over me in favor of a more specialized candidate. This is partially hedged against by learning the computer science as an "out", but in the end it points to the possibility that the way I'm perceiving this major's value is incorrect.

So for this "have cake and eat it to" plan to work there are a larger string of case exceptions in the biomedical option than the computing options, and definitely the media and design option. The reward would be that the larger amount of domain specific knowledge in a field that has held my curiosity for several years now, while hitting on. I would also be playing to one of SFU's comparative advantages: the quality of the biomedical faculty here is high relative to other institutions if the exceptions hold, and potentially the relative quality of the computer science and design faculties as well. (This could be an argument for switching institutions if those two skillsets are a "better fit". However, my intuition is that the cost for such is very high and probably wouldn't be worth it.)

Possible points of investigation:

  • What hooked me most strongly to biomedical engineering was the potential of cognitive enhancement research and molecular design (like what they have going on at the bio-nano group at Autodesk: http://www.autodeskresearch.com/groups/nano). If these were the careers I was optimizing towards as an end, it might make more sense to actually model what skills and people will be needed to develop these technologies and take advantage of them. After writing this I feel less strongly about these exact fields or careers. Industry research still seems like a good exercise.
  • I will have to be honest that after my experience doing lab work for chemistry at school, I was frustrated by how exhausted I was at the end of each session, physically and mentally. This doesn't necessarily reflect how all lab work will be, especially if it's more intimately tied to something else I want to achieve. And granted, the labs are three hours of standing. It does make me question how I would fare in this work environment, however, and that is worth collecting more information on.
  • To get actual evidence of flexibility in skillset it would be worth polling actual alumni from the program, to see if any of the convictions about the program are true.

--

Thoughts, anyone?

[LINK] 23andme is 99$ now

5 Jabberslythe 12 December 2012 02:31AM

It's been reduced to $99 and it seems like a permanent reduction. I was thinking of buying it at $299 because it had not been on sale for a while, so I'm very pleased this happened.

Their press release on it:

http://blog.23andme.com/news/one-million-strong-a-note-from-23andmes-anne-wojcicki/

 

Rationality versus Short Term Selves

8 diegocaleiro 24 October 2012 05:19PM

Many of us are familiar with the marshmallow test. If you are not, here.

It is predictive of success, income, level of education, and several other correlated measures.

I'm here to argue for the marshmallow eaters, as a devil's advocate. Contra Ainslie, for instance. I do it out of genuine curiosity, real suspicion, and maybe so that smart people get me back to my original position, pro-long term.

There is also the e-marshmallow test (link is not very relevant), in which children have to face the tough choice between surfing an open, connected computer with games, internet, etc. and waiting patiently for the experimenter to get back. Upon the experimenter's arrival, they get a pile of marshmallows. I presume it also correlates with interesting things, though I haven't found much on it.

I have noticed that rationalists, LessWrongers, Effective Altruists, Singularitarians, Immortalists, X-risk-worried folk, and transhumanists are all in favor of taking the long view. Nick Bostrom starts his TED talk by saying: "I've been asked to take the long view."

I haven't read most of Less Wrong, but did read the sequences, the 50 top scoring posts and random posts. The overwhelming majority view is that the long view is the most rational view. The long term perspective is the rational way for agents to act.

Lukeprog, for instance, commented:

"[B]ut imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness."

To which I responded:

I fear that in this phrase lies one of the big issues I have with the rationalist people I've met thus far. Why would there be "one" agent, with "its" desires, that would be fulfilled? Agents are composed of different time-spans. Some time-spans do not desire to diet. Others do (all above some amount of time). Who is to say that the "agent" is the set that would be benefited by those acts, and not the set that would be harmed by them?
My view is that picoeconomics is just half the story.
In this video, I talk about picoeconomics from 7:00 to 13:20. I'd suggest taking a look at what I say at 13:20-18:00 and 20:35-23:55: a pyramidal structure of selves, or agents.

So you don't have to see the video, let us design a structure of selfhood.

First there is intertemporal conflict, conflict between desires that can be fulfilled at different moments in time. Those reliably fall under a hyperbolic characterization, and the theory that describes this is called picoeconomics, mostly developed by George Ainslie in his Breakdown of Will and elsewhere.
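
(A minimal illustrative sketch of the hyperbolic claim, in Python; the discount constant and reward values are made up, not Ainslie's numbers. It shows the preference reversal picoeconomics predicts: from far away you prefer to wait for the larger, later reward, but close to the smaller reward's delivery the ranking flips.)

```python
# Hyperbolic discounting: value = amount / (1 + k * delay).
# k, the reward amounts, and the delays are illustrative, not empirical estimates.
def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

small, large = 1.0, 3.0          # one marshmallow soon vs. a pile later
for t in [10, 5, 2, 1, 0.1]:     # time (arbitrary units) until the small reward arrives
    v_small = hyperbolic_value(small, t)
    v_large = hyperbolic_value(large, t + 5)   # the large reward comes 5 units later still
    choice = "wait" if v_large > v_small else "eat now"
    print(f"t={t:>4}: small={v_small:.2f}, large={v_large:.2f} -> {choice}")
```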

But there is also time-length, or time-span, conflict. The conflict that arises from the fact that you are, at the same time, the entity that will last 200 milliseconds, the entity that will last one second, and the entity that will last a year, or maybe a thousand years.

What do we (humanity) know about personal identity at this point in history? If mainstream anglophone philosophical thought is to be trusted, we have to look to Derek Parfit's Reasons and Persons, and subsequent related work, to find out.

I'll sum it up very briefly: As far as we are concerned, there are facts about continuity of different mental classes. There is continuity of memory, continuity of conscious experience, continuity of psychological traits and tendencies, continuity of character, and continuity of inferential structure (the structure that we use to infer things from beliefs we acquire or access).   

For each of these traits, you can take an individual at two points in time and measure how related I-at-t1 and I-at-t2 are with respect to that psychological characteristic. This is how much I at t2 is like himself at t1.

Assign weights to the traits according to how much you care about each (or how important each is in the problem at hand) and you get a composed individual, for which you can do the same exercise using all of them at once, getting a number between 0 and 1, or a percentage. I'll call this number Self-Relatedness, following in the footsteps of David Lewis.
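
(For concreteness, here is a minimal sketch of that aggregation; the trait scores and weights are invented for illustration, not a proposed canonical weighting.)

```python
# Self-Relatedness as a weighted average of trait-relatedness scores.
# All numbers below are made up to show the shape of the computation.
trait_relatedness = {   # how similar I-at-t1 and I-at-t2 are on each continuity (0..1)
    "memory": 0.9,
    "conscious_experience": 0.7,
    "psychological_traits": 0.8,
    "character": 0.85,
    "inferential_structure": 0.95,
}
weights = {             # how much you care about each trait (chosen to sum to 1 here)
    "memory": 0.3,
    "conscious_experience": 0.2,
    "psychological_traits": 0.2,
    "character": 0.15,
    "inferential_structure": 0.15,
}
self_relatedness = sum(weights[t] * trait_relatedness[t] for t in weights)
print(f"Self-Relatedness between I-at-t1 and I-at-t2: {self_relatedness:.2f}")  # a number in [0, 1]
```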

This is our current state of knowledge on Personal Identity: There is Trait-Relatedness, and there is Self-Relatedness. After you know all about those two, there is no extra fact about personal identity. Personal Identity is a confused concept, and when we decompose it into less confused, but more useful, sub-sets, there is nothing left to be the meta-thing "Personal Identity".

Back to the time-length issue: consider how much more me the shorter-term selves are (that is, how much more Self-Relatedness there is between any two moments within them).

Sure if you go all the way down to 10 milliseconds, this stops being true, because there are not even traits to be found. Yet, it seems straightforward that I'm more like me 10 seconds ago than like me 4 months ago, not always, but in the vast majority of cases.

So when we speak of maximizing my utility function, if we overlook what "me" is made of, we might end up stretching ourselves to as long a term as we possibly can, and letting go of the most instantaneous parts, which de facto are more ourselves than the long-term ones.

One person I met from the LessWrong Singinst cluster claimed: "I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions"

Is this an amazing feat of self-control, a proof that we can hope to live according to ideal utility functions after all? Or is it a defunct conception of what a Self is?

I'm not here to suggest a canonical curve of time-lengths of which the Self is composed. Different people are different in this regard. Some time-lengths are stretchable, some can be shortened. Different people will also value the time-lengths differently. 

It would be unreasonable for me to expect that people would, from now on, put a disclaimer on their writings: "I'm assuming 'rational' to mean 'rational for time-lengths above the X threshold' in this piece". It does, however, seem reasonable to keep an internal reminder, when we reason about life choices, decisions, and writings, that there are not only the selves praised by the Rationalist cluster, the long-term ones, but also the short-term ones.

A decision to eat the marshmallow can, after all, be described as a rational decision, it all depends on how you frame the agent, the child.

So when a superintelligence arises that, despite being Friendly and having the correct goals, does the AGI equivalent of scrolling 9gag, eating Pringles and drinking booze all day long, tell the programmers that the concept of Self, Personal Identity, Agent, or Me-ness was not sufficiently well described, and vit cares too much for vits short-term selves. If they tell you: "Too late, vit is a Singleton already" you just say "Don't worry, just make sure the change is ve-e-e-ery slow..."

On What Selves Are - CEV sequence

0 diegocaleiro 14 February 2012 07:21PM

The CEV Sequence Summary: The CEV sequence consists of three posts tackling important aspects of Coherent Extrapolated Volition (CEV). It covers conceptual, practical and computational problems of CEV's current form. On What Selves Are draws on analytic philosophy methods in order to clarify the concept of Self, which is necessary in order to understand whose volition is going to be extrapolated by a machine that implements the CEV procedure. Troubles with CEV part1 and Troubles with CEV part2 on the other hand describe several issues that will be faced by the CEV project if it is actually going to be implemented. Those issues are not of conceptual nature. Many of the objections shown come from scattered discussions found on the web. Finally, six alternatives to CEV are considered.

 

On What Selves Are Summary: We start by concurring on a Hofstadterian metaphysical view of Selves. We suggest two ways in which to divide the concept of Self, admitting Selves to be mongrel concepts and cluster concepts. We then proceed to the identification of Selves, in particular a proposed new method for a machine to identify Self-like entities. In the spirit of Dennettian philosophy, we then ask what we demand of Selves, to better grasp what they are. In conclusion, we present some views of Selves that are worth wanting, and claim that only by considering Selves in their full complexity can we truly analyze them.

Note: A draft of the first half of On What Selves Are was published in discussion here; those who read it may want to skip straight to the section "Organisms, Superorganisms and Selves".

 

On What Selves Are

 

Background: Symbols Coalesce to Form Selves

 

Some of what is taken for granted in this text is vividly subsumed by pages 204 and 289-290 of Hofstadter's "I Am a Strange Loop" (2007). For those who are still in the struggle over monism, dualism, qualia, Mary the neuroscientist, epiphenomena and ineffable qualities, it is worth reading through his passage to understand the background metaphysical view of the universe from which this text is derived. Those, on the other hand, who are good-willed reductionists of the non-greedy, no-skyhook, no 'design only from Above' kind may skip past this section:

[What makes an “I” come seemingly out of nowhere] is, ironically, an inability - namely our [...] inability to see, feel, or sense in any way the constant frenetic, churning and roiling of micro-stuff, all the unfelt bubbling and boiling that underlies our thinking. This, our innate blindness to the world of the tiny, forces us to hallucinate a profound schism between the goal-lacking material world of balls and sticks and sounds and lights, on the one hand, and a goal-pervaded abstract world of hopes and beliefs and joys and fears, on the other, in which radically different sorts of causality seem to reign. [...]

[Your] “I” was not an a priori well-defined thing that was predestined to jump, full-fledged and sharp, in to some just-created empty physical vessel at some particular instant. Nor did your “I” suddenly spring into existence, wholly unanticipated but in full bloom. Rather, your “I” was the slowly emerging outcome of a million unpredictable events that befell a particular body and the brain housed in it. Your “I” is the self-reinforcing structure that gradually came to exist not only in that brain, but thanks to that brain. It couldn't have come to exist in this brain, because this brain went through different experiences that led to a different human being.”

 

We will take for granted that this is the metaphysically correct approach to thinking about mental entities. What will be discussed lies more in the domain of conceptual usage, word meaning, psychological conceptions, symbolic extension, and explicit linguistic definition, and less in trying to find underlying substrates or metaphysical properties of Selves.


 

Selves and Persons Are Similar

On the eighth move of your weekly chess game you do what feels the same as always: reflect for a few seconds on the many layers of structure underlying the current game-state, especially regarding changes from your opponent's last move. It seems reasonable to take his pawn with your bishop. After moving you look at him and see the sequence of expressions: doubt (Why did he do that?), distrust (He must be seeing something I'm not), inquiry (Let me double check this), schadenfreude (No, he actually failed) and finally joy (Piece of cake, I'll win). He takes your bishop with a knight that, from your perspective, came out of nowhere. Still stunned, you resign. It is the second time in a row you have lost the game due to a simple mistake. The excuse bursts naturally out of your mouth: "I'm not myself today"

 

The functional role (with plausible evolutionary reasons) of this use of the concept of Self is easy to unscramble:

1) Do not hold your model of me as responsible for these mistakes

2) Either (a) I sense something strange about the inner machinery of my mind, the algorithm feels different from the inside. Or (b) at least my now visible mistakes are reliable evidence of a difference which I detected in hindsight.

3) If there is a person watching this game, notice how my signaling and my friend’s not contesting it is reliable evidence I normally play chess better than this

A few minutes later, you see your friend yelling hysterically at someone on the phone, and you explain to the girl who was watching: "He is not that kind of person."

Here we have a situation where the analogues of 1 and 3 work, but there is no way for you to tell how the algorithm feels from the inside. You still know in hindsight that your friend doesn't usually yell like that. Though 1, 2(b), and 3 still hold, 2(a) is not the case anymore.

I suggest the property of 2(a) that blocks interchangeability of the concepts of Self and Person is “having first person epistemic information about X”. Selves have that, people don’t. We use the term ‘person’ when we want to talk only about the epistemically intersubjective properties of someone. Self is reserved for a person’s perspective of herself, including, for instance, indexical facts.

Other than that, Self and Person seem to be interchangeable concepts. This generalization is useful because that means most of the problem of personhood and selfhood can be collapsed into one thing.

Unfortunately, the Self/Person intersection is a concept that is itself a mongrel concept, so it has again to be split apart.

 

Mongrel and Cluster Concepts

When a concept seems to defy easy explanation, there are two potential explanatory approaches. The first would be to assume that the disparate uses of the term 'Self' in ordinary language and science can be captured by a unique, all-encompassing notion of Self. The second is to assume that different uses of 'Self' reveal a plurality of notions of selfhood, each in need of a separate account. I will endorse this second assumption: Self is a mongrel concept in need of disambiguation. (To strengthen the analogical power of thinking about mongrels, it may help to know that Information, Consciousness and Health are thought to be mongrel concepts as well.)

Without using specific tags for the time being, let us assume that there will be 4 kinds of Self: 1, 2, 3, and 4. To say that Self is a concept that sometimes maps onto 1, sometimes onto 3 and so on is not to exhaustively frame the concept's usage. That is because 1 and 2 themselves may be cluster concepts.

The cluster concept shape is one of the most common shapes of concepts in our mental vocabulary. Concepts are associational structures. Most of the time, instead of drawing a clear line around a set in the world inside of which all X fits and outside of which none does, concepts present a cluster-like structure, with nearly all members near the core belonging and nearly none of those far from the core. Not all of their typical features are logically necessary. The recognition of a feature produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.
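
(A toy sketch of that threshold picture; the features, weights, and threshold are invented for illustration. Note that no single feature is logically necessary: the example still crosses the threshold with one typical feature entirely absent.)

```python
# Toy model of a cluster concept: weighted feature activations summed against a threshold.
def concept_active(feature_degrees, weights, threshold=0.5):
    """feature_degrees: how strongly each typical feature is present (0..1)."""
    activation = sum(weights[f] * feature_degrees.get(f, 0.0) for f in weights)
    return activation >= threshold

# Illustrative features for the concept "game"; "winner" is absent yet the concept still fires.
game_features = {"rules": 1.0, "competition": 0.8, "fun": 0.6, "winner": 0.0}
game_weights  = {"rules": 0.3, "competition": 0.3, "fun": 0.2, "winner": 0.2}
print(concept_active(game_features, game_weights))  # True: enough typical features are present
```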

Selves are mongrel concepts composed of different conceptual intuitions, each of which is itself a cluster concept; thus Selves are among the most elusive, abstract, high-level entities entertained by minds. While this may be aesthetically pleasing, presenting us as considerably complex entities, it is also a great ethical burden, for it leaves the domain of ethics, highly dependent on the concepts of selfhood and personhood, with a scattered, slippery ground-level notion from which to create the building blocks of ethical theories.

 

Several analogies have been used to convey the idea of a cluster concept, evoking images of star clusters, neural networks lighting up, and sets of properties with a majority vote. A particularly well-known analogy used by Wittgenstein is the game analogy, in which language games prescribe normative meanings that constrain a word's use without determining a clear-cut case. Wittgenstein held that there was no clear set of necessary conditions determining what a game is. Bernard Suits came up with a refutation of that claim, stating that there is such a definition (modified from "What Is a Game?", Philosophy of Science, Vol. 34, No. 2 [Jun., 1967], pp. 148-156):


"To play a game is to engage in activity designed to bring about a specific state of affairs, using only means permitted by specific rules, where the means permitted by the rules are more limited in scope than they would be in the absence of such rules, and where the sole reason for accepting the rules is to make possible such activity."

Can we hope for a similar soon to be found understanding of Self? Let us invoke:

The Hidden Variable Hypothesis: There is a core essence which determines the class of Selves from non-Selves, it is just not yet within our current state-of-knowledge reach.

While desirable, there are various reasons to be skeptical of The Hidden Variable Hypothesis: (1) Any plausible candidate core would have to be able to disentangle Selves from Organisms in general, Superorganisms (e.g. insect societies) and institutions. (2) We clearly entertain different models of what Selves are for different purposes, as shown below in the section Varieties of Self-Systems Worth Having. (3) Design considerations: being evolved structures that encompass several resources of a recently evolved mind, and having come into being through a complex dual-inheritance evolution of several hundred thousand replicators of two kinds (genes and memes), Selves are among the most complex structures known and thus unlikely to possess a core essence, independent of how intractable it would be to detect and describe such an essence.

From now on then, I will be assuming as common ground that Selves are mongrel concepts, comprised of some yet undiscussed number of cluster concepts.

 

Organisms, Superorganisms, and Selves

To refine our notions of Selves we ought to be able to distinguish Selves from Organisms, that is, biological coalitions of cells with adaptation-execution functions, and from Superorganisms, biological coalitions of individuals with a group-level behavior that fits the adaptation-executer characterization.

Organisms, Superorganisms and Selves are composed of smaller parts that instantiate simple algorithmic behavior which, in large numbers, brings about complex behavior. One fundamental difference, though, is that Selves are grammatical. While ants use variegated hydrocarbons to signal things to other ants of the same Superorganism, and cells communicate through potassium and sodium exchanges, we use phonemes composing words composing sentences, and we have thoughts which compose our deliberations. Selves are thus different in that we exhibit grammaticality and semantic abstraction capacities unseen at the organismic and superorganismic levels of organization.

 

Persons, the Evidence for Other Selves

How could we teach a machine to identify people? This is the underlying question that has led me to write this text, and it is a question of utmost importance if we are to believe the current cutting-edge guesses about when artificial intelligence is going to surpass human intelligence. We have to make sure that what passes the test is not an ant family, nor is it a panda. Luckily, such a test has already been established by Alan Turing: the famous Turing test. While the Turing test was originally conceived to establish when a machine has achieved human intelligence, there is no reason to deny it a secondary purpose once a machine has already achieved human intelligence. Once such a machine exists, it could use its own human-like intelligence to test other entities and classify them as human-like or not human-like. This would give us a non-personhood indicator, as demanded by Yudkowsky.

This may appear to be a deus ex machina, in that I am assuming that the Turing test performed by this machine will be able to grasp the essence of humanity and capture it. Not so. What we should expect of Selves and people is not an essence, as prescribed by The Hidden Variable Hypothesis. We should expect a mongrel built of clusters of identifiable data, with a shape not well delineated at the borders, and we should expect more than a single simple structure. Exactly the kind of thing that is able to pass a Turing test, which, itself, is not established with absolute precision, but relies on our linguistic, empathic, commonsensical and conversational skills to be performed.

 

Selves as Utility-Increasing Unnatural Clusters

Thus far we have considered Selves as non-essence-bearing sets of clusters of linguistic, grammatical entities, but this is missing one important aspect of selfhood: intentionality. Language is mostly intentional, that is, about things that are not itself, and brains are mostly intentional, that is, integrated into the world in such a way that a convoluted mapping holds between their internal content and the world's external facts.

The particularity that makes Selves different from Superorganisms and Organisms at this level is that Selves are utility-increasing: they have goals, desires, and ideals, and strive to achieve them. Selves act as functions, rearranging the physical world of which they are a part from low-utility local configurations to high-utility local configurations. These goals, desires and ideals change from time to time without a change of Self. This is a naturally occurring feature of many cluster concepts. To be a cluster concept includes being the kind of concept that remains the same despite change, possibly dramatic change, as long as this change is "softened" by happening one bit at a time. A Self's goals may shift strongly over ten years, but at any particular time, the goals, desires, grammaticality and intentionality are the defining features of that Self, of that person.


What do We Demand of Selves

As per our chess example above, we demand stability from Selves. We also demand honor, respectability, resilience, accountability. When I say you owe me that money, it implies that you are the same person as the one to whom I lent that money. When I invite you to a duel, I expect to kill the same you who is listening to the invitation, even if a few days later. Part of our models of people evolved from the need for accountability. An evolutionary guess: we incorporate a notion of sameness over time for a person because this holds the person accountable. Reciprocal altruism, a form of altruism found in many complex social species of animals, relies on the assumption that one will pay back, and paying back is only possible if the original giver is still there to receive his payment.

Has our notion of Self followed our demands for accountability, or did it happen the other way around? This is a chicken-and-egg sort of question. Just like eggs obviously came first because dinosaurs laid eggs, accountability came first because many other animals exhibit reciprocal altruism. Yet, just as we can reshape the chicken-and-egg question in such a way that both seem to be determining each other, we can also reshape our accountability question in such a way: has our model of selfhood reinforced our tendencies to demand accountability of others, or has our need for accountability created a demand for stronger, more stable Selves? Probably both have happened; they are self-reinforcing in both directions, and in psychological jargon, they perform transactional reinforcement.

Besides sheer accountability, our notions of honor and respect also rely on sameness over time; they are just a bit more convoluted and sophisticated, but that topic is tangential to our interests here.

 

Varieties of Self-Systems Worth Having

Not all animals have a notion of Self (From Varieties of Self Systems Worth Having):

“According to Povinelli and colleagues, one possibility is that a sense of the embodiment of Self—as opposed to mere proprioception—a sense of ownership of one's own body, may have evolved in some primates as a consequence of arboreal locomotion (Barth et al., 2004). Orangutans need subtle appreciation of their own body position, posture, and weight to brachiate and support themselves on flimsy branches. It is not as though they can navigate by trial and error, since a fall will likely prove fatal. The behavior and the required capacity are less developed in chimpanzees and even less in gorillas. This would suggest a complicated history for this kind of Self-representation, having been lost by the primate branch that led to chimpanzees, and developed in the hominine lineage.

“We speak of ‘‘Self-systems worth having’’ to reflect four characteristics of the recent literature on the Self. First, most models imply that the Self is supported by a federation of specialized processes rather than a single integrated cognitive function. Second, most researchers think that the phenomenology of selfhood results from the aggregate of the functions performed by these different information-processing devices. Third, most of the information-processing is construed as sub-personal, hence inaccessible to conscious inspection. Fourth, we talk about systems worth having to emphasize that there is nothing inevitable about the functioning of any of these systems.”

“Neisser made conceptual and empirical distinctions between five domains of Self-knowledge, namely: an ecological Self, a sense of one's own location in and distinctness from the environment; an interpersonal Self, a sense of oneself as a locus of emotion and social interaction; an extended Self, a sense of oneself as an individual existing over time; a private Self, a sense of oneself as the subject of introspectively accessible experience; and a conceptual Self, comprising all those representations that constitute a Self-image, including representations of one's social role and personal autobiography (Neisser, 1988)”

The ecological Self is our notion of our location, both as a whole (hippocampus) and through proprioception, that is, the relative position and movement of our body parts (frontal lobe). The interpersonal Self is salient in our blushing and teasing, laughing and crying. The extended Self is widely discussed in the philosophical literature, most famously by Derek Parfit in Reasons and Persons; it is that which remains when time elapses, the sense of constancy and of sameness that one feels. The private Self talks inside our heads all the time; it is the nagging inner voice that remains active when we introspect and look inwards. The conceptual Self is an honorable, respectable individual, with all the special abilities we know ourselves to have, from lawful to honorable, from noble to the example above: Don't hold me responsible for act X, claims the conceptual Self, I'm not myself today.

Neisser's analysis is a fine-grained one, distinct from a coarse-grained one like Gallagher's:

“Gallagher distinguishes broadly between the ‘‘minimal’’ and the ‘‘narrative’’ Self. The former supplies the ecological sense of bodily ownership and agency associated with active behavior, while the latter supports the Self-image that associates our identity with various episodes (Gallagher, 2000).”

The analysis of selfhood, or of personhood, can be done in other ways too; after all, we are dealing with a strange construction. We are trying to carve reality at its joints, but the joints of mongrel cluster concepts are a fuzzy structure, and we are given many choices of how to carve them. Any analysis of Selves is going to look at least as complex as this one, and we should learn to abandon physics envy, stop thinking that Selves come in one sentence, and learn to deal with the full complexities involved.

 

Sources:

 

http://lesswrong.com/lw/53z/the_nature_of_self/

http://lesswrong.com/lw/4e/cached_selves/

http://the-mouse-trap.com/2009/11/01/five-kinds-of-selfself-knowledge/ (comes from Neisser 1988)

http://www.scholarpedia.org/article/Self_models (Kept by Thomas Metzinger)

Boyer P, Robbins P, Jack AI. Varieties of self-systems worth having. Conscious Cogn. 2005 Dec;14(4):647-60. http://www.ncbi.nlm.nih.gov/pubmed/16257234

http://plato.stanford.edu/entries/identity-personal/

 

Personal research update

4 Mitchell_Porter 29 January 2012 09:32AM

Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.

Followed by: Does functionalism imply dualism?

Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.

I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.

Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?

I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.

An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.

But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then to somehow extract from that, the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.

It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.

Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.

Clearly you can try to circumvent comparably wrong outcomes, by adding reality checks and second opinions to your protocol for FAI development. At a more down-to-earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid a priori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational process?

I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.

The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.

In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.

Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.

Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.

Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.

This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.

Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.

Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is; they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.

One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.
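For readers who want something concrete to hang on "partly factorized and partly still entangled", here is a minimal sketch (standard textbook linear algebra, nothing specific to the monadology proposal): a two-qubit pure state factorizes into two one-qubit states exactly when its Schmidt rank is 1, which can be read off the singular values of its amplitude matrix.

```python
import numpy as np

def schmidt_rank(amplitudes, tol=1e-12):
    """Schmidt rank of a two-qubit pure state given as a length-4 vector of
    amplitudes (|00>, |01>, |10>, |11>).  Rank 1 means a product (factorized)
    state; rank 2 means the two qubits are entangled."""
    psi = np.asarray(amplitudes, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(psi, compute_uv=False)
    return int(np.sum(singular_values > tol))

product = [1, 0, 0, 0]                          # |00> = |0> tensor |0>
bell = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]   # (|00> + |11>) / sqrt(2)

print(schmidt_rank(product))  # 1: factorized
print(schmidt_rank(bell))     # 2: an entangled "natural unity"
```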

I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.

My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as appearances only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.

I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.

So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.

My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.

As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.

It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.

Would you like to give me feedback for "Troubles With CEV"?

-9 diegocaleiro 24 December 2011 09:22PM

Hi, I'm going to publish here soon a study of CEV composed of two texts, "On What is a Self" and "Troubles with CEV". Would you like to give feedback prior to publication?

If so, please provide your e-mail address and I will send you the text.

Merry Newtonmas

Would you like to give me feedback for "On What is a Self"?

-8 diegocaleiro 24 December 2011 09:21PM

Hi, I'm going to publish here soon a study of CEV composed of two texts, "On What is a Self" and "Troubles with CEV". Would you like to give feedback prior to publication?

If so, please provide your e-mail address and I will send you the text.

Merry Newtonmas

Should You Make a Complete Map of Every Thought You Think?

1 Arkanj3l 07 November 2011 02:20AM
Related to: Living Luminously

Well? Should you?

Linked is a treatise on exactly this concept. If the effects of recording and classifying every thought pan out like the author says they'll pan out... well, read a (limited) excerpt (from the Introduction), and I'll let you decide whether it's worth your time.

If you do the things described in this book, you will be IMMOBILIZED for the duration of your commitment. The immobilization will come on gradually, but steadily. In the end, you will be incapable of going somewhere without your cache of notes, and will always want a pen and paper w/ you. When you do not have pen and paper, you will rely on complex memory pegging devices, described in "The Memory Book". You will NEVER BE WITHOUT RECORD, and you will ALWAYS RECORD.

YOU MAY ALSO ARTICULATE. Your thoughts will be clearer to you than they have ever been before. You will see things you have never seen before. When someone shows you one corner, you'll have the other 3 in mind. This is both good and bad. It means you will have the right information at the right time in the right place. It also means you may have trouble shutting up. Your mileage may vary.

You will not only be immobilized in the arena of action, but you will also be immobilized in the arena of thought. This appears to be contradictory, but it's not really. When you are writing down your thoughts, you are making them clear to yourself, but when you revise your thoughts, it requires a lot of work - you have to update old ideas to point to new ideas. This discourages a lot of new thinking. There is also a "structural integrity" to your old thoughts that will resist change. You may actively not-think certain things, because it would demand a lot of note keeping work. (Thus the notion that notebooks are best applied to things that are not changing.)

The full text is written in a stream-of-consciousness style, which is why I hesitated to post this topic in the first place. But there are probably note-taking junkies, or luminosity junkies, or otherwise interested folk amongst LW. So why not?

(Incidentally I'm reminded of Buckminster Fuller's Dymaxion Chronofile. I wonder how he managed it, or what benefits/costs it wrought?)

Part 1 On What is a Self Discussion

1 diegocaleiro 08 August 2011 09:55AM

 

 

In Nonperson Predicates Eliezer said:

"Build an AI?  Sure!  Make it Friendly?  Now that you point it out, sure!  But trying to come up with a "nonperson predicate"?  That's just way above the difficulty level they signed up to handle.

But a longtime Overcoming Bias reader will be aware that a blank map does not correspond to a blank territory.  That impossible confusing questions correspond to places where your own thoughts are tangled, not to places where the environment itself contains magic.  That even difficult problems do not require an aura of destiny to solve.  And that the first step to solving one is not running away from the problem like a frightened rabbit, but instead sticking long enough to learn something.

So I am not running away from this problem myself."

Me neither. When entering the non-existent gates of Bayesian Heaven, I don't want to have to admit that I had located a sufficiently small problem in problem-space, one that seems solvable, that unsolved constitutes an existential risk, and that was not being tackled by anyone I met at the Singularity Institute, and that I just ran away from it.

So, would you mind helping me? In the course of writing my CEV text, I noticed that discussing what people/selves are was a necessary previous step. I've written the first part of that text, and would like to know what is excessive/unclear/improvable/vague.

On What Is a Self




Selves and Persons

On the eighth move of your weekly chess game you do what feels the same as always: reflect for a few seconds on the many layers of structure underlying the current game-state, especially regarding changes from your opponent's last move. It seems reasonable to take his pawn with your bishop. After moving you look at him and see the sequence of expressions: doubt ("Why did he do that?"), distrust ("He must be seeing something I don't"), inquiry ("Let me double-check this"), Schadenfreude ("No, he actually failed") and finally joy ("Piece of cake, I'll win"). He takes your bishop with a knight that, from your perspective, could only have come from neverland. Still stunned, you resign. It is the second time in a row you have lost the game due to a simple mistake. The excuse bursts naturally out of your mouth: "I'm not myself today."

The functional role (with plausible evolutionary reasons) of this use of the concept of Self is easy to unscramble.
1) Do not hold your model of me as responsible for these mistakes.
2) Either (a) I sense something strange about the inner machinery of my mind; the algorithm feels different from the inside. Or (b) at least my now-visible mistakes are reliable evidence of a difference which I detected in hindsight.
3) If there is a person watching this game, notice how my signaling, and my friend's not contesting it, is reliable evidence that I normally play chess better than this.

A few minutes later, you see your friend yelling hysterically at someone on the phone, and you explain to the girl who was watching: "He is not that kind of person."

Here we have a situation where the analogues of 1 and 3 work, but there is no way for you to tell how the algorithm feels from the inside. You still know in hindsight that your friend doesn't usually yell like that. Though 1, 2(b), and 3 still hold, 2(a) is not the case anymore.

I suggest the property of 2(a) that blocks interchangeability of the concepts of Self and Person is "having first-person epistemic information about X". Selves have that, people don't. We use the term 'person' when we want to talk only about the epistemically intersubjective properties of someone. 'Self' is reserved for a person's perspective on herself, including, for instance, indexical facts.

Other than that, Self and Person seem to be interchangeable concepts. This generalization is useful because that means most of the problem of personhood and selfhood can be collapsed into one thing.

Unfortunately, the Self/Person intersection is a concept that is itself a Mongrel Concept, so it has again to be split apart.

Mongrel and Cluster Concepts

When a concept seems to defy easy explainability, there are two interesting possibilities for how to interact with it. The first would be to assume that the disparate uses of the term 'Self' in ordinary language and science can be captured by a unique, all-encompassing notion of Self. The second is to assume that different uses of 'Self' reveal a plurality of notions of Selfhood, each in need of a separate account. I will endorse this second assumption: Self is a mongrel concept in need of disambiguation. (To strengthen the analogical power of thinking about mongrels, it may help to know that Information, Consciousness and Health are thought to be mongrel concepts as well.)
    Without using specific tags for the time being, let us assume that there will be 4 kinds of Self: 1, 2, 3, and 4. To say that Self is a concept that sometimes maps into 1, sometimes into 3 and so on is not to exhaustively frame the concept's usage. That is because 1 and 2 themselves may be cluster concepts.
    The cluster-concept shape is one of the most common shapes of concepts in our mental vocabulary. Concepts are associational structures. Most of the time, instead of drawing a clear line around a set in the world inside of which all X fits, and outside of which none does, concepts present a cluster-like structure, with nearly all members in the core area belonging and nearly none in the far-fetched radius belonging. Not all of their typical features are logically necessary. The recognition of a feature produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.
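A minimal sketch of the weighted-activation picture just described (the concept, feature names, weights and threshold are all made up for illustration):

```python
# Toy version of a cluster concept: each typical feature contributes an
# activation equal to (degree present) * (weight); the stimulus counts as an
# instance of the concept when the summed activation crosses a threshold.
# No single feature is logically necessary.

CONCEPT_WEIGHTS = {          # an illustrative concept, say "bird"
    "has_feathers": 0.5,
    "flies": 0.3,
    "lays_eggs": 0.2,
    "sings": 0.1,
}
THRESHOLD = 0.6

def concept_activation(feature_degrees):
    """feature_degrees maps feature name -> degree present, in [0, 1]."""
    return sum(CONCEPT_WEIGHTS.get(f, 0.0) * d for f, d in feature_degrees.items())

def belongs_to_concept(feature_degrees):
    return concept_activation(feature_degrees) >= THRESHOLD

robin = {"has_feathers": 1.0, "flies": 1.0, "lays_eggs": 1.0, "sings": 1.0}
penguin = {"has_feathers": 1.0, "flies": 0.0, "lays_eggs": 1.0, "sings": 0.0}
bat = {"has_feathers": 0.0, "flies": 1.0, "lays_eggs": 0.0, "sings": 0.0}

for name, degrees in [("robin", robin), ("penguin", penguin), ("bat", bat)]:
    # The penguin still belongs despite lacking a typical feature; the bat does not.
    print(name, round(concept_activation(degrees), 2), belongs_to_concept(degrees))
```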
    Selves are mongrel concepts composed of different conceptual intuitions, each of which is itself a cluster concept; thus Selves are among the most elusive, abstract, high-level entities entertained by minds. Whereas this may be aesthetically pleasing, presenting us as considerably complex entities, it is also a great ethical burden, for it leaves the domain of ethics, highly dependent on the concepts of Selfhood and Personhood, with a scattered, slippery ground-level notion from which to create the building blocks of ethical theories.

    Several analogies have been used to convey the concept of Cluster Concept; these convey images of star clusters, neural networks lighting up, and sets of properties with a majority vote. A particularly well-known analogy used by Wittgenstein is the game analogy, in which language games prescribe normative meanings that constrain a word's meaning without determining a clear-cut case. Wittgenstein defended the view that there is no clear set of necessary conditions that determine what a game is. Bernard Suits came up with a refutation of that claim, stating that there is such a definition (modified from "What Is a Game?", Philosophy of Science, Vol. 34, No. 2, Jun. 1967, pp. 148-156):

"To play a game is to engage in activity designed to bring about a specific state of affairs, using only means permitted by specific rules, where the means permitted by the rules are more limited in scope than they would be in the absence of such rules, and where the sole reason for accepting the rules is to make possible such activity."



    Can we hope for a similar, soon-to-be-found understanding of Self? Let us invoke:

The Hidden Variable Hypothesis: There is a core essence which determines the class of selves from non-selves, it is just not yet within our current state-of-knowledge reach.  


    While desirable, there are various reasons to be skeptical of The Hidden Variable Hypothesis: (1) Any plausible candidate core would have to be able to disentangle selves from Organisms in general, Superorganisms (e.g. insect societies) and institutions. (2) We clearly entertain different models of what selves are for different purposes, as shown below in the section Varieties of Self-Systems Worth Having. (3) Design consideration: being evolved structures which encompass several resources of a recently evolved mind, and which came into being through a complex dual-inheritance evolution of several hundred thousand replicators of two kinds (genes and memes), Selves are among the most complex structures known and thus unlikely to possess a core essence, due to causal design considerations independent of how intractable it would be to detect and describe such an essence.

From now on, then, I will be assuming as common ground that Selves are Mongrel concepts, composed of some as-yet-undiscussed number of Cluster Concepts.

Not Yet Written Following Topics:

Organisms, Superorganisms, and Selves
Selves and Sorites
Selves Beyond Sorites
Persons, the Evidence of Other Selves
Selves as Utility Increaser Unnatural Clusters
What do We Demand of Selves
Varieties of Self-Systems Worth Having
Drescher: Personhood Is An Ethical Predicate
What Matters About Selves?

 

The Phobia or the Trauma: The Problem of the Chicken or the Egg in Moral Reasoning

1 analyticsophy 15 June 2011 04:16AM

Introduction:

Today there is an almost universal prejudice against individuals with a certain sexual orientation. I am not talking about common homophobia; the prejudice I would like to bring to your attention is so rarely considered a prejudice that it has no particular name. Though the following words will most likely be met with harsh criticism, the prejudice referenced above is the prejudice that almost all of us have against pedophiles. At first thought, it may seem that having a phobia of pedophiles is no more a prejudice for a mother than having a fear of lions is a prejudice for a mother chimpanzee, but I hope at least to show that the issue is not so clear.

This text does not at any point argue that pedophiles are regular people like you and me; they may well not be. If the hypothesis to be presented is true, however, it follows that the trauma children experience when molested would not happen if we didn't hold the moral judgements towards pedophiles that we do. If this is true then the best thing for us to do as a species for our children is, paradoxically, to stop making the moral judgements we make towards pedophiles. Of course, intuition would have us believe that we hold those moral judgements towards pedophiles precisely because of how traumatic a molestation is for children; this is an attempt to show that that causal interaction goes both ways and forms a loop.

This isn't a defense of pedophilia, nor is it a suggestion that we should stop morally judging pedophiles as a culture, it's an analysis of how circularity can enter the domain of social morality undetected and spread rapidly. We will take a memetic approach to figuring this out, and always ask "how it is useful for the meme to have such and such property?" rather than "how is it useful for us to have a belief with such and such property?".

I will apologize here and now for the graphic nature of this text's subject. But know that part of what I claim is that the reason the following considerations are so rarely even heard is precisely because of their graphic nature. Nowhere in this text is there an argument that can even be loosely interpreted as a defense of individual acts of pedophilia, but the reader may well conclude that, in the end, fewer children would have been seriously hurt if we had refrained from involving our moral attitudes in our dealings with pedophiles.

Inherently Traumatic?:


Let's ask a simple question: "would a feral child be traumatized if molested at a young age?" Notice there was no mention of sodomy in that question. Sodomy is clearly as traumatic to a child as any intense pain caused by another would be. But what about molestation? How can an infant tell the difference between being cleaned and being molested? These two actions could be made to appear behaviorally identical to the child. How does the brain know to get traumatized from one and not from the other? Clearly, children are more frequently traumatized by molestation than by being cleaned. They must somehow make the distinction, either during the act, soon after the event, or retroactively upon remembering the event in adulthood. 

In any case, that distinction must either be learned or inherited. Though we are genetically designed to avoid certain stimuli, e.g., fire, sharp things, bitter chemicals, etc., it is unlikely that getting your genitals touched is one of those stimuli. There might be genes which give you a predisposition to being traumatized when molested as a child, but it is unlikely that we have a sense built into our bodies that distinguishes between acceptable and unacceptable genital touching before puberty. Again, any molestation that causes pain does not apply; we are considering only those cases of molestation which don't cause any physical pain.

If we somehow conclude that any given human does indeed react in a neurologically distinct way when touched on the genitals before puberty by an adult that isn't one of that human's parents, then certainly that sort of molestation would be out of the question. But at the risk of being far too graphic, the fact is that an infant or even a very young child would be largely incapable of distinguishing between grabbing a finger and grabbing an adult male genital. There is clearly nothing inherently evil about the foreskin of a male compared to the skin on his finger. The only difference is the adult's intention, which children, or at least infants, are largely insensitive to. What then is the justification for not allowing pedophiles to come to our houses and have our infants reach out and grab their genitals as our infants' instincts would have them do?

It could be argued that children might be traumatized simply by being forced to do something that they do not want to do, and that is certainly likely. But does that mean that we should allow our children to be involved in sexual acts with adults if they are consenting? If we were to argue that children cannot consent, then we would have to ask "can they be non-consenting?" What we generally mean by saying that "children cannot consent" is that they can't consent responsibly, because they lack the information to do so. This is granted, but they can simply consent. Children can be made to be the main actors in cases of molestation and even consensual sex. Again, at the risk of being far too graphic: it is not uncommon for one child to molest another, nor is it uncommon for young friends of the same gender to naively engage in games of a sexual nature. Even in the case of molestation of an infant by an adult: if the adult presents his/her genitals the infant will naturally grab. How this grabbing is to be distinguished by the infant from the thousands of other skin-covered objects that he/she will grab throughout his/her life remains a mystery to me.

Hypotheses:

Infants and children are not designed by evolution to avoid being involved in non-painful forms of sexual encounters which they are willing participants in. By "willing participant" all that is meant is not being forced to engage in the sexual act. The trauma that often follows sexual encounters with adults for children is caused by the reactions of the children's parents. There would be no trauma in the children if the parents and other role-models of said children saw sex with children as a routine part of growing up.

Experiments to Falsify:

(1): Take two appropriately large and randomized samples of infants and children. Have the control group monitored by a brain imaging device while cleaned by their parents. Have the variable group do the same, only with researchers dressed in normal clothes doing the cleaning instead of the parents. If there is a difference observed in the neurological behavior of these two groups which is larger than the difference between a group of children that are simply looking at their parents and looking at strangers, then there is likely a mechanism from birth which identifies sexual acts. All subjects must be sufficiently young so as to have no learned association between their genitals and sex.

(2): Find a closed population which has no concept of sex as a demonized act or of children as being too young to have sex with. Determine this by extensive interviews with the adult population designed to elicit contradictions. After finding this population, if it exists, show that the stability of those children which were involved in non-painful sexual acts with adults is lower than that of those children which were not involved. If this is accomplished it will suggest that the behavior of parents of victims of molestation is not the source of the trauma caused in children after being molested.

Experiments to Verify:

(1): Set up the same control and variable groups as in (1) above. If we get the result that there is no significant difference between the neurological behavior of the control and the variable groups, then it becomes less likely that there is anything in children which allows them to tell the difference between non-painful acts of molestation and cleaning of the genitals.

(2): Find a population as described in (2). Show that those individuals which engaged in sexual acts at a young age have no lower stability than those which did not. 

A Meme not a Gene:

If molestation is not inherently traumatic, why do we feel the need to protect our children from it? There are many possible reasons, but one of the most biological might be our jealousy. We are built to not let others have sex with loved ones, yes. But are we really biologically built to not let others have sex with our children? It'd be a strange adaptation to say the least. Why have children, and prevent them from reproducing? It might well be a side-effect of our evolved jealousy.

But more seems to be at play here than a confusion of jealousy. As my evidence for this I propose that you recall how salacious and downright offensive you found it when I mentioned that an infant would instinctively grab a genital if presented. It doesn't have to be your own infant in your mind for you to be repulsed by imagining the situation. It is a repulsive situation to imagine for almost anyone I have met that is not a pedophile, and even most pedophiles. If it is not our child we're imagining, just some random token child, and it is just some token child molester we are imagining, the image still repulses us greatly, which suggests that it does not come from biological design, since our genetic fitness is not at all increased by worrying about the children of others.

We likely started demonizing pedophiles well after the development of language, if the hypothesis stated above is correct. If trauma isn't caused in children by sexual acts with adults before they learn about the taboo nature of sex, then it is likely the taboo nature of sex that causes such events to be traumatic. But sex is not taboo because of our genetic history; sex is taboo because of our memetic history.

Why the Meme is such a Success (Imagining Patient Zero):

Let's imagine a hypothetical culture which has demonized sex but doesn't really have an accepted attitude towards pedophiles. Suppose one parent catches another adult engaged in sexual behavior with his/her children. The parent, confused by and scared of sexual action, quickly pulls away the child while attacking the other adult, and tells the child that he/she is not to do that anymore or go near that person. The child reacts negatively to this, now knowing that sex is demonic. We have all seen this sort of behavior before: if a child bumps his head and his/her parents say "Oh that's ok, come on, we gotta get going." in a lovely mommy voice, the child is more likely to get up and keep on trucking. But if the parents react with "Oh God! Grab the ice pack, grab the ice pack!", yelling urgently, the child cries and may well act as if he/she is much more hurt than he/she really is.

When this hypothetical parent next sees his/her fellow parent friends he/she tells them of the event and how horrific it was for him/her, and how traumatic it was for his/her child. The other parents then warn their children of the strange man/woman that lured the first child and tell their own children never to go near that man/woman's house. The children of course need to find out why for themselves and go there anyway. Another child gets involved in acts of a sexual nature with the town pedophile. This catches the attention of a passerby, who by now knows of what goes on in that house, and how evil it is. This passerby alerts the others that it is happening again. At this point the town decides to do something about it. They lynch the pedophile. This becomes the talk of the town and of the local ruling government body.

Now all of the adults in the town know how to react to pedophilia: as if it would be a demonizing, traumatic event for their children. Acting as such when one of their children is inevitably molested causes that child to find it traumatic. News of the trauma it caused to the child spreads and the whole process is repeated, strengthening the belief that children become traumatized when molested.

This thought experiment is likely not very much like what really happened to produce this meme in the first place. To actually understand how that happened we would have to trace the memetic evolution of our ancestors much further back than we have the ability to do now. But this hypothetical does at least give us a way of imagining how a belief like "Sexual acts between children and adults cause trauma in the children involved." might start off false and become truer as it becomes more widely accepted, and more widely accepted as it becomes truer. In the end, holding that belief is going to cause more suffering in our children than if we didn't hold it, provided the hypothesis above is correct. But we believe it anyway, and our moral judgements stray that way anyway, regardless of whether or not we have any benefit from the belief.

The true beneficiary here is the meme itself. The meme of fearing and hating pedophiles need not be useful for us as a species; it needs only to be good at getting itself spread. Luckily for the meme, as it gets itself spread the belief associated with it becomes truer. This meme has a built-in belief that is a self-fulfilling prophecy, so that the more widespread the meme becomes the better its chances of replicating. It's a feedback loop: the meme predisposes us to act a certain way towards molested children, acting towards molested children this way makes them find the event traumatic, and the observed trauma of the molested children reinforces the meme.

Conclusion:

We can and do hold very basic moral attitudes as a culture which are completely unexamined. Even the most basic moral judgements that we make, like "pedophilia is wrong", are not on as firm a footing as we would like to believe them to be. But when we sharpen the issue and we are faced with the bluntness of the situation, things can become even more difficult. Our biases are very firmly rooted in us. Even I, who will tell you that I'm on the fence about the utility of demonizing pedophilia, am absolutely repulsed and ethically offended at the thought of such an act. But I consider it important that we think sharply about the utility involved in such basic and unquestioned moral judgements and report our progress. If we find that those most basic moral judgements haven't been beneficial to us as a whole, we should start to wonder about whether or not ensuring utility really is the point of our moral system. Alternatively, our moral system might have little benefit to us and evolve only because it benefits the memes it is made of. Our whole theory of ethics might be the result of nothing more than the continued warfare of memes for our brains. Sometimes the memes convince us to adopt them by being beneficial, sometimes they just trick us into thinking they are right, and other times they make themselves true by the mere virtue of spreading themselves. This last class of memes we can call "self-proving memes", and it is this class of memes that the hypothesis above suggests the fearing-and-hating-pedophiles meme belongs to. If that hypothesis is falsified by any of the suggested experiments or any other applicable experiment, we should still consider that the hypothesis has never even been suggested outside this text. Is this more likely because the hypothesis is so stupid, or because it is so rooted in us not to question such simple facts?


 


 

 


Track Your Happiness

5 Matt_Simpson 04 May 2011 02:59AM

Track your happiness using your iPhone:

For thousands of years, people have been trying to understand the causes of happiness. What is it that makes people happy? Yet it wasn’t until very recently that science has turned its attention to this issue.

Track Your Happiness.org is a new scientific research project that aims to use modern technology to help answer this age-old question. Using this site in conjunction with your iPhone, you can systematically track your happiness and find out what factors – for you personally – are associated with greater happiness. Your responses, along with those from other users of trackyourhappiness.org, will also help us learn more about the causes and correlates of happiness.

Seems like a no-brainer to me to use this, at least if you have an iPhone. For those with an Android phone, according to their Twitter feed:

the next item on the roadmap is to make track your happiness available to as many people/phones as possible.

Besides being a really cool app for managing your happiness, this is also just a great idea for doing research. Now I want to take advantage of the large iPhone/Android user base to learn about people in some way. Any ideas?

How I applied useful concepts from the personal growth seminar "est" and MBTI

3 suecochran 10 April 2011 11:49PM

I have encountered personally in conversations, and also observed in the media over the past couple of decades, a great deal of skepticism, scorn, and ridicule, if not merely indifference or dismissal, from many people in reaction to the est training, which I completed in 1983, and the Myers-Briggs Type Indicator tool, which I first took in 1993 or 1994. I would like to share some concrete examples from my own life where information and perspective that I gained from these two sources have improved my life, both in my own way of conceptualizing and approaching things, and also in my relationships with others. I do this with the hope and intention of showing that est and MBTI have positive value, and encouraging people to explore these and other tools for personal growth.

One important insight that I gained from the est training is an understanding and the experience that I am not my opinions, and my opinions are not me. Opinions are neutral things, and they may be something I hold, or agree with, but I can separate my self from them, and I can discuss them, and I can change or discard them, but I am still the same "me". I am not more or less "myself" in relation to what I think or believe. Before I did the est training, whenever someone would question an opinion I held, I felt personally attacked. I identified my self with my opinion or belief. My emotional response to attack, like for many other people, is to defend and/or to retreat, so when I perceived of my "self" being "attacked", I gave in to the standard fight or flight response, and therefore I did not get the opportunity to explore the opinion in question to see if the person who questioned me had some important new information or a perspective that I had not previously considered. It is not that I always remember this or that it is my first response, but once I notice myself responding in the old way, I can then take that step back and remember the separation between self and opinion. That choice is now available to me, where it wasn't before. When I find myself in conversations with another person or people who disagree with me, my response now is to draw them out, to ask them about what they believe and why they believe it. I regard myself as if I were a reporter on a fact-finding mission. I step back and I do not feel attacked. I learn sometimes from this, and other times I do not, but I no longer feel attacked, and I find that I can more easily become friends with people even if we have disagreements. That was not the case for me prior to doing est.

Another valuable tool that I got from est and still use in my life is the ability to accept responsibility without attaching blame to it, even if someone is trying to heap blame upon me. This is similar to what I said above about basically not identifying my self with what I think. I do not have to feel or think of myself as a "bad person" because I made a mistake. I have come to the belief that guilt is an emotion that I need not wallow in. If I feel guilt about doing or not doing something, saying or not saying something, I take that feeling of guilt as a sign that I either need to take some action to rectify the situation, and/or I need to apologize to someone about it, and/or I need to learn from the situation so that hopefully I will not repeat it, and then forgive myself, and move on. Hanging on to guilt is something I see many people doing, and it not only holds them up and blocks them off from taking action, they often pull that feeling in and create a scenario or self-definition that involves beating themselves up about it, or they wallow around in feeling guilty in a way that serves as a self-indulgent excuse for not improving things. "I'm so awful, I'm such a screw-up, I can't do anything right." That kind of negative self-esteem can affect a person for their entire life if they allow it to. There are many ways to come to these realizations, and I make no claim that est is some kind of "cure-all". One of the characters on the tv show "SOAP" called est "The McDonald's of Psychiatry". That's amusing, but it denigrates a very useful and powerful experience. I believe in an eclectic approach to life. I look at many things, explore many ideas and experiences, and I take what works and leave the rest. est is only one of many helpful experiences I have had in my 49 years.

I took the Myers-Briggs Type Indicator at a science fiction convention in the early years of my marriage, when I was living in Alexandria, VA, in 1993 and 1994. It was given as part of a panel, and I also took it again when I read "Do What You Are", which is a book about finding employment/a profession based on your MBTI personality type. The basics, if you have not encountered the MBTI before, are: There are 4 "continuums" in how people tend to interact with the world. Most people use both sides of each continuum, but are most comfortable on one side. The traits are Extrovert/Introvert, Sensing/Intuiting, Thinking/Feeling, and Judging/Perceiving. (The use of these words in the MBTI context is not exactly the same as their dictionary definitions). I am a strong ENFP. My husband was an ISTP. Understanding the differences between how we approached the world was very helpful to me in learning why we were so different about socializing with other people, and about our communication style with each other. As an "I", John (as they put it in the book), "got his batteries charged" by mostly being alone. I, as an "E", got mine charged by being with other people. We went to conventions and parties, but he often wanted to leave well before I felt ready to go. Once we had two cars, we would each take our own to events. Even though I felt it wasted gas, it gave him the opportunity to "flee" once he had had enough of being with others, while I could then come home at my leisure, and neither of us had to give up on what made us happier and more comfortable. It also explained why he would not always respond immediately to a question. "I" people tend to figure out in their own mind first what they want to say before they say anything aloud. "E" people often start talking right away, and as they speak, what they think becomes clearer to them. This is also a very useful data point for teachers. If they know about it, they can realize that the "I" kids need more time to come up with their answers, while the "E" kids put their hands in the air more immediately. They can then allow the "I" kids the time they need to respond to questions without thinking they are not good students, or are not as intelligent or knowledgeable as the "E" kids are.

My boyfriend is an ENTJ. The source of some of the friction in our relationship became clear to me after I asked him to find out his Myers-Briggs type, which he had never done before. Gerry often asks me to give him a list of what I want to do in the course of my day, and how much time things will take. These are reasonable requests. However, the rub comes from the fact that as a "J", he is uncomfortable not knowing the answer to these things. I, as a "P", am uncomfortable stating these things in advance, in nailing things down. I prefer to leave things open-ended. He regarded what I said as more concrete, whereas I regarded it more as a guideline, but not a definite plan or promise. In addition, I have always had a hard time judging how long things will take, and as a person with ADD, I also get distracted easily, so it was making me upset when he would come home and ask me what I'd gotten done, and then he would get upset when I hadn't done what I had said I wanted to, or if things took longer than I said they would. Understanding the differences in our types has helped me to understand more about why this has been an area of friction. That leaves room for us to discuss it without feeling the need to blame each other for our preferred method of dealing with things. I feel clearer about stating goals for the day, but not necessarily promising to do specific things, and working on figuring out how to allocate enough time for things. He understands that just because I tell him what I would like to do, it is not necessarily what I will end up doing. It's still a work in progress.

I want to be clear that I am not talking about using the types as excuses to get out of doing things, or for taking what other people feel is "too long" to get things done. It's merely another "tool in my tool box" that helps me to process how I and my loved ones function, and to figure out how to improve.

I am curious to know how other people feel about their experiences, if they have done a personal growth seminar such as est and/or taken the MBTI, if they feel that they have also taken tools from those experiences that have had an ongoing positive impact on their lives and relationships. I look forward to hearing what people have to say in response to this article.

The Nature of Self

3 XiXiDu 05 April 2011 10:52AM

In this post I try to fathom an informal definition of Self, the "essential qualities that constitute a person's uniqueness". I assume that the most important requirement for a definition of self is time-consistency. A reliable definition of identity needs to allow for time-consistent self-referencing since any agent that is unable to identify itself over time will be prone to make inconsistent decisions.

Data Loss

Obviously most humans don't want to die, but what does that mean? What is it that humans try to preserve when they sign up for Cryonics? It seems that an explanation must account for, and allow, some sort of data loss.

The Continuity of Consciousness

It can't be about the continuity of consciousness, as we would then have to refuse general anesthesia due to the risk of "dying"; most of us will agree that there is something more important than the continuity of consciousness that makes us accept general anesthesia when necessary.

Computation

If the continuity of consciousness isn't the most important detail about the self, then it very likely isn't the continuity of computation either. Imagine that for some reason the process evoked when "we" act on our inputs under the control of an algorithm halts for a second and then continues, otherwise unaffected. Would we regard whoever continues afterwards as not us, on the grounds that we died when the computation halted? This doesn't seem to be the case.
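A toy sketch of the intuition (an assumed, illustrative computation, not a model of a mind): a deterministic process that is suspended partway through and then resumed ends in exactly the same state as one that runs without interruption, which is why a brief halt in the underlying computation seems like a poor candidate for the thing whose loss we would call death.

```python
import time

def step(state):
    """One deterministic update step of a toy process."""
    return (state * 31 + 7) % 1_000_003

def run(initial_state, steps, pause_after=None, pause_seconds=0.0):
    state = initial_state
    for i in range(steps):
        if pause_after is not None and i == pause_after:
            time.sleep(pause_seconds)  # the computation halts, then continues
        state = step(state)
    return state

uninterrupted = run(42, steps=10_000)
interrupted = run(42, steps=10_000, pause_after=5_000, pause_seconds=1.0)
print(uninterrupted == interrupted)  # True: the pause leaves no trace in the result
```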

Static Algorithmic Descriptions

Although we are not partly software and partly hardware, we could, in theory, come up with an algorithmic description of the human machine, of our selves. Might it be that algorithm that we care about? If we were to digitize our self we would end up with a description of our spatial parts, our self at a certain time. Yet we forget that all of us already possess such an algorithmic description of our selves, and we're already able to back it up. It is our DNA.

Temporal Parts

Admittedly our DNA is the earliest version of our selves, but if we don't care about the temporal parts of our selves, only about a static algorithmic description of a certain spatiotemporal position, then what's wrong with that? A lot, it seems: we stop caring about past reifications of our selves, at some point our backups become obsolete, and having to fall back on them would equal death. But what is it that we lost, what information is it that we value more than all of the previously mentioned possibilities? One might think that it must be our memories, the data that represents what we learnt and experienced. But even if this is the case, would it be a reasonable choice?

Identity and Memory

Let's just disregard the possibility that we often might not value our future selves, and so do not value our past selves either, because we lost or updated important information, e.g. if we became religious or managed to overcome religion.

If we had perfect memory and only ever improved upon our past knowledge and experiences, we wouldn't be able to do so for very long, at least not given our human body. The upper limit on the information that can be contained within a human body is 2.5072178×10^38 megabytes, if it were used as perfect data storage. Given that we gather much more than 1 megabyte of information per year, it is foreseeable that if we equate our memories with our self we'll die long before the heat death of the universe. We might overcome this by growing in size, by achieving a posthuman form, yet if we in turn also become much smarter we'll also produce and gather more information. We are not alone either, and resources are limited. One way or the other we'll die rather quickly.
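A back-of-the-envelope sketch of that arithmetic (the storage figure is the one quoted above; the accumulation rate and the heat-death timescale of roughly 10^100 years are illustrative assumptions):

```python
# Rough arithmetic for the claim above.  capacity_mb is the figure quoted in
# the text; the accumulation rate and heat-death timescale are assumptions
# used only for illustration.

capacity_mb = 2.5072178e38    # upper bound quoted above, in megabytes
rate_mb_per_year = 1.0        # "much more than 1 MB/year"; use 1 as a floor
heat_death_years = 1e100      # commonly quoted order-of-magnitude estimate

years_until_full = capacity_mb / rate_mb_per_year
print(f"memory full after ~{years_until_full:.1e} years")
print(f"fraction of time to heat death: {years_until_full / heat_death_years:.1e}")
# Even at this minimal rate the store fills tens of orders of magnitude sooner
# than heat death; higher rates, or a bigger self gathering more, only shorten it.
```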

Does this mean we shouldn't even bother about the far future or is there maybe something else we value even more than our memories? After all we don't really mind much if we forget what we have done a few years ago.

Time-Consistency and Self-Reference

It seems that there is something even more important than our causal history. I think that more than anything we care about our values and goals. Indeed, we value the preservation of our values. As long as we want the same we are the same. Our goal system seems to be the critical part of our implicit definition of self, that which we want to protect and preserve. Our values and goals seem to be the missing temporal parts that allow us to consistently refer to ourselves, to identify our selves at different spatiotemporal positions.

Using our values and goals as identifiers also resolves the problem of how we should treat copies of our self that feature divergent histories and memories, copies with different causal histories. Any agent that features a copy of our utility function ought to be incorporated into our decisions as an instance, as a reification of our selves. We should identify with our utility function regardless of its instantiation.
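A toy sketch of this identification criterion (all names and values are illustrative assumptions): two instances with divergent memories but the same utility function count as the same self, while an instance that shares our memories but optimizes for something else does not.

```python
# Toy sketch of the proposed criterion: identity tracks the utility function,
# not the memories or causal history.  All names and values are illustrative.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memories: list                                # causal history; ignored here
    utility: dict = field(default_factory=dict)   # outcome -> value

def same_self(a, b):
    """Identify agents by their utility function, ignoring memories."""
    return a.utility == b.utility

original = Agent("me-2011", ["childhood", "first job"],
                 {"preserve_values": 10, "eat_cake": 3})
copy_with_new_past = Agent("me-upload", ["simulated past"],
                           {"preserve_values": 10, "eat_cake": 3})
lookalike = Agent("imposter", ["childhood", "first job"],
                  {"paperclips": 10})

print(same_self(original, copy_with_new_past))  # True: different history, same self
print(same_self(original, lookalike))           # False: same memories, different self
```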

Stable Utility-Functions

To recapitulate, we can value our memories, the continuity of experience and even our DNA, but the only reliable marker for the self-identity of goal-oriented agents seems to be a stable utility function. Rational agents with an identical utility function will to some extent converge to exhibit similar behavior and are therefore able to cooperate. We can more consistently identify with our values and goals than with our past and future memories, digitized backups or causal history.

But even if this is true there is one problem, humans might not exhibit goal-stability.