In response to Understanding Agency
Comment author: Dagon 17 December 2014 08:31:50AM 7 points [-]

I completely buy into the basic idea that accepting responsibility for outcomes, and predicting in terms of actions and consequences (future-looking) rather than blame and justification (past-looking) is transformative and powerful.

I'm not sold on the complicated framework or 21-step ranking or linear approach to it.

If it works for you, great. It doesn't seem universal to me.

In response to comment by Dagon on Understanding Agency
Comment author: gworley 18 December 2014 06:44:10AM 0 points [-]

On the question of universality, the framework actually seems to apply pretty broadly and accurately to many folks. In fact, I'd be very interested if you (or someone) could present one or more groups of humans who cannot be well understood within it.

I've thought about several types of mental abnormalities humans sometimes possess, but at most they seem to require understanding the additional complications the abnormalities introduce, rather than breaking the constructive development theory model. However, I can only think up so many cases so fast, so perhaps I have missed one that would not yield so easily.

In response to Understanding Agency
Comment author: gjm 17 December 2014 11:52:35AM 10 points [-]

You have a link to an "article on constructive development", which you repeat no fewer than six times to encourage readers to go and read it.

However, the thing at the far end of the link is not an article on constructive development. It is an article about (1) two ways of responding to one's own misdeeds and (2) a notation for describing stages in the transition between two modes of thinking. (The notation is called "subject-object notation" but appears to have nothing specifically to do with the subject/object distinction. This doesn't seem to me like a good sign that the author is thinking clearly about things.)

There is a link from there to a summary of constructive-developmental theory by Peter Pruyn. It seems ... OK, I guess. I'm rather put off by the patronizing mealy-mouthedness with which the author disclaims the very idea that the later stages might be thought "better" -- in the same article in which he says that later stages bring greater capacity to cope with difficult situations, suggests that those at earlier stages are unfit for senior roles at work, calls the later stages "higher levels of consciousness", and of course classifies them as developmental stages, which on its own pretty much gives the game away.

Still, congratulations on reaching level 4. (Though it seems to me there's something rather inappropriate about saying that.)

In response to comment by gjm on Understanding Agency
Comment author: gworley 18 December 2014 06:36:15AM 0 points [-]

I agree that one of the problems with constructive development theory, as you seem to hint at, is that it sets off certain alarm bells in your mind because it matches the same patterns as things which we have now concluded to be incorrect or just instruments for abuse of power.

Explicit levels are a common tactic taken to try to give rationalizations of why this person or that person of higher status "deserves" that status against human egalitarian norms, so naturally any theory that includes something like them feels a bit icky, and nothing in the material presented here does much to clear that up.

I've also not read a good explanation of constructive development theory that would make sense to someone who doesn't understand the subject-object notation (that is, I've seen nothing that does a great job of explaining subject-object notation to someone who doesn't immediately grasp the concept and then just needs some details filled in) or who hasn't started thinking at least some of the time at level 4.

However, constructive development theory certainly hasn't been as vigorously researched and written about as many other topics in psychology, and lacking any strong disconfirming evidence I'm inclined to suspect we might find good evidence and helpful explanations if we spent some more time digging into it.

I'm woefully under-skilled for the task of both rigorous scientific studies and clear explanations that will satisfy a wide audience, so my main hope is that my insights spur on a few folks who are appropriately skilled to dig deeper.

In response to Understanding Agency
Comment author: someonewrongonthenet 17 December 2014 10:53:56PM 0 points [-]
Comment author: gworley 18 December 2014 06:15:30AM 0 points [-]

Metacognition is certainly a related thing, but I think discussion of it is generally too abstract to be directly useful for modeling human thinking without first applying it to real brains. This doesn't mean it's inappropriate, only that I think it needs to be more concrete than what is typically discussed under the topic of metacognition to be useful.

In response to Understanding Agency
Comment author: VAuroch 18 December 2014 02:59:58AM 5 points [-]

This comes off as purely bragging and applause lights. And several of the applause lights aren't relevant. (that chapter of HPMoR and the 12th virtue of rationality)

Additionally, I disagree with one of the premises, which is that constructive development theory is meaningful and useful but can only be well understood by people who have reached a certain level in its hierarchy. I feel competent to make this judgment because, by its own description, I sit uncomfortably in level 4 (and have a very poor model of what it means to be level 3; I suspect some unpleasant circumstances and mild neuroatypicality made me leapfrog 3). And the theory does not seem to have explanatory power.

Or in short: Scrap this post and come back when you can explain it better, and don't use excuses like 'this needs higher-order thinking'; I'll believe you have an insight, but you're not conveying it now, and those excuses are just excuses.

Comment author: gworley 18 December 2014 06:12:21AM -1 points [-]

I mean no insult to you, but if you don't understand level 3, you're almost certainly spending your time thinking at levels 2 and 3 while mistaking it for level 4. This seems to be rather strongly backed up by assessment data.

Comment author: gworley 15 June 2014 09:43:50PM 2 points [-]

I suspect boredom is another thing that can result in willpower depletion: it's hard to stay engaged with something when it's boring. You may be able to keep going longer on less difficult tasks, but boredom eventually begins to drain you (though maybe this is covered by wanting to do some other specific thing; I suspect it's distinct, in that you can be bored without having something you'd rather be doing).

Comment author: gworley 19 August 2012 11:41:02PM *  13 points [-]

Let me just toss out some caution here.

I'm all for getting excited and making stuff happen. Maybe it really is that there have not yet been any LW startups because we all just failed to coordinate on it and in hindsight we'll all say "why the hell did we all wait for so long". That said, let's not forget a few key things here.

  • Most startups fail
  • even when the principals are smart and motivated
  • even when the idea is really good
  • even when [x] is [y]

And, as I already said, for some reason we haven't already had a bunch of successful LW startups. It's certainly not for lack of smart people, entrepreneurs, or technical skills.

If a LW startup is going to succeed, I think we would benefit from understanding first why we don't already have successful LW startups (not even one).

Comment author: gworley 20 August 2012 12:07:15AM 9 points [-]

Just to toss in my own strongest suspicion: LWers under 25 probably see themselves as young, still learning, and not yet brave enough to throw themselves all in to something. Those over 25 (myself included) probably see themselves as already busy doing something and would need some pretty strong motivation to do something else, even if it does align with their core values.

Comment author: HughRistik 23 June 2010 08:40:41PM *  9 points [-]

This post addresses the subject of the appropriate human data compression format. Though an interesting idea, I think that the proposed method is too low in resolution. You acknowledge the lossiness, but I think it's just going to be too much.

Although the method you advocate is probably the best thing short of cryonics, I doubt there is any satisfactory compression method that can produce someone more similar to you than a best friend or a family member who gets stuck with your memories. It's better to have too much data than too little.

    Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains is statistically the same. We each have our quirks of genetics and development, but even those are statistically similar among people who share our quirks. Thus with just a few bits of data we can already record most of what makes you who you are.

I'm not confident in this part. Although a large percentage of human biology and psychology are identical, the devil is in the details. From a statistical perspective, aren't humans and chimps practically identical also? Percentage similarity of traits is probably the wrong metric, since small quantitative differences can have large qualitative impact.

Your idea of a generic human template, with various subtemplates for quirks, is also interesting, but still too low resolution.

Under what metric do we say that you and I have the "same" quirk, even if our phenotypes look superficially similar? How much data is discarded by the aggregation?

Even if we assume that the notion of a generic human template is meaningful, there are almost as many ways that people can deviate from the generic human template as there are people, and there would have to be that many quirky subtemplates. It's possible that we could compress human phenotypic deviation into a lower number of templates than the number of people, but I don't think we are anywhere near having a satisfactory way to do so. In the least, storing the deltas of human phenotypes might cut down on the data we all have in common.
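The "template plus deltas" idea can be made concrete with a toy sketch. The trait fields here are made up purely for illustration; this is not a serious proposal for a phenotype encoding:

```python
# Toy sketch of "template + deltas": store one shared template and,
# per individual, only the fields that deviate from it.
template = {"eye_color": "brown", "height_cm": 170, "handedness": "right"}

def encode(individual):
    # Keep only the traits that differ from the template.
    return {k: v for k, v in individual.items() if template.get(k) != v}

def decode(delta):
    # Reconstruct by overlaying the stored deviations on the template.
    return {**template, **delta}

alice = {"eye_color": "blue", "height_cm": 170, "handedness": "right"}
delta = encode(alice)       # {"eye_color": "blue"} -- two fields saved
restored = decode(delta)    # identical to alice
```

The catch, as the surrounding paragraphs note, is that this only compresses well if deviations are few and cleanly identifiable; if nearly every field deviates in its own way, the deltas are as large as the originals.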

The problem with lossy measures of phenotype such as memories and our current ability to measure quirky deviations from the average is that they discard too much information: the genotype, and other low-level aspects of phenotype.

Let's start with the genotype problem. In the future, we synthesize a human with the same phenotype as our crude records of you (your memories, and your quirks according to the best psychometric indexes currently available). We will call this phenotype set X. Yet since multiple genotypes can converge toward the same phenotype (particularly for crude measures of phenotype), the individual created is not guaranteed to have the same genotype as you, and probably won't. Due to having a different genotype, this individual could end up having traits outside X that you didn't have. They will have the same set of recorded phenotypic traits as you, but they may lack phenotypic traits that weren't recorded (because your method of recording discarded the data), and they may have extra phenotypic traits that you didn't have, because keeping those traits out wasn't in the spec.

Fundamentally, I think it's problematic to try to reverse-engineer a mind based on a record of its outputs. That seems like trying to reverse-engineer a computer and its operating system based on a recording of everything it displayed on its monitor while being used. Even if you know how computers and operating systems work in general, you will still be unable to eliminate lots of guesswork in choosing between multiple constructions that would lead to the same output.

If you know a system's construction, you may be able to predict its outputs, but knowing the outputs of a system doesn't necessarily allow you to work backwards to its construction, even when you know the principles by which systems of that type are constructed.
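The underdetermination point can be illustrated with a toy sketch (hypothetical code, just for illustration): two systems that agree on every recorded output can still be different systems, diverging exactly where no observations were made.

```python
# Two different "constructions" that are indistinguishable
# on everything that was actually observed.
def system_a(x):
    return x * 2

def system_b(x):
    # Identical behavior on the observed range, different internals.
    return x * 2 if x < 100 else x * 3

observed_inputs = range(10)

# Every recorded output matches, so the record can't tell them apart...
all(system_a(x) == system_b(x) for x in observed_inputs)  # True

# ...yet they are different systems, diverging on unrecorded inputs.
system_a(100) == system_b(100)  # False
```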

I think the best reconstruction you will get through this method won't be substantially more similar to you than taking your best friend (who has highly similar personality and interests) and implanting him with your memories. At best, you would get someone similar enough to be a relative of yours.

We really need to preserve your genotype; otherwise, future scientists could make an individual with all your memories and crudely measured personality traits, but a different personality and value system (in ways that weren't measured), who wakes up and wonders what the heck you were doing all your life. We would need a solution that has all the phenotypic traits recorded for you, with the constraint that the individual created has the same genotype as you.

Yet even such a solution would still be discarding too much information about biological traits that influence your phenotype yet are not recorded in your genetic code, such as your prenatal development. It's been shown that prenatal factors influence your brain structure, personality, and interests. So we need to record your prenatal environment to be able to create a meaningful facsimile of you. Otherwise, we could end up constructing someone with the same genotype, memories, and psychometric measures as you, who nevertheless had a different brain structure; such a person would probably be less like you than a twin of yours who was implanted with your memories, because your twin shares a similar prenatal environment to yours, while your copy does not. A different brain structure would create a similar problem to having a different genotype: a different brain that has the same recorded phenotype as you will differ from you in unrecorded aspects of phenotype.

I would worry that a record of every single thought and behavior you have, both from yourself and observers, would still not be enough to reverse-engineer "you" in any meaningful way. Adding the constraint of matching your genotype would help, but we still don't have a way to compress biological factors other than genotype, such as prenatal development. We have no MP3 format for humans.

The best record of your prenatal development that we have is your body and brain structure, so these would have to be stored along with your memories. Preferably at a rather cold temperature.

Comment author: gworley 24 June 2010 04:05:32PM 2 points [-]

    At best, you would get someone similar enough to be a relative of yours.

Even if that's the best we can do, it's much better than the nothing that will befall those who would otherwise have been totally lost because they didn't sign up for cryonics.

Comment author: [deleted] 23 June 2010 07:11:51PM *  3 points [-]

This is also a primary plot point of the Battlestar Galactica prequel Caprica. It also comes up in Charles Stross's Accelerando, when some evil AIs decide to do this to most of humanity based on our historical records. The full text of the novel is at http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html Search for "Frequently Asked Questions" to find the relevant section.

There is also another similarly interesting plot thread in the story which can be summed up by this excerpt:

    The Church of Latter-Day Saints believes that you can't get into the Promised Land unless it's baptized you – but it can do so if it knows your name and parentage, even after you're dead. Its genealogical databases are among the most impressive artifacts of historical research ever prepared. And it likes to make converts.

    The Franklin Collective believes that you can't get into the future unless it's digitized your neural state vector, or at least acquired as complete a snapshot of your sensory inputs and genome as current technology permits. You don't need to be alive for it to do this. Its society of mind is among the most impressive artifacts of computer science. And it likes to make converts.

Comment author: gworley 23 June 2010 08:46:01PM 0 points [-]

I have no doubt that this sort of thing has been occasionally explored in fiction. That said, there's a big difference between considering an idea in fiction and considering acting on an idea in real life.

Comment author: Morendil 23 June 2010 03:40:34PM *  6 points [-]

How many hours do you estimate you'll be putting into your autobiography for the resulting record to be "good enough"?

Next question, what is your hourly pay rate?

Comment author: gworley 23 June 2010 08:44:10PM 3 points [-]

I see where this is going, so I'll go ahead and let you run an economic analysis on me. But keep in mind that cost is not the only factor, only the main one for most of the world's population. For me it has far more to do with the social costs I would have to pay to sign up for cryonics.

That said, I estimate I'll be putting about 1 hour a week into writing myself into the future. I am currently paid at a rate of approximately $18 an hour. I'm not sure what my lifetime average pay rate will be, but let's go ahead and estimate it at $60 per hour in 2010 USD (I have two M.S. degrees, one in computer science and one in mathematics, and I'm willing to do work with questionable ethical outcomes, like "defense" contracting).
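For anyone who wants to run the numbers, here's the back-of-envelope arithmetic implied by those figures (the $60/hour lifetime rate is, as stated, just an estimate):

```python
# Implied yearly opportunity cost of the "write yourself into the
# future" plan, using the figures given in the comment above.
hours_per_week = 1
weeks_per_year = 52
lifetime_hourly_rate = 60  # estimated lifetime average, 2010 USD

annual_cost = hours_per_week * weeks_per_year * lifetime_hourly_rate
print(annual_cost)  # 3120 USD per year of foregone earnings
```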
