Comment author: Lara_Foster2 16 September 2008 05:17:01PM 1 point

Sorry, I realize this is entirely off topic. Where should I move the discussion to? People can take it to email with me if they like (cingulate2000@gmail.com).

Hmm... musing again on the psycho-social development of children and the role of adult approval. Scott suggested that being rewarded by adults for academic development may have impeded his social development.

I wonder if there are any social psychology studies in which a child is chosen at random to be favored by an adult authority figure, and what happens to that child's interactions with peers and self-perception. I wonder if gender has been used as a variable. Anyone have any references?

Personally, I have long asserted that the main reason I put any effort into school was to gain the approval and attention of my male teachers. My mother pointed out that I loved all my male teachers and usually despised the female ones, and that I did much better under male tutelage; she even switched me into a male teacher's classroom in 4th grade after a 'personality conflict' with a female one. Now, for a woman, learning how to gain the approval of male authority figures is a skill that transfers from childhood to adulthood... The girls at the lab I worked at in Germany joked that I was 'Herr Doktors kleine Freundin' ('the Herr Doktor's little girlfriend'), because he showed a disproportionately great interest in my relatively unremarkable project and would always pop into my room to chat (an apparently aberrant behavior for this very serious man).

Now, for boys, learning how best to get the approval of female authority figures doesn't seem to translate into adulthood. Maybe there is a subtle sexual tension between young female students and their male teachers (hence crushes and the like), but not for boys and their female teachers, whom they might view more as mommies than girlfriends. Thus, at some point boys are going to need to break away from the adult-approval schematic if they are to be romantically successful and not turn into man-children. The psycho-social-sexual development of children seems very interesting to me, and I would be very grateful to be directed to some thoughtful literature and/or studies on the topic.

Comment author: Lara_Foster2 15 September 2008 08:54:11PM 0 points

Interesting, Scott. What priorities do the intelligence-centric type have that make you unhappy? Though I might not necessarily fit into this group, I am confident that I am of above-average intelligence, and I do not believe my litany of worldly woes is attributable to that so much as to specific personality traits independent of intelligence.

Comment author: Lara_Foster2 15 September 2008 08:31:04PM 0 points

Michael, your question is very ill-defined. I regularly partake of a drug that lowers my IQ in exchange for other utility... It's called alcohol. If you are talking about permanent IQ reductions, I would need some sense of what losing one IQ point felt like before I could evaluate a trade. Is it like taking one shot? Would I even notice it missing?

Many psychotropic drugs, especially antipsychotics, 'slow down' the people who take them and thus could be associated with lowering IQ, yet many people choose to take them and lower their IQ for the utility gained by not hearing demonic voices or by being allowed to leave a mental institution.

Comment author: Lara_Foster2 15 September 2008 06:52:28PM 1 point

As long as you are sharing your development with us, I'd be curious to know why the young Eliezer valued intelligence so highly as to make it a terminal value. He must have enjoyed what he thought was 'intelligence' tremendously, seen that people who did not share this intelligence did not share his enjoyment, and felt sorry for them. Moreover, he must not have been jealous of any enjoyments his less intelligent brethren seemed to partake in that he did not. He probably also did some sort of correlative analysis, observing people he considered more and less intelligent, and determined that the 'mores' were better off than the morons. What traits would he have used to establish this correlation?

Heck, not having experienced qualitatively what young Eliezer did, I can't be certain he's not right about how great it is to be that smart. But that argument can go in any direction. I was quite a busy teen myself, and I'm not so sure I'd trade my ups for a few more IQ points.

In response to Optimization
Comment author: Lara_Foster2 14 September 2008 07:42:40PM -1 points

It's not about resisting the temptation to meddle, but about what will, in fact, maximize human utility. The AI will not care whether utility is maximized by us or by it, as long as it is maximized (unless you want to program in 'autonomy' as an axiom, but I'm sure there are other problems with that). I think there is a high probability that, given its power, the fAI will determine that it *can* best maximize human utility by taking away human autonomy. It might give humans the *illusion* of autonomy in some circumstances, and, lo and behold, these people will be 'happier' than non-delusional people would be. Heck, what's to keep it from putting everyone in their own individual simulation? I was assuming some axiom that stated 'no wire-heading,' but it's very hard for me to even know what that means in a post-singularity context. I'm very skeptical of handing over control of my life to any dictatorial source of power, no matter how 'friendly' it's programmed to be. Now, if Eliezer is convinced it's a choice between his creation as dictator and someone else's destroying the universe, then it is understandable why he is working toward the best dictator he can surmise... But I would rather not have either.

In response to Optimization
Comment author: Lara_Foster2 14 September 2008 06:55:36PM 0 points

John Maxwell, I thought the security/adventure example was good, but the way I portrayed it might make it seem that ever-alternating IS the answer. Here goes: A man lives as a bohemian out on the street, nomadically solving his day-to-day problems of getting food and shelter. It seems to him that he would be better off seeking a secure life, and thus he gets a job to make money. Working for money toward a secure life is difficult and tiring, and it seems to him that he will be better off once he has the money and is secure. Now he has worked a long time, has the money, and is secure, which he finds boring in comparison both to working and to the bohemian life with its uncertainty. People do value uncertainty and 'authenticity' to a very high degree. Thus: Being Secure > Working to Be Secure > Not Being Secure > Being Secure.
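To see why that loop is fatal to any single "what he really wants" answer: a strict preference cycle cannot be represented by any utility function. Here is a minimal brute-force check, a sketch of my own with the story's three situations reduced to illustrative labels:

```python
import itertools

# The three situations from the story, reduced to labels.
states = ["secure", "working", "bohemian"]

# (preferred, dispreferred) pairs forming the story's cycle:
# secure > working > bohemian > secure
prefers = [("secure", "working"),
           ("working", "bohemian"),
           ("bohemian", "secure")]

# Try every strict ranking of the states; keep any ranking that
# satisfies all three preferences at once.
consistent = []
for ranks in itertools.permutations(range(len(states))):
    u = dict(zip(states, ranks))  # candidate utility function
    if all(u[a] > u[b] for a, b in prefers):
        consistent.append(u)

print(consistent)  # [] -- no utility function rationalizes the cycle
```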

Now, Eliezer would appropriately point out that the man only got trapped in this loop because he didn't actually know what would make him happiest, but assumed without having the experience. But, that being said, do we think this fellow would have been satisfied being told at the start, 'Don't bother working, son, this is better for you, trust me!'? There's no obvious reason to me why the fAI will allow people the autonomy they so desire to pursue their own mistakes unless the final calculation of human utility determines that it wins out, and this is dubious... I'm saying that I don't care whether what in truth maximizes utility is for everyone to believe they're 19th-century god-fearing farmers, or to be on a circular magic quest whose earliest day's memory disappears each day so that it replays for eternity, or whatever simulation the fAI decides on for post-singularity humanity; I think I'd rather be free of it, to fuck up my own life. Me and many others.

I guess this goes to another problem, more important than humans' nonlinear preferences: why should we trust an AI that maximizes human utility, even if it understands what that means? Why should we, from where we sit now, like what human volition (a collection of nonlinear preferences) extrapolates to, and what value do we place on our own autonomy?

In response to Optimization
Comment author: Lara_Foster2 13 September 2008 10:07:14PM 1 point

Eliezer, this particular point you made is of concern to me: "* When an optimization process seems to have an inconsistent preference ranking - for example, it's quite possible in evolutionary biology for allele A to beat out allele B, which beats allele C, which beats allele A - then you can't interpret the system as performing optimization as it churns through its cycles. Intelligence is efficient optimization; churning through preference cycles is stupid, unless the interim states of churning have high terminal utility."

You see, it seems quite likely to me that humans evaluate utility in such a circular way under many circumstances, and therefore aren't performing any optimization. Ask middle school girls to rank boyfriend preference and you find Billy beats Joey, who beats Micky, who beats Billy... Now, when you ask an AI to carry out an optimization of human utility based on observing how people optimize their own utility as evidence, what do you suppose will happen? Certainly humans optimize some things, sometimes, but optimizations of some things are at odds with others. Think how some people want both security and adventure. A man might have one (say security), be happy for a time, get bored, then move on to the other and repeat the cycle. Is optimization a flux of the two states? Or the one that gives the most utility over the other? I suppose you could take an integral of utility over time and find which set of states maximizes utility over time. How are we going to begin to define utility? "We like it! But it has to be real, no wire-heading."

Now throw in the complication of different people having utility functions at odds with each other. Not everyone can be king of the world, no matter how much utility they would derive from that position. Now ask the machine to be efficient, to do it as easily as possible, so that easier solutions are favored over more difficult, "expensive" ones.
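To make the "integral of utility over time" idea concrete, here is a toy sketch of my own; the payoff numbers and the boredom discount are invented for illustration, not derived from anything above. It shows how an alternating security/adventure schedule can beat either constant state once novelty decay is counted:

```python
def integrated_utility(schedule, boredom=0.2):
    """Sum utility over a schedule of states, decaying a state's
    payoff the longer it is held consecutively (novelty wears off)."""
    base = {"security": 1.0, "adventure": 1.0}  # invented payoffs
    total, streak, last = 0.0, 0, None
    for state in schedule:
        streak = streak + 1 if state == last else 1
        total += base[state] * (1 - boredom) ** (streak - 1)
        last = state
    return total

always_secure = ["security"] * 10
alternating = ["security", "adventure"] * 5
print(integrated_utility(always_secure))  # ~4.46: payoff decays away
print(integrated_utility(alternating))    # 10.0: switching resets the decay
```

Under these invented numbers the flux of the two states wins, but a different boredom rate flips the answer, which is exactly the definitional trouble.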

Even if we avoid all the pitfalls of 'misunderstanding' the initial command to 'optimize utility,' what gives you reason to assume that you, or I, or anyone in the small, small subsegment of the population that reads this blog, is going to like what the vector sum of all human preferences, utilities, etc. coughs up?

Comment author: Lara_Foster2 05 August 2008 05:24:22PM 0 points

Actually, if you want a more serious answer to your question, you should contact Sydney Brenner or Marty Chalfie, who actually worked on the C. elegans projects. Brenner is very odd and very busy, but Chalfie might give you the time of day if you make him feel important and buy him lunch... Marty is an arrogant sonuvabitch. He wouldn't give me a med school rec because he claimed not to know anything about me other than that I had the top score in his genetics class. I was all like, "Dude! I was the one who was always asking questions!" And he said, "Yes, and then class would go overtime." Lazy-ass sonuvabitch... But still a genius.

Comment author: Lara_Foster2 04 August 2008 11:26:10PM 2 points

Eliezer... This post terrifies me. How on earth can humans overcome this problem? Everyone is tainted. Every group is tainted. It seems almost fundamentally insurmountable... What are your reasons for working on fAI yourself and *not* trying to prevent all others working on gAI from succeeding? Why could *you* succeed? Life extension technologies are progressing fairly well without help from anything as dangerous as an AI.

Regarding anthropomorphism of non-human creatures, I was thoroughly fascinated this morning by a fuzzy yellow caterpillar in Central Park that was progressing rapidly (2 cm/s) across a field, over, under, and around obstacles, in a seemingly straight line. After watching its pseudo-sinusoidal body undulations and the circular twisting of its moist, pink head with two tiny black and white mouthparts for 20 minutes, I moved it to another location, after which it changed direction to crawl in another straight line. After projecting forward where the two lines would intersect, I determined the caterpillar was heading directly toward a large tree with rounded, multi-pointed leaves about 15 feet in the distance. I moved the caterpillar on a leaf (not easy; the thing moved very quickly, and I had to keep rotating the leaf) to the far side of the tree, and sure enough, it turned around, crawled up the tree, into a crevice, and burrowed in with the full drilling force of its furry little body.

Now, from a human point of view, words like 'determined,' 'deliberate,' and 'goal-seeking' might creep in, especially when it would rear its bulbous head in a circle and change directions, yet I doubt the caterpillar had any of these mental constructs. It was, like the moth it must turn into, probably sensing some chemoattractant from the tree... maybe it is time for it to make a chrysalis inside the tree and become a moth or butterfly, and some program just kicks in once it has gotten strong and fat enough, as this thing clearly was. But 'OF COURSE,' I thought. C. elegans, a much simpler creature, will change its direction and navigate simple obstacles when an edible proteinaceous chemoattractant is put in its proximity. The caterpillar is just more advanced at what it does. We know the location and connections of every one of C. elegans' 302 neurons... Why can't we make a device that will do the same thing yet? Too much anthropomorphism?
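The chemoattractant-following itself is easy to caricature in a few lines. What follows is a toy sketch of my own, not a model of the worm's or the caterpillar's actual circuitry; the concentration field and every parameter are invented. The agent samples nearby headings and steps toward the strongest reading, which reproduces the "straight line toward the tree, re-aim after being displaced" behavior:

```python
import math

def attractant(x, y, source=(15.0, 0.0)):
    # Invented field: concentration falls off with distance from
    # the source (the tree).
    d = math.hypot(x - source[0], y - source[1])
    return 1.0 / (1.0 + d)

def chemotaxis(x, y, step=0.5, n_steps=60):
    for _ in range(n_steps):
        # Sample candidate headings every 15 degrees and move along
        # the one whose destination smells strongest.
        best = max(
            (math.radians(a) for a in range(0, 360, 15)),
            key=lambda h: attractant(x + step * math.cos(h),
                                     y + step * math.sin(h)),
        )
        x += step * math.cos(best)
        y += step * math.sin(best)
    return x, y

print(chemotaxis(0.0, 0.0))   # converges near the source at (15, 0)
print(chemotaxis(20.0, 5.0))  # displaced start: it re-aims and still converges
```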
