
Comment author: sdr 02 April 2017 03:51:02AM *  7 points [-]

(I'm not sure which part of this is the "armchair-theorizing-sociology piece", so let me share impressions:

  • The 3 specific examples are all observations: 2 at a CFAR event, 1 at a bay-lesswrong event
  • The "people putting others' needs ahead of their own" comes from 2 people who both bounced from the Bay for this reason
  • The "attempting value-pumping" / lack-of-dealcraft is ubiquitous wherever people are Getting Stuff Done; the only novel thing in the Bay is that high turnover / constant onboarding of people allows this to be done systematically
  • The "let's make stuff suck less" -> "let's all of you do my stuff" headfake is a non-profit special; 2 attempts so far on me
  • The part where, instead of attempting to "forbid parasiting", I turn it around and ask "how can we make these parasites profitable?", is a specialty of mine, and has so far been very profitable in a number of contexts.

If you see none of these, I am happy for you. )

Comment author: Lara_Foster 02 April 2017 08:55:26PM 3 points [-]

I agree that this is an important issue we may have to deal with. I think it will be important to separate doing things for the community from doing things for individual members of the community. For example, encouraging people to bring food to a potluck or volunteer at solstice is different from setting expectations that you help someone with their webpage for work or help out members of the community who are facing financial difficulties. I've been surprised by how many times I've had to explain that expecting the community to financially support people is terrible on every level and should be actively discouraged as a community activity. This is not an organized enough community, with high enough bars to membership, to do things like collections. I do worry that people will hear a vague 'Hufflepuff!' call to arms and assume this means doing stuff for everyone else whenever you feasibly can -- it shouldn't. It should be a message for what you do in the context of the public community space. What you choose to do for individuals is your own affair.

In response to The Level Above Mine
Comment author: Lara_Foster 27 September 2008 08:30:00PM 2 points [-]

Eliezer, Komponisto,

I understand the anxiety issues of 'Do I have what it takes to accomplish this...'

I don't understand why the existence of someone else who can would damage Eliezer's ego. I can observe that many other people's sense of self is violated if they find out that someone else is better at something they thought they were the best at -- the high-school football champion losing their position in college, etc. However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the superiority for its own sake.

Now, Eliezer might take the discovery of a better rationalist/fAI designer as proof that he misjudged his relative superiority -- but unless he thinks his superiority is itself valuable, he should not be bothered by it. His own actual intelligence, after all, will not have changed, only the state of his knowledge of others' intelligence relative to his own.

Eliezer must enjoy thinking he is superior for the loss of this status to bother his 'ego'.

Though I suppose one could argue that this is a natural human quality, and Eliezer would need to be superhuman or lying to say otherwise.

In response to The Level Above Mine
Comment author: Lara_Foster 27 September 2008 05:51:00PM 4 points [-]

Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, rather than as a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He should *want* that to happen if he's such an altruist.

I don't really give a damn where my 'intelligence' falls on some scale, so long as I have enough of it to accomplish those things I find satisfying and important TO ME. And if I don't, well, hopefully I have enough savvy to get others who do to help me out of a difficult situation. Hopefully Eliezer can get the help he needs with fAI (if such help even exists and such a problem is solvable).

Also, to those who care about intelligence for its own sake, does the absolute horsepower matter to you, or only your abilities relative to others? I.e., would you be satisfied if you were considered the smartest person in the world by whatever scale, or would that still not be enough because you were not omniscient?

Comment author: Lara_Foster 16 September 2008 02:01:06AM 0 points [-]

Scott: "You have a separate source of self-worth, and it may be too late that you realize that source isn't enough."

Interesting theory of why intelligence might have a negative correlation with interpersonal skills, though it seems like a 'just so story' to me, and I would want more evidence. Here are some alternatives: 'Intelligent children find the games and small-talk of others their own age boring and thus do not engage with them.' 'Stupid children do not understand what intelligent children are trying to tell them or play with them, and thus ignore or shun them.' In both of these circumstances, the solution is to socialize intelligent children with each other or with an older group in general. I had a horrible time in grade school, but I socialized with older children and adults and I turned out alright (well, I think so). I suppose without *any* socialization, a child will not learn how to interpret facial expressions, intonations, and general emotional posturing of others. I'm not certain that this can't be learned with some effort later in life, though it might not come as naturally. Still, it would seem worth the effort.

Comment author: Lara_Foster 15 September 2008 04:38:47PM 0 points [-]

I'm uncertain whether Eliezer-1995 was equating intelligence with the ability to self-optimize for utility (i.e. intelligence = optimization power) or if he was equating intelligence with utility (intelligence is great in and of itself). I would agree with Crowly that intelligence is just one of many factors influencing the utility an individual gets from his/her existence. There are also multiple kinds of intelligence. Someone with very high interpersonal intelligence and many deep relationships but abysmal math skills may not want to trade places with the 200-IQ math whiz who's never had a girlfriend and is still trying to compute the ultimate 'girlfriend-maximizing utility equation'. Just saying...

Anyone want to provide links to studies correlating IQ, ability, and intelligences in various areas with life-satisfaction? I'd hypothesize that people with slightly above average math/verbal IQs and very above average interpersonal skills probably rank highest on life-satisfaction scales.

Unless, of course, Eliezer-1995 didn't think utility could really be measured by life satisfaction, and by his methods of utility calculation, intelligence beats out all else. I'd be interested in knowing what utility meant to him under this circumstance.

In response to Optimization
Comment author: Lara_Foster 15 September 2008 02:54:34AM 0 points [-]

Oh, come on, Eliezer, of course you thought of it. ;) However, it might not have been something that bothered you, as in:

A) You didn't believe actually having autonomy mattered as long as people feel like they do (i.e. a Matrix/Nexus situation). I have heard this argued. Would it matter to you if you found out your whole life was a simulation? Some say no. I say yes. Matter of taste perhaps?

B) OR You find it self evident that 'real' autonomy would be extrapolated by the AI as something essential to human happiness, such that an intelligence observing people and maximizing our utility wouldn't need to be told 'allow autonomy.' This I would disagree with.

C) OR You recognize that this is a problem with a non-obvious solution for an AI, and thus intend to deal with it somehow in code ahead of time, before starting the volition-extrapolating AI. Your response indicates you feel this way. However, I am concerned even beyond setting an axiomatic function for 'allow autonomy' in a program. There are probably an infinite number of ways that an AI can find to carry out its stated function that will somehow 'game' our own system and lead to suboptimal or outright repugnant results (i.e. everyone being trapped in a permanent quest -- maybe the AI avoids the problem of 'it has to be real' by actually creating a magic ring that needs to be thrown into a volcano every 6 years or so). You don't need me telling you that! Maximizing utility while deluding us about reality is only one. It seems impossible that we could axiomatically safeguard against all possibilities. Asimov was a pretty smart cookie, and his '3 laws' are certainly not sufficient. 'Eliezer's million lines of code' might cover a much larger range of AI failures, but how could you ever be sure? The whole project just seems insanely dangerous. Or are you going to address safety concerns in another post in this series?

Comment author: Lara_Foster 05 September 2008 08:17:00PM 2 points [-]

Ah! I just thought of a great scenario! The Real God Delusion. Talk about wireheading...
So the fAI has succeeded and it actually understands human psychology and people's deepest desires, and it actually wants to maximize our positive feelings in a balanced way, etc. It has studied humans intently and determines that the best way to make all humans feel best is to create a system of God and heaven -- humans are prone to religiosity, it gives them a deep sense of meaning, etc. So our friendly neighborhood AI reads all religious texts and observes all rituals and determines the best type of god(s) and heaven(s) (it might make more than one for different people)... So the fAI creates God, gives us divine tasks that we feel very proud to accomplish when we can (religiosity), gives us rules to balance our conflicting internal biological desires, and uploads us after death into some fashion of paradise where we can feel eternal love...

Hey -- just saying that even *IF* the fAI really understood human psychology, that doesn't mean *we* will like its answer... We might NOT like what most other people do.

Comment author: Lara_Foster 05 September 2008 08:00:00PM 1 point [-]

Cocaine-
I was completely awed by how just totally-mind-blowing-amazing this stuff was the one and only time I tried it. Now, I *knew* the euphoric-orgasmic state I was in had been induced by a drug, and this knowledge would make me classify it as 'not real happiness,' but if someone had secretly dosed me after saving a life or having sex, I probably would have interpreted it as happiness proper. Sex and love make people happy in a very similar way to cocaine, and don't seem to have the same negative effects as cocaine, but this is probably a dosage issue. There are sex/porn addicts whose metabolism or brain chemistry might be off. I'm sure that if you carefully monitored the pharmacokinetics of cocaine in someone's system, you could maximize cocaine utility by optimizing dosage and frequency such that you didn't sensitize to it or burn out endogenous serotonin.

Would it be wrong for humans to maximize drug-induced euphoria? Then why not for an AI to?

What about rewarding with cocaine after accomplishing desired goals? Another million in the fAI fund... AHHH... Maybe Eliezer should become a sugar-daddy to his cronies to get more funds out of them. (Do this secretly so they think the high is natural and not that they can buy it on the street for $30)

The main problem as I see it is that humans DON'T KNOW what they want. How can you ask a superintelligence to help you accomplish something if you don't know what it is? The programmers want it to tell them what they want. And then they get mad when it turns up the morphine drip...

Maybe another way to think about it is we want the superintelligence to think like a human and share human goals, but be smarter and take them to the next level through extrapolation.

But how do we even know that human goals are indefinitely extrapolatable? Maybe taking human algorithms to an extreme DOES lead to everyone being wire-headed in one way or another. If you say, "I can't just feel good without doing anything... here are the goals that make me feel good -- and it CAN'T be a simulation," then maybe the superintelligence will just set up a series of scenarios in which people can live out their fantasies for real... but they will still all be staged fantasies.


Comment author: Lara_Foster 03 September 2008 04:19:13AM 0 points [-]

Eliezer,

Excuse my entrance into this discussion so late (I have been away), but I am wondering if you have answered the following questions in previous posts, and if so, which ones.

1) Why do you believe a superintelligence *will* be necessary for uploading?

2) Why do you believe there possibly ever *could* be a safe superintelligence of any sort? The more I read about the difficulties of friendly AI, the more hopeless the problem seems, especially considering the large amount of human thought and collaboration that will be necessary. You yourself said there are no non-technical solutions, but I can't imagine you could possibly believe in a magic bullet that some individual super-genius will *eureka* have an epiphany about by himself in his basement. And this won't be like the cosmology conference to determine how the universe began, where everyone's testosterone-riddled ego battled for a victory of no consequence. It won't even be a Manhattan Project, with nuclear weapons tests in barren wastelands... Basically, if we're not right the first time, we're fucked. And how do you expect you'll get that many minds to be that certain that they'll agree it's worth making and starting the... the... whateverthefuck it ends up being. Or do you think it'll just take one maverick with a cult of loving followers to get it right?

3) But really, why don't you just focus all your efforts on preventing *any* superintelligence from being created? Do you really believe it'll come down to *us* (the righteously unbiased) versus *them* (the thoughtlessly fame-hungry computer scientists)? If so, who are *they*? Who are *we* for that matter?

4) If fAI will be that great, why should this problem be dealt with immediately by flesh-and-blood, flawed humans instead of improved, uploaded copies in the future?

Comment author: Lara_Foster 08 August 2008 07:00:00PM -1 points [-]

Ok, Eliezer -- you are just a human and therefore prone to anger and reaction to said anger, but *you,* in particular, have a professional responsibility not to come across as excluding people who disagree with you from the discussion and presenting yourself as the final destination of the proverbial buck. We are all in this together. I have only met you in person once, have only had a handful of conversations about you with people who actually know you, and have only been reading this blog for a few months, and yet I get a distinct impression that you have some sort of narcissistic Hero-God-Complex. I mean, what's with dressing up in a robe and presenting yourself as the keeper of clandestine knowledge? Now, whether or not you actually *feel* this way, it is something you project and should endeavor *not* to, so that people (like sophiesdad) take your work more seriously. "Pyramid head," "Pirate King," and "Emperor with no clothes" are NOT terms of endearment, and this might seem like a ridiculous admonition coming from a person who has self-presented as a 'pretentious slut,' but I'm trying to be provocative, not leaderly. YOU are asking all of these people to trust YOUR MIND with the dangers of fAI and the fate of the world and give you money for it! Sorry to hold you to such high standards, but if you present with a personality disorder any competent psychologist can identify, then this will be very hard for you... unless of course you want to go the "I'm the Messiah, abandon all and follow me!" route, set up the Church of Eliezer, and start a religious movement with which to get funding... Might work, but it will be hard to recruit serious scientists to work with you under those circumstances...
