iceman comments on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set - Less Wrong

31 Post author: malo 25 July 2012 07:28PM




Comment author: iceman 25 July 2012 09:08:02PM 32 points [-]

I know that this article is more than a bit sensationalized, but it covers most of the things that I donate to the SIAI despite, like several members' evangelical polyamory. Such things don't help the phyg pattern matching, which already hits us hard.

Comment author: mej10 26 July 2012 05:33:46PM *  15 points [-]

The "evangelical polyamory" seems like an example of where Rationalists aren't being particularly rational.

In order to get widespread adoption of your main (more important) ideas, it seems like a good idea to me to keep your other, possibly alienating, ideas private.

Being the champion of a cause sometimes necessitates personal sacrifice beyond just hard work.

Comment author: Jack 26 July 2012 05:57:48PM 21 points [-]

Probably another example: calling themselves "Rationalists"

Comment author: private_messaging 28 July 2012 10:15:47AM *  -2 points [-]

Yeah.

Seriously, why should anyone think that SI is anything more than "narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future", to paraphrase one of my friends?

This is pretty damn illuminating:

http://lesswrong.com/lw/9gy/thesingularityinstitutesarroganceproblem/5p6a

re: the sex life, there's nothing wrong with it per se, but consider that there are things like the psychopathy checklist, where you score points for basically talking people into giving you money, for being admired beyond your accomplishments, and for sexual promiscuity. On top of that, most people will give you a fuzzy psychopathy point for believing the AI would be psychopathic, because of the typical mind fallacy. I'm not saying this is solid science (it isn't), just outlining the way many people think.

Comment author: Risto_Saarelma 28 July 2012 12:14:33PM 1 point [-]

On top of that, most people will give you a fuzzy psychopathy point for believing the AI would be psychopathic, because of the typical mind fallacy. I'm not saying this is solid science (it isn't), just outlining the way many people think.

This doesn't seem to happen when people note that corporations, viewed as intentional agents, behave like human psychopaths. The reasoning is even pretty similar to the case for AIs: corporations exhibit basically rational behavior but mostly lack whatever special sauce individual humans have that makes them a bit more prosocial.

Comment author: private_messaging 28 July 2012 01:01:18PM -2 points [-]

Well, intelligence in general can be much more alien than this.

Consider an AI that, given any mathematical model of a system and some 'value' metric, finds the optimal parameters for an object in that system. E.g. the system could be the Navier-Stokes equations and a wing, the wing shape could be the parameter, and some metric of the wing's drag and lift could be the value to maximize; the AI would do everything necessary, including figuring out how to simulate those equations efficiently.

Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance would be minimized.

That's the sort of thing that scientists tend to see as 'intelligent'.

The concept of AI, however, has acquired plenty of connotations from science fiction, whereby it is very anthropomorphic.
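The model-plus-metric optimizer described above can be sketched minimally. This is a toy illustration, not anyone's actual system: the "system" here is a hypothetical one-dimensional stand-in for the wing example (lift grows with camber, drag with its square) rather than the Navier-Stokes equations, and all names are made up for the sketch.

```python
# Toy sketch of a "math-only" optimizer: given a model of a system and a
# value metric, find the parameter that maximizes the metric. The optimizer
# knows nothing about aerodynamics, only the function it is handed.

def value_metric(camber: float) -> float:
    lift = 2.0 * camber   # toy lift model
    drag = camber ** 2    # toy drag model
    return lift - drag    # quantity to maximize

def optimize(metric, lo: float, hi: float, steps: int = 10_000) -> float:
    """Brute-force grid search over [lo, hi]."""
    best_x, best_v = lo, metric(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = metric(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

best = optimize(value_metric, 0.0, 3.0)  # near 1.0, the analytic optimum
```

The point of the example is that nothing in this loop involves goals about the outside world; swapping in a fluid-dynamics simulation and a real wing parameterization would leave the structure unchanged.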

Comment author: Risto_Saarelma 28 July 2012 02:05:38PM 1 point [-]

Those are narrow AIs. Their behavior doesn't involve acquiring resources from the outside world and autonomously developing better ways to do that. That's the part that might lead to psychopath-like behavior.

Comment author: private_messaging 28 July 2012 02:31:21PM *  1 point [-]

Specializing the algorithm to the outside world and to a particular philosophy of value does not make it broader, or more intelligent, only more anthropomorphic (and less useful, if you don't believe in friendliness).

Comment author: Risto_Saarelma 28 July 2012 02:48:02PM 2 points [-]

The end value is still doing the best possible optimization of the parameters of the mathematical system. There are many more resources in the outside world that can be used for that than what is probably available to the algorithm when it starts up. So an algorithm that can interact effectively with the outside world may be able to satisfy whatever alien goal it has much better than one that can't.

(I'm a bit confused if you want the Omohundro Basic AI Drives stuff explained to you here or if you want to be disagreeing with it.)

Comment author: private_messaging 28 July 2012 03:02:37PM *  1 point [-]

Having specific hardware that computes an algorithm actually display the results of that computation at a specific time is outside the scope of a 'mathematical system'.

Furthermore, the decision theories are all built to be processed using the above-mentioned mathematics-solving intelligence to attain real-world goals, except that defining real-world goals proves immensely difficult. edit: also, if the mathematics-solving intelligence were to have some basic extra drives, such as resisting being switched off (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.

Omohundro Basic AI Drives stuff

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

Comment author: Larks 26 July 2012 09:01:11PM 12 points [-]

Agreed. I don't want to have to hedge my exposure to crazy social experiments; I want pure-play Xrisk reduction.

Comment author: Sly 26 July 2012 03:18:50AM 14 points [-]

"evangelical polyamory"

Very much agree with this in particular.

Comment author: Alicorn 26 July 2012 07:26:59PM 9 points [-]

Who's being evangelical about it?

Comment author: iceman 26 July 2012 10:05:55PM *  47 points [-]

Maybe the word "evangelical" isn't strictly correct. (A quick Google search suggests that I had cached the phrase from this discussion.) I'd like to point out an example of an incident that leaves a bad taste in my mouth.

(Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date.)

This comment was made by Eliezer in the name of this community, in the author's notes to one of LessWrong's largest recruiting tools. I remember when I first read it, I kind of flipped out. Professor Quirrell wouldn't have written this, I thought. It was needlessly antagonistic, it squandered a bunch of positive affect, there was little to be gained from the digression, it was blatant signaling--it was so obviously the wrong thing to do, and yet it was published anyway.

A few months before that was written, I had cut a fairly substantial cheque to the Singularity Institute. I want to purchase AI risk reduction, not fund a phyg. Blocks of text like the above do not make me feel confident that I am doing the former and not the latter. I am not alone here.

Back when I only lurked here and saw the first PUA fights, I was in favor of the PUA discussion ban, because if LessWrong wants to be a movement that either raises the sanity waterline or maximizes the probability of solving the Friendly AI problem, it needs to be as inclusive as possible and have as few ugh fields as possible that immediately drive away new members. I now think an outright ban would do more harm than good, but the ugh field remains, and it is counterproductive.

Comment author: juliawise 27 July 2012 06:11:11PM 13 points [-]

When you decide to fund research, what are your requirements for researchers' personal lives? Is the problem that his sex life is unusual, or that he talks about it?

Comment author: Bugmaster 27 July 2012 10:20:33PM 18 points [-]

My feelings on the topic are similar to iceman's, though possibly for slightly different reasons.

What bothers me is not the fact that Eliezer's sex life is "unusual", or that he talks about it, but that he talks about it in his capacity as the chief figurehead and PR representative for his organization. This signals a certain lack of focus due to an inability to distinguish one's personal and professional life.

Unless the precise number and configuration of Eliezer's significant others is directly applicable to AI risk reduction, there's simply no need to discuss it in his official capacity. It's unprofessional and distracting.

(in the interests of full disclosure, I should mention that I am not planning on donating to SIAI any time soon, so my points above are more or less academic).

Comment author: iceman 27 July 2012 09:19:12PM 21 points [-]

My biggest problem is more that he talks about it, sometimes in semiofficial channels. This doesn't mean that I wouldn't be squicked out if I learned about it, but I wouldn't see it as a political problem for the SIAI.

The SIAI isn't some random research think tank: it presents itself as the charity with the highest utility per marginal dollar. Likewise, Eliezer Yudkowsky isn't some random anonymous researcher: he is the public face of the SIAI. His actions and public behavior reflect on the SIAI whether or not it's fair, and everyone involved should have already had that as a strongly held prior.

If people ignore LessWrong or don't donate to the SIAI because they're filtered out by squickish feelings, that means fewer resources for the SIAI's mission in return for inconsequential short-term gains realized mostly by SIAI insiders. Compound this with the fact that talking about the singularity already triggers some people's absurdity bias; there need to be as few other filters as possible, to maximize the usable resources the SIAI has for maximizing the chance of a positive singularity outcome.

Comment author: juliawise 27 July 2012 09:48:21PM 1 point [-]

It seems there are two problems: you trust SIAI less, and you worry that others will trust it less. I understand the reason for the second worry, but not the first. Is it that you worry your investment will become worth less because others won't want to fund SIAI?

Comment author: private_messaging 28 July 2012 07:06:15PM *  9 points [-]

That talk was very strong evidence that the SI is incompetent at PR and, furthermore, irrational. edit: or doesn't actually hold its stated goals and beliefs. If you believe the donations are important for saving your life (along with everyone else's), then you naturally try to avoid making such statements. Though I do, in some way, admire straight-up, in-your-face honesty.

Comment author: AndrewH 27 July 2012 04:55:59AM 7 points [-]

I can only give you one upvote, so please take my comment as a second.

Comment author: philh 28 July 2012 02:59:01AM 8 points [-]

On the other hand - while I'm also worried about other people's reaction to that comment, my own reaction was positive. Which suggests there might be other people with positive reactions to it.

I think I like having a community leader who doesn't come across as though everything he says is carefully tailored not to offend people who might be useful; and occasionally offending such people is one way to signal being such a leader.

I also worry that Eliezer having to filter comments like this would make writing less fun for him; and if that made him write less, it might be worse than offending people.