This is a reply to a comment by Yvain, and to everyone who might have misunderstood what problem I was trying to highlight.

Here is the problem: you can't estimate the probability and magnitude of the advantage an AI will have if you are working with a concept as vague as 'intelligence'.

Here is a case that bears some similarity and might shed light on what I am trying to explain:

At his recent keynote speech at the New York Television Festival, former Star Trek writer and creator of the re-imagined Battlestar Galactica Ron Moore revealed the secret formula to writing for Trek.

He described how the writers would just insert "tech" into the scripts whenever they needed to resolve a story or plot line, then they'd have consultants fill in the appropriate words (aka technobabble) later.

"It became the solution to so many plot lines and so many stories," Moore said. "It was so mechanical that we had science consultants who would just come up with the words for us and we'd just write 'tech' in the script. You know, Picard would say 'Commander La Forge, tech the tech to the warp drive.' I'm serious. If you look at those scripts, you'll see that."

Moore then went on to describe how a typical script might read before the science consultants did their thing:

La Forge: "Captain, the tech is overteching."

Picard: "Well, route the auxiliary tech to the tech, Mr. La Forge."

La Forge: "No, Captain. Captain, I've tried to tech the tech, and it won't work."

Picard: "Well, then we're doomed."

"And then Data pops up and says, 'Captain, there is a theory that if you tech the other tech ... '" Moore said. "It's a rhythm and it's a structure, and the words are meaningless. It's not about anything except just sort of going through this dance of how they tech their way out of it."

The use of 'intelligence' is as misleading and dishonest in evaluating risks from AI as the use of 'tech' in Star Trek.

It is true that 'intelligence', just like 'technology', has some explanatory power, in the same way that 'emergence' has some explanatory power. As in: "the morality of an act is an emergent phenomenon of a physical system: it refers to the physical relations among the components of that system." But that does not help in evaluating the morality of an act, or in predicting whether a given physical system will exhibit moral properties.

15 comments:

People do sometimes use philosophical terms like "intelligence" imprecisely, in confusion- and error-generating ways. But that is not the same sort of meaninglessness as Star Trek's mad libs. If you think something is that meaningless, and it didn't come from fiction, then you're probably missing something important.

"The morality of an act is an emergent phenomena of a physical system" means "It is possible in principle to produce a model and definition of morality in terms of a physical system". This is useless to people who want details of that model which could be used to classify actions as moral or not. It's kind of like answering "Where are my keys?" with "Your keys exist!". Useless, but certainly not meaningless; if someone instead told you, "Your keys do not exist!", then you'd infer the interesting-and-important fact that your keys had been destroyed. The implications of "morality is not an emergent phenomena of a physical system" would be considerably more abstract, philosophical, and difficult to translate into action, but there would be interesting implications.


You can't estimate the probability and magnitude of the advantage an AI will have if you are using something that is as vague as the concept of 'intelligence'.

So: why not use and supply a more precise definition?

I still think you should just read the sequences before posting dozens of top-level posts like this one.

I agree that telling newcomers to "go read the sequences" is rude and counter-productive, but according to Yvain you've written more than 80 top-level posts!

I sometimes consider LessWrong to be a discussion forum for people who have read the sequences; not that having read the sequences turns one into an übermensch, but it gives a set of common vocabulary and expected knowledge that keeps the discussion from going over and over basic stuff. Just as there could be a discussion forum for people who have read Marx: whether or not the members agree with each other, at least they wouldn't be rehashing the basics or talking right past each other because of radically different interpretations of some terms. And in such a forum, arguing against Marxism without bothering to read any Marx would indeed be kinda rude.

That being said, I greatly respect your work emailing AI researchers!

I was going to upvote this comment until I got to the last line. XiXiDu's email campaign is almost certainly doing more harm than good.

That's surprising; I found having a set of views from outside the LW/SIAI cluster quite refreshing. What do you think was bad about those? My only quibble would be that I found some of the questions awkward/leading/irrelevant; I would have preferred a better set of questions. But XiXiDu improved them with time.

Those questionnaires are not a particularly good introduction to the LW/SI memespace. I worry that he is therefore making a poor first impression on our behalf, reducing the odds that these people will end up contributing to existential risk reduction and/or friendliness research.

What WrongBot says. I approve of the project of getting outside views, and I approve of the idea of making more AI researchers aware of AI as a possible existential risk. (From what I've heard, SIAI is quietly doing this themselves with some of the more influential groups.) But XiXiDu doesn't understand SIAI's actual object-level claims, let alone the arguments that link them, and he writes AI researchers in a style that looks crankish.

I can hardly think of a better way to prejudice researchers against genuinely examining their intuitions than a wacky letter asking their opinions on an array of absurd-sounding claims with no coherent structure to them, which is exactly what he presents them with.

But XiXiDu doesn't understand SIAI's actual object-level claims, let alone the arguments that link them, and he writes AI researchers in a style that looks crankish.

Agreed - some of his questions were cringe-inducing, but overall, I appreciated that series of posts because it's interesting to hear what a broad range of AI researchers have to say about the topic; some of the answers were insightful and well-argued.

I agree that sounding crankish could be a problem, but I don't think XiXiDu was presenting himself as writing in LW/SIAI's name. Crankiness from some LessWrongers tarring the reputation of Eliezer's writings is hard to avoid anyway. The main problem is that there's no clear way to refer to Eliezer's writings: "The Sequences" is obscure and covers too much stuff, some of which isn't Eliezer's; "Overcoming Bias" worked at the time; and "Less Wrong" is a name that wasn't even in use when most of the core Sequences were written, and now mostly refers to the community.

XiXiDu:
1) Luke Muehlhauser, executive director of SIAI, listed my attempt to interview AI researchers in his post 'Useful Things Volunteers Can Do Right Now'.
2) Before writing to anyone, I asked for feedback, continued to ask for feedback, and improved my questions.
3) I directly contacted SIAI via email several times, asking them how to answer the replies I got from researchers.
4) All the interviews were highly upvoted.
5) I never claimed to be associated with SIAI.

XiXiDu doesn't understand SIAI's actual object-level claims...

I understand them well enough for the purpose of asking researchers a few questions. My karma score was 5700+ at one point. Do you think that would have been possible without a basic understanding of some of the underlying ideas?

...a wacky letter asking their opinions on an array of absurd-sounding claims with no coherent structure to them, which is exactly what he presents them with.

I think this is just unfair. I do not think that my email or the questions I asked were wrong. There is also no way to ask a lot of researchers about this topic without sounding a bit wacky.

All you could do instead is 1) tell them to read the Sequences, or 2) not ask them at all and just trust Eliezer Yudkowsky. 1) won't work, since they have no reason to suspect that Eliezer Yudkowsky possesses some incredible secret knowledge they don't. 2) is not an option for me: he could tell me anything about AI and I would have no way to tell whether he knows what he is talking about.

I understand them well enough for the purpose of asking researchers a few questions. My karma score was 5700+ at one point. Do you think that would have been possible without a basic understanding of some of the underlying ideas?

Yes. I attribute my 18k karma to excessive participation. If I didn't have a clue what I was talking about, it would have taken longer, but I would still have collected thousands of karma just by writing many comments with correct grammar.

Karma - that is, total karma of users - means very little.

Karma - that is, total karma of users - means very little.

I'd kinda like to see it expressed as (total karma / total posts); that might help a little bit...
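A minimal sketch of what that ratio might look like in practice, assuming a hypothetical user record with 'karma' and 'post_count' fields; the field names are illustrative only, and the figures are loosely based on numbers mentioned elsewhere in this thread:

```python
# Sketch: karma per post as a rough quality signal, as suggested above.
# The record structure is hypothetical, not any actual forum API.

def karma_per_post(user):
    """Average karma per post, guarding against users with no posts."""
    if user["post_count"] == 0:
        return 0.0
    return user["karma"] / user["post_count"]

users = [
    {"name": "example_user", "karma": 5700, "post_count": 82},
]

for u in users:
    # Prints roughly 69.5 for the example figures above.
    print(u["name"], round(karma_per_post(u), 1))
```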

I still think you should just read the sequences before posting dozens of top-level posts like this one.

XiXiDu hasn't read the sequences? How is he able to quote-mine Eliezer so effectively?

XiXiDu hasn't read the sequences? How is he able to quote-mine Eliezer so effectively?

To be fair, he doesn't mine Eliezer particularly effectively. Just often. I could quote-mine to make Eliezer look like a dick far more effectively (having actually read Eliezer's posts and understood what he says well enough to be able to pick out the ones that can be twisted the worst).

See this recent discussion with Yvain:

XiXiDu:

I am not sure how much I have read. Maybe 30 posts? I haven't found any flaws so far. But I feel that there are huge flaws.

Vladimir (not Yvain, my mistake):

Given the amount of activity you've applied to arguing about these topics (you wrote 82 LW posts during the last 1.5 years), I must say this is astonishing!