In response to My true rejection
Comment author: orthonormal 14 July 2011 11:06:49PM 5 points [-]

Your proposals are the kind of strawman utilitarianism that turns out to be both wrong and stupid, for several reasons.

Also, I don't think you understand what the SIAI argues about what an unFriendly intelligence would do if programmed to maximize, say, the personal wealth of its programmers. Short story, this would be suicide or worse in terms of what the programmers would actually want. The point at which smarter-than-human AI could be successfully abused by a selfish few is after the problem of Friendliness has been solved, rather than before.

Comment author: dripgrind 15 July 2011 12:06:58AM 0 points [-]

Ah, another point about maximising. What if the AI uses CEV of the programmers or the corporation? In other words, it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.

Comment author: Raemon 14 July 2011 11:16:37PM *  2 points [-]

It kind of seems like at the moment, you mainly want to find post-hoc reasons why the exercise was "useful".

I did use all of those reasons to justify why I thought I should do it beforehand. But I have noticed myself repeating those reasons to make myself feel more justified. (Also possible that my primary motivation in doing so in the first place was the social-skill development one)

In any case, I think your recommendations for how to proceed are good ones.

Comment author: dripgrind 14 July 2011 11:41:00PM 2 points [-]

Another idea - if you can't find someone skilled in market research to do this for you at a discount or free, read a textbook about how to assess potential new brands to help with designing the survey.

In response to My true rejection
Comment author: timtyler 14 July 2011 10:54:27PM *  -1 points [-]

Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that tries to maximise the personal wealth of the researchers, or the share price of the corporation that employs them, or pursues some other goal that the rest of humanity might not like.

And who's going to stop that happening?

That is a fairly likely outcome. It would represent business as usual. The entire history of life is one of some creatures profiting at the expense of other ones.

In response to comment by timtyler on My true rejection
Comment author: dripgrind 14 July 2011 11:36:11PM -1 points [-]

My point, then, is that as well as heroically trying to come up with a theory of Friendly AI, it might be a good idea to heroically stop the deployment of unFriendly AI.

In response to My true rejection
Comment author: Nick_Tarleton 14 July 2011 11:15:49PM 9 points [-]

assassinate any researchers who look like they're on track to deploying an unFriendly AI, then destroy their labs and backups.

You need to think much more carefully about (a) the likely consequences of doing this (b) the likely consequences of appearing to be a person or organization that would do this.

See also.

Comment author: dripgrind 14 July 2011 11:34:58PM 0 points [-]

Oh, I'm not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells to do it (perhaps without those cells even knowing the master they truly serve). The absence of such false-flag cells indicates that SIAI aren't doing it - although their presence wouldn't prove they were. That's the whole idea of "false-flag".

If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, "shut up and multiply", trillion specks, and all that.

In response to My true rejection
Comment author: orthonormal 14 July 2011 11:06:49PM 5 points [-]

Comment author: dripgrind 14 July 2011 11:26:45PM 4 points [-]

I freely admit there are ethical issues with a secret assassination programme. But what's wrong with lobbying politicians to retard the progress of unFriendly AI projects, regulate AI, etc? You could easily persuade conservatives to pretend to be scared about human-level AI on theological/moral/job-preservation grounds. Why not start shaping the debate and pushing the Overton window now?

I do understand what SIAI argues an unFriendly intelligence would do if programmed to maximize some financial metric. I just don't believe that a corporation in a position to deploy a super-AI would understand or heed SIAI's argument. After all, corporations maximise short-term profit against their long-term interests all the time - a topical example is News International.

In response to My true rejection
Comment author: falenas108 14 July 2011 10:37:43PM 1 point [-]

You don't need to have solved the AGI problem to have solved friendliness. That issue can be solved separately far before AGI even begins to become a threat, and then FAI and UFAI will be on basically equal footing.

Comment author: dripgrind 14 July 2011 11:17:14PM 3 points [-]

Can you give me some references for the idea that "you don't need to have solved the AGI problem to have solved friendliness"? I'm not saying it's not true, I just want to improve this article.

Let's taboo "solved" for a minute.

Say you have a detailed, rigorous theory of Friendliness, but you don't have it implemented in code as part of an AGI. You are racing with your competitor to code a self-improving super-AGI. Isn't it still quicker to implement something that doesn't incorporate Friendliness?

To me, it seems like, even if the theory was settled, Friendliness would be an additional feature you would have to code into an AI that would take extra time and effort.

What I'm getting at is that, throughout the history of computing, the version of a system with desirable property X has tended to be implemented and deployed commercially after the version without X, even when the theoretical benefits of X are well known in academia. For example, it would have been better for the general public and web developers if web browsers obeyed W3C specifications and didn't have any extra proprietary tags - but in practice, commercial pressures meant that companies made grossly non-compliant browsers for years until eventually they started moving towards compliance.

The "Friendly browser" theory was solved, but compliant and non-compliant browsers still weren't on basically equal footing.

(Now, you might say that CEV will be way more mathematical and rigorous than browser specifications - but the only important point for my argument is that it will take more effort to implement than the alternative).

Now you could say that browser compliance is a fairly trivial matter, and corporations will be more cautious about deploying AGI. But the potential gain from deploying a super-AI first would surely be much greater than the benefit of supporting the blink tag or whatever - so the incentive to rationalise away the perceived dangers will be much greater.

In response to My true rejection
Comment author: Armok_GoB 14 July 2011 10:10:33PM -2 points [-]

I'm generally operating under the assumption that this is clearly not an issue and seems so only for reasons that should obviously not be talked about given these assumptions. If you know what I mean.

Comment author: dripgrind 14 July 2011 10:51:55PM 5 points [-]

I really don't know what you mean.

Comment author: Raemon 14 July 2011 09:44:19PM *  7 points [-]

I actually mostly agree with you. I hesitated a long time before posting this because I didn't think I had enough/the-right-kind of work done to justify sharing. But ultimately, the reason I posted it is the same reason I still think it's a good idea: Action is better than inaction, and a big problem I think people in our demographic face is overthinking and underdoing. Michaelos' recent post in another thread strikes me as very true. (It may not, in fact, be true, but it definitely matches up with other things I know). If I'm taking actions to solve a problem, I can learn from my mistakes, get feedback and try new approaches. (Thank you for your feedback, by the way.)

There are already half-baked efforts to "expand the rationality movement" underway. A half-baked attempt to figure out if that's even the right goal is not ideal, but I think it's better than nothing.

I didn't spend otherwise important, productive time doing this. I was converting useless time in an elevator into:

1) Some new information about what people think about rationality

2) Some new information about how to ask people questions and get productive answers

3) Practice at talking to random people in general

4) Practice talking about rationality without evangelizing (yes, I realize I didn't do a great job at it, but it's something that I can only improve at with practice)

(I didn't see the definition as important so I could start deliberately evangelizing, but so that if the conversation went in a particular direction we'd have something ready to say)

I DID spend "potential productive" time writing up this report and setting up the google doc, but that was time that taught me how to write up a Less Wrong post and your feedback has given me things to think about to improve for next time, so thank you for that.

We talked about hiring real researchers at our meetups. We didn't end up doing it, mostly because from everything we knew, the official channels to do so were expensive and we had no idea what nonofficial channels we might successfully work with. If you do have recommendations on how to actually go about this, that'd be great.

But regardless, I think this was a useful exercise for me and I think it would be a useful exercise for many people here. The current data set is near useless, but the experience acquiring it was not, and I think as I/we get better at acquiring data it could become less useless. Even if we got a more scientifically useful polling company to answer the specific question "What do people think about the word rationality?" I think it would still be useful for us to practice talking to people about it.

Writing everything down on a google doc might not actually be useful for the purpose of evaluating the information accurately, but it gets us into the practice of recording and checking over data.

Comment author: dripgrind 14 July 2011 10:46:56PM 12 points [-]

Action can be way worse than inaction, if what you end up doing is misleading yourself or doing harm to your cause.

I don't think what you've done is necessarily misleading or harmful, as long as you don't consider it anything more than incomplete, qualitative research into the range of responses the word "rationality" gets from random people.

But you really, really need to decide what the point of this exercise is. Are you trying to gather useful data, or make people feel more positive about rationality, or just get comfortable talking to random people? It kind of seems like at the moment, you mainly want to find post-hoc reasons why the exercise was "useful".

Here's my suggestion: if you're trying to do a survey, decide on your demographic(s) of interest. Get everyone on Less Wrong to ask around until they find a sympathiser who works in a branding/marketing survey organisation, and can slip in an extra question in a survey, asking how people respond to the term "rationality".

Failing that, collaboratively draw up a proper survey protocol and get Less Wrongers to administer it to a random sample of people. Think it through before you do it: e.g. stopping people outside on the street would be more representative than limiting it to a certain building. You could signal that you're an official survey person by carrying a clipboard (not by wielding a recording device). You could improve participation by stating initially that you only have one question which will take 15 seconds, then not trying to start a discussion. You could improve participation among younger women by making it clear that you're doing a survey, so they're not concerned you're trying to start an abstract philosophical conversation as a pretext to get them into bed.

I think this could have great potential, especially if you comparatively test alternative terms to "rationality". Richard Dawkins tried to popularise the term "Brights" for people who don't believe in the supernatural. If he'd done even the amount of field testing you have already done, he would have realised it sounds insufferably smug. So I think your impulse to do market research is a good one.

Comment author: Vladimir_Golovin 14 July 2011 06:49:05AM 25 points [-]

Upvoted for actually going out and asking real people.

Comment author: dripgrind 14 July 2011 08:47:42PM 2 points [-]

Putting up a poll on Livejournal would also constitute "asking real people". Obviously an LJ poll isn't going to deliver a representative sample or actionable information - but then again, neither is asking 9 people who work in your building in New York.

Comment author: dripgrind 14 July 2011 08:40:56PM 20 points [-]

It's definitely a good idea to do this.

But the way you've set about doing it isn't going to produce any worthwhile data.

I'm no expert on branding and market research, but I'm pretty sure that the best practice in the field isn't having conversations with 9 non-random strangers in a lift (asking different leading questions each time) then bunging it in Google Docs and getting other people to add more haphazard data in the hope that someone will make a website that sorts it all out.

First you need to define the question you're asking. Exactly which sub-population are you interested in? You start off asking about "the average person"'s attitude to rationality, suggesting that maybe you want to gauge attitudes across the whole (US?) population. But then you decide that the 60+ man is "outside our demographic bracket", although your 70+ grandmother apparently isn't.

Either way, the set of [people who work in your office building plus your grandmother] might not constitute a representative sample of the population of the USA, let alone everyone in the world. Getting people who frequent Less Wrong to ask people they cross paths with isn't going to be a representative sample of all people - you can see that, right?

The most efficient way to answer your question is likely to be piggybacking on existing polling organisations. Now, it's probably true that corporate marketing/branding "researchers" have a bias towards confirming what the bosses want to hear - I was just reading this Robin Hanson article about how people don't evaluate the quality of predictions after the fact: http://www.cato-unbound.org/2011/07/13/robin-hanson/who-cares-about-forecast-accuracy/ - but still, I think it would be better to at least consider that there are organisations whose job it is to find the general public reaction to a "brand".

You could find someone who works for such an organisation and suborn them to add an extra question to a proper survey. That way you could gather the reactions of 1000 or 10,000 demographically-representative people in a single action. Let's not waste our time dicking around uploading meaningless data to Google Docs.

A good target in the UK would be YouGov.

I also think it's pointless to worry about a concise definition of rationality until it's been determined that "rationality" is in fact a good brand for public consumption. What if it turns out that the term "rationality" makes 60% of people instantly hostile? Do the research first, then start proselytising.

I find it interesting that the response to this article hasn't overwhelmingly been about criticising Raemon's methodology. Is that because LessWrong members fallaciously assume that attempting to measure the public's subjective, irrational responses to a word doesn't need to be carried out in an objective, rational manner? Or is it, as I increasingly suspect as I edit and re-edit this comment, that I'm a total dick?
