Comment author: rebellionkid 03 August 2013 08:38:10PM 7 points

I hope it is false.

I think this is the most interesting sentence in the whole discussion.

Let's be clear. Racial groupings are really very significant pieces of evidence. There are huge amounts of genetics that correlate, huge amounts of culture that correlate, huge amounts of wider environment that correlate. It would be frankly astonishing if things like IQ, reaction time, height, life expectancy, and rates of disease didn't also correlate.

So, we ought to expect to see a correlation, and in fact a whole bunch of studies say we do. ... And then those studies are put under far more than the usual pressure. See people below wanting to dismiss Raven's Progressive Matrices as culturally biased. Why on earth do we want there to be no such correlation with IQ?

We're very happy to say there's a correlation between race and height, between race and life expectancy, between race and disease, between race and income. Why not race and IQ? Why do we want that to be false?

Comment author: Epiphany 03 August 2013 10:19:16PM *  1 point

Let's be clear. Racial groupings are really very significant pieces of evidence. There are huge amounts of genetics that correlate, huge amounts of culture that correlate, huge amounts of wider environment that correlate. It would be frankly astonishing if things like IQ, reaction time, height, life expectancy, and rates of disease didn't also correlate.

Culture and environment are not race. Therefore, if you're studying race, those influences should be taken out of your scientific experiment. It's extremely difficult to remove things like culture and environment from a study on IQ. The fact that so much correlates with race doesn't mean the results of studies intended to determine racial differences are significant so much as it means they're a tangled mess of cause and effect which we likely haven't sorted out adequately.

Why on earth do we want there to be no such correlation with IQ?

A. We don't want black people to suffer needlessly.

B. We don't want to encourage ourselves and others to be prejudiced against people when, regardless of what the average African's IQ is, it is still both logically incorrect (hasty generalization) and ethically wrong to prejudge individual Africans. However, knowing how humans behave, we figure that if people believe Africans have lower IQs, that will result in an increase in prejudice.

We're very happy to say there's a correlation between race and height, between race and life expectancy, between race and disease, between race and income. Why not race and IQ? Why do we want that to be false?

Actually, I bet some people are not happy saying that there are correlations there. This is one of those notions you might want to double check.

Comment author: Muhd 02 August 2013 11:44:04PM *  1 point

This is an interesting point, but let's try a thought experiment to see if it holds up. Consider the following statements you could make about yourself:

  1. You are an X-level black belt in a martial art.
  2. Your top bowling score is X.
  3. You can benchpress X amount of weight.
  4. You have an IQ of X.

Where X is some value that is impressive and/or noteworthy. How strong a negative reaction do you think each of these would get?

Here's what my intuition says:

  1. Probably no negative reaction.
  2. Probably no negative reaction.
  3. Possibly somewhat negative, sounds like bragging.
  4. Strong negative reaction.

Looking for a pattern in the results, I have a theory: it seems like what is most unacceptable is making it sound as though you are superior to the other people in the room in an objective sense. The reason martial arts and bowling are acceptable is that skill in those pursuits is not relevant to the other people in the room who do not engage in them. On the other hand, bragging about your weightlifting is somewhat more annoying, since it seems like you are saying you are more healthy/fit/muscular than the other people in the room--traits which are more broadly valuable.

Claiming high intelligence gets the worst response of all because it is the most absolute and broad claim of superiority one can make, since being intelligent generally makes you better at a broad range of tasks in the modern world, all else being equal. Also, IQ is associated with controversy and suffers additional negativity from that -- just like if you say you are for/against abortion. I think Andy may be right that the objective number makes it worse in some way. If you said "I am really smart" that wouldn't be quite as offensive, since it is less objective.

If someone can think of counterexamples to my theory, replies are welcome.

Comment author: Epiphany 03 August 2013 04:51:02AM *  -1 points

I don't think it's superiority. A counterpoint in thought experiment form:

  1. "Hi, I'm the president of the United States"
  2. "Hi, I run my own business."
  3. "Hi, I'm a model."
  4. "Hi, I'm Albert, the guy who came up with E equals MC squared."
  5. "Hi, I'm a genius."

I think the numbers do make statements sound bad (I couldn't figure out a way to word the above using a number without making it sound like bragging) but that's irrelevant to the question I'm trying to answer, so it's essentially one of those factors that should be removed from an experiment. I added an additional statement in the same format (an introduction using an identity of some type) about intelligence which does not include a number so that we've got a comparable intelligence-related option.

Here's what my intuition says:

  1. No negative reaction (more likely a positive reaction like excitement).
  2. No negative reaction (admiration seems as likely as jealousy).
  3. Potentially some amount of negative feelings from jealous females, and some amount of excitement from males or lesbians.
  4. No negative reaction (more likely a positive reaction like excitement).
  5. Strong negative reaction.

What's interesting here is that 1 and 4 are not only some of the biggest claims of superiority that you can make, but also refer to something verifiable, which should theoretically intensify the reaction. If making a claim of superiority were the problem, those should trigger much worse reactions.

I think the difference between the genius claim and the others in my thought experiment is that all the others are claiming to be doing something constructive. This makes the superiority less threatening. Another possibility is that the claims to genius and high IQ are not verifiable with LinkedIn or other research, so they're not as believable.

Here's a thought experiment with some non-verifiable claims, where there are varying levels of superiority and threat:

  1. Hi, I'm a secret government agent.
  2. Hi, I'm very powerful.
  3. Hi, I'm an elite computer hacker.
  4. Hi, I'm highly gifted.

I think the reaction to 1-3 would be curiosity while the reaction to the fourth would be extreme dislike. I'm interested in other people's reactions because I think my own are too influenced by having thought about this previously. Interestingly:

  1. Secret agents are probably far less common than gifted people. If I remember right, the entire government is 3% of the population whereas gifted people are 2%, and I doubt that 2/3 of the government consists of secret agents (which is roughly what it would take for secret agents to be as common as gifted people).

  2. Not all gifted people are powerful, as giftedness does not automatically lead to any type of success. Claiming to be gifted is not claiming as much power as "powerful" is.

My current idea is that if a person with a high IQ makes any type of claim to this, they are more likely to be accused of lying or regarded as a threat than is sensible, and that the negative reactions provoked are disproportionate when compared with reactions to other claims that are comparable but don't involve IQ / giftedness / genius.

I found your comment refreshing and thoughtful. +1 karma.

If you can think of any good counterpoints, I'd like to read them. (:

Comment author: Epiphany 03 August 2013 04:26:19AM *  1 point

I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives, especially ones that cause people to do unethical things. I was wondering if anyone has explored this in depth or happens to know a term for "perverse incentives that cause people to do unethical things" (regardless of whether it's part of economics or some other subject), as I can't seem to find one.

Comment author: AndyCossyleon 02 August 2013 09:23:42PM 3 points

I think an obvious difference between the last one and the first two is that the last one includes a number. There is no uncertainty when comparing numbers, no wriggle room for subjectivity. A real number is either smaller than, bigger than, or equal to another real number. Period. This rigidity does not mesh well with the flexibility that comfortable social interaction requires. I don't think this is the only reason why the third is so inappropriate, but it definitely contributes.

Comment author: Epiphany 03 August 2013 03:51:34AM 2 points

An unexpected point. Thank you.

Comment author: RolfAndreassen 01 July 2013 02:18:11AM 2 points

Each of the Ps is vulnerable to the same objection: What is special about robots?

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

Why does this not apply to rifles?

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

Again, why isn't this isomorphic to "Human equipped with weapon X" versus "unarmed human"?

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

Once more: Why are "Killer Robots" different from "machine guns" in this sentence?

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.

s/Killer Robot/military unit.

Comment author: Epiphany 01 July 2013 05:12:18PM *  -2 points

Why does this not apply to rifles? / Again, why isn't this isomorphic to "Human equipped with weapon X" versus "unarmed human"?

Killer robots pose a threat to democracy that rifles do not. Please see "Near-Term Risk: Killer Robots a Threat to Freedom and Democracy" and the TED Talk linked therein, "Daniel Suarez: The kill decision shouldn't belong to a robot". You might also like to check out his book "Daemon" and its sequel.

Once more: Why are "Killer Robots" different from "machine guns" in this sentence?

Machine guns are wielded by humans, and humans can make better ethical decisions than robots currently can.

Comment author: Kaj_Sotala 01 July 2013 01:04:47PM 0 points

Do you know if anyone has written an article yet on obviousness as a meta semantic stop sign, or obviousness as a false supportive argument? If not, I'll do it.

Not that I could recall.

Comment author: Epiphany 01 July 2013 05:02:22PM *  -1 points

Ok, I'll post about this in the open thread to gauge interest / see if anyone else knows of a pre-existing LW post on these specific obviousness problems.

Comment author: Alicorn 01 July 2013 06:32:49AM 4 points

Yvain has graduated medical school; he is concentrating in psychiatry but it's still an MD.

Comment author: Epiphany 01 July 2013 06:45:40AM 0 points

Ah, okay. I'll edit my comment then.

Comment author: fowlertm 01 July 2013 05:37:43AM 2 points

Thanks for your comments, I'm inclined to basically agree with what you've said. Bans are almost never the answer and probably wouldn't work anyway. If that's true, it means machine ethics is even more important, because the only solution is to make these autonomous technologies as absolutely safe as possible.

Comment author: Epiphany 01 July 2013 06:44:17AM *  1 point

Thanks for your comments, I'm inclined to basically agree with what you've said.

I am glad to know that my comments have made a difference and that they were welcome. I think LessWrong could benefit a lot from The Power of Reinforcement, so I am glad to see someone doing this.

the only solution is to make these autonomous technologies as absolutely safe as possible.

Actually, I don't think that approach will work in this scenario. When it comes to killer robots, the militaries will make them as dangerous as possible (but controllable, of course). However, the biggest problem isn't that they'll shoot innocent people - that's a problem, but there's a worse one: we may soon live in an age where anyone can decide to make themselves an army. Making killer robots safe is an oxymoron. There needs to be a solution that's really out of the box.

Comment author: Alicorn 01 July 2013 06:16:17AM 1 point

As a non-medical doctor having a discussion with a fellow non-medical doctor to humor curiosities:

Are you talking to someone other than Yvain, about whom you wrote this remark?

Comment author: Epiphany 01 July 2013 06:26:55AM *  0 points

It was written to Yvain. I was under the impression that Yvain was studying psychology, not medicine. Now that his website link has changed, I'm not sure there's a way for me to look this up.

Comment author: Epiphany 01 July 2013 03:07:54AM *  2 points

My purpose with this is not to argue, but to get people to really think about the measures he suggests because I think we can have a more realistic view than the one presented by Peter at the Conscious Entities blog.

P1 - Restricting killer robot production would come at great cost, would pose risks, and isn't likely to happen.

Great Cost:

To ban killer robots, you would also have to ban:

  • 3-D printers (If they can't make parts for killer robots now, they'll probably be able to make them later.)

  • Personal robots (If they can hold a gun then people could pull some Kevlar over them and make any modifications needed.)

  • Anything that can be controlled by a computer and can also hold a deadly payload (toy and hobby items like airplanes and quadcopters could be fashioned into assassination tools with the addition of something like a spray bottle full of chemicals or a dart shooter.)

  • Computer-controlled vehicles. Seem unwieldy or expensive? Consider how many pounds of explosives they can conceal, how far they can go, and how much damage they could do for the price, as well as the possibility of choosing a cheap used vehicle to offset the cost (and the used cars of the future may be computer-capable).

The number of technologies that could potentially be used to make lethally autonomous "killer robot" weapons is limited only by our imaginations. Pretty much anything with the ability to see, process visual data, identify targets, and physically move could become deadly with modification. As technology progresses, it would become harder and harder to make anything new without it getting banned due to its potential for lethal autonomy. The number of future technologies we'd have to ban could become ridiculous.

Bans pose risks:

As is said about gun control: "If guns are illegal, only the criminals will have them" - Eliezer agrees with the spirit of this in the context of killer robots.

Consider these possibilities:

  1. People will be able to steal from these approved companies, they'll be able to bribe these companies, and organized crime groups like mafias and gangs will be able to use tactics like blackmail and intimidation to get 3-D printers and other technologies. Criminals will therefore still have access to those things.

  2. Anybody who wants to become a bloodthirsty dictator would only have to start the right kind of company. Then they'd have access to all the potential weapons they want, and assuming they could amass enough robots to take on an army (in some country, if not in their own)... they could fulfill that dream.

  3. If we did ban them for the average person but let companies have them, we'd be upgrading those companies to an empowered class of potential warlords. Imagine if companies today - the same ones that are pulling various forms of B.S. (like the banks and the recession) - also had enough firepower to kill you.

Isn't likely to happen:

I don't think we're likely to ban all 3-D printers, personal robots, computer-controlled cars, computer-controlled toys / electronics and everything else that could possibly be used as a lethally autonomous weapon. Such widespread bans would have a major impact on economic growth. Consider how much we feel a need to compete with other countries - and other countries may not have bans. Especially consider the relationship between our economic growth and our military power - we can't defend ourselves against other countries without funding our military, and we can't fund our military without taxes, and without sufficient economic growth, we won't be able to collect sufficient taxes. If any other countries do not also have such bans, and any of those ban-less countries might in the future decide to make war against us, we'd be sitting ducks if we let such bans slow economic growth.

Even if we did ban possession of these items for the average person (which would seriously impact economic growth, seeing as how the average person's purchases are a large part of that, and those purchases can be taxed), we'd probably not ban them for manufacturers and other professionals, or else technological progress might be seriously crippled. If we do not ban them for companies, this means the risk is not eliminated (see "bans pose risks" above).

If the people realize how these technologies could cause the power balances to shift - and Daniel Suarez is working on getting them to realize that - they may begin to demand to be allowed 3-D printers and personal robots and so on as an extension of their right to bear arms. They may realistically need to have defenses against the gangs, wayward companies and would-be dictators of the future, and if they're concerned about it, they'll be looking to get a hold of those weapons in whatever way possible. If the people believe that they have a right to, or a need for 3-D printers and robot body-guards, then a ban on these types of technologies would be about as effective as prohibition.

P2. - Ensuring that missions could be assigned to hypothetical human soldiers will not protect democracy.

If sufficient killer robots exist to match or overpower human soldiers, then at that point, the government can do what it likes because nobody will be able to fight back. This means the checks and balances on the government's power are gone. No checks and balances means that the government does not even have to follow its own rules - nobody is powerful enough to enforce them. (Imagine the supreme court screaming at the executive branch in front of the executive branch's killer robot army. Not practical.) If that happens, you'll be at the mercy of those in power and will just have to cross your fingers that every single president you elect until the end of time (don't forget the one in office at the time) chooses not to be a dictator for life. Game over. We fail.

P3. - Avoiding unpredictable circumstances is not possible.

A. If unpredictable circumstances are a killer robot army's weakness, the enemy of said killer robot army will most certainly realize that this can be exploited. If any types of unpredictable circumstances at all are useful, the enemy will likely be forced to exploit them in order to survive.

B. Since when is regular life predictable, let alone a situation as chaotic as war? Sorry, ensuring predictable circumstances in the event of war is not possible.

P4. - Restricting killer robot abilities may prove anti-strategic and therefore be deemed lame.

Since war is an unpredictable and chaotic situation in which your enemy - who is a conscious, thinking entity - will probably get creative and throw at you exactly what you did not plan for, versatility is a must. It may be that failing to arm the robots in every way possible makes them totally ineffective, meaning that if people choose to fight with them at all, they will view it as an absolute necessity to arm all killer robots to the teeth, and will justify that with "Well, you want to survive the war, don't you?"

P4. - Adding remote shutdown isn't practical.

A. Imagine how a remote shutdown situation would actually play out in reality. Your robots are fighting. Oops, there's a bug. There are enemies everywhere. You shut them down. The enemy goes "WOOT, FREE KILLER ROBOTS!", takes them to their base, hacks them, and reverse-engineers them. Not only did you just lose your killer robot army, your enemy just gained a killer robot army, and will be able to make better weapons from now on. When is remote shutdown ever going to actually be used on a killer robot army during combat? I think the answer to that is: if the person controlling the robots has half a brain, never. They will never use this feature outside of a test environment, and if their computer security expert has half a brain, the remote shutdown feature will be removed from the robots after they leave the test environment (see B).

B. Successful offense is easier than successful defense - this also applies to the computer hacking world. This is why there are so many police stations and government offices that do not connect computers with sensitive data to the internet, or don't have internet access at all! They can't be certain of preventing their computers from being hacked. If you put remote shutdown into the killer robot armies, that's just a super sweet target for your enemy's hackers. In order to be hacker-proof, they'll have to make these robots truly autonomous - meaning no remote control whatsoever, and no special button or voice command or sequence that shuts them down, period. If their computer security expert has half a brain, killer robots will not be made with the remote shutdown "feature". Well, okay, I suppose the government could put in a remote shutdown feature if they want to send the robots to shoot at people in developing countries with no hackers - but the remote shutdown feature would be a serious flaw against a technologically advanced enemy. Actually, scratch that. There are a lot of criminal hacking organizations out there, and technology companies, that may be interested in hacking the remote shutdown feature in order to usurp their very own robot army. Creating an army of killer robots with a shutdown feature in a world where there are multiple parties that may be interested in usurping this army could be an extremely poor decision, even if your original intention was to expose that robot army only to third-world combatants.

machine ethics is going to become especially important very soon

Thank you very much for taking time to talk about this issue. I'm very glad to see that people are taking it seriously and are talking about it. I hope you do not take offense at my comment, as my purpose with this is not to make you feel argued with but to encourage people to think realistically about these dangers.
