
bigjeff5 comments on Applause Lights - Less Wrong

91 Post author: Eliezer_Yudkowsky 11 September 2007 06:31PM



Comment author: bigjeff5 31 January 2011 04:45:36PM 2 points

- We need to balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities correctly and

- We shouldn't balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.

I don't get that at all. If "We shouldn't balance the risks and opportunities of AI" means the risks and opportunities are being assessed incorrectly, isn't correcting the assessment itself part of balancing the risks and opportunities of AI? I don't see how you can get that reading out of the statement. If the assessments are being done incorrectly, then in the discussion of the risks and opportunities you say, "No, you're doing it wrong, you need to look at it like this blah blah blah."

When you say "We shouldn't balance the risks and opportunities of AI," it means to stop making an assessment altogether. It says nothing about whether to go forward with the project or not. It doesn't say "Stop the project! This is all wrong!" That would fall under balancing the risks and opportunities: an assessment that came out against AI.

That's foolishness, which is why no one would ever utter the phrase in the first place. That makes the prior phrase an applause light, because it is obvious to anyone involved that such an assessment is necessary. You're only saying it because you know people will nod their heads in agreement and possibly clap.

Comment author: Polymeron 23 February 2011 03:38:48PM 2 points

It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.

A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea. A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it's too late.

Either would be a substantive position that you don't need to balance the risks and opportunities any further, without claiming that there is some error in your assessment.

Comment author: bigjeff5 23 February 2011 06:06:35PM 0 points

Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?

It must still be considered. You may not have time to consider it thoroughly (as time is now a factor to consider), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.

Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?

It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may very much be worth doing.

Consider the atomic bomb before the first live test. There were real concerns that the explosion could set off an unstoppable chain of events that would set the very air on fire, destroying the whole world in that single moment. I can't imagine a scenario more dire, or one that argues more strongly for ceasing all argument.

Yet they did the math anyway, weighed the risk (a tiny chance of blowing up the world) against the reward (ending a war that was guaranteed to kill millions more people), and decided it was worth it to continue.
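The weighing described here can be written out as a toy expected-value comparison. Every number below is invented purely for illustration (none are historical estimates), and the model is deliberately crude:

```python
# Toy expected-value comparison for "proceed vs. halt".
# All figures are made up for the sketch; none are historical estimates.

p_catastrophe = 1e-9       # assumed tiny chance the test destroys the world
loss_catastrophe = 3e9     # assumed loss if it does: roughly everyone alive
lives_saved_if_ok = 2e6    # assumed lives saved by ending the war sooner

# Expected lives saved by proceeding, net of the catastrophe risk.
ev_proceed = (1 - p_catastrophe) * lives_saved_if_ok - p_catastrophe * loss_catastrophe
ev_halt = 0.0              # halting changes nothing in this toy model

decision = "proceed" if ev_proceed > ev_halt else "halt"
```

With these made-up numbers the expected cost of the tiny risk (a few lives) is swamped by the expected reward (millions of lives), so the toy model says to proceed; flip the probability a few orders of magnitude upward and the same arithmetic says to halt.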

I still see no rational case for ever halting argument, except when time for assessment simply runs out (if you don't act before X, the world blows up - obviously you must finish your assessment before X, or it was all pointless). You may weigh the risks against the opportunities, decide the risks are too great, and decide not to continue. However, you cannot rationally cease all argument without consideration just because an argument is particularly strong or dire. To do so is irrational.

Comment author: Polymeron 23 February 2011 06:39:38PM 1 point

Of course you can cease argument without consideration - if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can't afford to weigh whether the idea would result in a worse situation somehow - beyond whatever assessment you have already made. At that point, the time for assessment is over.
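The time-budget point here reduces to a one-line rule. The function name and the minute figures below are my own illustration, not anything from the discussion:

```python
def should_assess_further(time_available_min: float, time_needed_min: float) -> bool:
    """Assume further assessment is worthless unless it can finish before the deadline."""
    return time_needed_min <= time_available_min

# With 1 minute left and 5 minutes of assessment required,
# the time for assessment is over: act on what you already know.
print(should_assess_further(1, 5))   # prints False
```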

For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement "We need to balance the risks and opportunities of AI"; just that one can sincerely say it, and even argue for it. This was a response to you saying that "no one would ever utter the phrase in the first place." That just strikes me as false.

Never underestimate the power of human stupidity ;)

Comment author: bigjeff5 23 February 2011 10:25:25PM 0 points

You're right, in that regard I was certainly mistaken.