
bigjeff5 comments on Applause Lights - Less Wrong

Post author: Eliezer_Yudkowsky 11 September 2007 06:31PM




Comment author: bigjeff5 23 February 2011 06:06:35PM 0 points

Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?

It must still be considered. You may not have time to weigh it thoroughly (time itself is now a factor), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.

Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?

It still must be considered. A few years in purgatory may be a price very much worth paying for a creation that saves mankind from the invading squid monsters.

Consider the atomic bomb before the first live test. There were real concerns that the first fission explosion could set off an unstoppable chain reaction that would ignite the very air, destroying the whole world in a single moment. I can't imagine a scenario more dire, or one that argues more strongly for ceasing all argument.

Yet they did the math anyway, weighed the risk (a tiny chance of destroying the world) against the reward (ending a war that would otherwise kill millions more people), and decided it was worth continuing.
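That weighing is just an expected-value comparison. A minimal sketch, with entirely hypothetical probabilities and utilities (none of these numbers come from the actual Manhattan Project assessment):

```python
# Hypothetical expected-value comparison: "proceed with the test" vs "halt".
# All numbers are invented for illustration only.

p_catastrophe = 1e-9        # assumed tiny chance the test ignites the atmosphere
value_catastrophe = -1e12   # loss if the whole world is destroyed
value_success = 1e7         # gain from ending the war sooner
value_halt = 0              # baseline: do nothing

ev_proceed = (p_catastrophe * value_catastrophe
              + (1 - p_catastrophe) * value_success)
ev_halt = value_halt

# Proceed only if the risk-weighted value beats halting.
decision = "proceed" if ev_proceed > ev_halt else "halt"
```

The point is not the particular numbers, which are made up, but that even a world-ending outcome enters the calculation as a term to be weighed, not as a reason to stop calculating.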

I still see no rational case for ever halting argument, except when the time for assessment simply runs out (if you don't act before X, the world blows up; obviously you must finish your assessment before X, or it was all pointless). You may weigh the risks against the opportunities, decide the risks are too great, and choose not to continue. But you cannot rationally cease all argument without consideration merely because an argument is particularly strong or dire. To do so is irrational.

Comment author: Polymeron 23 February 2011 06:39:38PM 1 point

Of course you can cease argument without consideration - if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can't afford to weigh whether the idea would result in a worse situation somehow - beyond whatever assessment you have already made. At that point, the time for assessment is over.
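The rule described above can be stated as a simple time-budget check. A sketch, with hypothetical names and units (all assumed for illustration):

```python
# Hypothetical decision rule: deliberating further is only rational when
# there is enough time left to both finish the assessment and then act.
# All arguments are in the same units (e.g. minutes); values are illustrative.
def should_keep_deliberating(time_left, assessment_time, action_time):
    return time_left >= assessment_time + action_time

# The scenario above: 1 minute left, assessment needs 5 -> stop weighing, act.
print(should_keep_deliberating(1, 5, 0))    # False
print(should_keep_deliberating(10, 5, 2))   # True
```

Once the check fails, further assessment has negative expected value no matter how important the question is, which is exactly the point being made.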

For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement "We need to balance the risks and opportunities of AI"; just that one can sincerely say it, and even argue for it. This was a response to your claim that "no one would ever utter the phrase in the first place", which just strikes me as false.

Never underestimate the power of human stupidity ;)

Comment author: bigjeff5 23 February 2011 10:25:25PM 0 points

You're right, in that regard I was certainly mistaken.