
stcredzero comments on Stupid Questions Open Thread Round 3 - Less Wrong Discussion

8 Post author: OpenThreadGuy 07 July 2012 05:16PM




Comment author: stcredzero 08 July 2012 09:33:17PM 2 points [-]

Isn't it almost certain that super-optimizing AI will result in unintended consequences? I think it's almost certain that super-optimizing AI will have to deal with their own unintended consequences. Isn't the expectation of encountering intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?

Comment author: faul_sname 08 July 2012 11:11:45PM *  6 points [-]

Isn't the expectation of encountering intelligence so advanced, that it's perfect and infallible essentially the expectation of encountering God?

Which god? If by "God" you mean "something essentially perfect and infallible," then yes. If by "God" you mean "that entity that killed a bunch of Egyptian kids" or "that entity that's responsible for lightning" or "that guy that annoyed the Roman Empire 2 millennia ago," then no.

Also, essentially infallible to us isn't necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).

Comment author: stcredzero 10 July 2012 01:20:10AM 0 points [-]

Which god? If by "God" you mean "something essentially perfect and infallible," then yes.

That one. Big man in sky invented by shepherds doesn't interest me much. Just because I'm a better optimizer of resources in certain contexts than an amoeba doesn't make me perfect and infallible. Just because X is orders of magnitude a better optimizer than Y doesn't make X perfect and infallible. Just because X can rapidly optimize itself doesn't make it infallible either. Yet when people talk about the post-singularity super-optimizers, they seem to be talking about some sort of Sci-Fi God.

Comment author: faul_sname 10 July 2012 01:30:41AM *  0 points [-]

Y'know, I'm not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I've never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).

Comment author: thomblake 10 July 2012 06:22:27PM 0 points [-]

but I've never seen a convincing argument that intelligence scales with optimization power

"optimization power" is more-or-less equivalent to "intelligence", in local parlance. Do you have a different definition of intelligence in mind?

Comment author: faul_sname 10 July 2012 10:09:06PM 0 points [-]

One that doesn't classify evolution as intelligent.

Comment author: thomblake 11 July 2012 01:49:22PM 0 points [-]

So the nonapples theory of intelligence, then?

Comment author: faul_sname 11 July 2012 03:52:54PM 1 point [-]

More generally, a theory that requires modeling of the future for something to be intelligent.

Comment author: DanArmak 14 July 2012 10:33:08PM 0 points [-]

What are "unintended consequences"? An imperfect ability to predict the future? Read strictly, any finite entity's ability to predict the future is going to be imperfect.

Comment author: stcredzero 16 July 2012 09:44:48PM 0 points [-]

What if the AI are advanced over us as we are over cockroaches, and the superintelligent AI find us just as annoying, disgusting, and hard to kill?

Comment author: DanArmak 16 July 2012 10:03:21PM 0 points [-]

What reason is there to expect such a thing?

(Not to mention that, proverbs notwithstanding, humans can and do kill cockroaches easily; I wouldn't want the tables to be reversed.)

Comment author: stcredzero 17 July 2012 04:05:40PM *  0 points [-]

Reason: Cockroaches and the behavior of humans. We can and do kill individuals and specific groups of individuals. We can't kill all of them, however. If humans can get into space, the lightspeed barrier might allow far-flung tribes of "human fundamentalists," to borrow a term from Charles Stross, to survive, though individuals would often be killed and would never stand a chance in a direct conflict with a super AI.

Comment author: DanArmak 19 July 2012 08:52:12PM 0 points [-]

Cockroaches and the behavior of humans.

In itself that doesn't seem to be relevant evidence. "There exist species that humans cannot eradicate without major coordinated effort." It doesn't follow either that the same would hold for far more powerful AIs, or that we should model the AI-human relationship on humans-cockroaches rather than humans-kittens or humans-smallpox.

If humans can get into space, the lightspeed barrier might let far-flung tribes of "human fundamentalists," to borrow a term from Charles Stross, to survive

It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness.

I could as easily reply that AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically. (For one thing, there's still no evidence human space colonization or even solar system colonization will happen anytime soon. And unlike AI it's not going to happen suddenly, without lots of advanced notice.)

Comment author: stcredzero 20 July 2012 07:36:10PM *  0 points [-]

It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness. ... I could as easily reply that AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically.

A summary of your points: while conceivable, there's no reason to think it's at all likely. Ok. How about, "Because it's fun to think about?"

Actually, lasers might not be practical against maneuverable targets because of the diffraction limit and the lightspeed limit. In order to focus a laser at very great distances, one would need very large lenses. (Perhaps planet sized, depending on distance and frequency.) Targets could respond by moving out of the beam, and the lightspeed limit would preclude immediate retargeting. Compensating for this by making the beam wider would be very expensive.
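The arithmetic behind this point can be sketched. The Rayleigh criterion gives a diffraction-limited spot radius of roughly 1.22·λ·L/D for wavelength λ, distance L, and aperture D, and a maneuvering target can drift out of a stale beam during the light-travel time. The specific numbers below (a 1 μm laser, a 1 light-year range, a 100 m spot, a 0.01 m/s² dodge) are illustrative assumptions, not figures from the thread:

```python
import math

LIGHT_YEAR_M = 9.461e15  # meters per light-year
YEAR_S = 3.156e7         # seconds per year

def required_aperture(wavelength_m, distance_m, spot_radius_m):
    """Aperture diameter needed for a diffraction-limited spot of the
    given radius at the given distance (Rayleigh criterion:
    spot_radius ~ 1.22 * wavelength * distance / aperture)."""
    return 1.22 * wavelength_m * distance_m / spot_radius_m

def lateral_drift(accel_m_s2, travel_time_s):
    """Sideways distance a constantly accelerating target covers
    while the beam is in flight."""
    return 0.5 * accel_m_s2 * travel_time_s ** 2

# Focusing a 1 micron laser to a 100 m spot at one light-year:
d = required_aperture(1e-6, LIGHT_YEAR_M, 100.0)
print(f"aperture needed: {d:.2e} m")  # ~1.2e8 m, about nine Earth diameters

# Meanwhile, a gentle 0.01 m/s^2 sideways burn over the beam's
# one-year flight time moves the target far outside that spot:
drift = lateral_drift(0.01, YEAR_S)
print(f"target drift:    {drift:.2e} m")  # ~5e12 m
```

On these assumptions the lens is indeed planet-scale, and the target's drift exceeds the spot size by ten orders of magnitude, which is the substance of the retargeting objection.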

Comment author: DanArmak 21 July 2012 12:56:22PM 2 points [-]

Regarding lasers: I could list things the attackers might do to succeed. But I don't want to discuss it because we'd be speculating on practically zero evidence. I'll merely say that I would rather that my hopes for the future do not depend on a failure of imagination on part of an enemy superintelligent AI.

Comment author: stcredzero 21 July 2012 06:11:36PM 0 points [-]

You're assuming that there's always an answer for the more intelligent actor. Only happens that way in the movies. Sometimes you get the bear, and sometimes the bear gets you.

Sometimes one can pin their hopes on the laws of physics in the face of a more intelligent foe.

Comment author: DanArmak 20 July 2012 09:19:01PM 0 points [-]

It's more fun to me to think about pleasant extremely improbable futures than unpleasant ones. To each their own.

Comment author: stcredzero 20 July 2012 10:03:26PM 0 points [-]

There's lots of scope for great adventure stories in dystopian futures.