wedrifid comments on What I would like the SIAI to publish - Less Wrong

Post author: XiXiDu 01 November 2010 02:07PM


Comment author: pjeby 02 November 2010 12:14:40AM 7 points

they simply do not disagree with the arguments per se but their likelihood

But you don't get to simply say "I don't think that's likely", and call that evidence. The general thrust of the Foom argument is very strong, as it shows there are many, many, many ways to arrive at an existential issue, and very very few ways to avoid it; the probability of avoiding it by chance is virtually non-existent -- like hitting a golf ball in a random direction from a random spot on earth, and expecting it to score a hole in one.

The default result in that case isn't just that you don't make the hole-in-one, or that you don't even wind up on a golf course: the default case is that you're not even on dry land to begin with, because two thirds of the earth is covered with water. ;-)

and also consider the possibility that it would be more dangerous to impede AGI.

That's an area where I have less evidence, and therefore less opinion. Without specific discussions of what "dangerous" and "impede AGI" mean in context, it's hard to separate that argument from an evidence-free heuristic.

we don't know that 1.) the fuzziness of our brain isn't a feature that allows us to stumble upon unknown unknowns, e.g. against autistic traits

I don't understand why you think an AI couldn't use fuzziness, or use brute-force searches, to accomplish the same things. Evolutionary algorithms reach solutions that even humans don't come up with.

Further it is in my opinion questionable to argue that it is easy to create an intelligence which is able to evolve a vast repertoire of heuristics, acquire vast amounts of knowledge about the universe, dramatically improve its cognitive flexibility

I don't know what you mean by "easy", or why it matters. The Foom argument is that, if you develop a sufficiently powerful AGI, it will foom, unless for some reason it doesn't want to.

And there are many, many, many ways to define "sufficiently powerful"; my comments about human-level AGI were merely to show a lower bound on how high the bar has to be: it's quite plausible that an AGI we'd consider sub-human in most ways might still be capable of fooming.

and yet somehow really hard to limit the scope of action that it cares about.

I don't understand this part of your sentence - i.e., I can't guess what it is that you actually meant to say here.

I'm also not convinced that intelligence bears unbounded payoff. There are limits to what any kind of intelligence can do; a superhuman AI couldn't come up with faster-than-light propulsion or disprove Gödel's incompleteness theorems.

Of course there are limits. That doesn't mean orders of magnitude better than a human isn't doable.

The point is, even if there are hitches and glitches that could stop a foom mid-way, they are like the size of golf courses compared to the size of the earth. No matter how many individual golf courses you propose for where a foom might be stopped, two thirds of the planet is still under water.

This is what LW reasoning refers to as "using arguments as soldiers": that is, treating the arguments themselves as the unit of merit, rather than the probability space covered by those arguments. I mean, are you seriously arguing that the only way to kick humankind's collective ass is by breaking the laws of math and physics? A being of modest intelligence could probably convince us all to do ourselves in, with or without tricky mind hacks or hypnosis!

The AI doesn't have to be that strong, because humans are so damn weak.

That it can simply invent it and then acquire it using advanced social engineering is pretty far-fetched in my opinion.

You would think so, but people apparently still fall for 419 scams. Human-level intelligence is more than sufficient to accomplish social engineering.

And what about taking over the Internet? It is not clear that the Internet would even be a sufficient substrate and that it could provide the necessary resources.

Today, presumably not. However, if you actually have a sufficiently powerful AI, then presumably, resources are available.

The thing is, foominess per se isn't even all that important to the overall need for FAI: you don't have to be that much smarter or faster than a human to be able to run rings around humanity. Historically, more than one human being has done a good job at taking over a chunk of the world, beginning with nothing but persuasive speeches!

Comment author: wedrifid 02 November 2010 01:11:45AM 3 points

the probability of avoiding it by chance is virtually non-existent -- like hitting a golf ball in a random direction from a random spot on earth, and expecting it to score a hole in one.

I like the analogy. It may even fit when considering building a friendly AI - like hitting a golf ball deliberately, and to the best of your ability, from a randomly selected spot on the earth and trying to get a hole in one. Overwhelmingly difficult, perhaps even impossible given human capabilities, but still worth dedicating all your effort to attempting!