wedrifid comments on What I would like the SIAI to publish - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
But you don't get to simply say "I don't think that's likely" and call that evidence. The general thrust of the Foom argument is very strong: it shows there are many, many, many ways to arrive at an existential issue, and very, very few ways to avoid one. The probability of avoiding it by chance is virtually nonexistent, like hitting a golf ball in a random direction from a random spot on earth and expecting to score a hole in one.
The default result in that case isn't just that you don't make the hole-in-one, or that you don't even wind up on a golf course: the default case is that you're not even on dry land to begin with, because two thirds of the earth is covered with water. ;-)
That's an area where I have less evidence, and therefore less opinion. Without specific discussions of what "dangerous" and "impede AGI" mean in context, it's hard to separate that argument from an evidence-free heuristic.
I don't understand why you think an AI couldn't use fuzziness or use brute force searches to accomplish the same things. Evolutionary algorithms reach solutions that even humans don't come up with.
I don't know what you mean by "easy", or why it matters. The Foom argument is that, if you develop a sufficiently powerful AGI, it will foom, unless for some reason it doesn't want to.
And there are many, many, many ways to define "sufficiently powerful"; my comments about human-level AGI were merely to show a lower bound on how high the bar has to be: it's quite plausible that an AGI we'd consider sub-human in most ways might still be capable of fooming.
I don't understand this part of your sentence - i.e., I can't guess what it is that you actually meant to say here.
Of course there are limits. That doesn't mean orders of magnitude better than a human isn't doable.
The point is, even if there are hitches and glitches that could stop a foom mid-way, they are the size of golf courses compared to the size of the earth. No matter how many individual golf courses you propose for where a foom might be stopped, two thirds of the planet is still under water.
This is what LW reasoning refers to as "using arguments as soldiers": that is, treating the arguments themselves as the unit of merit, rather than the probability space covered by those arguments. I mean, are you seriously arguing that the only way to kick humankind's collective ass is by breaking the laws of math and physics? A being of modest intelligence could probably convince us all to do ourselves in, with or without tricky mind hacks or hypnosis!
The AI doesn't have to be that strong, because humans are so damn weak.
You would think so, but people apparently still fall for 419 scams. Human-level intelligence is more than sufficient to accomplish social engineering.
Today, presumably not. However, if you actually have a sufficiently powerful AI, then presumably the resources are available.
The thing is, foominess per se isn't even all that important to the overall need for FAI: you don't have to be that much smarter or faster than a human to be able to run rings around humanity. Historically, more than one human being has done a good job at taking over a chunk of the world, beginning with nothing but persuasive speeches!
I like the analogy. It may even fit when considering building a friendly AI: like hitting a golf ball deliberately, and to the best of your ability, from a randomly selected spot on the earth and trying to get a hole in one. Overwhelmingly difficult, perhaps even impossible given human capabilities, but still worth dedicating all your effort to attempting!