wedrifid comments on Welcome to Less Wrong! (2012) - Less Wrong
Imagine that! :-)
That's a different goal, though. As far as I understand, olalonde's master plan looks something like this:
1) Figure out how to build AGI.
2) Build a reasonably smart one as a proof of concept.
3) Figure out where to go from there, and how to make AGI safe.
4) Eventually, build a transhuman AGI once we know it's safe.
Whereas the SIAI master plan looks something like this:
1) Make sure that an un-Friendly AGI does not get built.
2) Figure out how to build a Friendly AGI.
3) Build one.
4) Now that we know it's safe, build a transhuman AGI (or simply wait long enough, since the AGI from step (3) will boost itself to transhuman levels).
One key difference between olalonde's plan and SIAI's is an assumption SIAI makes: that any AGI will inevitably (plus or minus epsilon) self-improve to transhuman levels. Thus, from their perspective, olalonde's step (2) above might as well say, "build a machine that's guaranteed to eat us all", which would clearly be a bad thing.
A good summary. I'd modify it slightly, inasmuch as they would allow the possibility that a really weak AGI may not do much in the way of FOOMing; but they pretty much ignore those ones and expect they would just be a stepping stone for the developers, who would go on to make better ones. (This is just my reasoning, but I assume they would think similarly.)
Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario -- it's just using the developers' brains as its platform, as opposed to digital hardware. I don't know whether the SIAI folks would endorse this view, though.
Can't we limit the meaning of "self-improving" to things the AI itself actually does? We can already say, more precisely, that the AI is being iteratively improved by its creators. We don't have to erase the distinction between what an agent does and what the creator of the agent happens to do to it.
Yeah, I am totally on board with this suggestion.
Great. I hope I wasn't being too pedantic there. I wasn't trying to find technical fault with anything essential to your position.