
DanielLC comments on A few thoughts on a Friendly AGI (safe vs friendly, other minds problem, ETs and more) - Less Wrong Discussion

3 Post author: the-citizen 19 October 2014 07:59AM




Comment author: DanielLC 20 October 2014 05:32:38AM 0 points [-]

It also depends on how you define "human". I'd hope the FAI is willing to upload us instead of wasting vast amounts of resources just so we're instantiated in a physical universe instead of a virtual one.

It's worth noting that the AI only has to be advanced enough to store the information. Once it's beaten the UFAI, it has plenty of time to build up the resources and intelligence necessary to rebuild humanity.

Comment author: the-citizen 20 October 2014 10:52:51AM *  0 points [-]

I personally imagine that AGI will arrive well before it's possible to store a full down-to-the-subatomic-level map of a person in a space smaller than the person. "Just store the humans and bring them back" implies such a massive storage requirement that it's basically not much different from making a full copy of them anyway, so I wonder whether such a massive storage device wouldn't be equally vulnerable to attack.
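As a rough illustration of why the storage requirement is so massive, here is a back-of-envelope Fermi estimate. All figures are assumed orders of magnitude for the sake of the sketch (the atom count is a commonly cited rough estimate; the bits-per-atom figure is purely an assumption), not anything from the discussion itself:

```python
# Hypothetical Fermi estimate: bytes needed for a subatomic-resolution
# snapshot of one human. All constants are rough order-of-magnitude
# assumptions, not measured values.
ATOMS_PER_HUMAN = 7e27   # commonly cited rough estimate for an adult human
BITS_PER_ATOM = 100      # assumed: element, position, and bonding state

total_bits = ATOMS_PER_HUMAN * BITS_PER_ATOM
total_bytes = total_bits / 8  # on the order of 10**29 bytes

print(f"~{total_bytes:.0e} bytes per person")
```

Even under these generous assumptions, a single snapshot lands around 10^29 bytes, many orders of magnitude beyond present-day storage, which is the intuition behind "not much different from making a full copy of them anyway."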

I'm also keen to see us continue as a biological species, even if we also run simulated brains or people in parallel; ideally we'd do both, if we can establish a FAI. The best bet I can see so far is to make sure a FAI arrives first :-)

Comment author: DanielLC 20 October 2014 07:44:48PM 0 points [-]

You don't need accurate down to the subatomic level. You just need a human. The same human would be nice, since it means the FAI managed to keep all of those people from dying, but unless it's programmed to only value currently alive people, that's not a big deal.

Also, you make it sound as though we won't develop that storage capability until well after we develop the AGI. It's the AGI that will be developing technology. What we can do just before we make it is not a good indicator of what it can do.

Comment author: the-citizen 21 October 2014 07:10:12AM 0 points [-]

Because neural pathways and other brain structures are so small, I think you'd need extremely high resolution. However, I take your point that a breeding population would be enough to at least keep the species going. Still, I'm hoping we can make something that does more than that.

Your second point depends on how small the AGI can make reliable storage tech I guess.

In the end, perhaps this whole point is moot, because it's unlikely an intelligence explosion will take long enough for other researchers to have time to construct an alternative AGI.

Comment author: DanielLC 21 October 2014 05:18:11PM 0 points [-]

Still, I'm hoping we can make something that does more than that.

Their children will be fine. You don't even need a breeding population. You just need to know how to make an egg, a sperm, and an artificial uterus.

In the end, perhaps this whole point is moot, because it's unlikely an intelligence explosion will take long enough for other researchers to have time to construct an alternative AGI.

It might encounter another AGI as it spreads, although I don't think this point will matter much in the ensuing war (or treaty, if they decide on that).