whpearson comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM




Comment author: rhollerith 13 June 2009 02:30:31PM 2 points

rwallace has been arguing the position that AI researchers are too concerned (or will become too concerned) about the existential risk from UFAI. He writes that

we need software tools smart enough to help us deal with complexity.

rwallace: can we deal with complexity sufficiently well without new software that engages in strongly-recursive self-improvement?

Without new AGI software?

One part of the risk that rwallace says outweighs the risk of UFAI is that

we remain confined to one little planet . . . with everyone in weapon range of everyone else

The only response rwallace suggests to that risk is

we need more advanced technology, for which we need software tools smart enough to help us deal with complexity

rwallace: please give your reasoning for how more advanced technology decreases the existential risk posed by weapons more than it increases it.

Another part of the risk that rwallace says outweighs the risk of UFAI is that

we remain confined to one little planet running off a dwindling resource base

Please explain how dwindling resources present a significant existential risk. I can come up with several arguments myself, but I'd like to see the one or two you consider strongest.

Comment author: whpearson 13 June 2009 02:48:54PM 1 point

If we have uploads, we can get off the planet and stay in space for a fraction of the resources manned space flight currently costs. We can spread ourselves between the stars.

But an upload might go foom, so we should stop all upload research.

It is this kind of conundrum I see humanity facing at the moment.

Comment author: rwallace 13 June 2009 03:10:33PM 1 point

I agree, and will add:

First, an upload isn't going to "go foom": a digital substrate doesn't magically confer superpowers, and early uploads will likely be less powerful than their biological counterparts in several ways.

Second, stopping upload research is not the path of safety, because ultimately we must advance or die.

Comment deleted 14 June 2009 06:34:04AM
Comment author: rwallace 14 June 2009 07:09:29AM 0 points

You can't copy-paste hardware; and no, an upload won't be able to run on a botnet.

Not to mention the bizarre assumption that an uploaded patient will turn into a comic book villain whose sole purpose is to conquer the world.

Comment author: MugaSofer 25 April 2013 01:35:24PM -2 points

an upload won't be able to run on a botnet.

Source?

Not to mention the bizarre assumption that an uploading patient will turn into a comic book villain whose sole purpose is to conquer the world.

Upvoted for this.

Comment author: MugaSofer 25 April 2013 01:08:55PM -2 points

You know, I don't think I've ever seen someone argue that. Does anyone have any links?