JoshuaZ comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

27 Post author: Kaj_Sotala 26 December 2010 11:21AM

Comment author: JoshuaZ 29 December 2010 06:31:59PM 1 point [-]

I'm not convinced of this. As time goes on, more and more vulnerable systems are on the internet, many of which shouldn't be. That includes nuclear power plants, particle accelerators, conventional power plants and others. Other systems, such as communication satellites, likely have some avenue of access as well. Soon this will also include almost completely automated manufacturing plants. An AI that quickly grows to control much of the internet would have direct access to nasty systems and simply have a lot more processing power. The extra processing power means the AI could potentially crack cryptosystems used by secure parts of the internet, or by non-internet systems that communicate over radio.

That said, I agree that without strong nanotech this seems like an unlikely scenario.

Comment author: XiXiDu 29 December 2010 06:53:03PM 1 point [-]

An AI that quickly grows to control much of the internet would have direct access to nasty systems and simply have a lot more processing power.

Yes, but then how does this risk differ from asteroid impacts, solar flares, bioweapons or nanotechnology? The point is that the only reason for a donation to the SIAI to have a higher expected payoff is the premise that AI can FOOM, kill all humans and take over the universe. In all other cases, dumb risks are as likely or more likely and are just as capable of wiping us out. So why the SIAI? I'm trying to get a more definite answer to that question. I at least have to consider all the possible arguments I can come up with in the time it takes to write a few comments and see what feedback I get. That way I can update my estimates and refine my thinking.

Comment author: Kaj_Sotala 29 December 2010 08:25:48PM 2 points [-]

Yes, but then how does this risk differ from asteroid impacts, solar flares

Asteroid impacts and solar flares are relatively 'dumb' risks, in that they can be defended against once you know how. They don't constantly try to outsmart you.

bioweapons or nanotechnology?

This question is a bit like asking "yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered E. coli differ from the risk of bioweapons?".

Bioweapons and nanotechnology are particular cases of "dangerous technologies that humans might come up with". An AGI could potentially employ all of the dangerous technologies humans - or AGIs - might come up with.

Comment author: XiXiDu 29 December 2010 08:37:41PM -1 points [-]

Your comment assumes that I agree with some premises that I actually dispute. It doesn't follow that an AGI which would employ all the other existential risks is therefore the most dangerous of them: if such an AGI is only as likely as the other risks, then it doesn't matter whether we are wiped out by one of those risks directly or by an AGI making use of one of them.

Comment author: JoshuaZ 29 December 2010 06:57:30PM 0 points [-]

Yes, but then how does this risk differ from asteroid impacts, solar flares, bioweapons or nanotechnology?

Well, one doesn't need to think that it is intrinsically different. One only needs to think that the marginal return here is high because we aren't putting many resources into looking at the problem now. Someone could potentially make that sort of argument for any existential risk.

Comment author: XiXiDu 29 December 2010 07:47:32PM 2 points [-]

Yes. I am getting much better responses from you than from some of the donors who replied, or from the SIAI itself. Which isn't very reassuring. Anyway, you are of course right there. The SIAI is currently looking into the one existential risk that is most underfunded. As I said before, I believe that the SIAI should exist and therefore should be supported. Yet I still can't follow some of the more frenetic supporters. That is, I don't see the case as being as strong as some portray it. And there is not enough skepticism here, although people constantly reassure me that they were skeptical but were eventually convinced. They just don't seem very convincing to me.

Comment author: Rain 29 December 2010 07:49:52PM 2 points [-]

I guess I should stop trying then? Have I not provided anything useful? And do I come across as "frenetic"? That's certainly not how I feel. And I figured a 90 percent chance that we all die to be pretty skeptical. Maybe you weren't referring to me...

Comment author: XiXiDu 29 December 2010 07:58:16PM *  1 point [-]

I'm sorry, I shouldn't have phrased my comment like that. No, I was referring to this and this comment that I just got. I feel too tired to reply to those right now, because I feel they don't answer anything and I have already tackled their content in previous comments. I sometimes get a bit weary when the amount of useless information gets too high. They probably feel the same about me, and I should be thankful that they take the time at all. I can assure you that my intention is not to attack anyone, or the SIAI, personally in order to discredit them. I'm honestly interested, simply curious.

Comment author: Rain 29 December 2010 08:03:11PM 1 point [-]

OK, cool. Yeah, this whole thing does seem to go in circles at times... it's the sort of topic where I wish I could just meet face to face and hash it out over an hour or so.