Rain comments on Should I believe what the SIAI claims? - Less Wrong

23 points | Post author: XiXiDu 12 August 2010 02:33PM




Comment author: Rain 13 August 2010 01:20:22PM * 6 points

That there are no others does not mean we shouldn't be keen to create them, to establish competition.

Absolutely agreed. Though I'm barely motivated enough to click on a PayPal link, so there isn't much hope of my contributing to that effort. And I'd hope they'd be created in such a way as to expand total funding, rather than cannibalizing SIAI's efforts.

I'm not sure about this.

Certainly there are other ways to look at value / utility / whatever and how to measure it. That's why I mentioned I had a particular theory I was applying. I wouldn't expect you to come to the same conclusions, since I haven't fully outlined how it works. Sorry.

I feel there are too many assumptions in what you state to come up with estimates like a 1% probability of uFAI turning everything into paperclips.

I'm not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that's the theory. There may be no way to save us.

Yeah, and how is their combined probability less worrying than that of AI?

AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I've read, so does Eliezer, which is why he's working on that problem instead of, say, nanotech.

I'm mainly concerned about my own well-being.

I've mentioned before that I'm somewhat depressed, so I consider my philanthropy to be a good portion 'lack of caring about self' more than 'being concerned about the well-being of all beings'. Again, a subtractive process.

As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.

Thanks! I think that's probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technical minded anti-Summit, without all the useless politics of the IEET and the like.

Comment author: XiXiDu 13 August 2010 02:11:01PM 0 points

I think UFAI is far more likely than FAI...

It's more likely that the Klingon warbird can overpower the USS Enterprise.

I think AI is actually the most dangerous of them...

Why? Because EY told you? I'm not trying to make snide remarks here, but how people arrived at this conclusion is what I have been inquiring about in the first place.

...though I would also appreciate more critical discussion from experts and educated people...

Me too, but I was the only one around willing to start one at this point. That's the sorry state of critical examination.

Comment author: Rain 13 August 2010 02:16:50PM * 4 points

It's more likely that the Klingon warbird can overpower the USS Enterprise.

To pick my own metaphor, it's more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we're going to create wonderful, cunning, incredibly powerful technology, and I think we're going to misuse it to destroy ourselves.

Why [is AI the most dangerous threat]?

Because intelligent beings are the most awesome and scary things I've ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can't see us holding back from trying to tweak intelligence itself. I view it as inevitable.

Me too [I also would appreciate more critical discussion from experts]

I'm hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.

Comment author: XiXiDu 13 August 2010 03:11:37PM 2 points

What I was trying to show you with the Star Trek metaphor is that you are making estimates within a framework of ideas that I'm not convinced is based on firm ground.

Comment author: Rain 13 August 2010 03:19:00PM * 1 point

I'm not a very good convincer. I'd suggest reading the original material.

Comment author: HughRistik 13 August 2010 06:50:25PM 0 points

Can we get some links in here? I'm not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.

Comment author: Rain 14 August 2010 04:14:45PM 0 points

This thread has Eliezer's request for specific links, which appear in replies.