XiXiDu comments on The curse of identity - LessWrong

121 Post author: Kaj_Sotala 17 November 2011 07:28PM




Comment author: XiXiDu 18 November 2011 03:01:43PM 1 point

> If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there's no point in donating.

I agree, a very important point.

> I became more convinced this was important work after talking to Anna Salamon.

I have read very little from her on the issues concerning SI's main objective. Most of her posts seem to be about basic rationality.

She tried to start a webcam conversation with me once, but my spoken English was too poor and slow to sustain a conversation about such topics.

And even if I had talked to her, she could tell me a lot and I would be unable to judge whether what she says is more than internally consistent, whether it has any connection to actual reality. I am simply not an AGI expert, very far from it. The best I can do so far is judge her output relative to what others have to say.

Comment author: juliawise 19 November 2011 01:40:25PM 0 points

I'm also far from an expert in this field - I didn't study anything technical, and didn't have many friends who did, either. At the time I spoke to Anna, I wasn't sure how to judge whether a singularity was even possible. At her suggestion, I asked some non-LW computer scientists (her further suggestion was to walk into office hours of a math or CS department at a university, which I haven't done). They thought a singularity was fairly likely, and clearly hadn't thought about any dangers associated with it. From reading Eliezer's writings, I'm convinced that a carelessly made AI could be disastrous. From those points, I'm willing to believe that most computer scientists, if they succeeded in making an AI, would accidentally make an unfriendly one. Which makes me think SIAI's cause is a good one.

But after reading GiveWell's interview with SIAI, I don't think they're the best choice for my donation, especially since they say they have no immediate plans for additional funding. I'll probably go with GiveWell's top pick once they release their new ratings.