XiXiDu comments on The curse of identity - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (296)
What have you done? I would really like to hear how various SI members came to believe what they believe now.
How did you learn about risks from AI? Were you evaluating charities when you learned about existential risks? What did you do next, read all available material on AGI research?
I can't imagine how someone could possibly be as convinced as the average SI member without first becoming an expert in AI and complexity theory.
I ran across the Wikipedia article about the technological singularity when I was still in high school, maybe around 2004. From there, I found Staring into the Singularity, SL4, the Finnish Transhumanist Association, others.
My opinions about the Singularity have been drifting back and forth, with the initial enthusiasm and optimism being replaced by pessimism and a feeling of impending doom. I've been reading various things on and off, as well as participating in a number of online discussions. Mostly my relatively high certainty is based on the simple fact that people have been unable to refute the core claims, and neither have I. I don't think it's just confirmation bias either, because plenty of my other opinions have changed over the same time period, and because I've on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.
Which paper or post outlines those core claims? I am not sure what they are.
I find it very hard to pinpoint when and how I changed my mind about what. I'd be interested to hear some examples so I can compare my own opinion on those issues, thanks.
What do you mean by that? What does it mean for you to believe in the issue? What facts? Personally, I don't see how anyone could possibly justify not believing that risks from AI are a possibility. At the same time, I think that some people are much more confident than the evidence allows them to be. Or I am missing something.
The SI is an important institution doing very important work that deserves much more monetary support and attention than it currently gets. The same is true for the FHI and existential risks research. But that's all there is to it. The fanaticism and portrayal as world saviours, e.g. "I feel like humanity's future is in good hands", really makes me sick.
Mostly just:
Off the top of my head:
Things like:
I became more convinced this was important work after talking to Anna Salamon. After talking to her and other computer scientists, I came to believe that a singularity is somewhat likely, and that it would be easy to screw it up with disastrous consequences.
But evaluating a charity doesn't just mean deciding whether they're working on an important problem. It also means evaluating their chance of success. If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there's no point in donating. I have no idea how likely it is that they'll succeed, and don't know how to get such information. Holden Karnofsky's writing on estimate error is relevant here.
I agree, a very important point.
I have read very little from her on issues concerning SI's main objective. Most of her posts seem to be about basic rationality.
She tried to start a webcam conversation with me once but my spoken English was just too bad and slow to have a conversation about such topics.
And even if I talked to her, she could tell me a lot and I would be unable to judge whether what she says is more than internally consistent, whether there is any connection to actual reality. I am simply not an AGI expert, very far from it. The best I can do so far is judge her output relative to what others have to say.
I'm also far from an expert in this field - I didn't study anything technical, and didn't have many friends who did, either. At the time I spoke to Anna, I wasn't sure how to judge whether a singularity was even possible. At her suggestion, I asked some non-LW computer scientists (her further suggestion was to walk into office hours of a math or CS department at a university, which I haven't done). They thought a singularity was fairly likely, and obviously hadn't thought about any dangers associated with it. From reading Eliezer's writings I'm convinced that a carelessly made AI could be disastrous. So from those points, I'm willing to believe that most computer scientists, if they succeeded in making an AI, would accidentally make an unfriendly one. Which makes me think SIAI's cause is a good one.
But after reading GiveWell's interview with SIAI, I don't think they're the best choice for my donation, especially since they say they don't have immediate plans for more funding at this time. I'll probably go with GiveWell's top pick once they come out with their new ratings.