multifoliaterose comments on Should I believe what the SIAI claims? - Less Wrong
(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)
Although this may not answer your questions, here are my reasons for supporting SIAI:
I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.
It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"
No one else cares about the big picture. (Nick Bostrom and the FHI excepted; if they came out against SIAI, I might change my view.) Every other organization seems to focus on the 'generic now', leaving unintended consequences to crush their efforts in the long run, or to avoid the true horrors of the world (pain, age, poverty) without even realizing they're solvable. The ability to predict the future, through knowledge, understanding, and computational power, is the key to making that future a truly good place. The utility calculations are staggeringly in support of the longest view, such as that provided by SIAI.
It's the simplest of the 'good outcome' possibilities. Everything else seems to depend on magical hand-waving, or an overly simplistic view of how the world works or what a single advance would mean, rather than the way it interacts with all the diverse improvements that happen alongside it and how real humans would react to them. Friendly AI provides 'intelligence-waving' that seems far more likely to work in a coherent fashion.
I don't see anything else to give me hope. What else solves all potential problems at the same time, rather than leaving every advancement to be destroyed by that one failure mode you didn't think of? Of course! Something that can think of those failure modes for you, and avoid them before you even knew they existed.
It's cheap and easy to support them on a meaningful scale. It's very easy to make up a large percentage of their budget; I personally provided more than 3 percent of their annual operating costs for this year, and I'm only upper middle class. They also have an extremely low barrier to entry (any amount of US dollars and a stamp, or a credit card, or PayPal).
They're thinking about the same things I am. They're providing a tribe like LessWrong, and they're pushing, trying to expand human knowledge in the ways I think are most important, such as existential risk, humanity's future, rationality, effective and realistic reversal of pain and suffering, etc.
I don't think we have much time. The best predictions aren't very good, but human power has increased to the point where there's a true threat we'll destroy ourselves within the next 100 years, whether through nuclear, biological, or nanotechnological means, unfriendly AI, wireheading, or 'nerfing' the world. Sitting on money and hoping for a better deal, or donating to institutions now that will compound into advancements generations in the future, seems like too little, too late.
I still put more money into savings accounts than I give to SIAI. I'm investing in myself and my own knowledge more than the purported future of humanity as they envision. I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.
Good, informative comment.