XiXiDu comments on How to Save the World - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (135)
Good post, though I thought it was a little too focused on money. It could say more explicitly what types of charity and what types of action are best, and cover other ways to help that aren't money.
In my opinion, some of the most efficient ways to achieve a positive difference are, foremost (these are strategic priorities with more positive potential than all the rest): human genetic engineering and intelligence augmentation, artificial intelligence, and reduction of existential risks. Of second-order importance (these are ways to increase utility in the here and now): destroying animals and the environment (which are a cause of huge suffering), producing artificial meat to replace cruel animal farming, and promoting birth control among the poor.
Activities to achieve these goals include:
- Becoming very rich and using the money to achieve them;
- Convincing people with lots of money to donate to these causes, and other people to become aware of them and contribute somehow, by various means such as writing books and articles, making movies, posting on websites, talking to them, and encouraging them to take action;
- Conducting research personally in fields such as genetic engineering, artificial intelligence, artificial meat, and birth control, and convincing more people to do the same;
- Helping or creating charitable organizations directed toward birth control;
- Fighting and discrediting religion, which is a significant hurdle to many of these efforts;
- Convincing people of a general framework of ideas that is compatible with these goals.
In my opinion, most other kinds of efforts to make a positive change, such as feeding the poor, preserving the environment, curing diseases, and educating the poor, are overrated and short-sighted; their long-term effects are relatively small. An increase in intelligence would produce an increase in the ability to do everything else, so it would be much more effective in the long term, and all these measures lose importance if our civilization and technological advancement were lost to some global catastrophe.
Once AI starts working, many of the problems people are working on now will be rapidly solved (except those that require lengthy experiments). Focusing on these problems now may therefore be a waste of time, except for whatever good it does in the meantime, until AI solves them.
Raising money seems like a matter of chance or luck. You'll naturally try, but you can't count on it, so it's not something you can simply decide to do. Raising public awareness and enthusiasm seems to have relatively high potential: you can potentially get many other people to raise money, do scientific research, and raise awareness and enthusiasm in their turn, so this may be the action with the most potential, even though it accomplishes things only indirectly. Doing scientific research personally involves high stakes in your career and life, and depends somewhat on where you live and on what you like to study and work on. This one is a hard decision, because it is something of a gamble with your life.
I should add that a lot of people here agree with your stance, except that they think the risk from AI is bigger than the benefit. That is, we will have to work on AI, but first we should figure out how to make it Friendly. That is what the SIAI is working on.
By the way, welcome to Less Wrong. You know me as Alexander Kruel on Facebook.
There seems to be a significant "risk" of making a much better world with much smarter agents and a lot less insanity and stupidity. A lot of people see that as a bad thing, however.
Looking at history, this sort of thing is fairly common. Most kinds of progress face resistance from various kinds of luddites, who would rather things stayed the way they were.
What? I don't follow. Are you saying it would be a much better world if an unfriendly AI replaced humanity? I don't think it's luddite-ish to say I'd rather not die so something else can take my place.
I'd agree to an "unfriendly" AI (whatever that means; it shouldn't reason emotionally, it should just be sufficiently intelligent) replacing humanity, since we are the problem we're trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, and we aren't very happy or fulfilled. Eventually we'll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and that is our eventual fate. In my view, it is not logical to be against our own demise.