FormallyknownasRoko comments on "Best career models for doing research?" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, this is the "collective-action-problem" - where the end of the world arrives - despite a select band of decidedly amateurish messiahs arriving and failing to accomplish anything significant.
You are looking at those amateurs now.
The END OF THE WORLD is probably the most frequently-repeated failed prediction of all time. Humans are doing spectacularly well - and the world is showing many signs of material and moral progress - all of which makes the apocalypse unlikely.
The reason for the interest here seems obvious - the Singularity Institute's funding is derived largely from donors who think it can help to SAVE THE WORLD. The world must first be at risk to enable heroic Messiahs to rescue everyone.
The most frequently-cited projected cause of the apocalypse: an engineering screw-up. Supposedly, future engineers are going to be so incompetent that they accidentally destroy the whole world. The main idea - as far as I can tell - is that a bug is going to destroy civilisation.
Also - as far as I can tell - this isn't the conclusion of analysis performed on previous engineering failures - or on the effects of previous bugs - but rather is wild extrapolation and guesswork.
Of course it is true that there may be a disaster, and END OF THE WORLD might arrive. However there is no credible evidence that this is likely to be a probable outcome. Instead, what we have appears to be mostly a bunch of fear mongering used for fundraising aimed at fighting the threat. That gets us into the whole area of the use and effects of fear mongering.
Fearmongering is a common means of psychological manipulation, used frequently by advertisers and marketers to produce irrational behaviour in their victims.
It has been particularly widely used in the IT industry - mainly in the form of fear, uncertainty and doubt.
Evidently, prolonged and widespread use is likely to help to produce a culture of fear. The long-term effects of that are not terribly clear - but it seems to be dubious territory.
I would counsel those using fear mongering for fund-raising purposes to be especially cautious of the harm this might do. It seems like a potentially dangerous form of meme warfare. Fear targets circuits in the human brain that evolved in an earlier, more dangerous era - where death was much more likely - so humans have an evolved vulnerability in this area. The modern super-stimulus of the END OF THE WORLD overloads those vulnerable circuits.
Maybe this is an effective way of extracting money from people - but also, maybe it is an unpleasant and unethical one. So, wannabe heroic Messiahs, please: take care. Starting out by screwing over your friends and associates by messing up their heads with a hostile and virulent meme complex may not be the greatest way to start out.
Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?
Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?
Not rhetorical questions, I'd actually like to read your answers.
Tim on global warming: http://timtyler.org/end_the_ice_age/
1-line summary - I am not too worried about that either.
Global warming is far more the subject of irrational fear-mongering than machine intelligence is.
It's hard to judge how at risk the world was from nuclear weapons during the cold war. I don't have privileged information about that. After Japan, we have not had nuclear weapons used in anger or war. That doesn't give much in the way of actual statistics to go on. Whatever estimate is best, confidence intervals would have to be wide. Perhaps ask an expert on the history of that era.
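The point about sparse statistics can be made concrete with a back-of-the-envelope sketch. With zero post-1945 uses of nuclear weapons in war, simple estimators still leave a wide range for the annual probability (a sketch, assuming roughly 65 years of observation at the time of the comment; the numbers are illustrative, not the commenter's):

```python
# Sketch: how little "zero observed events" constrains an annual probability.
years = 65   # assumed post-1945 observation window
events = 0   # nuclear weapons used in war after Japan

# Laplace's rule of succession: (events + 1) / (trials + 2)
laplace = (events + 1) / (years + 2)

# "Rule of three": approximate 95% upper bound on the rate when zero
# events are observed in n trials is about 3 / n.
upper_95 = 3 / years

print(f"Laplace point estimate per year: {laplace:.3f}")
print(f"Approx. 95% upper bound per year: {upper_95:.3f}")
```

The spread between the point estimate and the upper bound illustrates why any confidence interval here has to be wide: the choice of prior dominates the data.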
The END OF THE WORLD is not necessarily an idea that benefits those who embrace it. Consider the stereotypical END OF THE WORLD placard carrier: they are probably not benefiting very much personally. The benefit associated with the behaviour accrues mostly to the END OF THE WORLD meme itself. However, obviously, there are some people who benefit. 2012 - and all that.
The probability of the END OF THE WORLD soon - if it is spelled out exactly what is meant by that - is a real number which could be scientifically investigated. However, whether the usual fundraising and marketing campaigns around the subject illuminate that subject more than they systematically distort it seems debatable.
This is a pretty optimistic way of looking at it, but unfortunately it's quite unfounded. Current scientific consensus is that we've already released more than enough greenhouse gases to avert the next glacial period. Melting the ice sheets and thus ending the ice age entirely is an extremely bad idea if we do it too quickly for global ecosystems to adapt.
We don't even really understand what causes the glacial cycles yet. This is an area where there are multiple competing hypotheses. I list four of these on my site. Since we don't yet understand the mechanics involved with much confidence, we don't know what it would take to prevent them.
Here's what Dyson says on the topic:
I do not believe this is contrary to any "scientific consensus" on the topic. Where is this supposed "scientific consensus" of which you speak?
Melting the ice caps is inevitably an extremely slow process - due to thermal inertia. It is also widely thought to be a runaway positive feedback cycle - and so probably a phenomenon whose rate would be difficult to control.
Melting of the icecaps is now confirmed to be a runaway positive feedback process pretty much beyond a shadow of a doubt. Within the last few years, melting has occurred at a rate that exceeded the upper limits of our projection margins.
Have you performed calculations on what it would take to avert the next glacial period on the basis of any of the competing models, or did you just assume that ice ages are bad, so preventing them is good and we should thus work hard to prevent reglaciation? There's a reason why your site is the first and possibly only result in online searches for support of preventing glaciation, and it's not because you're the only one to have thought of it.
There are others who share my views - e.g.:
Why is glacial melting being difficult to control a point in favor of increasing greenhouse gas emissions?
It's true that climate change models are limited in their ability to project climate change accurately, although they're getting better all the time. Unfortunately, the evidence currently suggests that they're undershooting actual warming rates even at their upper limits.
The pro-warming arguments on your site essentially boil down to "warm earth is better than cold earth, so we should try to warm the earth up." Regardless of the relative merits of a warmer or colder planet though, rapid change of climate is a major burden on ecosystems. Flooding and forest fires are relatively trivial effects, it's mass extinction events that are a real matter of concern.
That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature.
You talk as though I am denying warming is happening. HUH?
Right. So, if you want a stable climate, you need to end the yo-yo glacial cycles - and end the ice age. A stable climate is one of the benefits of doing that.
I have a section entitled "Climate stability" in my essay. To quote from it:
I laughed aloud upon reading this comment; thanks for lifting my mood.
So the real problem here is weakness of arguments, since they lack explanatory power by being able to "explain" too much.
Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: current best-guess would be that it would be a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't at all look like how I would design a superstimulus for fear.
It typically has the feature that you, all your relatives, friends and loved-ones die - probably enough for most people to seriously want to avoid it. Michael Vassar talks about "eliminating everything that we value in the universe".
Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up the apocalypse that they think would be the most scary one.
Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.
Yes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)
I think you're trying to fit the facts to the hypothesis. A negative singularity is, in my opinion, at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family.
And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see).
It is also not well-optimized to be believable.
It doesn't work. Jehovah's Witnesses don't even believe in a hell, and they are gaining a lot of members each year while donations are on the rise. Donations are not even mandatory; you are just asked to donate if possible. The only incentive they use is positive incentive.
People will do anything for their country, even give their lives, if it asks them to. Suicide bombers also do not blow themselves up because of negative incentives but because they are promised help and money for their families. Some also believe that they will enter paradise. Negative incentives make many people reluctant. There is much less crime in the EU than in the U.S., which has the death penalty; here you get out of jail after at most ~20 years, and there's almost no violence in jails either.
I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still?
This got a wry smile out of me. :)
(t(positive singularity) | positive singularity)
I'm going to say 75 years for that. But really, this is becoming very much total guesswork.
I do know that an AGI -ve singularity won't happen in the next two decades, and I think one can bet that it won't happen for another few decades after that either.
It's still interesting to hear your thoughts. My hunch is that the difficulty of the -ve --> +ve step is much harder than the 'singularity' step so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there and my guesswork is even more guess-like than yours!
If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)
Many plausible ways to S^+ involve something odd or unexpected happening. WBE might make computational political structures, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate.
Suffice it to say that FAI doesn't have to come via the expected route of someone inventing AGI and then waiting until they invent "friendliness theory" for it.
Church and cute puppies are likely worse causes, yes. I listed animal charities in my "Bad causes" video.
I don't have their budget at my fingertips - but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous - but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. "3-laws-safe" slogans will be printed. I note that Google's recent Chrome ad was full of data-destruction images - and ended with the slogan "be safe".
Some of this is potentially good. However, some of it isn't - and is more reminiscent of the Daisy ad.
To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?
Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:
it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.
it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.
it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.
A moment's googling finds this:
http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf
($863,444)
I leave it to readers to judge whether Tim is flogging a dead horse here.
Not the sort of thing that could, you know, give you nightmares?
The sort of thing that could give you nightmares is more like the stuff that is banned. This is different than the mere "existential risk" message.
Alas, I have to reject your summary of my position. The situation as I see it:
DOOM-based organisations are likely to form with a frequency which depends on the extent to which the world is perceived to be at risk;
They are likely to form from those with the highest estimates of p(DOOM);
Once they exist, they are likely to try and grow, much like all organisations tend to do - wanting attention, time, money and other available resources;
Since they are funded in proportion to the perceived value of p(DOOM), such organisations will naturally promote the notion that p(DOOM) is a large value.
This is all fine. I accept that DOOM-based organisations will exist, will loudly proclaim the coming apocalypse, and will find supporters to help them propagate their DOOM message. They may be ineffectual, cause despair and depression, or help save the world - depending on their competence, and on the extent to which their paranoia turns out to be justified.
However, such organisations seem likely to be very bad sources of information for anyone interested in the actual value of p(DOOM). They have obvious vested interests.
Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.
Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there's simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.
But I think your pooh-pooh'ing of such infantile and amateurish efforts as these is silly when the reasoning is entirely bogus.
Why don't you refocus your criticism on the more legitimate weakness of existential-risk efforts: that they are highly likely to be irrelevant (either futile or unnecessary), since by their own prediction the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely to either understand the issues or cooperate on them.
The most likely route to survival would seem to be that the entire model of the future propounded here is wrong. But in that case we move into the domain of irrelevance.
I hope I am not "pooh-pooh'ing". There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine - or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view - and so its views on the topic should be taken with multiple pinches of salt.
I am not sure I understand fully - but I think the short answer is because I don't agree with that. What risks there are, we can collectively do things about. I appreciate that it isn't easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.
Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet is just not good enough. If there's much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.
If our idea of an ethical corporation is one whose motto is "don't be evil", then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.
I think your disapproval of animal charities is based on circular logic, or at least an unproven premise.
You seem to be saying that animal causes are unworthy recipients of human effort because animals aren't humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don't think it's proven that people only like animals because the animals are super-stimuli.
I could be mistaken, but I think that a more abstract utilitarian approach grounds out in some sort of increased enjoyment of life, or else it's an effort to assume a universe-eye's view of what's ultimately valuable. I'm inclined to trust the former more.
What's your line of argument for supporting charities that help people?
I usually value humans much more than I value animals. Given a choice between saving a human or N non-human animals, N would normally have to be very large before I would even think twice about it. Similar values are enshrined in law in most countries.
To the extent that the law accurately represents the values of the people it governs, charities are not necessary. Values enshrined in law are by necessity irrelevant.
(Noting by way of pre-emption that I do not require that laws should fully represent the values of the people.)
I do not agree. If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.
You may be interested in Alan Dawrst's essays on animal suffering and animal suffering prevention.
I believe the numbers are actually higher than $200,000. SIAI's 2008 budget was about $500,000. 2006 was about $400,000 and 2007 was about $300,000 (as listed further in the linked thread). I haven't researched to see if gross revenue numbers or revenue from donations are available. Curiously, Guidestar does not seem to have 2009 numbers for SIAI, or at least I couldn't find those numbers; I just e-mailed a couple people at SIAI asking about that.
That being said, even $500,000, while not trivial, seems to me a pretty small budget.
Sorry, yes, my bad. $200,000 is what they spent on their own salaries.