Comment author: timtyler 11 December 2010 07:50:33PM -2 points

Alas, I have to reject your summary of my position. The situation as I see it:

  • DOOM-based organisations are likely to form with a frequency that depends on the extent to which the world is perceived to be at risk;

  • They are likely to form from those with the highest estimates of p(DOOM);

  • Once they exist, they are likely to try to grow, much as all organisations tend to do - wanting attention, time, money and other available resources;

  • Since they are funded in proportion to the perceived value of p(DOOM), such organisations will naturally promote the notion that p(DOOM) is a large value.

This is all fine. I accept that DOOM-based organisations will exist, will loudly proclaim the coming apocalypse, and will find supporters to help them propagate their DOOM message. They may be ineffectual, cause despair and depression, or help save the world - depending on their competence, and on the extent to which their paranoia turns out to be justified.

However, such organisations seem likely to be very bad sources of information for anyone interested in the actual value of p(DOOM). They have obvious vested interests.

Comment author: FormallyknownasRoko 12 December 2010 12:16:29AM 4 points

Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.

Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there's simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.

But I think your pooh-poohing of such infantile and amateurish efforts as there are is silly when the reasoning behind it is entirely bogus.

Why don't you refocus your criticism on the more legitimate weakness of existential-risk work: that it is highly likely to be irrelevant (either futile or unnecessary), since by its own predictions the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely to either understand the issues or cooperate on them.

The most likely route to survival would seem to be that the entire model of the future propounded here is wrong. But in that case we move into the domain of irrelevance.

Comment author: wedrifid 11 December 2010 04:53:31PM 3 points

it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

Not the sort of thing that could, you know, give you nightmares?

Comment author: FormallyknownasRoko 11 December 2010 08:55:24PM 4 points

The sort of thing that could give you nightmares is more like the stuff that is banned. This is different from the mere "existential risk" message.

Comment author: FormallyknownasRoko 11 December 2010 04:41:30PM 8 points

To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?

Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:

  • it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.

  • it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

  • it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.

Comment author: FormallyknownasRoko 11 December 2010 04:47:21PM 4 points

A moment's googling finds this:

http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf

"Total Income £546,415"

($863,444)
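(The dollar figure implies an exchange rate of roughly $1.58/£ - plausible for 2010 - since $863,444 / £546,415 ≈ 1.58. The precise rate is an assumption on my part; only the sterling figure appears in the report.)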

I leave it to readers to judge whether Tim is flogging a dead horse here.

Comment author: timtyler 11 December 2010 01:07:58PM 0 points

Church and cute puppies are likely worse causes, yes. I listed animal charities in my "Bad causes" video.

I don't have their budget at my fingertips - but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous - but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. "3-laws-safe" slogans will be printed. I note that Google's recent Chrome ad was full of data-destruction images - and ended with the slogan "be safe".

Some of this is potentially good. However, some of it isn't - and is more reminiscent of the Daisy ad.

Comment author: wedrifid 11 December 2010 02:20:06PM 0 points

I'm going to say 75 years for that. But really, this is becoming very much total guesswork.

It's still interesting to hear your thoughts. My hunch is that the -ve --> +ve step is much harder than the 'singularity' step, so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there and my guesswork is even more guess-like than yours!

I do know that an AGI -ve singularity won't happen in the next 2 decades, and I think one can bet that it won't happen for another few decades after that either.

If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)

Comment author: FormallyknownasRoko 11 December 2010 02:26:52PM 0 points

Many plausible ways to S^+ involve something odd or unexpected happening. WBE might give rise to computational political structures, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate.

Suffice it to say that FAI doesn't have to come via the expected route of someone inventing AGI and then waiting until they invent "friendliness theory" for it.

Comment author: wedrifid 11 December 2010 02:03:53PM 0 points

Negative singularity in my opinion is at least 50 years away.

I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still?

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see).

This got a wry smile out of me. :)

Comment author: FormallyknownasRoko 11 December 2010 02:12:16PM 0 points

(t(positive singularity) | positive singularity)

I'm going to say 75 years for that. But really, this is becoming very much total guesswork.

I do know that an AGI -ve singularity won't happen in the next 2 decades, and I think one can bet that it won't happen for another few decades after that either.

Comment author: timtyler 11 December 2010 12:44:49PM 2 points

It typically has the feature that you, all your relatives, friends and loved ones die - probably enough for most people to seriously want to avoid it. Michael Vassar talks about "eliminating everything that we value in the universe".

Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up whichever apocalypse they think would be the scariest.

Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.

Comment author: FormallyknownasRoko 11 December 2010 12:55:06PM 1 point

I think you're trying to fit the facts to the hypothesis. Negative singularity in my opinion is at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family.

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see).

It is also not well-optimized to be believable.

Comment author: timtyler 11 December 2010 12:18:07PM -2 points

The END OF THE WORLD is probably the most frequently-repeated failed prediction of all time. Humans are doing spectacularly well - and the world is showing many signs of material and moral progress - all of which makes the apocalypse unlikely.

The reason for the interest here seems obvious - the Singularity Institute's funding is derived largely from donors who think it can help to SAVE THE WORLD. The world must first be at risk to enable heroic Messiahs to rescue everyone.

The most frequently-cited projected cause of the apocalypse: an engineering screw-up. Supposedly, future engineers are going to be so incompetent that they accidentally destroy the whole world. The main idea - as far as I can tell - is that a bug is going to destroy civilisation.

Also - as far as I can tell - this isn't the conclusion of analysis performed on previous engineering failures - or on the effects of previous bugs - but rather is wild extrapolation and guesswork.

Of course it is true that there may be a disaster, and the END OF THE WORLD might arrive. However, there is no credible evidence that this is a probable outcome. Instead, what we have appears to be mostly a bunch of fearmongering used for fundraising aimed at fighting the threat. That gets us into the whole area of the use and effects of fearmongering.

Fearmongering is a common means of psychological manipulation, used frequently by advertisers and marketers to produce irrational behaviour in their victims.

It has been particularly widely used in the IT industry - mainly in the form of fear, uncertainty and doubt.

Evidently, prolonged and widespread use is likely to help produce a culture of fear. The long-term effects of that are not terribly clear - but it seems to be dubious territory.

I would counsel those using fearmongering for fundraising purposes to be especially cautious of the harm this might do. It seems like a potentially dangerous form of meme warfare. Fear targets circuits in the human brain that evolved in an earlier, more dangerous era - when death was much more likely - so humans have an evolved vulnerability in this area. The modern super-stimulus of the END OF THE WORLD overloads those vulnerable circuits.

Maybe this is an effective way of extracting money from people - but also, maybe it is an unpleasant and unethical one. So, wannabe heroic Messiahs, please take care: screwing over your friends and associates by messing up their heads with a hostile and virulent meme complex may not be the greatest way to start out.

Comment author: FormallyknownasRoko 11 December 2010 12:37:57PM 5 points

Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: the current best guess is a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't look at all like how I would design a superstimulus for fear.

Comment author: FormallyknownasRoko 10 December 2010 11:55:46PM 3 points

Suppose that Blackmail is

merely an affective category, a class of situations activating a certain psychological adaptation

-- then we should ask what features of the ancestral environment caused us to evolve it. We might understand it better in that case.

I suspect that the ancestral environment came with a very strong notion of a default outcome for a given human, in the absence of there being any particular negotiation, and also came with a clear notion of negative interaction (stabbing, hitting, kicking) versus positive interaction (giving fish, teaching how to hunt better, etc).

Comment author: WrongBot 10 December 2010 08:30:22PM 29 points

The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.

I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.

TL;DR: I want to give you a hug.

Comment author: FormallyknownasRoko 10 December 2010 11:35:25PM -3 points

We're all stuck on the train anyway, so saving it is worth a shot.

I disagree with this argument. Pretty strongly. No selfish incentive to speak of.
