Comment author: Pablo_Stafforini 20 December 2012 04:40:09PM 8 points [-]

Thanks for posting this. In the future, please consider adding a paragraph that provides a summary, or at least a snapshot, of the article's contents. In this case, you could have included this one:

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

Comment author: betterthanwell 21 December 2012 12:35:17PM *  5 points [-]

In the future, please consider adding a paragraph that provides a summary, or at least a snapshot, of the article's contents.

Yes. However, I would suggest not waiting until next time to do it right. Do it right, now.

I will downvote the top post, but I promise to upvote it if and when benthamite's suggestion is followed.

Sorry for the carrot and stick, but doing so shouldn't take more than a minute.
(Which would be less than was spent on writing this.)

In response to Notes on Psychopathy
Comment author: buybuydandavis 20 December 2012 02:00:56AM 7 points [-]

Deviant but not necessarily diseased or dysfunctional minds can demonstrate resistance to all treatment and attempts to change their mind

Maybe they need better treatments. Has anyone asked psychopaths - "How would you convince a psychopath like you to stop doing X?" Has anyone let psychopaths try? Aren't they the master manipulators? They even have a readily available model of a psychopath to test out the theory on. How convenient! Sufficiently motivating a psychopath with rewards for changing the mind of another psychopath might be an effective treatment for the first psychopath. Did they try that treatment?

I don't mean to be pissy, but "resistance to all treatments and attempts to change their mind" strikes me as a huge claim. Ruling out the "it's something I didn't think of" theory is one of the worst cognitive biases.

Comment author: betterthanwell 20 December 2012 02:16:29PM *  9 points [-]

Maybe they need better treatments. Has anyone asked psychopaths - "How would you convince a psychopath like you to stop doing X?" Has anyone let psychopaths try? Aren't they the master manipulators? They even have a readily available model of a psychopath to test out the theory on. How convenient! Sufficiently motivating a psychopath with rewards for changing the mind of another psychopath might be an effective treatment for the first psychopath. Did they try that treatment?

Something like it was tried in Canada in the late sixties, with LSD: a four-year experiment in which a group of 30 psychopaths was, at least apparently, temporarily reformed through unconventional means.

This strange and unique program was obliquely referenced in the top post:

...operated for over a decade in a maximum security psychiatric hospital and drew worldwide attention for its novelty. The program was described at length by Barker and colleagues…The results of a follow-up conducted an average of 10.5 years after completion of treatment showed that, compared to no program (in most cases, untreated offenders went to prison), treatment was associated with lower violent recidivism for non-psychopaths but higher violent recidivism for psychopaths.

The Insane Criminal as Therapist
E.T. Barker, M. H. Mason, The Canadian Journal of Corrections, Oct. 1968.

Here's an account from a recent pop-psychology book, The Psychopath Test:

In the late 1960s, a young Canadian psychiatrist believed he had the answer. His name was Elliott Barker and he had visited radical therapeutic communities around the world, including nude psychotherapy sessions occurring under the tutelage of an American psychotherapist named Paul Bindrim. Clients, mostly California free-thinkers and movie stars, would sit naked in a circle and dive headlong into a 24-hour emotional and mystical rollercoaster during which participants would scream and yell and sob and confess their innermost fears. Barker worked at a unit for psychopaths inside the Oak Ridge hospital for the criminally insane in Ontario. Although the inmates were undoubtedly insane, they seemed perfectly ordinary. This, Barker deduced, was because they were burying their insanity deep beneath a facade of normality. If the madness could only, somehow, be brought to the surface, maybe it would work itself through and they could be reborn as empathetic human beings.

And so he successfully sought permission from the Canadian government to obtain a large batch of LSD, hand-picked a group of psychopaths, led them into what he named the "total encounter capsule", a small room painted bright green, and asked them to remove their clothes. This was truly to be a radical milestone: the world's first ever marathon nude LSD-fuelled psychotherapy session for criminal psychopaths.

Barker's sessions lasted for epic 11-day stretches. There were no distractions – no television, no clothes, no clocks, no calendars, only a perpetual discussion (at least 100 hours every week) of their feelings. Much like Bindrim's sessions, the psychopaths were encouraged to go to their rawest emotional places by screaming and clawing at the walls and confessing fantasies of forbidden sexual longing for each other, even if they were, in the words of an internal Oak Ridge report of the time, "in a state of arousal while doing so".

...

Barker watched it all from behind a one-way mirror and his early reports were gloomy. The atmosphere inside the capsule was tense. Psychopaths would stare angrily at each other. Days would go by when nobody would exchange a word. But then, as the weeks turned into months, something unexpected began to happen.

The transformation was captured in an incredibly moving film. These tough young prisoners are, before our eyes, changing. They are learning to care for one another inside the capsule.

We see Barker in his office, and the look of delight on his face is quite heartbreaking. His psychopaths have become gentle. Some are even telling their parole boards not to consider them for release until after they've completed their therapy. The authorities are astonished.

Several of the 30 participants of the experiment went on to commit violent homicides some years after release.

An internal memo from the experiment: "LSD in a Coercive Milieu Therapy Program" (E. T. Barker)

Intriguing.

Comment author: ciphergoth 01 December 2012 12:29:58PM 1 point [-]

Runaway ("Venusian") climate change is an existential risk, though most consider that a pretty unlikely scenario.

Comment author: betterthanwell 01 December 2012 01:31:25PM 1 point [-]

On Earth, the IPCC states that "a 'runaway greenhouse effect'—analogous to Venus—appears to have virtually no chance of being induced by anthropogenic activities."
http://www.ipcc.ch/meetings/session31/inf3.pdf

Hmm, the IPCC asserts this statement without providing any argument to support it.

Some quick thoughts: In the beginning, there were no oceans. The Earth was molten and without form. Now, assume a Venus-style runaway is a possibility for this planet's climate. Why has it not already occurred, much, much earlier in the planet's history?

The planet was very much hotter and more humid in the very distant past. The CO2 in the oceans and the methane in the permafrost were captured from the atmosphere. The O2 in the atmosphere is a biogenic waste product of photosynthesis.

I do think the oceans will boil eventually, not because of global warming, but because of solar warming, after the Sun has depleted its hydrogen.

Comment author: Sean_o_h 27 November 2012 09:11:30PM *  22 points [-]

Hi,

Let me introduce myself: I'm Sean and I work as project manager at FHI (finally got around to registering!). In posts here I won't be speaking on behalf of FHI unless I explicitly state so (although, like Stuart, I imagine I often will be). I'm not involved officially with CSER, but I'm in communication with them and hope to be keeping up to date with them over the coming months.

A few comments on your observations:

2) CSER have done a deliberate and well-orchestrated "media splash" campaign over the last week, but I believe they're finished with this now. They've got some big names involved and a good support structure in place in Cambridge, which helps.

3) My understanding is that CSER hasn't published anything yet because they don't exist yet in a practical sense - they've been founded but nobody's employed, and they're still gathering seed funding.

4) The Sunday Times article is a bit unfortunate, and the general feeling at FHI is that we're not too impressed by the journalist's work. But please note that the more "controversial" statements are the journalist's own thoughts (this isn't clear everywhere if you skim the article, as I did at first). CSER has some good people behind it, and at the time of writing the FHI plans to support it and collaborate with it where possible - we think it's a very positive development in the field of Xrisk. Even the term getting out there is a positive!

Comment author: betterthanwell 27 November 2012 10:46:32PM *  6 points [-]

Welcome, and thanks for the comments.

Even the term getting out there is a positive!

Agreed.

If journalism demands that you stick to Hollywood references when communicating a concept, it wouldn't be so bad if journalists managed to understand and convey the distinction between:

  • The wholly implausible, worse-than-useless Terminator humanoid hunter-killer robot scenario.
  • The not completely far-fetched "Skynet launches every nuke, humanity dies" scenario.

Comment author: lukeprog 27 November 2012 05:01:38AM 7 points [-]

There were some serious errors in the coverage of this story in The Sunday Times (UK).

Comment author: betterthanwell 27 November 2012 11:30:42AM *  8 points [-]

Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.

Wow. This particular mistake seems an unlikely, even difficult, mistake to make in good faith, as opposed to, for example, through outright dishonesty.

Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.

Never mind, it seems they don't even try to be honest.

Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines.

19 betterthanwell 26 November 2012 08:56PM

As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.

Luke announced it to Less Wrong, as The University of Cambridge announced it to the world, back in April:

CSER at Cambridge University joins the others.

Good people involved so far, but the expected output depends hugely on who they pick to run the thing.

CSER is scheduled to launch next year.

Here is a small selection of CSER press coverage from the last two days:

http://www.bbc.co.uk/news/technology-20501091

http://www.guardian.co.uk/education/shortcuts/2012/nov/26/cambridge-university-terminator-studies

http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html

http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/

http://www.slashgear.com/new-ai-think-tank-hopes-to-get-real-on-existential-risk-26258246/

http://www.techradar.com/news/world-of-tech/super-brains-to-guard-against-robot-apocalypse-1115293

http://www.hindustantimes.com/world-news/Europe/Cambridge-to-study-risks-from-robots-at-Terminator-Centre/Article1-964746.aspx

http://economictimes.indiatimes.com/news/news-by-industry/et-cetera/cambridge-to-study-risks-from-robots-at-terminator-centre/articleshow/17372042.cms

http://www.extremetech.com/extreme/141372-judgment-day-update-disneys-grenade-catching-robot-and-the-burger-flipping-robot-that-could-replace-2-million-us-workers

http://slashdot.org/topic/bi/cambridge-university-vs-skynet/

http://www.businessinsider.com/researchers-robots-risk-human-civilization-2012-11

http://www.newscientist.com/article/dn22534-megarisks-that-could-drive-us-to-extinction.html

http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/

http://www.globalpost.com/dispatches/globalpost-blogs/weird-wide-web/cambridge-university-opens-so-called-termintor-centre-stu

http://www.washingtonpost.com/world/europe/cambridge-university-to-open-center-studying-the-risks-of-technology-to-humans/2012/11/25/e551f4d0-3733-11e2-9258-ac7c78d5c680_story.html

http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/

Google News: All 119 news sources...

Here's an excerpt from one quite typical story appearing in tech-tabloid theregister.co.uk today:

Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.

A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.

Apart from the frequent portrayal of evil - or just misguidedly deadly - AI in science fiction, actual real scientists have also theorised that super-intelligent machines could be a danger to the human race.

Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).

Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.
[...]

The source for these stories appears to be a press release from the University of Cambridge:

Humanity’s last invention and our uncertain future

In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built. [...]

Four quick observations:

1: That's a lot of Terminator II photos.
2: FHI at Oxford and the Singularity Institute do not often get this kind of attention.
3: CSER doesn't appear to have published anything yet.
4: The number of people who have heard the term "existential risk" must have doubled a few times today.

Comment author: Kyre 25 November 2012 02:29:08AM 14 points [-]

Which catastrophic risks does a Mars colony mitigate? Using a list from a recent post by Stuart Armstrong (table by Anders Sandberg)...

Earth impactors : yes

War : probably. It is unlikely that Mars colonies would be valuable or strategically important enough to extend a war to Mars, but possible that the same conditions that led to war on Earth could lead to local war on Mars, or that war on Earth could be exploited by factions on Mars.

Famine : yes. Keeping a Mars colony fed might be a major challenge, especially at first, but it is independent of the same challenge on Earth. If famine on Earth is caused by a plant pathogen, it could spread to Mars, but there is the nice long quarantine.

Pandemics : probably. Until there is much more advanced propulsion technology that cuts trip time to days, the trip serves as a natural quarantine period. Also, really nasty features like the ability to persist in the environment, or replicate in non-human hosts, or spread via aerosols, don't present any additional threat on Mars.

Super volcanoes : yes. Also, unlike e.g. impactors, there probably isn't an equivalent Martian hazard.

Supernova, GRB : probably ? Unlike impactors, a supernova or GRB would affect both Earth and Mars. However, if the major impact on Earth is deaths by radiation of exposed people and destruction of agriculture by destruction of the ozone layer, then Mars should be much more resilient, since settlements have to be more radiation hardened anyway, and the agriculture would be under glass or under ground.

Climate change : yes

Global computer failure : probably not ? If Mars colony infrastructure is very robustly designed it might survive without computers. I expect that it would not be possible to software quarantine Mars.

Bioweapons : probably. For Mars to be included in a deliberate pandemic attack, you would need to get the agent into each separate Martian settlement, probably simultaneously. Unlike Earth, separate Martian cities could probably enforce effective travel restrictions and quarantines.

Nano weapons : no. Unlike bio weapons, presumably all you would have to do is get some spores to somewhere on Mars.

Physics threats : depends on scale of disaster. Merely planetary scale ones (e.g. black hole eats Earth quietly), yes. Larger scale, no.

Super intelligence : no

A few points that occurred to me while going through the list.

The above is assuming that the Mars colonies are self-sufficient, otherwise a catastrophe on Earth is a catastrophe for Mars.

Existential risks are described as causing actual human extinction, or massive mortality and long-term curtailment of human progress (e.g. putting human population and society back to the Stone Age). Mars colonies mitigate the first, and could mitigate the second if Mars is developed to the point where it is wealthy and has an independent space program - to the point where Mars could offer meaningful aid to Earth.

In response to comment by Kyre on Musk, Mars and x-risk
Comment author: betterthanwell 25 November 2012 01:38:59PM *  8 points [-]

Which catastrophic risks does a Mars colony mitigate? ... Climate change : yes

If a Mars colony mitigates catastrophic risk (extinction risk?) from climate change,
then climate change is not an existential risk to human civilization on Earth.

If humans can thrive on Mars, Earth-based humanity will be able to cope with any climate change less drastic than transforming the climate of Earth into something as hostile as the current climate of Mars.

In response to Causal Reference
Comment author: betterthanwell 21 October 2012 03:21:36AM 5 points [-]

The "Mainstream status" link points to /Eliezer_Yudkowsky-drafts/ (Forbidden: You aren't allowed to do that.)

Comment author: betterthanwell 17 October 2012 08:52:57PM 7 points [-]

It is really quite frustrating to discuss the intersection of physics and free will with a man who is capable of posting this (...)

So... Don't?

Comment author: betterthanwell 14 October 2012 03:12:12PM *  1 point [-]

What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.

I'm not sure if this is an objection many people are likely to raise, or a good one, but in any case, here are my initial thoughts:

Transhumanism is just a set of values, exactly as humanism is a set of values. The feasibility of transhumanism can be shown by compiling a list of those values that are said to qualify someone as a transhumanist, and observing the existence of people with such values, whom we then slap a label on and say: Here is a transhumanist!

Half an hour on Google should probably suffice to persuade the sceptic that transhumanists do in fact exist, and therefore that transhumanism is feasible. And so we're done.


I realize that this is not what you mean when you refer to the feasibility of transhumanism. You want to make an argument for the possibility of "actual transhumans". Something along the lines of: "It is feasible that humans with quantitatively or qualitatively superior abilities, in some domain, relative to some baseline (such as the best, or the average performance of some collection of humans, perhaps all humans) can exist." Which seems trivially true, for the reasons you mention.

Where are the boundaries of human design space? Who do we decide to put in the plain old human category? Who do we put in the transhuman category — and who is just another human with some novel bonus attribute?

If one goes for such a definition of a transhuman as the one I propose above, are world record holding athletes then weakly transhuman, since they go beyond the previously recorded bounds of human capability in strength, or speed, or endurance?

I'd say yes, but justifying that would require a longer reply. One question one would have to answer is: Who is a human? (The answers one would get to this question have likely changed quite a bit since the label "human" was first invented.)


Suppose one allows the category of things that receive a "yes" in reply to the question "is this one a human?" to change at all - to expand, or indeed to grow over time, perhaps by an arbitrary amount. (Which is exactly what seems, to me at least, to have happened, and to continue to be the case.) Then, perhaps, there will never be a transhuman - only a growing category of things which one considers to be "human", including some humans that are happier, better, stronger and faster than any current or previously recorded human.

In order to say "this one is a transhuman", one needs first to decide upon some limits to what one will call "human", and then decide, arbitrarily, that whoever goes beyond these limits will be put into this new category, instead of continuing to relax the boundaries of humanity so as to include the new cases, as is usual.
