
Comment author: Sean_o_h 30 August 2014 03:56:29PM 1 point [-]

Thanks, that's reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context it's been put in, and b) the tone, a bit - as one commenter on the article put it:

"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."

Comment author: Sarokrae 30 August 2014 04:05:48PM 1 point [-]

I wouldn't worry too much about the comments. Even Guardian readers don't hold the online commentariat of the Guardian in very high esteem, and it's reader opinion, not commenter opinion, that matters the most.

It seems like the most highly upvoted comments are pretty sane anyway!

Comment author: Sean_o_h 30 August 2014 03:16:57PM *  15 points [-]

Hi,

I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist that it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.

In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the 'catastrophic scenarios' which I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational, and contained a lot more qualifiers. E.g. for geoengineering I had: "Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt once the impacts of climate change become more severe." and my pathogen modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, they can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included "I hope you can make it clear these are "worst case possibilities that currently appear worthy of study" rather than "high-likelihood events". Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn't accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%"; this was sort-of addressed, but not really.

That said, the basic premises - that a virus could be modified for greater infectivity and released by a malicious actor, 'termination risk' for atmospheric aerosol geoengineering, future capabilities of additive manufacturing for more dangerous weapons - are intact.

Re: 'paperclip maximiser'. I mentioned this briefly in conversation, after we'd struggled for a while with inferential gaps on AI (and why we couldn't just outsmart something smarter than us, etc.), presenting it as a 'toy example' used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly thought through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn't expect it to be taken as a literal doomsday concern - it wasn't in the text I sent - and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan's explanations were very good, and quite well presented.

We've been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or have the opportunity to view and edit the final material before publishing. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI's still doing a lot, some turns out very good, some not so good).

Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often just see the qualifiers as unnecessary verbosity (or want the piece to have stronger, more sensational claims).

Right now, I'm leaning fairly strongly towards 'ignore and let quietly slip away' (the Guardian has a small UK readership, so how much we 'push' this will probably make a difference), but I'd be interested in whether LW sees this as net positive or net negative on balance for the public perception of existential risk. However, I'm open to updating. I asked a couple of friends unfamiliar with the area what their takeaway impression was, and it was more positive than I'd anticipated.

Comment author: Sarokrae 30 August 2014 03:51:34PM 7 points [-]

I've read a fair number of x-risk related news pieces, and this was by far the most positive and non-sensationalist coverage that I've seen by someone who was neither a scientist nor involved with x-risk organisations.

The previous two articles I'd seen on the topic were about 30% Terminator references. This article, while not necessarily a 100% accurate account, at least takes the topic seriously.

[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

19 Sarokrae 30 August 2014 02:04PM

http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).

I find it interesting that the inferential distance for the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America, rather than the entire universe, though the author admits still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen with x-risk coverage. There is currently the usual degree of incredulity in the comments section though.

For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues. 

In response to comment by Alicorn on White Lies
Comment author: drethelin 10 February 2014 07:19:38AM 16 points [-]

An emotional response to your statement is not indiscriminate braindumping. I'm not talking about always saying whatever happens to be in my mind at any time. Since I've probably already compromised any chance of going to a rationalist dinner party by being in favor of polite lies, I might as well elaborate: I think your policy is insanely idealistic. I think less of you for having it. But I don't think enough less of you not to want to be around you and I think it's very likely plenty of people you hang out with lie all the time in the style of the top level post and just don't talk to you about it. We know that humans are moist robots and react to stimuli. We know the placebo effect exists. We know people can fake confidence and smiles and turn them real. But consequentialist arguments in favor of untruths don't work on a deontologist. I guess mostly I'm irate at the idea that social circles I want to move in can or should be policed by your absurdity.

I don't think the above constitutes an indiscriminate braindump but I don't think it would be good to say to anyone face to face and I don't actually feel confident it's good to say online.

In response to comment by drethelin on White Lies
Comment author: Sarokrae 11 February 2014 12:31:57AM 7 points [-]

This is a summary reasonably close to my opinion.

In particular, outright denouncement of ordinary social norms of the sort used by (and wired into) most flesh people, and endorsement of an alternative system involving much more mental exhaustion for people like me, feels so much like defecting that I would avoid interacting with any person signalling such opinions.

Comment author: Sarokrae 17 January 2014 12:41:26PM *  4 points [-]

Actually I don't think you're right. I don't think there's much consensus on the issue within the community, so there's not much of a conclusion to draw:

Last year's survey answer to "which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" was as follows:

  • Pandemic (bioengineered): 272, 23%

  • Environmental collapse: 171, 14.5%

  • Unfriendly AI: 160, 13.5%

  • Nuclear war: 155, 13.1%

  • Economic/Political collapse: 137, 11.6%

  • Pandemic (natural): 99, 8.4%

  • Nanotech: 49, 4.1%

  • Asteroid: 43, 3.6%

Comment author: shminux 20 September 2013 10:08:00PM 19 points [-]

That's quite amazing, really. If, by rephrasing a question, one can remove a bias, then it makes sense to learn to detect poorly phrased questions and to ask better questions of oneself and others. This seems like a cheaper alternative to fighting one's nature with Bayesian debiasing. Or maybe the first step toward it.

Comment author: Sarokrae 21 September 2013 10:29:15AM 2 points [-]

I'm pretty sure this is one of the main areas Prof David Spiegelhalter is trying to cover with experiments like this one. He advises the British government on presenting medical statistics, and his work is worth a read if you want to know about how to phrase statistical questions so people get them more right.

Comment author: Sarokrae 30 August 2013 03:49:18PM 3 points [-]

This post reminded me of a conversation I was having the other day, where I noted that I commit the planning fallacy far less than average because I rarely even model myself as an agent.

Comment author: someonewrongonthenet 18 August 2013 02:33:08PM *  1 point [-]

briefly describe the "subagents" and their personalities/goals?

Comment author: Sarokrae 18 August 2013 05:52:11PM *  0 points [-]

A non-exhaustive list of them in very approximate descending order of average loudness:

  • Offspring (optimising for existence, health and status thereof. This is my most motivating goal right now and most of my actions are towards optimising for this, in more or less direct ways.)

  • Learning interesting things

  • Sex (and related brain chemistry feelings)

  • Love (and related brain chemistry feelings)

  • Empathy and care for other humans

  • Prestige and status

  • Epistemic rationality

  • Material comfort

I notice the problem mainly because the loudness of "Offspring" varies based on hormone levels, whereas "Learning new things" doesn't. In particular, when I optimise almost entirely for offspring, cryonics is a waste of time and money, but on days when "learning new things" gets up there it isn't.

Comment author: Swimmer963 13 August 2013 05:58:17PM 4 points [-]

I am not convinced that it's easy, or even really possible, to change from one thinking style to the other.

After four years of nursing school, I changed from an INTJ to an INFJ on the Myers-Briggs. The medical field is somewhere where you're constantly getting bombarded with data, some of it very relevant and some of it not, and you have to react fast. There's value in being able to think logically and systematically through a patient's symptoms to make sure you're not missing something–but it's too slow much of the time, and I've learned to at least notice my quick flash-intuitions. The feeling that "something is wrong even if I don't know why" can be an incredibly valuable indication that you have to check something again, ask someone else to have a look for you, etc. Also, dealing with human beings in the most vulnerable moments of their lives is a great way to develop empathy.

I am having some difficulty understanding the "Ignoring your emotions" section, much less seeing the use of "fixing" this "failing".

It's helped me a lot. Anna Salamon recently shared her technique of "when I have a mysterious annoying emotion that I don't endorse, I ask it what it wants." I may not endorse the emotion, but I feel it, and even if I try to ignore it, it'll probably still impact my behaviour, e.g. by making me act less nicely towards a person who irritates me. But I frequently can figure out "what the emotion wants" - for example, it turns out that a large percentage of the time, when quotes from an article annoy me, it's because I implicitly feel like they're attacking me, because they criticize or poke fun at someone who I identify with.

Example: the movie "The Heat" was hilarious but left me with a bad taste in my mouth, and I was able to track down that it was because one of the main characters, a female cop who was characterized as very smart and capable but nerdy and socially unaware, was poked fun at a lot and eventually changed by becoming less nerdy and more like the other main character, a female cop who broke all the rules with a "git 'er done" attitude (who AFAICT didn't change at all.) I felt more similar to the nerdy character, and part of me felt that the movie was making fun of nerds in general. I was able to convince myself that this wasn't a reason to be cranky.

Comment author: Sarokrae 18 August 2013 10:12:41AM 1 point [-]

As an "INFJ" who has learned to think in an "INTJ" way through doing a maths degree and hanging out with INTJs, I also agree that different ways of problem solving can be learned. What I tend to find is that my intuitive way of thinking gets me a less accurate, faster answer, which is in keeping with what everyone else has suggested.

However, my intuitive thinking also has an unusual quirk: although my strongly intuitive responses are fairly inaccurate (correct about half the time), that is much more accurate than they have any right to be, given how specific the correct ones are. My intuitive thinking usually applies to people and their emotions, and I frequently get very specific hypotheses about the relationships between a set of people. Learning logical thinking has allowed me to first highlight hypotheses with intuition, then slowly go through and falsify the wrong ones, which leads me to an answer that I think I couldn't possibly get with logic alone, since my intuition uses things like facial expressions, body language and voice inflections to gather much more data than I could consciously.

Comment author: NancyLebovitz 20 July 2013 11:51:43AM 2 points [-]

What choices does your processor agent tend to make? Under what circumstances does it favor particular sub-agents?

Comment author: Sarokrae 21 July 2013 08:21:23PM 1 point [-]

"Whichever subagent currently talks in the "loudest" voice in my head" seems to be the only way I could describe it. However, "volume" doesn't lend to a consistent weighting because it varies, and I'm pretty sure varies depending on hormone levels amongst many things, making me easily dutch-bookable based on e.g. time of month.
