All of Sean_o_h's Comments + Replies

(disclaimer: one of the coauthors) Also, none of the linked comments by the coauthors actually praise the paper as good and thoughtful? They all say the same thing, which is "pleased to have contributed" and "nice comment about the lead author" (a fairly early-career scholar who did lots and lots of work and was good to work with). I called it "timely", as the topic of open-sourcing was very much live at the time.

 

(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I'm glad to see it).

This is super awesome. Thank you for doing this.

Johnson was perhaps below average in his application to his studies, but it would be a mistake to think he is, or was, a pupil of below-average intelligence.

-13RussHolmes
I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy.

With Mustafa Suleyman, the cofounder most focused on applied work (and the lead of DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being a primarily research company with fewer applied staff (an area that can soak up a lot of staff), resulting in a 20% reduction of staff, probably wouldn't provide a lot of evidence (and is probably not what Robin had in mind). A reduction of research staff, on the other hand, would be very interesting.

4johnswentworth
On the contrary, I'd say a reduction in "applied" work and a re-focus toward research would be quite consistent with an "AI winter" scenario. There's always open-ended research somewhere; a big part of an AI "boom" narrative is trying to apply the method of the day to all sorts of areas (and the method of the day mostly failing to make economically meaningful headway in most areas). To put it differently: the AI boom/bust narrative usually revolves around faddish ML algorithms (expert systems, SVMs, neural networks...). If people are cutting back on trying to apply the most recent faddish algorithms, and instead researching new algorithms, that sounds a lot like the typical AI winter story. On the other hand, if people are continuing to apply e.g. neural networks in new areas, and continuing to find that they work well enough to bring to market, then that would not sound like the AI winter story.

(Cross-posted to the EA forum). (Disclosure: I am executive director of CSER) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For the purpose of completeness, I include below 14 additional publications authored or co-authored by CSER researchers for the relevant time period not covered above (and one that falls just outside but was not previously featured):

Global catastrophic risk:

Ó hÉigeartaigh. The State of Research in Existential Risk

Avin, Wintle, Weitzdorfer, O... (read more)

It is possible they had timing issues, whereby a substantial amount of work was done in earlier years but only released more recently. In any case, they have published more in 2018 than in previous years.

(Disclosure: I am executive director of CSER) Yes. As I described in relation to last year's review, CSER's first postdoc started in autumn 2015, most started in mid 2016. First stages of research and papers began being completed throughout 2017, most papers then going to peer-reviewed journals. 2018 is more indicative of run-rate output, althoug... (read more)

And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting - Anders Sandberg (FHI), Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output - otherwise I'm confident it would have been terrible ;)

FLI's Anthony Aguirre is centrally involved or leading, AFAIK.

Thanks for the initiative! I'll be there Thursday through Saturday (plus Sunday) for symposia and workshops, if anyone would like to chat (Sean O hEigeartaigh, CSER).

A quick reminder: our deadline is a week from tomorrow (midday UK time) - so now would be a great time to apply if you were thinking of it, or to remind fellow researchers! Thanks so much, Seán.

A pre-emptive apology: I have a heavy deadline schedule over the next few weeks, so will answer questions when I can - please excuse any delays!

"The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas)." That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)

0Stuart_Armstrong
Indeed ^_^
Sean_o_h190

In turn Nick, for his part, very regularly and explicitly credits the role that Eliezer's work and discussions with Eliezer have played in his own research and thinking over the course of the FHI's work on AI safety.

Sean_o_h120

A few comments. I was working with Nick when he wrote that, and I fully endorsed it as advice at the time. Since then, the Xrisk funding situation - and the number of locations at which you can do good work - has improved dramatically. It would be worth checking with him how he feels now. My view is that jobs are certainly still competitive, though.

In that piece he wrote "I find the idea of doing technical research in AI or synthetic biology while thinking about x-risk/GCR promising." I also strongly endorse this line of thinking. My view is that in a... (read more)

3John_Maxwell
Sounds like you're doing a lot; thanks so much!

9 single author research papers is extremely impressive! Well done.

This does seem quite hazardous, though. If an emergency happened at 3am, I'm pretty sure I'd want my phone easily available and usable.

I was going to say this too, it's a good point. Potential fix: have a cheap non-smartphone on standby at home.

2The_Jaded_One
I already have another phone. Yes, this is a good point though.
Sean_o_h150

Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the acad... (read more)

9leplen
Thanks so much for your thoughtful response. This clarifies the position dramatically and makes it sound much more attractive. If I have any further questions related to my application specifically, I'll certainly let you know.

Placeholder: this is a good comment and good questions, which I will respond to by tomorrow or Sunday.

This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself, and for GCRI's work. I would note that the entire point of this post (and the AMA which the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see ... (read more)

1IlyaShpitser
Duly noted, thanks. This kind of tone deafness seems to be a pattern here in the LW-sphere, however. For instance, look at this: http://lesswrong.com/lw/lco/could_you_be_prof_nick_bostroms_sidekick/ Really? ---------------------------------------- An appeal to charity in the reading of "public-facing, external communication" is a little odd. Public-facing means you can't beg off on social incompetence, being overworked, etc. You have to convince the public of something, and they don't owe you charity in how they read your message. They will retreat to their prejudices and gut instincts right away. It is in the job description of public-facing communication to deal with this.

They've also released their code (for non-commercial purposes): https://sites.google.com/a/deepmind.com/dqn/

In other interesting news, a paper released this month describes a way of 'speeding up' neural net training, and an approach that achieves 4.9% top 5 validation error on Imagenet. My layperson's understanding is that this is the first time human accuracy has been exceeded on the Imagenet benchmarking challenge, and represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http... (read more)

0JWonz
FYI - to those who are running the code, the atari ROMs must be named properly otherwise you will hit a segmentation fault. For example, with Breakout name it "breakout.bin".
9jkrause
One thing to note about the number for human accuracy on ImageNet that's been going around a lot recently is that it was really a relatively informal experiment done by a couple of members of the Stanford vision lab (see section 6.4 of the paper for details). In particular, the number everyone cites was just one person who, while he trained himself quite a while to recognize the ImageNet categories, was nonetheless prone to silly mistakes from time to time. A more optimistic human error rate is probably closer to 3-4%, but with that in mind the recent results people have been posting are still extremely impressive. It's also worth pointing out another paper from Microsoft Research that beat the 5.1% human performance and actually came out a few days before Google's. It's a decent read, and I wouldn't be surprised if people start incorporating elements from both MSR's and Google's papers in the near future.
4skeptical_lurker
I saw this paper before, and maybe I'm being an idiot, but I didn't understand this: I thought one generally trained the networks layer by layer, so layer n would be completely finished training before layer n+1 starts. Then there is no problem of "the distribution of each layer's inputs changes", because the inputs are fixed once training starts. Admittedly, this is a problem if you don't have all the training data to start off with and want to learn incrementally, but AFAICT that is not generally the case in these benchmarking contests. Regardless, it's amazing how simple DNNs are. People have been working on computer vision and AI for about 60 years, and then a program like this comes along which is only around 500 lines of code, conceptually simple enough to explain to anyone with a reasonable mathematical background, but can nevertheless beat humans at a reasonable range of tasks.
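(For readers following this sub-thread: modern deep nets are usually trained jointly, not layer by layer, so earlier layers keep changing during training and the inputs to later layers shift - which is the "distribution of each layer's inputs changes" problem the paper addresses by renormalizing activations over each mini-batch. A minimal NumPy sketch of that normalization step, illustrative only and not the paper's actual implementation:)

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then rescale/shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy mini-batch of activations feeding some layer: whatever distribution
# the earlier (still-training) layers produce, the normalized output has
# roughly zero mean and unit variance per feature.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))   # shifted, scaled inputs
out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(6))   # each entry ~0
print(out.std(axis=0).round(3))    # each entry ~1
```

(The learnable `gamma`/`beta` let the network undo the normalization where useful; setting them to ones/zeros here just shows the standardization itself.)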

Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that this is a bad road to go down, and also presents it as a test case for existential risk - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent... (read more)

Script/movie development was advised by CSER advisor and AI/neuroscience expert Murray Shanahan (Imperial). Haven't had time to go see it yet, but looking forward to it!

Yes. The link with guidelines, grant portal, should be on FLI website within the coming week or so.

Sean_o_h120

This will depend on how many other funders are "swayed" towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone is nowhere near the amount needed to make optimal progress on AI. It's important people don't get the impression that this funding has "solved" the AI problem (I know you're not saying this yourself).

Consider that Xrisk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now A... (read more)

Sean_o_h270

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans, and apply to get them funded - rather than it going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up with that by encouragin... (read more)

Vika120

Seconded (as an FLI person)

As another non-native speaker, I frequently find myself looking for a "plural you" in English, which was what I read hyporational's phrase as trying to convey. Useful feedback not to use 'you people'.

Sean_o_h150

A question I've been curious about: to those of you who have taken modafinil regularly/semi-regularly (as opposed to a once off) but have since stopped: why did you stop? Did it stop being effective? Was it no longer useful for your lifestyle? Any other reasons? Thanks!

4drethelin
I got more side effects when I took it regularly as opposed to taking it every now and then. Headaches and so on.

I take fish oil (generic) capsules most days, for the usual reasons they're recommended. Zinc tablets when I'm feeling run down.

Perhaps not what you mean by supplements (in which case, apologies!), but if we're including nootropics, I take various things to try to extend my productive working day. I take modafinil twice a week (100mg in the mornings), and try to limit my caffeine on those days. I take phenylpiracetam about twice a week too (100mg in the afternoons, on different days to modafinil), and nicotine lozenges (1mg) intermittently through the week (also no... (read more)

Sean_o_h100

I think our field of philosophy, and that of xrisk, could very much benefit from more/better figures, but this might be the biologist in me speaking. Look at how often Nick Bostrom's (really quite simplistic) xrisk "scope versus intensity" graph is used/reproduced.

Thank you for writing this clear and well-researched post, really useful stuff.

Does your experience refer to M&G? I can see why you anti-recommend them!

3philh
Yes, that's with M&G. I haven't tried signing up with anyone else.

I'd be very interested in hearing about your experience and advice further along in the process. Thanks!

5philh
My experience so far is that first time I tried to sign up, I entered a form field wrong and couldn't correct it without starting over. The second time, I got to the stage of entering my bank details and clicking confirm, and the website timed out. Then they took money from my account, and sent me physical mail asking for proof of identity. (I assume this is a legal requirement, but I don't remember seeing anything about it before signing up.) I've sent it to them, and they said they needed a week to review the documents, and that letter was dated the 17th and I haven't heard anything since.

Thank you, also useful advice. My pre-moving to UK savings are all in Euro, my post-moving to UK savings are in sterling, so I guess I'll have to look at both. Damn UK refusing to join the single currency, makes my personal finances so much more complicated...

I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/

On that thread I provided information about FHI's room for more funding (accurate as of start of 2014) plus the rationale for FHI's other, less Xrisk/Future of Humanity-specific projects (externally funded). I'd be happy to do the same at the end of this year, but instead representing CSER's financial situation and room for more funding.

Oh, excellent - thanks so much! Side note: I really look forward to making some of the London meet ups when work pressure subsides a little, seems like these meet ups are excellent.

5philh
I'll add to this - I'm in the process of setting one up. I couldn't find anything about Scottish Mutual online. I'm currently trying with M&G, but I anti-recommend them. I believe when I asked who people are currently using, the answers were Fidelity and Legal & General, so those are probably sensible places to try.
Sean_o_h120

Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!

5coffeespoons
I would recommend Fidelity's FTSE All-Share tracker (it had the lowest fees I could find when I started saving some money in there a few months ago).
6ChristianKl
I don't have particular advice, but I would point out that the UK and the rest of Europe differ. You want to invest in a fund in your own currency to avoid exchange-rate risk. If the currency that you need in your life is the euro, invest in a euro-denominated fund. If it's pound sterling, invest in a fund in that currency.

We had a session on this at the London meetup. Here is the single-sheet-of-A4 how-to, which includes a non-complete list of institutions in the UK that provide index funds, and a very rough guide to researching them.

Thank you! We appear to have been successful with our first foundation grant; however, the official award T&C letter comes next week, so we'll know then what we can do with it, and be able to say something more definitive. We're currently putting the final touches on our next grant application (requesting considerably more funds).

I think the sentence in question refers to a meeting on existential/extreme technological risk we will be holding in Berlin, in collaboration with the German Government, on the 19th of September. We hope to use this as an opportu... (read more)

Nearly certainly; unfortunately, that communication didn't involve me, so I don't know which one it is! But I'll ask him when I next see him, and send you a link. http://www.econ.cam.ac.uk/people/crsid.html?crsid=pd10000&group=emeritus

"A journalist doesn't have any interest not to engage in sensationalism."

Yes - lazy shorthand in my last LW post, apologies. I should have said something along the lines of "in order to clarify our concerns, and not give the journalist the honest impression we thought these things all represented imminent doom, which might result in sensationalist coverage" - as in, sensationalism resulting from misunderstanding. If the journalist chooses deliberately to engage in sensationalism, that's a slightly different thing - and yes, it sells news... (read more)

1ChristianKl
I don't think the article you linked demonstrates that reporting produces misunderstanding. You have to think about the alternative: how does the average person form their beliefs? They might hear something from a friend. They might read the horoscope. Even when the journalist actually writes "scientists think we need to learn more about this, and recommend use of the precautionary principle before engaging", many readers will simply read "scientists say 'don't do this'", or they will simply ignore it - especially when you focus on what they actually remember from reading the article.

Thanks, reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context it's been put in, and b) the tone, a bit - as one commenter on the article put it:

"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."

1Sarokrae
I wouldn't worry too much about the comments. Even Guardian readers don't hold the online commentariat of the Guardian in very high esteem, and it's reader opinion, not commenter opinion, that matters the most. It seems like the most highly upvoted comments are pretty sane anyway!

Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.

In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch... (read more)

Sean_o_h240

Hi,

I'd be interested on LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found that communicating on these concepts more difficul... (read more)

4VAuroch
This piece is definitely good publicity, rather than bad. It takes the ideas seriously despite deliberately emphasizing the extent to which the author is confused by them, and writes in a tone appropriate for a mass audience. You know how television shows will have the character who starts as "the new guy" they need to explain everything to, to serve as an audience insert? This author made himself the new guy.
2AlexMennen
I agree that there are better articles to direct people towards. I doubt this piece does much damage just by existing, though; it seems more likely that it's either net neutral or slightly net positive. Also, from the article, Congratulations.
2lukeprog
Is the paper mentioned and apparently quoted in the Guardian article, which Dasgupta described as "somewhat informal," available anywhere?
5cameroncowan
I've worked in PR for the better part of 10 years, and I've worked in sticky areas like politics where context is everything, and you are right: editors love to pull out something that "looks" very dramatic to get attention, and the Guardian is notorious for this. However, I think the best thing to do is to fight fire with fire. Whatever media you do, you should respond to the serious pieces with blog posts of your own. Clarifying things and making your side of the story known is just as important. I am also a believer that you shouldn't leave your message in the hands of other people. I would then follow these stories up with awesome videos/blog posts of your own that people can interact with on a variety of platforms. That would allow you to get your message out in your way. That way, when you do take that interview, there is plenty to talk about. It's all about controlling the message.
5Ben Pace
I definitely thought it was one of the best pieces of X-Risk Journalism I've seen, in so far as spreading good information in a considered tone of voice.
2ChristianKl
A journalist doesn't have any interest not to engage in sensationalism. Editors want articles that the average person understands. It's their job to simplify. That still has a good chance of leaving readers more informed than they were before reading the article. Explaining things to the average person is hard. It's not the kind of article that I would send to people who have a background and who approach you. On the other hand, it's quite fine for the average person.
8Sarokrae
I've read a fair number of x-risk related news pieces, and this was by far the most positive and non-sensationalist coverage that I've seen by someone who was neither a scientist nor involved with x-risk organisations. The previous two articles I'd seen on the topic were about 30% Terminator references. This article, while not necessarily a 100% accurate account, at least takes the topic seriously.

I'd call it a net positive. Along the axis from "accept all interviews, wind up in some spectacularly abysmal pieces of journalism" to "only allow journalism that you've viewed and edited" - the quantity vs. quality tradeoff - I suspect the best place to be is the one where the writers who already know what they're going to say in advance are filtered out, and where the ones who make an actual effort to understand and summarize your position (even if somewhat incompetently) are engaged.

I don't think the saying "any publicity is good publicity"... (read more)

Sean_o_h200

Without knowing the content of your talk (or having time to Skype at present, apologies), allow me to offer a few quick points I would expect a reasonably well-informed, skeptical audience member to make (based in part on what I've encountered):

1) Intelligence explosion requires AI to get to a certain point of development before it can really take off (let's set aside that there's still a lot we need to figure out about where that point is, or whether there are multiple different versions of that point). People have been predicting that we can reach that stag... (read more)

5Punoxysm
All good points. I'd focus on #4 as the primary point. Focusing on theoretical safety measures far ahead of the development of the technology to be made safe is very difficult and has no real precedent in previous engineering efforts. In addition, MIRI's specific program isn't heading in a clear direction and hasn't gotten a lot of traction in the mainstream AI research community yet. Edit: Also, hacks and heuristics are so vital to human cognition in every domain, that it seems clear that general computation models like AIXI don't show the roadmap to AI, despite their theoretical niceness.
Sean_o_h240

Speaking as someone who speaks about X-risk reasonably regularly: I have empathy for the OP's desire for no surprises. IMO there are many circumstances in which surprises are very valuable - one on one discussions, closed seminars and workshops where a productive, rational exchange of ideas can occur, boards like LW where people are encouraged to interact in a rational and constructive way.

Public talks are not necessarily the best places for surprises, however. Unless you're an extremely skilled orator, the combination of nerves, time limitations, crowd ... (read more)

8Shmi
Right. Exposure to a weak meme inoculates people against being affected by similar memes in the future. There was a recent SSC post about it, I think. Bad presentation is worse than no presentation at all.
1fowlertm
Correct :)

Thank you for this post, extremely helpful and I'm very grateful for the time you put into writing/researching it.

A question: what's your opinion on when "level of exercise" goes from "diminishing returns" to "negative returns" for health and longevity? Background: I used to train competitively for running - twice a day, 2hrs total/day, 15hrs/week total (a little extra at the weekend) - which sounds outlandish but is pretty standard in competitive long-distance running/cycling/triathlon. I quit because a) it wasn't compatible wi... (read more)

1RomeoStevens
So around 4200 MET-min/week is my guess for your total activity level. The data is too noisy for me to make a solid recommendation for someone like you, up at the tail end of the measured results. For what it's worth, I doubt you're negatively impacting your health at that level. Marathons and other extreme endurance events seem harmful to me based on limited data, but stuff well below that is probably beneficial. Now, if we include optimal stress as an additional criterion beyond just optimal total activity, I think we could make some more solid predictions. It sounds like your current routine is dovetailing pretty well with the rest of your work/life balance, so I'd be loath to change it much if I were in your shoes. OTOH, if there does seem to be a pain point, I also doubt you'd be harming yourself significantly by reducing your exercise load slightly. Just be sure to set up some sort of Schelling fence for yourself so you don't fall too far. I'm guessing maybe setting up such a fence is what motivated your question. I'll just keep pointing at the 3500 MET-min/week as something we actually have evidence for, even if slight. This pretty much exactly corresponds, in your case, to switching from 6d/wk to 5d/wk. If you wanted to drop it any further, you'd probably want to up the intensity to compensate. Keep up the awesome work.
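(The arithmetic in the reply above can be sanity-checked: the figures are taken from the comment itself, and reading the routine as six roughly equivalent weekly sessions is my assumption, so treat this as an illustration rather than exercise-science guidance.)

```python
weekly_met_min = 4200        # estimated current load (MET-min/week), per the comment
sessions_per_week = 6        # assumed: six roughly equal sessions per week

per_session = weekly_met_min / sessions_per_week   # 700 MET-min per session
after_drop = weekly_met_min - per_session          # load at 5 sessions/week
print(after_drop)  # 3500.0 - matches the evidence-backed figure cited above
```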

Some emerging concerns I'm aware of for really serious runners: heart problems due to thickened heart wall, skin cancer (just due to being out in the sun so much, sweating off sunscreen). Potential causes for concern: lots of cortisol production from hard aerobic exercise, inflammation.

0NancyLebovitz
I keep wondering whether sports where the major point is overriding the desire to stop are actually a bad idea-- that desire to stop might have evolved to be protective.

Fascinating, thank you for this!

0NancyLebovitz
You're welcome. I keep hoping someone with more knowledge of statistics than I've got will take a look at the karate study.

For a lot of people running should be fine for their knees if done properly.

As far as I can tell, running is most likely to damage your knees if you (a) are very big/heavy, (b) have poor running technique (most people don't learn to run properly/efficiently), (c) run a lot on bad surfaces (avoid running extensively on surfaces that are banked, or where you may step in potholes!), or (d) have a genetic predisposition to knee problems or have brought on osteoarthritis-type conditions through poor diet (this happens sometimes with exercise anorexics).

As a past competitive... (read more)

0Caspar Oesterheld
I actually did not want to go too deep into discussing specific sports, and to wait for another 24 hours, but... I never had actual problems with my knees myself - I'm neither heavy, nor do I run that much at all (100 miles/week for 6 years is extremely impressive!); I also eat healthily and think my technique is okay. But I am very young. My grandfather, who has done a lot of sports his whole life (to my knowledge he still rides his bicycle for 50 miles a day or something at age 80), had some knee problems and therefore changed from relatively serious marathon running (best time ~2:40) to swimming and bicycling. Of course, these are just anecdotes that do not prove anything. I would be very interested in the current state of research on the matter. For me, the most important argument against long-distance running is that it seems to conflict with general fitness. After running my second marathon I pretty much sucked at everything else, even riding a bicycle... Also, long-distance running takes a lot of time to practice, so now I have changed to less-than-daily interval training, supplemented by weight training.