Sorry, but the whole "impossibility of the 4-minute mile" / "4-minute-mile effect" story is a myth.
Bannister made his (successful) attempt in May 1954 because he knew John Landy (in particular, but also a few others) had set his sights on it and was getting close, and he thought (as Landy did too) that Landy would get it that year as soon as he got to Europe. They were both right - Landy did it six weeks later.
The reason the record had stayed just over 4 minutes for so long was WWII interrupting athletics - Hägg and Andersson had got it down to 4:01.4 pretty quickly between 19...
(disclaimer: one of the coauthors) Also, none of the linked comments by the coauthors actually praise the paper as good and thoughtful? They all say the same thing, which is "pleased to have contributed" and "nice comment about the lead author" (a fairly early-career scholar who did lots and lots of work and was good to work with). I called it "timely", as the topic of open-sourcing was very much live at the time.
(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I'm glad to see it).
I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy.
With Mustafa Suleyman, the cofounder most focused on applied work (and the lead of DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being primarily a research company, with fewer applied staff (an area that can soak up a lot of people), resulting in a 20% reduction in staff probably wouldn't provide much evidence (and is probably not what Robin had in mind). A reduction in research staff, on the other hand, would be very interesting.
(Cross-posted to the EA forum). (Disclosure: I am executive director of CSER) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For the purpose of completeness, I include below 14 additional publications authored or co-authored by CSER researchers for the relevant time period not covered above (and one that falls just outside but was not previously featured):
Global catastrophic risk:
Ó hÉigeartaigh. The State of Research in Existential Risk
Avin, Wintle, Weitzdorfer, O...
It is possible they had timing issues, whereby a substantial amount of work was done in earlier years but only released more recently. In any case, they have published more in 2018 than in previous years.
(Disclosure: I am executive director of CSER) Yes. As I described in relation to last year's review, CSER's first postdoc started in autumn 2015; most started in mid-2016. The first stages of research and papers began being completed throughout 2017, with most papers then going to peer-reviewed journals. 2018 is more indicative of run-rate output, althoug...
And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting - Anders Sandberg (FHI), Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output - otherwise I'm confident it would have been terrible ;)
"The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas)." That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)
A few comments. I was working with Nick when he wrote that, and I fully endorsed it as advice at the time. Since then, the Xrisk funding situation - and the number of locations at which you can do good work - has improved dramatically. It would be worth checking with him how he feels now. My view is that jobs are certainly still competitive, though.
In that piece he wrote "I find the idea of doing technical research in AI or synthetic biology while thinking about x-risk/GCR promising." I also strongly endorse this line of thinking. My view is that in a...
Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the acad...
This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself, and for GCRI's work. I would note that the entire point of this post (and the AMA which the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see ...
They've also released their code (for non-commercial purposes): https://sites.google.com/a/deepmind.com/dqn/
In other interesting news, a paper released this month describes a way of 'speeding up' neural net training, and an approach that achieves 4.9% top-5 validation error on ImageNet. My layperson's understanding is that this is the first time human accuracy has been exceeded on the ImageNet benchmarking challenge, and that it represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http...
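For the technically curious: my lay understanding is that the speed-up comes from normalising each layer's inputs over each mini-batch (the technique I believe the paper describes, often called "batch normalisation"). Here's a minimal numpy sketch of that idea - an illustration under that assumption, not the authors' code:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalise a mini-batch of activations, then rescale and shift.

    x: activations of shape (batch_size, num_features)
    gamma, beta: learned per-feature scale and shift parameters
    """
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta              # learned rescale/shift

# Toy usage: a batch of 4 examples with 3 badly-scaled features.
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # ~0 per feature
print(out.std(axis=0))   # ~1 per feature
```

Keeping activations well-scaled like this reportedly lets training use much higher learning rates, which is where the speed-up comes from (again, my layperson's gloss).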
Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.
His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with Stuart Russell's (and others') view that this is a bad road to go down, and also presents it as a test case for existential risk - a pre-emptive ban on a dangerous future technology:
"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent...
This will depend on how many other funders are "swayed" towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone is nowhere near the amount needed to make optimal progress on AI. It's important people don't get the impression that this funding has "solved" the AI problem (I know you're not saying this yourself).
Consider that Xrisk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now A...
An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.
(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded - rather than the funding going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up on that by encouragin...
I take fish oil (generic) capsules most days, for the usual reasons they're recommended. Zinc tablets when I'm feeling run down.
Perhaps not what you mean by supplements (in which case, apologies!), but if we're including nootropics, I take various things to try to extend my productive working day. I take modafinil twice a week (100mg in the mornings), and try to limit my caffeine on those days. I take phenylpiracetam about twice a week too (100mg in the afternoons, on different days to modafinil), and nicotine lozenges (1mg) intermittently through the week (also no...
I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/
On that thread I provided information about FHI's room for more funding (accurate as of start of 2014) plus the rationale for FHI's other, less Xrisk/Future of Humanity-specific projects (externally funded). I'd be happy to do the same at the end of this year, but instead representing CSER's financial situation and room for more funding.
We had a session on this at the London meetup. Here is the single-sheet-of-A4 how-to, which includes an incomplete list of institutions in the UK that provide index funds, and a very rough guide to researching them.
Thank you! We appear to have been successful with our first foundation grant; however, the official award T&C letter comes next week, so we'll know then what we can do with it, and be able to say something more definitive. We're currently putting the final touches on our next grant application (requesting considerably more funds).
I think the sentence in question refers to a meeting on existential/extreme technological risk we will be holding in Berlin, in collaboration with the German Government, on the 19th of September. We hope to use this as an opportu...
Nearly certainly; unfortunately, that communication didn't involve me, so I don't know which one it is! But I'll ask him when I next see him and send you a link. http://www.econ.cam.ac.uk/people/crsid.html?crsid=pd10000&group=emeritus
"A journalist doesn't have any interest not to engage in sensationalism."
Yes. Lazy shorthand in my last LW post, apologies. I should have said something along the lines of "in order to clarify our concerns, and not give the journalist the honest impression that we thought these things all represented imminent doom, which might result in sensationalist coverage" - as in, sensationalism resulting from misunderstanding. If the journalist chooses deliberately to engage in sensationalism, that's a slightly different thing - and yes, it sells news...
Thanks, reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context it's been put in, and b) the tone, a bit - as one commenter on the article put it:
"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."
Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.
In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch...
Hi,
I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating on these concepts more difficul...
I'd call it a net positive. Along the axis from "Accept all interviews, wind up in some spectacularly abysmal pieces of journalism" to "Only allow journalism that you've viewed and edited" - the quantity vs. quality tradeoff - I suspect the best place to be is the one where the writers who know what they're going to say in advance are filtered out, and the ones who make an actual effort to understand and summarize your position (even if somewhat incompetent) are engaged.
I don't think the saying "any publicity is good publicity"...
Without knowing the content of your talk (or having time to Skype at present, apologies), allow me to offer a few quick points I would expect a reasonably well-informed, skeptical audience member to make (partly based on what I've encountered):
1) Intelligence explosion requires AI to get to a certain point of development before it can really take off (let's set aside that there's still a lot we need to figure out about where that point is, or whether there are multiple different versions of that point). People have been predicting that we can reach that stag...
Speaking as someone who speaks about X-risk reasonably regularly: I have empathy for the OP's desire for no surprises. IMO there are many circumstances in which surprises are very valuable - one-on-one discussions, closed seminars and workshops where a productive, rational exchange of ideas can occur, and boards like LW where people are encouraged to interact in a rational and constructive way.
Public talks are not necessarily the best places for surprises, however. Unless you're an extremely skilled orator, the combination of nerves, time limitations, crowd ...
Thank you for this post - it's extremely helpful, and I'm very grateful for the time you put into writing and researching it.
A question: what's your opinion on when "level of exercise" goes from "diminishing returns" to "negative returns" for health and longevity? Background: I used to train competitively for running - twice a day, about 2 hours total per day, roughly 15 hours a week in total (a little extra at the weekend) - which sounds outlandish but is pretty standard in competitive long-distance running/cycling/triathlon. I quit because a) it wasn't compatible wi...
For more colour, see this article, which shows the same trend on the same timeline for a bunch of other distances - steady progress till 1940ish, a 10-15 year WW2 gap, then further steady progress from the mid-1950s on.
https://www.scienceofrunning.com/2017/05/the-roger-bannister-effect-the-myth-of-the-psychological-breakthrough.html?v=47e5dceea252