This is a claim about reality. Do we actually know that pulling numbers out of your arse produces better results than pulling the decisions out directly? Or does it just feel better, because now you have a theory?
Years later, this unsurprising intuition was spectacularly confirmed by the Good Judgment Project; details in "Superforecasting".
I don't claim that it developed skill and talent in all participants, nor even in the median participant.
And yet you called it "a resounding success". Does that mean that you're focusing on the crème de la crème, the top tier of the participants, while being less concerned with what's happening in lower quantiles?
For this unusual, MIRI-commissioned workshop, yes.
The Flash player for the video of Max Tegmark and Nick Bostrom speaking at the UN is super annoying. Anyone know how to extract the raw video file so I can watch it more conveniently? Thanks!
I've never heard of this book or author before; has anyone read it? How does it compare to, e.g., "Smarter Than Us" or "Our Final Invention"?
The cryonics movement needs more people with clinical medical backgrounds involved, but then it also needs people with practical business experience.
I will give you a business intelligence test. Look at just the home page of the website for this startup cryonics organization in Oregon, and tell me one obvious thing that it lacks - just on the home page:
It doesn't say what the hell they actually do.
Superb, thanks! Did you create this, or is there a way I could have found this for myself? Cheers :)
Could someone be kind enough to share the text of Stuart Russell's interview with Science here?
Fears of an AI pioneer
John Bohannon
Science, 17 July 2015: Vol. 349, No. 6245, p. 252
DOI:10.1126/science.349.6245.252
<http://www.sciencemag.org/content/349/6245/252.full>
From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy". The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics. The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe. I'm not aware of any large movement calling for regulation either inside or outside AI, because we don't know how to write such regulation.
Karl Sims evolved simple blocky creatures to walk and swim (video). In the paper, he writes "For land environments, it can be necessary to prevent creatures from generating high velocities by simply falling over" - I seem to recall that in the first version of the software, the winning creatures were those that grew very tall and simply fell over towards the target.
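The exploit is easy to reproduce in miniature. Here is a toy sketch (my own caricature, not Sims's actual simulation): if fitness is just "horizontal distance covered from t=0", a tall rigid pole that tips over once can out-score an honest walker; measuring distance only after an initial settling period removes the one-shot payoff.

```python
# Toy illustration of the "fall over" fitness exploit in evolved-creature
# simulations. This is a hypothetical 1-D caricature, not Sims's code:
# a creature either walks at a steady speed, or is a tall pole whose
# single tip-over converts its height into one burst of horizontal distance.

def naive_fitness(creature, seconds=10):
    """Horizontal distance covered, measured from t=0 (exploitable)."""
    if creature["strategy"] == "fall":
        return creature["height"]          # one-shot: tip over towards the target
    return creature["speed"] * seconds     # honest locomotion

def settled_fitness(creature, seconds=10, settle=2):
    """Distance covered only after a settling period, so a single fall
    (which happens in the first moments) earns nothing."""
    if creature["strategy"] == "fall":
        return 0.0                         # already prone once measurement starts
    return creature["speed"] * (seconds - settle)

walker = {"strategy": "walk", "speed": 1.0, "height": 1.0}
faller = {"strategy": "fall", "speed": 0.0, "height": 50.0}

print(naive_fitness(faller) > naive_fitness(walker))      # True: the exploit wins
print(settled_fitness(walker) > settled_fitness(faller))  # True: the fix restores order
```

Other fixes reported for this class of exploit include capping initial height or subtracting the distance attributable to potential energy; the settling-period version above is just the simplest to show.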
Looks like their website has been taken over by spam, which in turn gives me very little confidence in an organization that's supposed to be around until my death and for many years afterwards.
Do you know anything about the current state of play in the UK? Are you still covered?
Longevity is much less of a concern with CUK; they don't do storage, only standby and transport.
I live in the Bay Area now.