NIPS 2015
There is a fairly large contingent of safety-research-oriented people at NIPS this year. I'm unfortunately not among them, but if you're there and interested in connecting with others on AI safety topics or other LW interests (general rationality, EA, etc.), I welcome you to use this thread as a Schelling point to create meeting opportunities :). You can also PM me and I can connect you to people I know are there.
Velocity of behavioral evolution
This suggests a major update on the velocity of behavioral trait evolution.
Basically, mice reliably transmitted a fear of cherry smell to the very next generation (via epigenetics).
http://www.newscientist.com/article/dn24677-fear-of-a-smell-can-be-passed-down-several-generations.html
This seems pretty important.
What Peter Thiel thinks about AI risk
This is probably the clearest statement from him on the issue:
25:30 minutes in
TL;DR: he thinks it's an issue, but he also feels AGI is very distant and hence is less worried about it than Musk is.
I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.
Cognitive distortions of founders
Interesting take on entrepreneurial success:
http://www.harrisonmetal.com/cognitive-distortions-of-founders/
More in depth here:
http://quarry.stanford.edu/xapm1111126lse/docs/02_LSE_Cognitive.pdf
Curious what people here think of this.
FAI PR tracking well [link]
This time, it's by "The Editors" of Bloomberg View (which carries significant weight in the news world). The content is a very reasonable explanation of AI concerns, though nothing novel to this audience.
http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people
Directionally this is definitely positive, though I'm not sure quite how to capitalize on it. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?
Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]
Very surprised no one has linked to this yet:
http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
TL;DR: AI is a very underfunded existential risk.
Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With some thought I think we could've made the list more inclusive and found a better publication; still, I think this is pretty huge.
Gunshot victims to be suspended between life and death [link]
- First "official" program to practice suspended animation
- The article naturally goes on to ask whether longer SA (months, years) is possible
- Amazing quote: "Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It's frustrating to know there's a solution."
- IMO, if (I hope!) successful, this will go a long way toward bridging the emotional gap for cryonics
Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link]
http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html
Not going to summarize the article content, but I think this is the highest-profile publication linking to LW so far.
Also, it appears that Shane Legg, Jaan Tallinn, and others at DeepMind leveraged the acquisition and moved the friendly AI conversation to a higher level, quite possibly the highest level at Google. Interesting times, these are.
PSA for LW futurists/academics
Rutgers was planning to offer a "Future of Humankind" course via Coursera, but I received this announcement today:
"Hello Future of Humankind registrants,
The Future of Humankind Course Staff"
Average Coursera course enrollments are in the tens of thousands, so this might be a chance to make a large impact. Proper contacts with Rutgers could be easily established; I also have some direct contacts at Coursera.
(I'm specifically thinking about you http://lesswrong.com/user/James_Miller)