NIPS 2015

7 Dr_Manhattan 07 December 2015 08:31PM

There is a fairly large contingent of safety-research-oriented people at NIPS this year. I'm unfortunately not among them, but if you're there and interested in connecting with others on AI safety topics or other LW interests (general rationality, EA, etc.), I welcome you to use this thread as a Schelling point for creating meeting opportunities :). You can also PM me and I can connect you to people I know are there.

Velocity of behavioral evolution

3 Dr_Manhattan 19 December 2014 05:34PM

This article suggests a major update on the velocity of behavioral trait evolution.

Basically, mice reliably transmitted a fear of cherry smell to the very next generation (via epigenetics).

www.newscientist.com/article/dn24677-fear-of-a-smell-can-be-passed-down-several-generations.html#.VJRgr8ADo

This seems pretty important.

What Peter Thiel thinks about AI risk

12 Dr_Manhattan 11 December 2014 09:22PM

This is probably the clearest statement from him on the issue:

http://betaboston.com/news/2014/12/10/audio-peter-thiel-visits-boston-university-to-talk-entrepreneurship-and-backing-zuck/

The relevant segment starts at 25:30.

 

TL;DR: he thinks it's an issue, but he also feels AGI is very distant and hence is less worried about it than Musk.

 

I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.

Cognitive distortions of founders

3 Dr_Manhattan 11 December 2014 03:19AM

Interesting take on entrepreneurial success:

http://www.harrisonmetal.com/cognitive-distortions-of-founders/

More in depth here:

http://quarry.stanford.edu/xapm1111126lse/docs/02_LSE_Cognitive.pdf

I'm curious what people here think of this.

 

FAI PR tracking well [link]

7 Dr_Manhattan 15 August 2014 09:23PM

This time, it's by "The Editors" of Bloomberg View (which is very significant in the news world). The content is a very reasonable explanation of AI concerns, though nothing novel to this audience.

http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people

Directionally this is definitely positive, though I'm not quite sure how to follow up. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?

Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

18 Dr_Manhattan 21 April 2014 04:55PM

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet.

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm toward the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With some thought I think we could have made the list of signatories more inclusive and found a better publication; still, I think this is pretty huge.

 

Gunshot victims to be suspended between life and death [link]

24 Dr_Manhattan 27 March 2014 04:33PM

http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html?full=true

- The first "official" program to practice suspended animation.

- The article naturally goes on to ask whether longer suspended animation (months, years) is possible.

- Amazing quote: "Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It's frustrating to know there's a solution."

- IMO, if (I hope!) this is successful, it will go a long way toward bridging the emotional gap for cryonics.

Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link]

13 Dr_Manhattan 30 January 2014 01:20AM

http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html

Not going to summarize the article content, but I think this is the highest-level publication linking to LW so far.

Also, it appears that Shane Legg, Jaan Tallinn, and others at DeepMind leveraged the acquisition and moved the friendly AI conversation to a higher level, quite possibly the highest level at Google. Interesting times, these are.

H+ review of James Miller's Singularity Rising [link]

-4 Dr_Manhattan 17 January 2014 02:16AM

PSA for LW futurists/academics

7 Dr_Manhattan 31 October 2013 03:54PM

Rutgers was planning to offer a "Future of Humankind" course via Coursera, but I received this announcement today:

"Hello Future of Humankind registrants,

It is with great sadness, that we must inform you that due to the untimely death of James Martin, we will be unable to offer the Future of Humankind course as had originally been planned.  Rutgers University is interested in developing similar course material, and we will keep you abreast of any updates with developments in our courses.

Thank you for your understanding,

The Future of Humankind Course Staff"

Average Coursera enrollments are in the tens of thousands, so this might be a chance to make a large impact. Proper contacts with Rutgers could easily be established; I also have some direct contacts at Coursera.

(I'm specifically thinking of you, http://lesswrong.com/user/James_Miller)
