
Comment author: lukeprog 28 February 2015 06:54:29PM 5 points [-]

In descending order of importance:

  1. Instrumental rationality.
  2. How to actually practice becoming more rational.
  3. Rationality via cluster thinking and intelligent imitation.
Comment author: KatjaGrace 23 February 2015 09:19:21PM 6 points [-]

If there is an objective morality, but we don't care about it, is it relevant in any way?

Comment author: lukeprog 24 February 2015 02:58:55AM 2 points [-]

I think Peter Singer wrote a paper arguing "no," but I can't find it at the moment.

Comment author: lukeprog 25 January 2015 11:06:34PM 20 points [-]
Comment author: drethelin 15 January 2015 10:28:14PM 0 points [-]

wow everyone is so squinty

Comment author: lukeprog 15 January 2015 10:40:36PM 2 points [-]

It was so bright out! The photo has my eyes completely closed, unfortunately. :)

Comment author: devi 12 January 2015 04:01:01PM 6 points [-]

Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?

Comment author: lukeprog 12 January 2015 09:29:00PM *  7 points [-]

After the first few pages of signatories, I recognize very few of the names. My guess is that LW signers will simply be drowned out by the much larger population of people who support the basic content of the research priorities document, so there's not much downside to lots of LWers signing the open letter.

Comment author: Eliezer_Yudkowsky 14 December 2014 07:00:33PM 7 points [-]

Context: Elon Musk thinks there's an issue on a 5-7 year timeframe (probably, I would guess, due to talking to Demis Hassabis at DeepMind). By that standard I'm also less afraid of AI than Elon Musk is, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very, very common conflation).

Comment author: lukeprog 12 January 2015 01:10:01AM 0 points [-]

The Rob Bensinger post on this is now here.

Comment author: lukeprog 11 January 2015 08:13:06PM 2 points [-]
Comment author: lukeprog 26 December 2014 03:46:48PM 19 points [-]

In case LWers are wondering why MIRI didn't post to LW about its own fundraising drive, that's because we already finished it.

Also, if your employer does corporate matching (check here), you haven't used it all up yet, and you'd like to donate to CFAR, remember to do so before January 1st so that your corporate matching for 2014 doesn't go unused!

Comment author: polymathwannabe 24 December 2014 12:23:45AM *  2 points [-]

In the original article (PDF, free to download after you register) I find:

"The artificial connectome has been extended to a single application written in Python and run on a Raspberry Pi computer."

Comment author: lukeprog 24 December 2014 01:23:02AM 4 points [-]

The original article also links to this YouTube video, for those who are interested.

Comment author: Rick_from_Castify 16 December 2014 11:56:42PM 5 points [-]

Thanks for the offer, Robin, but we've decided to go with professionals. Early on we auditioned people from the LessWrong community, and we decided that a well-produced reading with a professional voice actor makes a big difference to the listening experience.

Comment author: lukeprog 17 December 2014 12:11:22AM 3 points [-]

Agree!
