
Comment author: seez 06 May 2014 10:09:32PM 17 points

Seriously! I just overheard someone say "wow, maybe all that rationality stuff actually does help them do better."

Comment author: Louie 06 May 2014 10:44:30PM 3 points

That's cool. Where did you hear that?

Comment author: Louie 05 May 2014 11:14:44AM *  43 points

2009: "Extreme Rationality: It's Not That Great"

2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"

2013: "How about testing our ideas?"

2014: "Truth: It's Not That Great"

2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"

2016: "In Defense Of Putting Babies In Wood Chippers"

Comment author: Mestroyer 28 April 2014 02:57:32AM 3 points

It seemed pretty obvious to me that MIRI thinks defenses cannot be made, whether or not such a list exists, and wants easier ways to convince people of that. Thus the part that said: "We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated."

Comment author: Louie 28 April 2014 03:10:32AM 4 points

Yes. I assume this is why she's collecting these ideas.

Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".

In general, MIRI isn't in favor of soliciting storytelling about the singularity. It wastes time and gives people a false sense that they understand things better than they do, by focusing their attention on highly salient but ultimately unlikely scenarios.

Comment author: jimrandomh 28 April 2014 02:01:01AM 0 points

It seemed pretty obvious to me that the point of making such a list was to plan defenses.

Comment author: Louie 28 April 2014 02:38:03AM 4 points

Then you should reduce your confidence in what you consider obvious.

Comment author: Louie 28 April 2014 01:48:07AM 0 points

"So MIRI is interested in making a better list of possible concrete routes to AI taking over the world."

I wouldn't characterize this as something that MIRI wants.

Comment author: Louie 21 March 2014 10:28:33PM *  5 points

To clarify, One Medical partnered with us on this event... but is not materially involved in expanding MIRI itself. They're simply an innovative business near us in Berkeley that wanted to support our work. I know it's somewhat unprecedented to see MIRI with strong corporate support, but trust me, it's a good thing. One Medical's people did a ton of legwork and made it super easy to host over 100 guests at that event with almost no planning needed on our part. They took care of everything so we could just focus on our work. A perfect partnership, in our opinion.

Also, we still have $149 credits for free 1-year memberships to One Medical's service. If you live in Berkeley, SF, NY, Boston, Chicago, LA, or DC and are looking for a good primary care doctor, check out their website, and if you think it's a good fit for you, take them up on their promotional offer with this link: http://bit.ly/1fnRHrH (expires 4/9/14).

Comment author: Alex_Altair 03 October 2013 08:16:54PM 1 point

The material covered in Causality is more like a subset of that in PGM. PGM is like an encyclopedia, and Causality is a comprehensive introduction to one application of PGMs.

Comment author: Louie 05 October 2013 08:36:42AM 1 point

Thanks. That was what I thought, but I haven't read Causality yet.

Comment author: cousin_it 30 September 2013 06:31:35PM *  5 points

I noticed that the course list doesn't cover several topics that are popular on LW. Some suggestions:

Game theory - Fudenberg and Tirole

K-complexity - Li and Vitanyi

Causality - Pearl

And maybe something on cryptography, but I don't know enough about it to recommend a good book.

Comment author: Louie 03 October 2013 07:52:49PM 0 points

Do you think Causality is a superior recommendation to Probabilistic Graphical Models?

Comment author: JonahSinick 22 July 2013 10:09:28PM 8 points

The links to Eliezer's Open Problems in FAI papers are broken.

Comment author: Louie 22 July 2013 10:58:21PM 6 points

Fixed. Thanks.

Comment author: CarlShulman 13 June 2013 06:50:06AM *  7 points

A question at this point I might ask is how good does the final estimate have to be?

First, there are multiple applications of accurate estimates.

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me sufficiently happier or healthier to earn and donate an extra $5 indulgence."

There are some people going around making the claim, based on the extreme low-ball cost estimates, that these veg ads would save human lives more cheaply than AMF by reducing food prices. With saner estimates, not so, I think.

Second, there's the question of flow-through effects, which presumably dominate in a total utilitarian calculation anyway, if that's what you're into. The animal experiences probably don't have much effect there, but people being vegetarian might have some, as could effects on human health, pollution, food prices, social movements, etc.

To address the total utilitarian question would require a different sort of evidence, at least in the realistic ranges.

Comment author: Louie 16 June 2013 10:24:35AM 1 point

"The unreasonably low estimates would suggest things like 'I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me sufficiently happier or healthier to earn and donate an extra $5 indulgence.' There are some people going around making the claim, based on the extreme low-ball cost estimates."

Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on Less Wrong.
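
A minimal sketch of the offset arithmetic behind this claim, with purely hypothetical placeholder numbers; the per-vegetarian-year cost and the suffering units below are assumptions for illustration, not estimates endorsed by either commenter:

    # Hypothetical back-of-envelope for the "$5 indulgence" argument.
    # Every number here is an illustrative placeholder.

    cost_per_vegetarian_year = 5.0    # assumed low-ball ad cost to create one vegetarian-year ($)
    harm_per_meat_eater_year = 1.0    # suffering from one person eating meat for a year (arbitrary units)
    harm_averted_per_veg_year = 1.0   # suffering averted by one vegetarian-year (same units)

    donation = 5.0  # dollars donated to veg ads alongside continued meat-eating

    offset = (donation / cost_per_vegetarian_year) * harm_averted_per_veg_year
    net_harm = harm_per_meat_eater_year - offset

    # Under the low-ball cost estimate, net_harm <= 0: the $5 donation offsets
    # a year of meat-eating. Under a cost estimate ten times higher, it would not.
    print(f"net change in suffering: {net_harm:+.2f} units")

The disagreement in the thread is entirely about which value of cost_per_vegetarian_year is defensible; the arithmetic itself is not in dispute.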
