Comment author: jsteinhardt 10 December 2016 10:10:48AM 7 points

Thanks for posting this, I think it's good to make these things explicit even if it requires effort. One piece of feedback: I think someone who reads this who doesn't already know what "existential risk" and "AI safety" are will be confused (they suddenly show up in the second bullet point without being defined, though it's possible I'm missing some context here).

Comment author: Qiaochu_Yuan 30 November 2016 08:20:53PM 4 points

All of this, and also, there are strategic considerations on GiveWell's side: they want to be able to offer recommendations that they can defend publicly to their donor pool, which is filled with a particular mix of people looking for a particular kind of charity recommendation out of GiveWell. Directly comparing MIRI to more straightforward charities like AMF dilutes GiveWell's brand in a way that would be strategically a bad idea for them, and these sorts of considerations are part of the reason why OpenPhil exists:

We feel it is important to start separating the GiveWell brand from the Open Philanthropy Project brand, since the latter is evolving into something extremely different from GiveWell’s work identifying evidence-backed charities serving the global poor. A separate brand is a step in the direction of possibly conducting the two projects under separate organizations, though we aren’t yet doing that (more on this topic at our overview of plans for 2014 published earlier this year).

Comment author: jsteinhardt 01 December 2016 02:59:20AM 2 points

I don't think you are actually making this argument, but this comes close to an uncharitable view of GiveWell that I strongly disagree with, which goes something like "GiveWell can't recommend MIRI because it would look weird and be bad for their brand, even if they think that MIRI is actually the best place to donate to." I think GiveWell / OpenPhil are fairly insensitive to considerations like this and really just want to recommend the things they actually think are best independent of public opinion. The separate branding decision seems like a clearly good idea to me, but I think that if for some reason OpenPhil were forced to have inseparable branding from GiveWell, they would be making the same recommendations.

Comment author: Lightwave 29 November 2016 10:41:25PM 9 points

Funny you should mention that...

AI risk is one of the two main focus areas for the Open Philanthropy Project this year, which GiveWell is part of. You can read Holden Karnofsky's Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.

They consider AI risk to rank high enough on importance, neglectedness, and tractability (their three main criteria for choosing focus areas) to be worth prioritizing.

Comment author: jsteinhardt 30 November 2016 04:34:17AM 5 points

Also like: here is a 4000-word evaluation of MIRI by OpenPhil. ???

Comment author: ChristianKl 18 July 2016 03:29:14PM 0 points

What do you actually want to do with your life? There are careers, such as politics, where personal connections made during university years are very important.

There are other careers, such as starting a startup, where personal connections with high-status people might not be central; many YC founders don't have them.

Either there's some sort of self-selection, or do graduates from there have better prospects than graduates of 'University of X, YZ'?

Why "either or"?

Comment author: jsteinhardt 18 July 2016 05:31:28PM 1 point

Wait what? How are you supposed to meet your co-founder / early employees without connections? College is like the ideal place to meet people to start start-ups with.

Comment author: Gunnar_Zarncke 13 July 2016 10:55:11PM -2 points

You seem to think that people who are not completely satiated are automatically cranky. That doesn't match my observation.

Also, you may have multiple dishes. For example, we mostly start with a collaboratively prepared soup, which is thereby the right size by construction. Later we have some snacks, sweets, or fruit: first the fresh ones, then packaged ones if needed.

Comment author: jsteinhardt 14 July 2016 01:17:27AM 1 point

I don't think I need that for my argument to work. My claim is that if people get, say, less than 70% of a meal's worth of food, an appreciable fraction (say at least 30%) will get cranky.

Comment author: Gunnar_Zarncke 13 July 2016 07:35:14PM -1 points

But there is a difference between having an amount appropriate to avoid crankiness and more than can be eaten.

Comment author: jsteinhardt 13 July 2016 09:01:50PM 1 point

But like, there's variation in how much food people will end up eating, and at least some of that is not variation that you can predict in advance. So unless you have enough food that you routinely end up with more than can be eaten, you are going to end up with a lot of cranky people a non-trivial fraction of the time. You're not trying to peg production to the mean consumption, but (e.g.) to the 99th percentile of consumption.
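The claim above can be made concrete with a small simulation showing the gap between pegging production to mean consumption and pegging it to a high percentile. This is only an illustrative sketch: the normal per-guest consumption model, the 30 guests, and the 15% margin are invented assumptions, not numbers from the thread.

```python
import random

random.seed(0)

def food_runs_out(n_guests, servings_prepared, mean=1.0, sd=0.3):
    """Return True if simulated total demand exceeds what was prepared.

    Per-guest consumption is drawn from a normal distribution,
    truncated at zero -- an illustrative assumption, not data.
    """
    demand = sum(max(0.0, random.gauss(mean, sd)) for _ in range(n_guests))
    return demand > servings_prepared

n_guests = 30
trials = 2000

# Pegging production to mean demand: the food runs out
# roughly half the time.
short_mean = sum(food_runs_out(n_guests, n_guests * 1.0) for _ in range(trials))

# Adding a ~15% margin (roughly the 99th percentile of total demand
# under these assumed parameters): shortfalls become rare.
short_margin = sum(food_runs_out(n_guests, n_guests * 1.15) for _ in range(trials))

print(f"shortfall rate at mean provisioning: {short_mean / trials:.2f}")
print(f"shortfall rate with ~15% margin:     {short_margin / trials:.2f}")
```

Because per-guest variation partly averages out across the group, a modest margin over the mean is enough to cover the 99th percentile here; with fewer guests or more variable appetites, the required margin grows.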

Comment author: MrMind 13 July 2016 08:16:56AM 1 point

That calories are used as social lubricant irks me a lot. I understand why it was so in the past, but we live in a world filled to the brim with food; do we really need tens of thousands of calories at every social gathering?
The answer is obviously not; indeed, it would be beneficial to lower the amount circulating... But as Lumifer spotted, and as wannabe rationalists often overlook, what appears as waste and irrationality is actually a situation optimized for status.
Ignoring status is almost always a bad idea, BUT: we can always treat it as just another constraint.
Given that we need to optimize for both status and waste reduction, what could we do?

  • coordinate with a charity to pick up the leftovers
  • use food that can be easily refrigerated and consumed gradually later
  • serve food in stages, so that variety masks lack of abundance (and pressures people into eating leftovers)
  • repackage leftovers and offer them as parting gifts...

These are just from a less-than-five-minute brainstorming session; I'm sure someone invested in this would come up with much more interesting and creative ideas.

Comment author: jsteinhardt 13 July 2016 03:45:52PM 2 points

I don't think this is really a status thing, more a "don't be a dick to your guests" thing. Many people get cranky if they are hungry, and putting 30+ cranky people together in a room is going to be a recipe for unpleasantness.

Comment author: Jiro 06 July 2016 04:06:56AM 6 points

This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing

Don't spend your idiosyncrasy credits frivolously.

Comment author: jsteinhardt 06 July 2016 06:58:50AM 2 points

I don't really think this is spending idiosyncrasy credits... but maybe we hang out in different social circles.

Comment author: username2 28 April 2016 09:58:42AM 1 point

I don't like this idea, but please, people, do not downvote Daniel just because you disagree. The downvote button is not for disagreement; it's for comments that don't add anything to the discussion.

Comment author: jsteinhardt 29 April 2016 06:26:53AM 1 point

I assume at least some of the downvotes are from Eugene sockpuppets (he tends to downvote any suggestions that would make it harder to do his trolling).

Comment author: Vika 30 January 2016 04:59:54AM 9 points

The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.

Okay, but surely it’s still important to think now about the eventual consequences of AI. - Absolutely. We ought to be talking about these things.

Comment author: jsteinhardt 30 January 2016 08:01:43AM 13 points

+1. To go even further, I would add that it's unproductive to think of these researchers as being on anyone's "side". These are smart, nuanced people, and rounding their comments down to a specific agenda is a recipe for misunderstanding.
