
Comment author: Fluttershy 21 April 2017 10:32:03PM *  2 points [-]

Some troubling relevant updates on EA Funds from the past few hours:

  • On April 20th, Kerry Vaughan from CEA published an update on EA Funds on the EA Forum. His post quotes his earlier post announcing the launch of EA Funds, which said:

We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.

  • In short, it was promised that a certain level of community support would be required to justify the continuation of EA Funds beyond the first three months of the project. In an effort to communicate that such a level of support existed, Kerry commented:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

  • Around 11 hours ago, I pointed out that this claim was patently false.
  • (I stand corrected by the reply to this comment, which addressed this bullet point: the original post wasn't hidden from the EA Forum; it was only hidden to me because I had downvoted it, and I could see it again after logging out of my EA Forum account.)
  • Given that the EA Funds project has taken significant criticism, failed to implement a plan to address it, acted as if its continuation were justified by the absence of any such criticism, and, by doing all of this in a way that wasn't even plausibly deniable, signaled its openness to being deceptive in the future, my personal opinion is that there is not sufficient reason to allow EA Funds to continue operating past its three-month trial period, and additionally that I have less reason to trust other projects run by CEA in light of this debacle.
Comment author: jsteinhardt 21 April 2017 11:35:42PM 5 points [-]

When you downvote something on the EA forum, it becomes hidden. Have you tried viewing it while not logged in to your account? It's still visible to me.

Comment author: tristanm 20 April 2017 04:23:56PM 1 point [-]

Hmm. I'm reading OPP's grant write-up for MIRI from August 2016, and I think in that context I can see why it seems a little odd. For one thing, they say:

this research agenda has little potential to decrease potential risks from advanced AI in comparison with other research directions that we would consider supporting.

This in particular strikes me as strange because 1) if MIRI's approach can be summarized as "finding method(s) to ensure guaranteed safe AI and proving them rigorously", then technically speaking that approach should have nearly unlimited "potential", although I suppose it could be argued that progress would be made slowly compared to the speed at which practical AI improves, and 2) "other research directions" is quite vague. Can they point to where these other directions are outlined, to a summary of accomplishments in those directions, and to why they feel those directions have more potential?

My feeling is that in order to feel that MIRI's overall approach lacks potential, given that all current approaches to AI safety are fairly speculative and that there is no general consensus on how the problem should specifically be approached, then the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly. I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.

All of what I've said above is highly speculative and is based on my current, fairly uninformed outsider view.

Comment author: jsteinhardt 20 April 2017 09:03:58PM 1 point [-]

then the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly.

I don't think there is consensus among technical advisors on what directions are most promising. Also, Paul has written substantially about his preferred approach (see here for instance), and I've started to do the same, although so far I've been mostly talking about obstacles rather than positive approaches. But you can see some of my writing here and here. Also my thoughts in slide form here, although those slides are aimed at ML experts.

Comment author: freyley 17 March 2017 11:10:59AM *  17 points [-]

Cohousing, in the US, is the term of art. I spent a while about a decade ago attempting to build a cohousing community, and it's tremendously hard. In the last few months I've moved, with my kids, into a house on a block with friends with kids, and I can now say that it's tremendously worthwhile.

Cohousing communities in the US are typically structured in one of three ways:

  • Condo buildings, with each unit sold as a condominium
  • Condo/apartment buildings, with each unit sold as a co-op share
  • Separate houses.

The third one doesn't really work in major cities unless you get tremendously lucky.

The major problem with the first approach is that, because of the Fair Housing Act (passed in the 1960s because at the time realtors literally would not show black people houses in white neighborhoods), you cannot pick your buyers. Any attempt to enforce rationalists moving in is illegal. Cohousing communities get around this by keeping participation in community activities voluntary, and by accepting that they'll get free riders and have to live with it. Some cohousing communities I know of have had major problems with investors deciding cohousing is a good investment, buying condos, and renting them out to whoever while they wait for the community to make their investment more valuable.

The major problem with the co-op share approach is that, outside of New York City, it's tremendously hard to get a loan to buy a co-op share. Very few banks offer such loans, and usually at terrible interest rates.

Some places have gotten around this by having a rich benefactor who buys a big building and rents it out, but then individuals lose out on the financial benefits of homeownership. In addition, it is probably also illegal under the Fair Housing Act to choose your renters if there are separate units.

The other difficulties with cohousing are largely around community building, which you've probably seen plenty of with rationalist houses, so I won't belabor the point on that.

Comment author: jsteinhardt 20 March 2017 02:31:47AM 3 points [-]

Any attempt to enforce rationalists moving in is illegal.

Is this really true? Based on my experience (not any legal expertise, just seeing what people generally do and consider fine), I think in the Bay Area the following are all okay:

  • Only listing a house to your friends / social circle.
  • Interviewing people who want to live with you and deciding based on how much you like them.

The following are not okay:

  • Having a rule against pets that doesn't have an exception for seeing-eye dogs.
  • Explicitly declining to take someone as a housemate solely on the basis of a protected trait like race, etc. (though gender seems to be fine?).
Comment author: jsteinhardt 10 December 2016 10:10:48AM 8 points [-]

Thanks for posting this; I think it's good to make these things explicit even if it requires effort. One piece of feedback: I think someone who reads this without already knowing what "existential risk" and "AI safety" are will be confused (they suddenly show up in the second bullet point without being defined, though it's possible I'm missing some context here).

Comment author: Qiaochu_Yuan 30 November 2016 08:20:53PM 4 points [-]

All of this, and also there are strategic considerations on GiveWell's side: they want to be able to offer recommendations that they can defend publicly to their donor pool, which is filled with people looking for a particular kind of charity recommendation from GiveWell. Directly comparing MIRI to more straightforward charities like AMF dilutes GiveWell's brand in a way that would be strategically bad for them, and these sorts of considerations are part of the reason why OpenPhil exists:

We feel it is important to start separating the GiveWell brand from the Open Philanthropy Project brand, since the latter is evolving into something extremely different from GiveWell’s work identifying evidence-backed charities serving the global poor. A separate brand is a step in the direction of possibly conducting the two projects under separate organizations, though we aren’t yet doing that (more on this topic at our overview of plans for 2014 published earlier this year).

Comment author: jsteinhardt 01 December 2016 02:59:20AM 2 points [-]

I don't think you are actually making this argument, but this comes close to an uncharitable view of GiveWell that I strongly disagree with, which goes something like "GiveWell can't recommend MIRI because it would look weird and be bad for their brand, even if they think that MIRI is actually the best place to donate to." I think GiveWell / OpenPhil are fairly insensitive to considerations like this and really just want to recommend the things they actually think are best independent of public opinion. The separate branding decision seems like a clearly good idea to me, but I think that if for some reason OpenPhil were forced to have inseparable branding from GiveWell, they would be making the same recommendations.

Comment author: Lightwave 29 November 2016 10:41:25PM *  9 points [-]

Funny you should mention that...

AI risk is one of the two main focus areas this year for the Open Philanthropy Project, which grew out of GiveWell. You can read Holden Karnofsky's Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.

They consider AI risk to rank high enough on importance, neglectedness, and tractability (their three main criteria for choosing what to focus on) to be worth prioritizing.

Comment author: jsteinhardt 30 November 2016 04:34:17AM 5 points [-]

Also like: here is a 4000-word evaluation of MIRI by OpenPhil. ???

Comment author: ChristianKl 18 July 2016 03:29:14PM 0 points [-]

What do you actually want to do with your life? There are careers, like politics, where the personal connections gathered during university years are very important.

There are other careers, such as starting a startup, where personal connections with high-status people might not be central; a lot of YC founders don't have them.

Either there's some sort of self-selection, or do graduates from there have better prospects than graduates of 'University of X, YZ'?

Why "either or"?

Comment author: jsteinhardt 18 July 2016 05:31:28PM 1 point [-]

Wait what? How are you supposed to meet your co-founder / early employees without connections? College is like the ideal place to meet people to start start-ups with.

Comment author: Gunnar_Zarncke 13 July 2016 10:55:11PM -2 points [-]

You seem to think that people who are not completely satiated are automatically cranky. That doesn't match my observation.

Also, you may have multiple dishes. For example, we mostly start with a collaboratively prepared soup, which will therefore be the right size by construction. Later we have some snacks, sweets, or fruit: first the fresh ones, then packaged ones if needed.

Comment author: jsteinhardt 14 July 2016 01:17:27AM 1 point [-]

I don't think I need that for my argument to work. My claim is that if people get, say, less than 70% of a meal's worth of food, an appreciable fraction (say at least 30%) will get cranky.

Comment author: Gunnar_Zarncke 13 July 2016 07:35:14PM -1 points [-]

But there is a difference between having enough food to avoid crankiness and having more than can be eaten.

Comment author: jsteinhardt 13 July 2016 09:01:50PM 1 point [-]

But like, there's variation in how much food people will end up eating, and at least some of that is not variation that you can predict in advance. So unless you have enough food that you routinely end up with more than can be eaten, you are going to end up with a lot of cranky people a non-trivial fraction of the time. You're not trying to peg production to the mean consumption, but (e.g.) to the 99th percentile of consumption.
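To make the percentile point concrete, here is a minimal simulation sketch with made-up numbers (30 guests, each eating roughly one meal's worth with some unpredictable spread); the distribution and parameters are illustrative assumptions, not data:

```python
# A toy sketch (hypothetical numbers) of why you provision to a high
# percentile of total consumption rather than the mean: with 30 guests
# whose individual appetites vary, planning for the average total leaves
# you short roughly half the time.
import random

random.seed(0)
GUESTS = 30
TRIALS = 10_000

def total_demand():
    # Assume each guest eats Normal(1.0, 0.3) "meals", floored at 0.
    return sum(max(0.0, random.gauss(1.0, 0.3)) for _ in range(GUESTS))

totals = sorted(total_demand() for _ in range(TRIALS))
mean_demand = sum(totals) / TRIALS
p99_demand = totals[int(0.99 * TRIALS)]

short_if_mean = sum(t > mean_demand for t in totals) / TRIALS
short_if_p99 = sum(t > p99_demand for t in totals) / TRIALS

print(f"provision at mean ({mean_demand:.1f} meals): run out {short_if_mean:.0%} of the time")
print(f"provision at 99th ({p99_demand:.1f} meals): run out {short_if_p99:.0%} of the time")
```

In this toy model, provisioning at the 99th percentile costs only a bit more food than provisioning at the mean, but almost never leaves guests hungry.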

Comment author: MrMind 13 July 2016 08:16:56AM *  1 point [-]

That calories are used as social lubricant irks me a lot. I understand why it was so in the past, but we live in a world filled to the brim with food; do we really need tens of thousands of calories at any social gathering?
The answer is obviously not; indeed it would be beneficial to lower the amount circulating... But as Lumifer spotted and wannabe rationalists often overlook, what appears as waste and irrationality is actually a situation optimized for status.
Ignoring status is almost always a bad idea, BUT: we can always treat it as just another constraint.
Given that we need to optimize for status and waste reduction, what could we do?

  • coordinate with a charity to pick up the leftovers
  • use food that can be easily refrigerated and consumed gradually later
  • have food in stages, so that variety masks lack of abundance (and pressures people into eating leftovers)
  • repackage leftovers and offer them as parting gifts ...

These are just from a less-than-five-minute brainstorming session; I'm sure someone invested in this would come up with much more interesting and creative ideas.

Comment author: jsteinhardt 13 July 2016 03:45:52PM 2 points [-]

I don't think this is really a status thing, more a "don't be a dick to your guests" thing. Many people get cranky if they are hungry, and putting 30+ cranky people together in a room is going to be a recipe for unpleasantness.
