Comment author: Qiaochu_Yuan 27 May 2017 06:56:21PM 0 points [-]

Sure, but what I'd like to know is why Nisan thinks that difference is important in this case.

Comment author: jsteinhardt 27 May 2017 08:51:14PM 8 points [-]

Parts of the house setup pattern-match to a cult; cult members aren't good at realizing when they need to leave, but their friends can probably tell much more easily.

(I don't mean the above as negatively as it sounds connotatively, but it's the most straightforward way to say what I think is the reason to want external people. I also think this reasoning degrades gracefully with the amount of cultishness.)

Comment author: Qiaochu_Yuan 27 May 2017 08:56:58AM *  0 points [-]

This seems extreme. Do you not expect that each participant will already have at least one friend outside the house they can talk to about the house if things go poorly, without this needing to be an explicit policy? Or do you worry that things will go so poorly that this won't work for some reason? If so, can you share a more detailed model?

Comment author: jsteinhardt 27 May 2017 05:29:34PM 8 points [-]

I think there's a difference between a friend that one could talk to (if they decide to), and a friend tasked with the specific responsibility of checking in and intervening if things seem to be going badly.

Comment author: jsteinhardt 28 April 2017 04:52:09AM 1 point [-]

I feel like you're straw-manning scenario analysis. Here's an actual example of a document produced via scenario analysis: Global Trends 2035.

Comment author: Fluttershy 21 April 2017 10:32:03PM *  2 points [-]

Some troubling relevant updates on EA Funds from the past few hours:

  • On April 20th, Kerry Vaughan from CEA published an update on EA Funds on the EA Forum. His post quotes the previous post in which he introduced the launch of EA Funds, which said:

We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.

  • In short, it was promised that a certain level of community support would be required to justify the continuation of EA Funds beyond the first three months of the project. In an effort to communicate that such a level of support existed, Kerry commented:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

  • Around 11 hours ago, I pointed out that this claim was patently false.
  • (I stand corrected by the reply to this comment, which addressed this bullet point: the original post on which I had commented wasn't hidden from the EA Forum; I just needed to log out of my account to see it after having downvoted it.)
  • Given that the EA Funds project has taken significant criticism, failed to implement a plan to address it, acted as if its continuation was justified on the basis of not having received any such criticism, and signaled its openness to being deceptive in the future by doing all of this in a way that wasn't plausibly deniable, my personal opinion is that there is not sufficient reason to allow EA Funds to continue operating past its three-month trial period, and additionally that I have less reason to trust other projects run by CEA in light of this debacle.

Comment author: jsteinhardt 21 April 2017 11:35:42PM 5 points [-]

When you downvote something on the EA forum, it becomes hidden. Have you tried viewing it while not logged in to your account? It's still visible to me.

Comment author: tristanm 20 April 2017 04:23:56PM 1 point [-]

Hmm. I'm reading OPP's grant write up for MIRI from 8/2016 and I think in that context I can see why it seems a little odd. For one thing, they say:

this research agenda has little potential to decrease potential risks from advanced AI in comparison with other research directions that we would consider supporting.

This in particular strikes me as strange because: 1) if MIRI's approach can be summarized as "finding method(s) to ensure guaranteed safe AI and proving them rigorously", then technically speaking that approach should have nearly unlimited "potential", although I suppose it could be argued that progress would be made slowly compared to the speed at which practical AI improves; and 2) "other research directions" is quite vague. Can they point to where these other directions are outlined, summarize accomplishments in those directions, and explain why they feel those have better potential?

My feeling is that, given that all current approaches to AI safety are fairly speculative and there is no general consensus on how the problem should specifically be approached, in order to conclude that MIRI's overall approach lacks potential, the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly. I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.

All of what I've said above is highly speculative and is based on my current, fairly uninformed outsider view.

Comment author: jsteinhardt 20 April 2017 09:03:58PM 2 points [-]

then the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly.

I don't think there is consensus among technical advisors on what directions are most promising. Also, Paul has written substantially about his preferred approach (see here for instance), and I've started to do the same, although so far I've been mostly talking about obstacles rather than positive approaches. But you can see some of my writing here and here. Also my thoughts in slide form here, although those slides are aimed at ML experts.

Comment author: freyley 17 March 2017 11:10:59AM *  17 points [-]

Cohousing, in the US, is the term of art. I spent a while about a decade ago attempting to build a cohousing community, and it's tremendously hard. In the last few months I've moved, with my kids, into a house on a block with friends with kids, and I can now say that it's tremendously worthwhile.

Cohousings in the US are typically built in one of three ways:

  • Condo buildings, with each unit sold as a condominium
  • Condo/apartment buildings, each apartment sold as a coop share
  • Separate houses.

The third one doesn't really work in major cities unless you get tremendously lucky.

The major problem with the first approach is that, due to the Fair Housing Act of the 1960s, which was passed because at the time realtors literally would not show black people houses in white neighborhoods, you cannot pick your buyers. Any attempt to enforce rationalists moving in is illegal. Cohousings get around this by having voluntary arrangements, but also by accepting that they'll get free riders and have to live with it. Some cohousings I know of have had major problems with investors deciding cohousing is a good investment, buying condos, and renting them to whoever while they wait for the community to make their investment more valuable.

The major problem with the coop share approach is that, outside of New York City, it's tremendously hard to get a loan to buy a coop share. Very few banks do these, and usually at terrible interest rates.

Some places have gotten around this by having a rich benefactor who buys a big building and rents it out, but then individuals lose out on the financial benefits of homeownership. In addition, it is probably also illegal under the Fair Housing Act to choose your renters if there are separate units.

The other difficulties with cohousing are largely around community building, which you've probably seen plenty of with rationalist houses, so I won't belabor the point on that.

Comment author: jsteinhardt 20 March 2017 02:31:47AM 3 points [-]

Any attempt to enforce rationalists moving in is illegal.

Is this really true? Based on my experience (not any legal experience, just seeing what people generally do that is considered fine) I think in the Bay Area the following are all okay:

  • Only listing a house to your friends / social circle.
  • Interviewing people who want to live with you and deciding based on how much you like them.

The following are not okay:

  • Having a rule against pets that doesn't have an exception for seeing-eye dogs.
  • Explicitly deciding not to take someone as a house-mate only on the basis of some protected trait like race, etc. (but gender seems to be fine?).

Comment author: jsteinhardt 10 December 2016 10:10:48AM 8 points [-]

Thanks for posting this, I think it's good to make these things explicit even if it requires effort. One piece of feedback: I think someone who reads this who doesn't already know what "existential risk" and "AI safety" are will be confused (they suddenly show up in the second bullet point without being defined, though it's possible I'm missing some context here).

Comment author: Qiaochu_Yuan 30 November 2016 08:20:53PM 4 points [-]

All of this, and also, there are strategic considerations on GiveWell's side: they want to be able to offer recommendations that they can defend publicly to their donor pool, which is filled with a particular mix of people looking for a particular kind of charity recommendation out of GiveWell. Directly comparing MIRI to more straightforward charities like AMF dilutes GiveWell's brand in a way that would be strategically a bad idea for them, and these sorts of considerations are part of the reason why OpenPhil exists:

We feel it is important to start separating the GiveWell brand from the Open Philanthropy Project brand, since the latter is evolving into something extremely different from GiveWell’s work identifying evidence-backed charities serving the global poor. A separate brand is a step in the direction of possibly conducting the two projects under separate organizations, though we aren’t yet doing that (more on this topic at our overview of plans for 2014 published earlier this year).

Comment author: jsteinhardt 01 December 2016 02:59:20AM 2 points [-]

I don't think you are actually making this argument, but this comes close to an uncharitable view of GiveWell that I strongly disagree with, which goes something like "GiveWell can't recommend MIRI because it would look weird and be bad for their brand, even if they think that MIRI is actually the best place to donate to." I think GiveWell / OpenPhil are fairly insensitive to considerations like this and really just want to recommend the things they actually think are best independent of public opinion. The separate branding decision seems like a clearly good idea to me, but I think that if for some reason OpenPhil were forced to have inseparable branding from GiveWell, they would be making the same recommendations.

Comment author: Lightwave 29 November 2016 10:41:25PM *  9 points [-]

Funny you should mention that...

AI risk is one of the two main focus areas this year for the Open Philanthropy Project, which GiveWell is part of. You can read Holden Karnofsky's Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.

They consider AI risk to rank highly enough on importance, neglectedness, and tractability (their three main criteria for choosing what to focus on) to be worth prioritizing.

Comment author: jsteinhardt 30 November 2016 04:34:17AM 5 points [-]

Also like: here is a 4000-word evaluation of MIRI by OpenPhil. ???

Comment author: ChristianKl 18 July 2016 03:29:14PM 0 points [-]

What do you actually want to do with your life? There are careers, like politics, where personal connections gathered during university years are very important.

There are other careers, such as starting a startup, where personal connections with high-status people might not be central; a lot of YC founders don't have them.

Either there's some sort of self-selection, or do graduates from there have better prospects than graduates of 'University of X, YZ'?

Why "either or"?

Comment author: jsteinhardt 18 July 2016 05:31:28PM 1 point [-]

Wait what? How are you supposed to meet your co-founder / early employees without connections? College is like the ideal place to meet people to start start-ups with.
