Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: gwillen 24 October 2017 10:47:21PM 0 points [-]

But how did she take it / how did she respond? Provide us evidence in favor of your approach. :-)

(FWIW, I tend to have similar kinds of thoughts when people communicate with me less than I'd like -- less than I'd communicate in their place -- and I think an important insight is that not everybody has those thoughts. In my case, I think there's a convincing case that I have an anxious attachment style, and it helps me to reflect on the fact that there's a lot more happening in my head than happening in reality, in these cases.)

Comment author: justinpombrio 26 July 2017 10:31:46PM 3 points [-]

What is your goal? Type theory sits at the intersection of programming languages and logic. If you care about programming languages and type systems, read TAPL (Pierce's Types and Programming Languages).

If you care about type theory purely as a logic, I don't have an obvious recommendation, but could point you at some material.

(Programming Languages researcher)

Comment author: gwillen 27 July 2017 01:09:13AM 1 point [-]

Seconding TAPL; it was the textbook for the type theory course I took in college, and it's top-notch.

Comment author: gwillen 05 June 2017 03:59:26AM *  6 points [-]

I think even besides the just world hypothesis, there's a statistical thing at work in the blondes example, which has a name that I am totally unable to find right now.

If you imagine (just as a simple example) that most people you interact with regularly are providing services to you commercially, and therefore have jobs; and that, further, getting a job requires being a strong applicant on at least one of various axes (e.g. a more attractive person can get a job with less intelligence, and vice versa), then you will find those traits spuriously inversely correlated in the population of job-havers, due to a selection effect whose name I can't remember.

EDIT: https://en.wikipedia.org/wiki/Berkson%27s_paradox

Comment author: drethelin 31 May 2017 05:59:20AM 9 points [-]


Comment author: gwillen 03 June 2017 08:45:15AM 0 points [-]

I don't disagree that downvotes are valuable, but I think what was needed here was moderator action. It's much too late for that now, though. (And I'm not blaming the moderators -- I've been in their shoes and their job is very difficult. There would have been plenty of blame heaped on them if they'd done what I think is the right thing.)

Comment author: Duncan_Sabien 26 May 2017 10:49:48AM 2 points [-]

The number of excuses for not being present is basically the most restrictive list you'd expect—if you're literally not in town, if you're sick, if you're attending to a personal tragedy. The idea is not to make the house anyone's first priority, it's to make it something like everyone's third priority (but actually above all but a couple of things).

So, no missing exercise because of a party, no missing it because you kinda need to work late, etc. Maybe missing for a once-in-a-year opportunity like a talk or a concert that you've been looking forward to for ages, with specific recompense to your housemates for the cost imposed by your absence? But in short, it's the thing that other stuff has to work around, not vice-versa.

Comment author: gwillen 26 May 2017 07:11:16PM 2 points [-]

OK, this sounds quite a bit less authoritarian than I was picturing. I had basically expected that you were planning to require this to be essentially everyone's first priority, tied with paid employment at best, and even then requiring that the paid employment take forms that don't conflict with the experiment. (I had definitely framed it that way in my head when I was asking my other question in this thread.) I don't know if I'm the only one.

Comment author: Duncan_Sabien 26 May 2017 11:06:14AM 1 point [-]

Y'know, that was the section I was least confident in. I think I'm updating my assertion to something like "will have logged an initial 20 hours, enough to understand the territory and not feel identity-blocked from moving forward if desired."

I suspect you're looking at at least 100 hours to even begin to be competent to do informal contract work in any of those fields, and probably more like 1000+ hours of training. Some of them require certification as well.

Comment author: gwillen 26 May 2017 07:06:55PM 1 point [-]

I was assuming "fundamentals of" didn't imply getting the skill to the point that one actually would be employable with it, just that one would get enough of the basics to do the skill and continue to practice it. That level seems eminently achievable. The greater level does seem challenging.

Comment author: Duncan_Sabien 26 May 2017 10:59:54AM *  11 points [-]

I think the main issue here is culture. Like, I agree with you that I think most members of the rationalsphere wouldn't do well in a military bootcamp, and I think this suggests a failing of the rationalist community—a pendulum that swung too far, and has weakened people in a way that's probably better than the previous/alternative weakness, but still isn't great and shouldn't be lauded. I, at least, would do fine in a military bootcamp. So, I suspect, would the rationalists I actually admire (Nate S, Anna S, Eli T, Alex R, etc). I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also. There's something in there about being able to draw on a bank of strength/go negative temporarily/have meta-level trust that you can pull through/not confuse pain with damage/not be cut off from the whole hemisphere of strategies that require some amount of battering.

It makes sense to me that our community's allergic to it—many people entered into such contexts before they were ready, or with too little information, or under circumstances where the damage was real and extreme. But I think "AVOID AT ALL COSTS! RED FLAG! DEONTOLOGICAL REJECTION!" is the wrong lesson to take from it, and I think our community is closer to that than it is to a healthy, carefully considered balance.

Similarly, I think the people-being-unreliable thing is a bullshit side effect/artifact of people correctly identifying flexibility and sensitivity-to-fluctuating-motivation as things worth prioritizing, but incorrectly weighting the actual costs of making them the TOP priorities. I think the current state of the rationalist community is one that fetishizes freedom of movement and sacrifices all sorts of long-term, increasing-marginal-returns sorts of gains, and that a few years from now, the pendulum will swing again and people will be doing it less wrong and will be slightly embarrassed about this phase.

(I'm quite emphatic about this one. Of all the things rationalists do, this one smacks the most of a sort of self-serving, short-sighted immaturity, the exact reason why we have the phrase "letting the perfect be the enemy of the good.")

I do think Problem 4 can probably be solved incrementally/with a smaller intervention, but when I was considering founding a house, one of my thoughts was "Okay, good—in addition to all the other reasons to do this, it'll give me a context to really turn a bazooka on that one pet peeve."

Comment author: gwillen 26 May 2017 06:32:18PM 1 point [-]

Thank you for your thoughtful response!

Comment author: gwillen 26 May 2017 10:26:37AM 7 points [-]

I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it's even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit for myself in solving Problem 2, I think.)

But... I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it's great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably "has the means to deal with needing to leave if the project doesn't work out" is a screening factor.)

It's possible this is just a fact about me, more than about the project. But I don't have the sense that a lot of other members of the rationalosphere would well tolerate, say, an actual military boot camp environment, which feels a lot like the direction this is aimed. It's possible I'm misunderstanding the degree of control you / the project expects to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a "comfort zone expansion" exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn't like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I'm curious what you think about it. It seems hard to imagine I could go 6 months of committing to try to perfectly execute all the stated rules plus one experimental norm per week without ending up in at least one situation where following the rules felt unsafe.

Separately, I'm interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the "desperately hoping nobody will bail after the third meeting" thing a lot before, but usually the context is "a bunch of people vaguely want to get a thing done but nobody has really committed to it", in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won't give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it's the only way?

Comment author: gwillen 16 May 2017 09:48:18PM 2 points [-]

We don't have the funding to make a movie which becomes a cult classic.

Maybe? Surely we don't have to do the whole thing ourselves, right -- AI movies are hip now, so we probably don't need to fund an entire movie on our own. Could we promote "creation of fiction that sends a useful message" as an Effective Career? :-)

Comment author: gwillen 09 April 2017 01:56:41AM 4 points [-]

I am interested in Project Hufflepuff, disappointed I'm going to miss the unconference (but with most options being weekdays it was almost inevitable), but following closely to see what other opportunities come up for me to be involved.
