Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: gwillen 26 May 2017 10:26:37AM 1 point [-]

I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it's even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit from myself in solving Problem 2, I think.)

But... I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it's great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably "has the means to deal with needing to leave if the project doesn't work out" is a screening factor.)

It's possible this is just a fact about me, more than about the project. But I don't have the sense that many other members of the rationalosphere would tolerate, say, an actual military boot camp environment well, which feels a lot like the direction this is aimed. It's possible I'm misunderstanding the degree of control you / the project expect to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a "comfort zone expansion" exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn't like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I'm curious what you think about it. It seems hard to imagine I could spend 6 months committed to trying to perfectly execute all the stated rules, plus one experimental norm per week, without ending up in at least one situation where following the rules felt unsafe.

Separately, I'm interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the "desperately hoping nobody will bail after the third meeting" thing a lot before, but usually the context is "a bunch of people vaguely want to get a thing done but nobody has really committed to it", in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won't give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it's the only way?

Comment author: gwillen 16 May 2017 09:48:18PM 2 points [-]

We don't have the funding to make a movie which becomes a cult classic.

Maybe? But surely we don't have to do the whole thing ourselves -- AI movies are hip now, so we probably wouldn't need to fund an entire movie. Could we promote "creation of fiction that sends a useful message" as an Effective Career? :-)

Comment author: gwillen 09 April 2017 01:56:41AM 4 points [-]

I am interested in Project Hufflepuff, disappointed I'm going to miss the unconference (but with most options being weekdays it was almost inevitable), but following closely to see what other opportunities come up for me to be involved.

Comment author: gwillen 25 March 2017 11:13:03PM 0 points [-]

The obvious next step seems to be a fork of this extension that doesn't restrict itself to legal sources. That would make it a hell of a lot more useful.

Comment author: gwillen 17 March 2017 08:06:30AM 5 points [-]

This is interesting and I am interested in it. (I live in the far reaches of the South Bay, which makes my interest maybe less relevant than it could be.) I see a few major sticking points.

  • If not everyone is paying their own way, a sticking point is the arrangement of who pays how much, accounting for the fact that individual people's desire to pay for individual other people may change over time, and people's financial situations may change over time, and kicking people out of their housing on short notice is bad, and housing in the bay area is already very expensive so the prospect of paying a premium to subsidize others, especially unspecified others, may be unpalatable.
  • As you say, dispute resolution: it will be necessary to regulate people's behavior, and it will sometimes be necessary to expel people. This is the usual problem of expelling people from communities -- which is already so hard that communities typically handle it poorly and are sometimes destroyed by either doing it or failing to do it -- except that money and people's housing will be at stake in this case, which not only raises the emotional stakes significantly (as if they weren't bad enough), it adds financial and maybe legal stakes as well.
In response to LessWrong Discord
Comment author: RyanCarey 13 March 2017 09:35:54AM 4 points [-]
Comment author: gwillen 13 March 2017 09:36:06PM 6 points [-]

Or even more oddly on point, today's XKCD:


Comment author: dglukhov 10 March 2017 09:40:47PM 2 points [-]

Low-quality thought-vomiting, eh?

I'll try to keep it civil. I get the feeling the site has drifted far from its founding goals and members, and now works mainly to stratify its current readership: either pay into a training seminar through one of the institutions advertised above, or be left behind to bicker over minutiae in an underinformed fashion. That said, nobody can doubt the usefulness of personal study, though it is slow and unguided.

I'm suspicious of the current motives here, and of the atmosphere this site provides. I guess it can't be helped, since MIRI and CFAR are at the mercy of needing revenue just like any other institution. So where does one draw the line between helpful guidance and malevolent exploitation?

Comment author: gwillen 11 March 2017 11:48:33PM *  1 point [-]

Can you please clarify whose motives you're talking about, and generally be a lot more specific with your criticisms? Websites don't have motives. CFAR and MIRI don't run this website although of course they have influence. (In point of fact I think it would be more realistic to say nobody runs this website, in the sense that it is largely in 'maintenance mode' and administrator changes/interventions tend to be very minimal and occasional.)

Comment author: Bound_up 11 March 2017 02:53:44PM 4 points [-]

On the Value of Pretending

Actors don't break down the individual muscle movements that go into expression; musicians don't break down the physical properties of the notes or series of notes that produce expression.

They both simulate feeling in order to express it. They pretend to feel it. If we want to harness confidence, amiability, and energy, maybe there's some value in pretending and simulating (what would a "nice person" do?).

Cognitive Behavioral Therapy teaches that our self-talk strongly affects us, counseling us not to say things like "Oh, I suck." Positive self-talk ("I can do this") may be worth practicing.

I'm not sure why, but this feels not irrational, but highly not-"rational" (against the culture associated with "rationality"). This also intrigues me...

Comment author: gwillen 11 March 2017 11:45:36PM *  2 points [-]

In this vein, I have had some good results from the simple expedient of internally-saying "I want to do this" instead of "I have to do this" with regards to things that system 2 wants to do (when system 1 feels reluctant), i.e. akratic things. I have heard this reframing suggested before but I feel like I get benefit from actually thinking the "I want" verbally.

Comment author: gwillen 19 February 2017 01:11:10AM 0 points [-]

Hm, my fox (https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox) and satisficer instincts really, really don't like the recommendation to 'unwind partial funding'. (I feel like there's a lot of stuff mixed into this post, but I am only talking about partial funding issues.) I thought I had seen something similar to the rough argument I'm about to make in a GiveWell/OPP blog post, but it's not in the one you're writing about, so I'm not sure whether I did or not. If I did, I am probably partly plagiarizing it, badly.

The argument basically goes this way: I think it's very often the case that a mixed strategy is a good idea in practice, even if you're totally sure that one of two pure strategies must be superior, but you can't tell with any confidence which one it is.

It seems to me that you're arguing it's better to pick whichever of the two pure strategies -- full funding or no funding -- seems more likely than not to be superior, rather than do some of each. (It seems like in reality you think fully funding is a clear winner, but in 'Unwind partial funding' you seem to allow that either is possible -- just not anything in between.) In fact, I see in a longer post you state the notion that "GiveWell thinks that its recommendation underperformed opportunity cost, and therefore did net harm." As far as I can tell from my sense of your meaning, this is a perfectly utilitarian position, but the idea that underperforming opportunity cost is a net harm implies that any possible action that GiveWell takes with imperfect information is doing vast, tremendous, incalculable amounts of net harm. No matter how much good they do relative to the state of the world if they didn't exist, they are doomed to do huge quantities of net harm, relative to the world where they have certainty about what the optimal choices actually are.
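The hedging intuition here can be made concrete with a toy model. This is purely an illustrative sketch -- the strategies, the probability, and the `gap` parameter are hypothetical, not anything from GiveWell's actual analysis: if one of two pure strategies (say, full funding vs. no funding) is certainly better by some margin, but you only have a probability `p_a_better` of knowing which, a mixed allocation caps your worst-case shortfall even though it can never be optimal in hindsight.

```python
def expected_regret(f, p_a_better, gap=1.0):
    """Expected shortfall versus the (unknown) best pure strategy,
    when fraction f of resources goes to strategy A and 1 - f to B,
    and the better strategy beats the worse one by `gap`."""
    regret_if_a_better = (1 - f) * gap  # the share we "wasted" on B
    regret_if_b_better = f * gap        # the share we "wasted" on A
    return (p_a_better * regret_if_a_better
            + (1 - p_a_better) * regret_if_b_better)

def worst_case_regret(f, gap=1.0):
    """Regret if nature picks the answer adversarially."""
    return max(f, 1 - f) * gap

# With p = 0.5, every allocation has the same expected regret (0.5 * gap),
# so expected value alone can't distinguish pure from mixed strategies...
print(expected_regret(1.0, 0.5))        # pure bet on A
print(expected_regret(0.5, 0.5))        # 50/50 hedge

# ...but the hedge halves the worst-case regret relative to a pure bet.
print(worst_case_regret(1.0))           # pure bet risks the full gap
print(worst_case_regret(0.5))           # hedge risks only half of it
```

Under these (made-up) numbers, the pure strategy is only preferable if you trust your estimate of which side is better; the hedge is the regret-minimizing choice when you don't.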

This doesn't seem like a very encouraging position to take, in a messy world where human beings with extremely limited knowledge and optimization capacity are slowly groping their way towards doing good.

So I bristle at the idea that, because GiveWell is concerned that Good Ventures fully-funding their charities might cause harm -- but is certain that Good Ventures not funding those charities at all would cause harm -- they should be subject to moral outrage for some sort of dishonesty because they chose to hedge their bets.

Comment author: gwillen 22 December 2016 07:09:54AM *  1 point [-]

Hm, I did notice a child -- I suspect and presume the same one you mean -- who made a number of loud comments during the performance. (That one couldn't have been Alicorn's, who is too young to make comments.) At least for the comments that happened while I was on stage with choir, I felt like they got a laugh from the audience, and I found the whole thing mildly entertaining. The rest of the time I didn't really notice them well enough to recall details. But I can totally see how they could be distracting and bothersome to others.

I fear, though, that -- if you feel that the event was truly 'ruined' by this -- it may be hard to find sufficient common ground between you and child-havers for both to be happy attending the same event. As a non-child-haver myself (and a non-child-wanter) who doesn't especially dislike children, my suspicion is that you are a significant outlier on the "degree of annoyance" spectrum? But I now find myself interested in data on this.

(EDIT: I just realized that it's possible that the child was much closer to you than to me, so we might have had different experiences that might color my views differently if I were sitting where you were.)

Comment author: gwillen 24 December 2016 09:01:09PM 1 point [-]

Self-reply: After reading other comments and replies to me, I'm updating in the direction of believing that I'm unusually tolerant of child noises, for someone not possessed of children.
