
Comment author: lifelonglearner 25 April 2017 11:46:26PM 4 points [-]

Relevant info: I've volunteered at one CFAR workshop and hang out in the CFAR office periodically. The views here are my own models of how CFAR thinks, not official statements.

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.
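To make that discussion concrete, here is a minimal sketch of one possible data model such an interface might track. It is purely illustrative and assumes nothing about the actual mockup; every class and field name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Crux:
    """A belief such that, if its holder changed their mind about it,
    they would also change their mind about the top-level claim."""
    statement: str
    holder: str      # participant who offered this crux
    credence: float  # holder's probability that the statement is true, in [0, 1]

@dataclass
class DoubleCruxSession:
    claim: str                # the disagreement under discussion
    participants: List[str]
    cruxes: List[Crux] = field(default_factory=list)

    def double_cruxes(self) -> List[Tuple[Crux, Crux]]:
        """Pairs of cruxes on the same statement held by different
        participants: the shared crux the technique hunts for."""
        return [(a, b)
                for a in self.cruxes
                for b in self.cruxes
                if a.holder < b.holder and a.statement == b.statement]

# Usage: two participants who disagree register their candidate cruxes.
s = DoubleCruxSession(claim="School uniforms are good", participants=["A", "B"])
s.cruxes += [Crux("Uniforms reduce bullying", holder="A", credence=0.8),
             Crux("Uniforms reduce bullying", holder="B", credence=0.2)]
print(s.double_cruxes())  # the statement both sides would update on
```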

Related to the idea of a prep course, I'll be making a LW post in the next few days about my attempt to create a new sequence on instrumental rationality that is complementary to the sort of self-improvement CFAR does. That may be of interest to you.

Otherwise, I can say that at least at the workshop I was at, there was zero mention of AI safety from the staff. (You can read my review here.) It's my impression that there's a lot of cool stuff CFAR could be doing in tandem w/ their workshops, but they're time-constrained. Hopefully this becomes less so w/ their new hires.

I do agree that having additional scaffolding would be very good, and that's part of my motivation to start on a new sequence.

Happy to talk more on this as I also think this is an important thing to focus on.

Comment author: ThoughtSpeed 26 April 2017 12:47:55AM 1 point [-]

> For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Yep, you were one of the parties I was thinking of. Nice work! :D

Comment author: Lumifer 19 April 2017 12:56:39AM 2 points [-]

> I'm quite serious about this.

Are you? Well then, go make it happen.

Otherwise it sounds like entitled whining.

Comment author: ThoughtSpeed 25 April 2017 11:41:58PM *  3 points [-]

What I'm about to say is in the context of you being one of the most frequent commenters on this site.

> Otherwise it sounds like entitled whining.

That is really unfriendly to say; honestly, the word I want to use is "nasty," but that is probably hyperbolic/hypocritical. I'm not sure if you realize this, but a culture of macho challenging like this discourages people from participating. I think you and the several other commenters who set the baseline culture of this site should try to be more friendly. I have seen you in particular use a smiley before, so that's good, and you're probably a friendly person along many dimensions. But I want to emphasize how intimidating this can be to newcomers, or to people who are otherwise uncomfortable with what you probably intend as joshing around with LW friends. To you it may feel like you are pursuing less-wrongness, but to people who are more neurotic and/or less familiar with this forum it can feel like being hounded, even vicariously.

I do not want to pick on people I don't know but there are other frequent commenters who could use this message too.

Comment author: ThoughtSpeed 25 April 2017 11:07:30PM 3 points [-]
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like it was quite helpful in making me grok overconfidence bias and the internal process of adjusting my confidence in propositions downward (a sketch of the scoring idea behind such a game follows this list). Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (it seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I could do to prepare, and they said "you don't have to do anything." This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it were at a lower pacing/information-density/quality. I think the argument is something like 'you must empty your cup so you can learn the material,' but I'm not sure.
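As promised above, here is a minimal, hypothetical version of the kind of scoring loop a credence-calibration game could run. It illustrates how overconfidence becomes visible and correctable; the Brier-score rule and all names are my own choices, not CFAR's actual implementation.

```python
def brier_score(credence: float, outcome: bool) -> float:
    """Squared error between the stated probability and what happened.
    0.0 is perfect; flat 50% guessing earns 0.25 on every question."""
    return (credence - (1.0 if outcome else 0.0)) ** 2

def calibration_report(answers):
    """answers: list of (credence, was_correct) pairs from one session."""
    avg_credence = sum(c for c, _ in answers) / len(answers)
    hit_rate = sum(1 for _, ok in answers if ok) / len(answers)
    avg_brier = sum(brier_score(c, ok) for c, ok in answers) / len(answers)
    print(f"mean stated confidence: {avg_credence:.0%}")
    print(f"actual hit rate:        {hit_rate:.0%}")
    print(f"mean Brier score:       {avg_brier:.3f}")
    if avg_credence > hit_rate:
        print("=> overconfident: adjust stated credences downward")

# A player who says 90% but is right only 60% of the time:
calibration_report([(0.9, True), (0.9, False), (0.9, True),
                    (0.9, False), (0.9, True)])
```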

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI safety as a concern. Regardless of whether that's a motivator, I think their goals would be better served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps by using that as a pipeline for the more in-depth workshop).

Comment author: helldalgo 21 December 2016 04:35:36AM 1 point [-]

I've been doing this with an interpersonal issue. I guess that's getting resolved this week.

Comment author: ThoughtSpeed 25 April 2017 10:06:42PM 0 points [-]

Did it get resolved? :)

Comment author: ThoughtSpeed 25 April 2017 09:34:40PM 0 points [-]

I had asked someone how I could contribute, and they said there was a waitlist or whatever. As others have mentioned, I would recommend prioritizing maximal user involvement: try to iterate quickly and get as many eyeballs on it as you can, so you can see what works and what breaks. You can't control people.

Comment author: Elo 17 April 2017 04:30:59PM 12 points [-]

Hi, I know you mean well, but could you maybe talk to the people behind existing efforts before making a huge declaration like this? False rallying flags will just confuse everyone.

It would probably be difficult to know whom to ask, but as far as I can tell you didn't even try before posting.

Comment author: ThoughtSpeed 17 April 2017 07:01:42PM 6 points [-]

I do want to heap heavy praise on the OP for Just Going Out And Trying Something, but yes, consult with other projects to avoid duplication of effort. :)

Comment author: madhatter 10 April 2017 05:15:52PM 4 points [-]

Wow, that had for some reason never crossed my mind. That's probably a very bad sign.

Comment author: ThoughtSpeed 16 April 2017 05:00:20AM 0 points [-]

Honestly, it probably is. :) Not a bad sign as in you're a bad person, but a bad sign in that this is an attractor space of Bad Thought Experiments that rationalist-identifying people keep falling into because they're interesting.

Comment author: Los793 04 April 2017 11:49:50PM 1 point [-]

Casual rationalist-adjacent here (I've been reading LW for over a year, but this is my first post). I also very much agree (and with the parent comment too). I only want to add that in my experience weird jargon -- even the kind that doesn't obscure communication -- is a large part of why people find the community impenetrable. I don't necessarily mean major concepts from the Sequences, which serve a clear condensing purpose and which everyone who sticks around long enough should know regardless.

But more subtle jargon -- even phrases as simple as "level up dealcraft" (and sdr, I don't mean to single you out; I could take an example from anywhere, yours is just the most immediate) as opposed to, say, "improve negotiating skills." Sure, the meaning is discernible from the context -- almost everyone would grasp it -- but the wording will isolate a lot of people.

Comment author: ThoughtSpeed 15 April 2017 08:59:57PM 0 points [-]

I think "upskill" is another one of these.

In response to The Social Substrate
Comment author: RomeoStevens 09 February 2017 08:53:41AM 4 points [-]

One confusion I've had: people treat emotions as a level on which it is difficult to fake things, but then later don't act surprised when actions are at odds with the previous moment-to-moment feelings. I was happy to accept that your current experience is validly how you feel in the moment, but I didn't think how you feel is strong evidence for your 'true' beliefs, future actions, etc. And it's weird that others do, given that the track record is actually quite bad. So if I take one perspective, people are constantly lying and get very mad if you point this out.

This made a lot more sense when I stopped modeling people as monolithic agents. The friction arises because they are modeling themselves as monolithic agents. So I changed the way I talk about people's preferences, but it is still tricky and I often forget. I've thought of this as a sort of extension to NVC. NMC, or non-monolithic communication, also encourages you to remove the I and You constructs from your language and see what happens. It isn't possible in real-time communication, but it is an interesting exercise while journaling, in that it forces a confrontation with direct observer moments.

Comment author: ThoughtSpeed 04 April 2017 12:08:28AM 0 points [-]

What is NMC?

(For anyone who doesn't know: NVC stands for Nonviolent Communication. I would highly recommend it.)

Comment author: taryneast 28 February 2014 09:59:04AM 2 points [-]

Yeah. Beeminder doesn't work for me either - nor do most online punishment-based motivators.

My problem with it is that it doesn't punish you for failing to do the thing you need to do. It punishes you for failing to record the fact that you did the thing you need to do.

So if you're time-poor (like me) and still managed to do the thing... but didn't have time to go online and tell beeminder that you did the thing... you still get punished. :(

Comment author: ThoughtSpeed 28 March 2017 08:12:44AM 1 point [-]

Agreed that this is a problem! Thankfully there are a lot of integrations with Beeminder that automatically enter data. You can hook up hundreds of different applications to it through IFTTT or Zapier.
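For anything IFTTT and Zapier can't see, Beeminder also exposes an HTTP API for posting datapoints, so a small script or cron job can do the recording for you. A minimal sketch, assuming the shape of Beeminder's published v1 API; the username, goal slug, and token below are placeholders:

```python
import requests  # third-party: pip install requests

USER, GOAL = "alice", "pushups"   # placeholders: your Beeminder username and goal slug
AUTH_TOKEN = "your-token-here"    # shown at /api/v1/auth_token.json while logged in

def add_datapoint(value: float, comment: str = "") -> dict:
    """Post one datapoint, so doing the thing is never undone by forgetting to log it."""
    url = f"https://www.beeminder.com/api/v1/users/{USER}/goals/{GOAL}/datapoints.json"
    resp = requests.post(url, data={"auth_token": AUTH_TOKEN,
                                    "value": value,
                                    "comment": comment})
    resp.raise_for_status()
    return resp.json()

# e.g. called from whatever script already knows you did the thing:
add_datapoint(1, "logged automatically")
```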
