All of ThoughtSpeed's Comments + Replies

I find it funny people think questions about the Chinese Room argument or induction are obvious, tangential, or silly. xD

Anyway: What is the best algorithm for deciding between careers?

(Rule 1: Please don't say the words "consult 80,000 Hours" or "Use the 80K decision tool!" That is analogous to telling an atypically depressed person to "read a book on exercise and then go out and do things!" Like, that's really not a helpful thing, since those people are completely booked (they didn't respond to my application despite my fitt...

0ChristianKl
Write down your assumptions about the alternative career paths you see. Talk to other people. If you have multiple options, talk with your friends about them and get their feedback. If your friends don't include people in those careers, reach out to people who are. Cold-approaching people over LinkedIn is one way to get in contact, and there are likely also networking events.
0Screwtape
I don't know what the best algorithm is, but what I did was something like the following.

Step 1. Make a list of the things you enjoy doing. Attempt to be specific where possible - you want to get at the activity that's actually enjoyable, so "making up stories" is more accurate for me than "writing" is, since it's the storytelling part that's fun for me instead of the sitting down and typing specifically. Sort the list in the order that you most enjoy doing the thing, with an eye towards things you can do a lot of. (I do like chocolate, but there's a sharp limit in the amount of chocolate I can eat before it stops being fun.) There's no exact length you need, but 10~15 seems to be the sweet spot.

Step 2. Line up the things you enjoy doing with jobs that do them a lot. Make a list of those jobs, putting under each job the different things you would like about them along with things you know you'd dislike about doing the job. Talking to people in that field, reading interviews with them, and good old-fashioned googling are good steps here. Sort the jobs by how many of your favourite things to do are in them and how few things you don't want to do are in them.

Step 3. Take the list of jobs, and look up how much money each job makes, along with how much demand there is for that job and how many qualifications you'd need to earn to reasonably expect to get the job. Hours worked per week and health risks are also good things to think about. (Note: Sitting at a computer for nine hours straight should really count as a health risk. I'm not joking.)

Step 4. You now have a good notion of enjoyment vs. practicality. If there's a standout winner in both of them, do that. If not, then consider your tradeoffs carefully. You will probably enjoy things less when you have to wake up every morning and do them, but it also caught me by surprise how little time it feels like I have to work on personal projects after eight or nine hours plus commuting.

Step 5. Think about UBI and cr...
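A toy sketch of steps 1 through 4 as a single scoring pass (all activities, jobs, and numbers below are invented placeholders, not anyone's real data):

    # Toy version of the steps 1-4 above: score hypothetical jobs by the
    # enjoyed activities they involve, minus dislikes, plus practicality.
    enjoyed = {"making up stories": 5, "explaining ideas": 4, "solving puzzles": 3}

    jobs = {
        "technical writer": {"does": ["explaining ideas", "making up stories"],
                             "dislikes": 1, "practicality": 4},
        "data analyst": {"does": ["solving puzzles"],
                         "dislikes": 2, "practicality": 5},
    }

    def score(job):
        enjoyment = sum(enjoyed.get(act, 0) for act in job["does"]) - job["dislikes"]
        return enjoyment + job["practicality"]  # step 4: enjoyment plus practicality

    for name in sorted(jobs, key=lambda n: -score(jobs[n])):
        print(name, score(jobs[name]))

The arithmetic is trivial; the real work is gathering honest inputs for it.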
0Elo
There is a reason people say 80k: they did the research already. If not 80k, read Deep Work, So Good They Can't Ignore You, and maybe other books that suggest a "strategy" for employment. (Short version: get a job in an area on purpose. I.e. if you are a vampire, a job in a factory making garlic-free whole foods.) Ask people around you, maybe 10, why they chose their career and whether they like it. Ignore their answers and double-check by observing them work.
0Lumifer
Figure out what satisfies the three criteria:

* You like doing this
* You are good at doing this
* Other people value this (aka will pay you money for doing this)
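Taken literally, that's a three-set intersection (the entries here are placeholders):

    # The three-criteria filter as a literal set intersection.
    like = {"teaching", "writing", "coding"}
    good_at = {"writing", "coding", "cooking"}
    paid_for = {"coding", "plumbing", "writing"}

    print(like & good_at & paid_for)  # {'writing', 'coding'} (order may vary)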

Exactly ZERO.

...

Zero is not a probability! You cannot be infinitely certain of anything!
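One way to see why: under Bayes' rule, a prior of exactly 0 (or 1) can never move, whatever the evidence arrives. A minimal sketch with illustrative numbers:

    # Posterior P(H|E) via Bayes' rule; a zero prior is multiplied away forever.
    def update(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    print(update(0.01, 0.9, 0.1))  # a small prior moves up to ~0.083
    print(update(0.0, 0.9, 0.1))   # a zero prior stays exactly 0.0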

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it that implies a combination of 1) a know-it-when-I-see-it isomorphism to common-usage 'friendliness' (e.g. "I'm not being tortured"), and 2) a deeper sense in which the universe is being optimized by our own criteria by a more pow...

I just started a Facebook group to coordinate effective altruist YouTubers. I'd definitely say rationality also falls under the umbrella. PM me and I can add you. :)

0ImmortalRationalist
Such as?

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!
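A back-of-envelope check with invented numbers (3^^^3 is far too large to represent, so a modest stand-in is used):

    # Assumed tiny per-exposure cancer risk vs. an illustrative city-nuking toll.
    people = 10**30                 # stand-in for 3^^^3, absurdly undersized
    p_cancer_per_exposure = 1e-12   # invented "rounds down to 0" risk

    expected_cancers = people * p_cancer_per_exposure  # 1e+18 expected cases
    nuke_deaths = 10**7                                # invented toll

    print(expected_cancers / nuke_deaths)  # ~1e11: the "negligible" harm dominates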

(It would be hilarious if Zubon could actually respond after almost a decade)

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Yep, you were one of the parties I was thinking of. Nice work! :D

What I'm about to say is within the context of seeing you be one of the most frequent commenters on this site.

Otherwise it sounds like entitled whining.

That is really unfriendly to say; honestly the word I want to use is "nasty" but that is probably hyperbolic/hypocritical. I'm not sure if you realize this but a culture of macho challenging like this discourages people from participating. I think you and several other commenters who determine the baseline culture of this site should try to be more friendly. I have seen you in particular use a...

1Lumifer
Well, first of all I don't think I determine the baseline culture of the site. I'm more of an outlier here and I was just a lurker when the site was in its heyday.

Second, as opposed to some, I don't want just more people. I want more interesting, smart, competent people and yes, this implies that I prefer a certain class of people to not be here -- preferably by not showing up here at all. Relative rarity of idiots is a BIG advantage of LW -- if you want broad participation Reddit, etc. are there for you.

Third, "hounded"? This is the 'net and though a particular technology has been desired by many, it hasn't been invented yet. Close the browser or switch to a different tab and hey! you're free and safe.
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions. (A toy sketch of that kind of calibration scoring appears after this comment.) Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and

...
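As a rough illustration of the kind of scoring the Credence Calibration game is built on - assuming a standard logarithmic scoring rule, with invented credences and outcomes:

    import math

    # (stated credence in your chosen answer, whether it turned out correct)
    answers = [(0.9, True), (0.7, False), (0.6, True), (0.99, True)]

    def log_score(credence, correct):
        # +log2(p) when right, +log2(1-p) when wrong; 0 is perfect certainty.
        return math.log2(credence if correct else 1.0 - credence)

    total = sum(log_score(p, ok) for p, ok in answers)
    print(round(total, 2), "bits")  # overconfident misses cost the most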
6pcm
Some of what a CFAR workshop does is convince our system 1s that it's socially safe to be honest about having some unflattering motives. Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to solve at the workshop. That's different from "you don't have to do anything".

Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.

Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.
3ChristianKl
The first reason why CFAR isn't doing X is that CFAR thinks other things besides X are more important targets for their effort. At the beginning, CFAR considered probability calibration very important. As far as I understand, today they consider it less important and a variety of other mental skills more important. As a result I think they decided against spending more resources on the Credence game.

As far as a Double Crux app goes, it's a project that somebody could do, but I'm not sure that CFAR is the best actor to do it. If Arbital does it and tries to build a community around it, that might be higher return.

As far as I understand, CFAR chooses to spend effort on optimizing the post-workshop experience with weekly exercises. I can understand that they might believe that's more likely to provide good returns than focusing on the pre-workshop experience. CFAR staff did publish http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/ and http://lesswrong.com/lw/o2k/flinching_away_from_truth_is_often_about/ but I guess writing concepts down in that way takes a lot of effort.
6Qiaochu_Yuan
I think it comes down to a combination of 1) not being very confident that CFAR has the True Material yet, and 2) not being very confident in CFAR's ability to correct misconceptions in any format other than teaching in-person workshops. That is, you might imagine that right now CFAR has some Okay Material, but that teaching it in any format other than in person risks misunderstandings where people come away with some Bad Material, and that neither of these is what we really want, which is for people to come away with the True Material, whatever that is. There's at least historically been a sense that one of the only ways to get people to approximately come away with the True Material is for them to actually talk in person to the instructors, who maybe have some of it in their heads but not quite in the CFAR curriculum yet. (This is based on a combination of talking to CFAR instructors and volunteering at workshops.)

It's also worth pointing out that CFAR is incredibly talent-constrained as an organization, and that there are lots of things CFAR could do that would plausibly be a good idea, and that CFAR might even endorse as plausibly a good idea, but that they just don't have the person-hours to prioritize.

At the mainstream workshops there is no mention of AI safety, or anything in its neighborhood, anywhere in the curriculum. If it comes up at all, it's in informal conversations.
4gilch
I'm also interested in developing my instrumental rationality, and I think many of us are. Some may not have noticed CFAR's resources pages: Reading List, Rationality Videos, Blog Updates, and Rationality Checklist. They do update these from time to time. I can't really speak for CFAR's plans or motives, though. Last I heard they were still in an experimental phase and weren't confident enough in their material to go public with it in a big way yet. Has this changed?
5[anonymous]
Relevant info: I've volunteered at 1 CFAR workshop and hang out in the CFAR office periodically. My views here represent my models of how I think CFAR is thinking and are my own.

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Related to the idea of a prep course, I'll be making a LW post in the next few days about my attempt to create a new sequence on instrumental rationality that is complementary to the sort of self-improvement CFAR does. That may be of interest to you.

Otherwise, I can say that at least at the workshop I was at, there was zero mention of AI safety from the staff. (You can read my review here.)

It's my impression that there's a lot of cool stuff CFAR could be doing in tandem w/ their workshops, but they're time constrained. Hopefully this becomes less so w/ their new hires. I do agree that having additional scaffolding would be very good, and that's part of my motivation to start on a new sequence. Happy to talk more on this as I also think this is an important thing to focus on.

I had asked someone how I could contribute, and they said there was a waitlist or whatever. Like others have mentioned, I would recommend prioritizing maximal user involvement. Try to iterate quickly and get as many eyeballs on it as you can so you can see what works and what breaks. You can't control people.

I do want to heap heavy praise on the OP for Just Going Out And Trying Something, but yes, consult with other projects to avoid duplication of effort. :)

Honestly, it probably is. :) Not a bad sign as in you are a bad person, but bad sign as in this is an attractor space of Bad Thought Experiments that rationalist-identifying people seem to keep falling into because they're interesting.

I think "upskill" is another one of these.

What is NVC?

(For anyone who doesn't know: NVC stands for Nonviolent Communication. I would highly recommend it.)

Agreed that this is a problem! Thankfully there are a lot of integrations with Beeminder that automatically enter data. You can hook up hundreds of different applications to it through IFTTT or Zapier.
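For anyone wiring up something custom instead, a sketch of direct datapoint submission, assuming Beeminder's documented v1 REST API (the username, goal slug, and token below are placeholders):

    import requests

    USER, GOAL, TOKEN = "alice", "pushups", "your_auth_token"

    resp = requests.post(
        f"https://www.beeminder.com/api/v1/users/{USER}/goals/{GOAL}/datapoints.json",
        data={"auth_token": TOKEN, "value": 20, "comment": "logged via script"},
    )
    resp.raise_for_status()
    print(resp.json())  # the API echoes back the created datapoint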

What happened to this? It seems like the Tumblr is defunct?

The Elephant and the Rider.

I think my go-to here would be Low of Solipsism from Death Note. As an aspiring villain being resurrected, I can't think of anything more dastardly.

0MrMind
That's interesting, you think of yourself as an aspiring villain? What does that entail?

Is that for real or are you kidding? Can you link to it?

3Lightwave
He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think. Also Sam Harris recently did a TED talk on AI, it's now up.

Did this ever get answered?

I think the names you chose were quite distracting from the problem, at least for me. See paragraphs 4-6 in this article for why: http://lesswrong.com/lw/gw/politics_is_the_mindkiller/

Not to lower signal-to-noise, but - I really liked this comment. It shows a fine mind made cynical, a delicate sarcasm born of being impinged upon by a horrific, Cthulhian reality.

"People are crazy, the world is mad."

2buybuydandavis
Thank you. At first I didn't agree with the "horrific, Cthulhian reality", but I think that's one of the Orwellian problems. Bureaucracies are infuriating and frustrating and horrific, but they are the way they are for reasons, like gravity. If you're willing to stare into the abyss, it's not hard at all to see why things are the way they are. An institution is a machine, indifferent to our wants and intentions. Make the machine wrong, and it will be a meat grinder for everyone involved.

But there's the other, more human Orwellian horror. People really are different in the head, running different algorithms. Or more to the point, they aren't aliens, I am. That manager's meeting was a peek behind the curtain to true and habitual Doublethink. In their heads, the epistemic truth algorithm was completely inoperative. The Lie was completely True until I opened my yap.

People can be frustrating and infuriating - but they too are what they are for reasons, like gravity, and they're not mysterious at all if you stare into their abyss. The Hitch had a catchy phrase - "just two chromosomes away from a chimpanzee". We aren't so smart, rational, or sane. All sorts of "mysteries" dissolve in the light of that.

At what point do you guys estimate CFAR will scale such that economically disadvantaged individuals such as myself will be able to afford a retreat? In the next few years, will there be more of a focus on making money off of increased demand from businesses and heavier-pocketbook-type individuals, or on lowering costs for hungry student types?

I would love nothing more than to go, if only it was cheaper.

8Viliam_Bur
Data point: me and my girlfriend did get a scholarship last year. (But if we were literally hungry students, even that probably would not be enough.) For us, the most expensive thing was actually travelling to the USA. So for people like us it would be most helpful if CFAR could expand to other continents. But I completely understand that this is not the top priority of CFAR today.

It could be a higher priority for people living far from the Bay Area. So if your situation is similar, starting a rationalist group in your community could be helpful. Just show people around you LW and HPMoR and see what happens. I hope that within a year or two it will be possible to have a CFAR (or CFAR-like) workshop in Europe too.

PS: I know some people value their internet anonymity, but if you disclose where you live, you increase your chances of meeting other rationalists personally, which is a great experience. With a small group, you could already start doing some rationality exercises described on LW.

OP says:

Scholarships and financial aid are available in some circumstances. So if you're interested in attending, definitely apply, and mention you'd like to be considered for this. We'll set up a call to discuss.

Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read in the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in the future, of course.

Finally. It is writt...