Comment author: IlyaShpitser 01 February 2016 02:31:48PM *  -1 points [-]

.

Comment author: Bryan-san 01 February 2016 03:53:10PM 3 points [-]

Whether it's true or not: do you think posting this publicly is productive or a good idea when Clarity just said he didn't want to cross-pollinate?

Comment author: Tem42 27 January 2016 11:37:59PM 0 points [-]

Unfortunately, I think many of the people who come to LessWrong are in the position of having read about 50-75% of the content of the sequences through other sources, and may become frustrated by the lack of clear indication within the sequences as to what the next post actually includes... It is very annoying to read through a couple of pages only to find that the section has just been a wordy setup to reviewing basic physics.

Comment author: Bryan-san 28 January 2016 07:41:43PM 0 points [-]

What percentage counts as "many"? Those percentages of already-known content sound very high to me with regard to the first third of the Sequences. (I'm still working through the rest, so I can't comment there.) Also, readers can use the Article Summaries to test whether they've seen a concept before, and then decide whether to read the full article. I don't recommend reading only the summaries, though; I think a person doing that would be doing themselves a disservice, for the reasons Vaniver supplied above.

Comment author: Fluttershy 28 January 2016 10:07:24AM *  2 points [-]

I'm trying to help a dear friend who would like to work on FAI research, to overcome a strong fear that arises when thinking about unfavorable outcomes involving AI. Thinking about either the possibility that he'll die, or the possibility that an x-risk like UFAI will wipe us out, tends to strongly trigger him, leaving him depressed, scared, and sad. Just reading the recent LW article about how a computer beat a professional Go player triggered him quite strongly.

I've suggested trying to desensitize him via gradual exposure; the approach would be similar to the way in which people who are afraid of snakes can lose their fear of snakes by handling rope (which looks like a snake) until handling rope is no longer scary, and then looking at pictures of snakes until such pictures are no longer scary, and then finally handling a snake when they are ready. However, we've been struggling to think of what a sufficiently easy and non-scary first step might be for my friend; everything I've come up with as a first step akin to handling rope has been too scary for him to want to attempt so far.

I don't think that I'll even be able to convince my friend that desensitization training will be worth it at all--he's afraid that the training might trigger him, and leave him in a depression too deep for him to climb out of. At the same time, he's so incredibly nice, and he really wants to help with FAI research, and maybe even work for MIRI in the "unlikely" (according to him) event that he is able to overcome his fears. Are there reasonable alternatives to, say, desensitization therapy? Are there any really easy and non-scary first steps he might be okay with trying if he can be convinced to try desensitization therapy? Is there any other advice that might be helpful to him?

Comment author: Bryan-san 28 January 2016 07:20:34PM 2 points [-]

If someone has anxiety about a topic, I suggest they go after all the normal anxiety-treatment methods. SSC has a post about Things That Sometimes Work If You Have Anxiety, though actually going to see a therapist and getting professional help would likely help more.

If he wants to try exposure therapy, there have apparently been good recent results from combining it with propranolol.

Comment author: lifelonglearner 25 January 2016 06:33:33PM 0 points [-]

That's really true.

This started as an effort to catalog my own planning processes, but I have tons more to learn.

I'll definitely be thinking more about the points you've raised (what good rational planning looks like/good research), but I know that I, too, haven't got the whole picture in my head yet.

I would like to add more to this idea of good planning as I learn more. Do you have any suggestions for further reading I might benefit from (and eventually write about)?

Comment author: Bryan-san 28 January 2016 07:02:22PM 0 points [-]

Immediate ideas that come to mind: lots of CFAR goal-oriented techniques like goal factoring, pre-hindsight, murphyjitsu, seeking strategic updates, and urge propagation. You can learn those at CFAR itself, or Anna may write something up on them at some point this year.

From other stuff I've been exposed to:

- Generating third-option alternatives
- Noticing and rejecting Fool's Choices (you're presented with "A but not B" and "B but not A", which you reject, and then find a way to obtain both A and B)
- Being sure to write down actual models as decision trees and assign probabilities to them
- Finding people who failed in the past and avoiding their failures
- Thinking about what someone cleverer or craftier than you would do
- Asking someone who is cleverer and craftier than you what they would do
- Etc.
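The decision-tree point above can be made concrete. A minimal sketch in Python of writing options down explicitly and assigning probabilities to their branches (all option names, probabilities, and payoffs here are invented for illustration, not anything from the comment):

```python
# Minimal expected-value calculation over a hand-written decision tree.
# Each option maps to a list of (probability, payoff) branches.
# All numbers below are made-up illustrations.

options = {
    "option_a": [(0.7, 100), (0.3, -50)],
    "option_b": [(0.5, 60), (0.5, 20)],
}

def expected_value(branches):
    # Sanity check: branch probabilities for an option must sum to 1.
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payoff for p, payoff in branches)

evs = {name: expected_value(branches) for name, branches in options.items()}
best = max(evs, key=evs.get)
print(evs)   # {'option_a': 55.0, 'option_b': 40.0}
print(best)  # option_a
```

Even a toy model like this forces you to state your probabilities and notice when the branches you wrote down don't exhaust the possibilities.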

Comment author: IlyaShpitser 28 January 2016 03:46:07PM *  2 points [-]

I actually think self-driving cars are more interesting than strong go playing programs (but they don't worry me much either).

I guess I am not sure why I should pay attention to EY's opinion on this. I do ML-type stuff for a living. Does EY have an unusual track record for predicting anything? All I see is a long tail of vaguely silly things he says online that he later renounces (e.g. "ignore stuff EY_2004 said"). To be clear: moving away from bad opinions is great! That is not what the issue is.


edit: In general I think LW really, really doesn't listen to experts enough (I don't even mean myself; I just mean the sensible Bayesian thing to do is to go with the expert-opinion prior on almost everything). EY et al. take great pains to try to move people away from that behavior, talking about how the world is mad, about civilizational inadequacy, etc. In other words: don't trust experts, they are crazy anyway.
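The "start from the expert prior" point can be sketched numerically. A toy beta-binomial update in Python (the prior strength, prior mean, and observation counts are all invented for illustration): a confident expert prior should only move a little under weak personal evidence.

```python
# Toy illustration of starting from a strong expert prior and updating on
# a small personal sample. All numbers are invented.

# Expert prior: Beta(alpha, beta). Here experts believe p is around 0.9,
# held with an effective sample size of alpha + beta = 100.
alpha, beta = 90.0, 10.0

# Your own observations: 3 successes in 10 trials.
successes, trials = 3, 10

# Conjugate beta-binomial update.
post_alpha = alpha + successes
post_beta = beta + (trials - successes)

prior_mean = alpha / (alpha + beta)
post_mean = post_alpha / (post_alpha + post_beta)

print(round(prior_mean, 3))  # 0.9
print(round(post_mean, 3))   # 0.845
```

Ten contrary trials barely move the posterior mean, which is the Bayesian cash value of "go with the expert prior on almost everything" until you have a lot of your own evidence.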

Comment author: Bryan-san 28 January 2016 06:25:31PM 1 point [-]

In what specific areas do you think LWers are making serious mistakes by ignoring or not accepting strong enough priors from experts?

Comment author: Gleb_Tsipursky 27 January 2016 06:10:36AM 0 points [-]

For those who wish to downvote it, I'm curious about your motivations. Want to optimize my modeling of LWers :-)

Comment author: Bryan-san 27 January 2016 04:15:19PM 4 points [-]

I'm curious: what were your direct motivations for posting this in a thread instead of as a comment in the Open or Media threads?

Comment author: Bryan-san 25 January 2016 03:32:52PM 1 point [-]

This article looks like a good Part 1 of Many. I would normally expect this article to be followed by several more that go into detail about what good, rational planning actually looks like and how to do effective and useful research on topics like these.

Breaking things down into smaller parts and doing research sound like good ideas #1 and #2 of 20 or 30 needed to do really awesome planning.

Comment author: Bryan-san 13 January 2016 09:01:52PM 3 points [-]

Nate Soares' recent post "The Art of Response" on Minding Our Way talks about effective response patterns that people develop to deal with problems. What response patterns do you use in life or in your field of expertise that you have found to be quite effective?

Comment author: Bryan-san 07 January 2016 03:02:57PM 8 points [-]

Finally completed my dieting goal of losing 20% of my original body weight.

Comment author: Tem42 22 December 2015 03:19:15AM 0 points [-]

Double-sided? How does that work?

Comment author: Bryan-san 22 December 2015 04:24:26AM 1 point [-]

You put the person's name on both sides of the badge (this is a flat badge on a lanyard) so that if it gets turned around it's still visible.
