Don't Be Afraid of Asking Personally Important Questions of Less Wrong
Related: LessWrong as a social catalyst

I primarily used my prior user profile to ask questions of Less Wrong. When I had an inkling for a query but didn't have a fully formed hypothesis, I wouldn't know how to search for answers to my questions on the Internet myself, so I asked them on Less Wrong.

The reception I have received has been mostly positive. Here are some examples:

* I asked for a cost-benefit analysis of deliberately using nicotine for its nootropic effects.

* Back when I was trying to figure out which college major to pursue, I queried Less Wrong about which one was worth my effort. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.

Other student users of Less Wrong have benefited from the insight of their careered peers:

* A friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. In a few days, he received feedback which returned the conclusion that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. The entirety of the online rationalist community willing to respond provided valuable information for an important question. It might have taken him lots of time, attention, and effort to look for the answers to this question by himself.

* My friends, users Peter Hurford and Arkanj3l, have had similar experiences when choosing a career and switching majors, respectively.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others by merely writing about your own questions.
I peruse her content occasionally, but I wasn't aware that she is widely recognized as producing analysis/commentary of wildly varying quality, often particularly lacklustre outside of her own field. Gwern mentioned that Gary Marcus has apparently said as much in the past when it comes to her coverage of AI topics. I'll refrain from citing her as a source in the future.