
Comment author: Yvain 12 October 2014 03:34:04AM 4 points [-]

Any particular implementation details on OCEAN? Exact same as last time?

Comment author: peter_hurford 13 October 2014 03:01:18PM 1 point [-]

Why not directly include the 10-item Big Five in the survey itself?

Comment author: Evan_Gaensbauer 08 October 2014 08:36:50PM *  1 point [-]

Gunnar_Zarncke also commented that I should at least turn my above comment into a post in Discussion. Before I do that, or before I go on to post it to Main if the reception goes well enough, I'd like to strengthen the post by including your experience in it. The point I made above seems to be making headway on the strength of the few things I did alone, and if the weight of your clout as a well-known effective altruist and rationalist is thrown behind it, I believe we could gain even more traction in generating positive externalities by encouraging others.

I remember there was a 'Less Wrong as a social catalyst' thread several months ago that we both posted in, found valuable, and received great receptions to the feedback we provided. I might mine the comments there for similar experiences, message some users, and see if they'd be willing to do the same. If you know of other friends or peers on Less Wrong who have had a similar experience, I'd encourage you to get them on board as well. The more examples we can provide, from a more diverse base of users, the stronger the case we can build. In doing so, I'd credit you as a co-author/collaborator/provider of feedback when I make this a post in its own right.

Comment author: peter_hurford 09 October 2014 06:27:01AM 3 points [-]

Sounds good to me. I've wanted to write a "what EA/LW has done for me" post for a while and may still do so.

Comment author: Evan_Gaensbauer 06 October 2014 10:11:49AM *  30 points [-]

Don't Be Afraid of Asking Personally Important Questions of Less Wrong

With my prior user profile, I primarily asked questions of Less Wrong. When I had an inkling for a query but not a fully formed hypothesis, I wouldn't know how to search for answers on the Internet myself, so I asked on Less Wrong instead. My model for posing such questions was this:

Insofar as it's appropriate to post a well-defined problem without having its complete solution, I consider this post to be of sufficient quality to deserve being posted in Main.

The reception I received was mostly positive. Here are some examples:

  • I asked for a cost-benefit analysis of deliberately using nicotine for its nootropic effects.

  • Back when I was trying to figure out which college major to pursue, I queried Less Wrong about what would be worth my while. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.

  • Later, an effective altruist friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. Within a few days, he received feedback supporting the conclusion that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. The entirety of the online rationalist community willing to respond provided valuable information on an important question, one that might otherwise have taken him lots of time, attention, and effort to answer on his own.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others merely by writing a blog post. This extends to responding in the comments section, too. Less Wrong may be one of the few online communities for which even the comments sections are useful by default.

Even though the above examples weren't the most popular discussions I started, and likely didn't get as much traffic, the feedback they received made them more personally valuable to me than any others.

At the CFAR workshop I attended, I was taught two relevant skills:

  • Value of Information Calculations: formulating a question well and performing a Fermi estimate, or back-of-the-envelope calculation, in an attempt to answer it generates quantified insight you wouldn't have otherwise anticipated (see the toy sketch after this list).

  • Social Comfort Zone Expansion: humans tend to have a greater aversion to trying new things socially than is optimal, and one way of viscerally teaching System 1 this lesson is through the trial and error of taking small risks. Posting on Less Wrong, especially in, e.g., a special thread, is a genuinely low-risk action. The pang of losing karma can feel real, but losing karma is a valuable signal that one should try again differently, and it's not as bad as failing at a risk taken in meatspace.
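For concreteness, here is a minimal sketch of what such a back-of-the-envelope value-of-information calculation can look like, written in Python. Every number in it is a made-up placeholder of mine, not a figure from the CFAR workshop or from any of the discussions above:

```python
# Toy value-of-information estimate: is it worth an hour writing up a question
# for Less Wrong before committing to a decision? Every number below is a
# made-up placeholder, meant only to show the shape of the calculation.

p_change_mind = 0.10       # assumed chance the answers change my decision
value_of_switch = 2000.0   # assumed dollar value of switching to the better option
cost_of_asking = 25.0      # assumed cost of the hour spent writing the post

expected_value = p_change_mind * value_of_switch - cost_of_asking
print(f"Expected value of asking: ${expected_value:.0f}")
# With these placeholders: 0.10 * 2000 - 25 = 175, so asking looks worthwhile.
```

Even a crude estimate like this makes it obvious how cheap asking is relative to the decisions it can inform.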

When I've received downvotes on a comment, I interpret that as useful information, try to model what I did wrong, and thank others for correcting my confused thinking. If you're worried about writing something embarrassing, that's understandable, but realize that the worry is a fact about your untested anticipations, not a fact about everyone else using Less Wrong. There are dozens of brilliant people with valuable insights at the ready, reading Less Wrong for fun, who like helping us answer our own personal questions. Users shminux and Carl Shulman are exemplars of this.

This isn't an issue for all users, but I feel as if not enough users are taking advantage of the personal value they can get by asking more questions. This comment is intended to encourage them.

Comment author: peter_hurford 06 October 2014 04:04:18PM 6 points [-]

I had a similar experience asking about my career choices.

Comment author: peter_hurford 03 October 2014 11:28:06PM 4 points [-]

Thanks for your hard work on putting this together! It's so inspiring to see everyone's profiles!

Comment author: peter_hurford 25 September 2014 05:25:40PM 4 points [-]

I like Julia Wise's thoughts in "Cheerfully".

(Personally, I aim to donate 20% and invest 20%.)

Comment author: gjm 15 September 2014 11:30:00AM 11 points [-]

These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.

Comment author: peter_hurford 15 September 2014 02:49:43PM 3 points [-]

I agree with this meta-comment. Should I downvote it?

Comment author: JoshuaFox 08 September 2014 02:45:38PM 10 points [-]

Can someone point me to estimates given by Luke Muehlhauser and others as to MIRI's chances for success in its quest to ensure FAI? I recall some values (of course these were subjective probability estimates with large error bars) in some lesswrong.com post.

Comment author: peter_hurford 11 September 2014 03:37:54PM 3 points [-]

Comment author: Gunnar_Zarncke 30 August 2014 02:48:03PM *  2 points [-]

They asked curious questions :-)

I mainly relayed what blob had reported of his polyphasic sleep experiment during the Berlin meetup. And I tried to summarize what I knew about polyphasic sleep from the links and LW in general.

I also relayed that my second oldest son (8) by himself developed strongly segmented sleep with a siesta during winter, but fell back into mostly normal sleep after two weeks.

Comment author: peter_hurford 31 August 2014 02:07:22AM *  2 points [-]

Neat!

I mainly relayed what blob had reported of his polyphasic sleep experiment during the Berlin meetup

Any link or description of that?

Comment author: paper-machine 30 August 2014 03:46:19PM 1 point [-]

You're being uncharitable. "[It's] likely [that X]" doesn't exclude the possibility of non-X.

If you know nothing about a probability distribution, it is more likely that it has one absolute maximum than more than one.
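For what it's worth, here is a quick Monte Carlo sketch of that intuition under one assumed "know nothing" prior of my own choosing (a symmetric Dirichlet over ten outcomes; nothing in it comes from the comment itself). Exact ties for the maximum have probability zero under a continuous prior, so the sketch counts near-ties instead:

```python
import random

# Toy check: sample probability vectors from a symmetric Dirichlet prior (an
# assumed stand-in for "knowing nothing" about the distribution) and count how
# often the largest entry is nearly tied with the runner-up.

def sample_dirichlet(k, rng, alpha=1.0):
    draws = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
trials, near_ties = 10_000, 0
for _ in range(trials):
    p = sorted(sample_dirichlet(10, rng), reverse=True)
    if p[1] > 0.99 * p[0]:   # runner-up within 1% of the maximum
        near_ties += 1

print(f"Near-tied maxima in {near_ties / trials:.1%} of trials")
```

Under this toy prior a clear single maximum is the norm; a different prior could of course shift the picture.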

Comment author: peter_hurford 31 August 2014 02:06:41AM 10 points [-]

Maybe I am being uncharitable, but when Sophronius asks "[c]an somebody explain to me why people generally assume that the great filter has a single cause?" and you reply "I don't think anyone really assumes that", I have to admit that I've always seen people think of the Great Filter in terms of one main cause (e.g., look to the poll in this thread where people choose one particular cause), and not in terms of multiple causes.

Though you're right that no one has said multiple causes are outright impossible. And you may be right that one main cause makes a lot more sense. But I do think Sophronius raises a question worth considering, at least a bit.

Comment author: paper-machine 30 August 2014 12:41:57PM 3 points [-]

I don't think anyone really assumes that.

Comment author: peter_hurford 30 August 2014 02:43:56PM 2 points [-]

From the article:

The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it's likely that there is one location in civilizational development where most of the filter lies (i.e., where the probability of getting to the next stage is the lowest).

That doesn't sound like it admits the possibility of twelve independent, roughly equally balanced filters.
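For what it's worth, whether that conclusion holds seems to depend on how spread out the per-stage difficulties are assumed to be. Here is a quick Monte Carlo sketch using a toy prior of my own choosing (nothing in it comes from the article or from anyone in this thread):

```python
import math
import random

# Toy Monte Carlo: give each of 12 hypothetical stages a "filter size" in orders
# of magnitude of improbability (-log10 of its pass probability), drawn from a
# lognormal prior, and count how often a single stage holds most of the total
# filter. The prior and its spread parameter are my assumptions.

def fraction_dominated(n_stages=12, sigma=1.0, trials=10_000, seed=0):
    rng = random.Random(seed)
    dominated = 0
    for _ in range(trials):
        sizes = [math.exp(rng.gauss(0.0, sigma)) for _ in range(n_stages)]
        if max(sizes) > 0.5 * sum(sizes):   # one stage is >50% of the total filter
            dominated += 1
    return dominated / trials

for sigma in (0.3, 1.0, 3.0):
    print(f"spread sigma={sigma}: one stage dominates in "
          f"{fraction_dominated(sigma=sigma):.0%} of trials")
```

With a narrow assumed spread the filter ends up shared across stages; as the spread widens, a single dominant stage becomes increasingly common. Which regime the real filter sits in is exactly the open question.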
