Comment author: Bryan-san 22 December 2015 12:39:08AM 0 points [-]

CFAR uses double sided badges and they helped me substantially in memorizing people's names by the end of the workshop.

In response to LessWrong 2.0
Comment author: Bryan-san 21 December 2015 10:01:55PM *  1 point [-]

I've also talked to many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone who responded expressed displeasure that LW had faded, interest in a coordinated return, and often some material they thought they could prepare and have ready.

High value is assigned to many original top posters. This leads me to three questions:

  1. What demand will LW 2.0 satisfy that will keep these prominent original posters returning to LW regularly in the future over the course of 2+ years?

  2. Do you think the problems that led them to leave the first time can be prevented this time, rather than soon recurring?

  3. How can new top posters as good as or better than the old ones be found/recruited/marketed to?

Comment author: IlyaShpitser 21 December 2015 08:20:18PM 6 points [-]

The rationalist community needs to learn a little humility. Do you realize the disparity in intellectual firepower between "you guys" and theoretical physicists?

Comment author: Bryan-san 21 December 2015 09:01:31PM 1 point [-]

Could you expand on this further? I'm not sure I understand your argument. Also, intellectual humility or social humility?

In response to LessWrong 2.0
Comment author: Bryan-san 21 December 2015 08:17:52PM 8 points [-]

A few points:

  1. I would hate to see LW close, and I don't think that would be a helpful step in getting people exposed to rationality unless a new central hub rose to take its place. I found LW through HPMOR just this year and have very little idea of what LW looked like in its supposed glory days. Things aren't great now, but if LW had been completely dead I likely wouldn't have moved from wanting to be rational to reading 600+ pages of Rationality: From AI to Zombies, making tons of connections and rationalist friends, attending CFAR, starting a LW meetup in my area, and more. A completely dead website would have given the impression of a dead philosophy, abandoned by the people who followed it because it wasn't actually that useful after all.

  2. Decreases to the level of polish, rigor, and rationality knowledge publicly deemed necessary before posting in the various areas could be helpful (in current LW or a LW 2.0). I mainly post in Open and Stupid Question threads because of this.

  3. People here can be pretty cold and harsh in their replies. I've also heard of issues with downvote brigades and mass downvoting of people's posts over personal disagreements. If this place really is full of "unquiet spirits," then a method of removing them, discouraging that kind of conduct, or changing them into kind, benevolent spirits should be included in the works.

Comment author: fubarobfusco 20 December 2015 09:08:13PM *  4 points [-]

Well, it depends on what you mean by "rationality". Here's something I posted in 2014, slightly revised:


If not rationality, then what?

LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims:

  1. Obtain a better model of the world by updating on the evidence of things unpredicted by your current model.
  2. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.

Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, and correct predictions enable goal-accomplishing actions. And the way to have correct beliefs is to update your beliefs when their predictions fail.
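The two maxims above can be illustrated with a minimal sketch of discrete Bayesian updating followed by expected-utility action selection. All of the names, numbers, and the coin-flip scenario here are illustrative assumptions, not anything from the original post:

```python
# Hypothetical sketch of the two maxims: (1) update beliefs on evidence,
# (2) act to maximize expected success under the updated beliefs.

def update(prior, likelihood, evidence):
    """Maxim 1: revise beliefs over hypotheses given one piece of evidence.

    prior:      dict hypothesis -> P(h)
    likelihood: dict hypothesis -> dict evidence -> P(e | h)
    """
    unnormalized = {h: prior[h] * likelihood[h][evidence] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

def best_action(posterior, utility, actions):
    """Maxim 2: pick the action with the highest expected utility
    under the (constantly updating) model.

    utility: dict (action, hypothesis) -> payoff
    """
    return max(actions,
               key=lambda a: sum(posterior[h] * utility[(a, h)]
                                 for h in posterior))

# Toy example: is a coin fair, or biased toward heads?
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair":   {"H": 0.5, "T": 0.5},
              "biased": {"H": 0.9, "T": 0.1}}

posterior = prior
for flip in "HHH":                      # observe three heads in a row
    posterior = update(posterior, likelihood, flip)

# Belief has shifted toward "biased" (~0.85), so betting on heads wins.
action = best_action(posterior,
                     utility={("bet_heads", "fair"):   0.0,
                              ("bet_heads", "biased"): 1.0,
                              ("bet_tails", "fair"):   0.0,
                              ("bet_tails", "biased"): -1.0},
                     actions=["bet_heads", "bet_tails"])
```

The point of the sketch is only that prediction (the posterior) and action (expected-utility choice) are coupled in Bayes' world; the alternative worlds below each sever one of these links.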

We can call these the rules of Bayes' world, the world in which updating and prediction are effective at accomplishing human goals. But Bayes' world is not the only imaginable world. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?

To be clear, I'm not talking about alternatives to Bayesian probability as a mathematical or engineering tool. I'm talking about imaginable worlds in which Bayesian probability is not a good model for human knowledge and action.


Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it.

In the world of heroic myth, it is not oracles (good predictors) but rather heroes and villains (strong-willed people) who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Oracles possess the truth to arbitrary precision, but they accomplish nothing by it. Heroes and villains come to their predicted triumphs or fates not by believing and making use of prediction, but by ignoring or defying it.


Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the external world are relatively close to our priors; not much updating is needed there — but our goals are not known to us initially. In fact, we may be thoroughly deceived about what our goals are, or what satisfying them would look like.

We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. In this world, when we choose actions that are unsatisfactory, it isn't so much because we are acting on faulty beliefs about the external world, but because we are pursuing goals that are illusory or empty of satisfaction.


There are other models as well that could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each would relate prediction, action, and goals in different ways: we might imagine Lovecraft's world (knowledge causes suffering), Qoheleth's world (maybe similar to Buddha's), Job's world, or Nietzsche's world.

Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are actually in one of the others?

Comment author: Bryan-san 21 December 2015 08:08:43PM 1 point [-]

This is a perspective I hadn't seen mentioned before and helps me understand why a friend of mine gives low value to the goal-oriented rationality material I've mentioned to him.

Thank you very much for this post!

In response to LessWrong 2.0
Comment author: V_V 04 December 2015 10:58:04AM 10 points [-]

My two cents:

  • Merge Main and Discussion

  • Make new content more visible. Right now the landing page, and in particular the first screen, mostly consists of boilerplate. You have to scroll or click to see whether new content has been posted. In the current attention-scarce era of Facebook and Twitter streams, this is not ideal.

  • Discourage/ban Open threads. They are an unusual thing to have on an open forum. They might have made sense when posting volume was higher, but right now they further obfuscate valuable content.

In response to comment by V_V on LessWrong 2.0
Comment author: Bryan-san 21 December 2015 07:02:16PM 8 points [-]

• Discourage/ban Open threads. They are an unusual thing to have on an open forum. They might have made sense when posting volume was higher, but right now they further obfuscate valuable content.

I don't think this is a practical idea. The site is hostile enough to new users who lack much rationality knowledge and perspective on the content. The Open threads (and even more so the Stupid Questions threads) give people a place to pose questions and try out ideas that they aren't confident enough in to make into Discussion posts. People are less harsh in those threads (although I've seen people be harsh in Stupid Questions threads too), and they provide a chance to participate without having read the 1700 pages of The Sequences or having lurked on the site for 2+ years.

Meetup : San Antonio Meetup: CFAR Techniques

1 Bryan-san 21 December 2015 06:33PM

Discussion article for the meetup : San Antonio Meetup: CFAR Techniques

WHEN: 27 December 2015 02:00:00PM (-0600)

WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230

Bubble tea, frozen yogurt, and discussion at Yumi Berry! All are welcome!

New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong!

Comment author: Bryan-san 19 December 2015 04:26:39PM 0 points [-]

What are the strongest arguments that you've seen against rationality?

Meetup : San Antonio Meetup: Seeking Strategic Updates

1 Bryan-san 15 December 2015 04:10PM

Discussion article for the meetup : San Antonio Meetup: Seeking Strategic Updates

WHEN: 20 December 2015 02:00:00PM (-0600)

WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230

Bubble tea, frozen yogurt, and discussion at Yumi Berry! All are welcome!

New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong!

Comment author: iarwain1 09 December 2015 04:56:58PM *  1 point [-]

I'm from Baltimore, MD. We have a Baltimore meetup coming up Jan 3 and a Washington DC meetup this Sun Dec 13. So why do the two meetups listed in my "Nearest Meetups" sidebar include only a meetup in San Antonio for Dec 13 and a meetup in Durham NC for Sep 17 2026 (!)?

Comment author: Bryan-san 09 December 2015 08:21:06PM 1 point [-]

Whoever runs a meetup needs to create a Meetup post for each meeting before it will show up in the sidebar. IIRC, regular meetups often don't appear there because the organizer forgets to post them. You can ask the person who runs your local meetups to post them on LW more often, or ask if you can post them in their stead.

I run the San Antonio meetup and you are very welcome to attend here if it's the nearest one to you!