About a month ago, Anna posted about the Importance of Less Wrong or Another Single Conversational Locus, followed shortly by Sarah Constantin's http://lesswrong.com/lw/o62/a_return_to_discussion/
There was a week or two of heavy activity by some old-timers. Since then there's been a decent array of good posts, but not quite as inspiring as the first week was, and I don't know whether to think "we just need to try harder" or to change tactics in some way.
Some thoughts:
- I do feel it's been better to be able to quickly see a lot of the community's posts in one place
- I don't think the quality of the comments is that good, which is a bit demotivating.
- on Facebook, lots of great conversations happen in a low-friction way, and when someone starts being annoying, the person whose Facebook wall it is has the authority to delete comments with abandon, which I think is helpful.
- I could see the solution being either to continue trying to incentivize better LW comments, or to just have LW be the "single locus for big important ideas, but discussion to flesh them out still happens in more casual environments"
- I'm frustrated that the intellectual projects on Less Wrong are largely siloed from the Effective Altruism community, which I think could really use them.
- The Main RSS feed has a lot of subscribers (I think I recall "about 10k"), so having things posted there seems good.
- I think it's good NOT to have people automatically post things there, since that produced a lot of weird anxiety/tension over "is my post good enough for Main? I dunno!"
- But, there's also not a clear path to get something promoted to Main, or a sense of which things are important enough for Main
- I notice that I (personally) feel an ugh response to link posts and don't like being taken away from LW when I'm browsing LW. I'm not sure why.
Curious if others have thoughts.
I'm not sure if I understand it correctly, but it seems to me you are saying that with limited computing power it may be better to develop two contradictory models of the world, each one making good predictions in one specific important area, and then simply use whichever model corresponds to the area you are currently working in, rather than trying to develop an internally consistent model for both areas, only to perform poorly in both (because the resources are not sufficient for a consistent model that works well in both areas).
The response, meanwhile, seems to... misunderstand your point, and suggests something like a weighted average of the two models, which would lead exactly to the poorly performing model.
As a fictional example, it's like one person saying: "I don't have a consistent model of whether a food X is good or bad for my health. My experience says that eating it in summer improves my health, but eating it in winter makes my health worse. I have no idea how something could be like that, but in summer I simply use the heuristic that X is good, while in winter I use the contradictory heuristic that X is bad." And the other person replying: "You don't need contradictory heuristics; just use Bayes and conclude that X is good with probability 50% and bad with probability 50%."
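To make the contrast concrete, here is a minimal sketch of my own (not from either comment) with made-up numbers for the food-X example; it compares switching between the two contradictory rules by context against always using the 50/50 average:

```python
# Hypothetical numbers: eating X helps in summer (+1) and hurts in winter (-1).
true_effect = {"summer": +1.0, "winter": -1.0}

def context_specific_prediction(context):
    # Two mutually contradictory heuristics, each used only in its own context.
    return +1.0 if context == "summer" else -1.0

def averaged_prediction(context):
    # The "50% good, 50% bad" averaging suggested in the reply: always 0,
    # regardless of context.
    return 0.5 * (+1.0) + 0.5 * (-1.0)

for context, effect in true_effect.items():
    err_specific = abs(effect - context_specific_prediction(context))
    err_averaged = abs(effect - averaged_prediction(context))
    print(f"{context}: context-specific error = {err_specific}, averaged error = {err_averaged}")
```

In this toy setup the contradictory, context-specific rules predict the effect exactly in each season, while the averaged rule is off by 1.0 in both, which is the "poorly performing model" described above.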
I don't have a Bayesian model that tells me how much magnesium to consume. Instead, I look at the bottle of magnesium tablets and feel into my body. Depending on the feeling my body creates as a response, I might take a magnesium tablet at a particular time or not take it.
On the other hand, the way I consume Vitamin D3 is very different. I don't have a meaningful internal sense of when to take it; instead I take the dose of Vitamin D3 largely based on an intellectual calculus.