I provide our monthly place to discuss Less Wrong topics that have not appeared in recent posts. Work your brain and gain prestige by doing so in E-prime (or not, as you please).
One of the things I worry about with the LW community is that we'll get too enamored of signaling rationality and become afraid to bounce ideas off each other that we're not quite sure are rigorous enough. This thought is brought on by the seemingly small number of top-level posts that people try to make (given how many members we have). Are people refraining from posting after they see one such thread get voted down? I know how discouraging that can be, and it seems to me that it would be worse for a budding rationalist who is unsure of themselves. To some extent these monthly open threads help, but even in these the comments seem pretty conservative. What say you, LW? Are we capable of entertaining ideas without adopting them, or is the risk of lowering the signal-to-noise ratio too high?
edit: it occurs to me that maybe I'd like to see another tab up top called "passing thoughts" or some such. The S:N ratio of the regular posts could be kept up, and shorter posts could go there. We could all agree simply not to hold it against each other if we make a gaffe (an obvious mistake) in these shorter posts. I think this would be valuable because even a flawed idea can generate great discussion (one of the reasons I enjoy Hacker News). As an added bonus, this could serve as a space for "Ask LW" posts without disrupting anything.
Apropos my comment here, what do you guys think of making The Simple Math of Everything a reality?
The LW community probably has a diverse enough group of scholars to cover most of the major fields, and if not, we all have contacts. Splitting it up into sections for different individuals to write would make the project much easier to complete, provided someone is coordinating everything. What do you guys think?
Speaking of improving the LW website, I'd like to see little homepages where each user can provide a self-introduction, which I know has been suggested before; in particular I'd like to see an implementation of userboxes like Wikipedia's (see my homepage there). This would give people a standardized and easy-to-reference way to show their opinions on various common issues on the site.
Here's an open topic: why did Overcoming Bias and Less Wrong split?
It certainly looks to me like the technical shift was just an excuse for certain parts of that community to split away.
If this social explanation is true, what implications does this have?
I need relationship advice and I trust the wisdom and honesty of this community more than most of my friends. I created a new account to ask this question.
I'm with an incredibly compassionate, creative woman. She excels at her job, which is a "helping profession," and one which I believe improves social welfare far more than most. The sex is outstanding.
But she loves magical thinking, and she is somewhat averse to expected-utility calculations, my atheism, etc. She is, by her own admission, subject to strong swings of emotion and at greater tha...
How should I go about deciding whether to continue this?
With science!
Specifically, the science of John Gottman. Short version: irreconcilable differences of viewpoint are not an intrinsic bar to a long-lasting relationship. The most potent relationship poison is contempt.
It would appear to the average person that most rational types are only moderately successful, while all the extremely wealthy people are irrational. Because people don't see the whole sample space (a large proportion of rational people enjoying moderate success versus a tiny fraction of irrational people enjoying major success), I don't think a lot of our arguments gain traction with them. Most people infer from outliers as a matter of course.
Now combine this with the idea that signaling rationality is also signaling that we think we deserve more status and deci...
It would be useful if everyone used quote markup more often; it can be difficult to figure out what someone's agreeing to when they say "I agree" and there are a dozen posts separating the agreement from the original post.
Is anybody aware of an academic course which tries to teach OB / LW type stuff? I might someday have the opportunity to teach such a class.
Has anyone made and organized a list of important posts that could serve as the rough draft for a syllabus?
One quick question - quantum game theory - useful or a mathematical gimmick? I can follow the math, but it's not clear to me if it makes any sense in the real world or not. Can anybody who took a closer look at it tell me what they think?
Suppose I decide that I'm going to partake in a pleasurable activity. How far removed must the decision be from the activity before the decision is no longer reinforced by operant conditioning?
Educate me, LW hive mind. Robin Hanson has mentioned that prediction markets can give not just probability assessments on discrete sets of outcomes but also probability distributions over such assessments that let us know how (un)certain the market is concerning a particular assessment (or at least, that's how I interpreted his words). Does anyone have links to descriptions of such methods?
In case it hasn't already been posted by somebody, here's a nice talk about irrational behaviour and loss aversion in particular.
Perhaps this is a known issue, but since I haven't seen it discussed, I thought I'd mention that images don't seem to work in some (all?) of the old posts imported from Overcoming Bias. See for example:
The first few pics in that particular post are available from an external server if you click them, but I don't see them inline. The last picture seems to have been hosted at Overcoming Bias and is no longer accessible.
Has anyone read "3:16 Bible Texts Illuminated" by Donald E. Knuth
http://www.amazon.com/3-16-Bible-Texts-Illuminated/dp/0895792524
...?
Would you all please recommend books on many-worlds? I liked The End of Time but I thought the treatment of MWI was too cursory.
Hi, I have a question I haven't seen addressed after a quick search. A friend of mine was diagnosed with mild paranoid schizophrenia after he attacked his brother and was hospitalized; this was two years ago. He got (and still gets) medical treatment (some sort of neuroleptic, I suppose), but not much more. It sort of helped: he has a nice job and some superficial friendships (he never had great interest in things social). Now the paranoia has surfaced again. I guess it was there all along, but nobody knew for sure. We're afraid it'll get wo...
And for another suggestion for the site itself, it should be possible to tag posts (especially articles, possibly comments) by language, and let users pick what languages they want to see. The interface wouldn't necessarily have to be translated; it would just be nice to have some support for multilingualism.
I'm considering donating to World Vision UK. Does anyone know much about them?
More generally, is there an easy way to find out how good a charity is? Are there reviews done by third parties?
This might not be the right place to ask, but I'll try anyway:
I read an online paper/article on global dictatorship / totalitarianism as an existential threat a while ago, but I can't find it anymore. I've probably found it on OB / SIAI's website or something like that in the first place, but can't find it there now. Would anyone know of such an article (or any good article on the topic, for that matter)?
Two links that might foster discussion:
http://www.philosophersnet.com/games/
Fun online rationality and anti-bias oriented games. I particularly enjoyed "Staying Alive" (testing conceptions of selfhood). And
http://bloggingheads.tv/diavlogs/20086
Great discussion, I hadn't seen Gendler before but Bloom is always good. Reminded me a little of the IAT discussion here a few months ago.
Any fellow OB/LW-ers attending OSCON this year? If you are interested in meeting up there let me know. Perhaps respond to this comment or email me directly and we'll see what we can work out. (You can contact me via gmail at user name evtujo).
What a critic might say about Less Wrong:
1) The purpose of the pursuit of rationality is to increase an individual's understanding of and power over their environment and the people in it.
2) The only way to establish rational thinking in a group of people not otherwise disposed towards it* is to establish a group norm of praising rational thinkers and shaming the irrational, by an established standard of rationality.
Therefore:
Rationalists are power-seekers, and the pursuit of rationality is inherently elitist and exclusionary.
*That is to say, the vast majority of people.
Would you even care enough to respond?
Anticipating critics and responses to them is largely a waste of time, if they are determined to be against you. Whatever you say will only be fodder for the next attack, and you are wasting precious time and energy being pinned down by their fire.
What we want is responses for people who are not the critics, but may have heard the critics' arguments. That's a considerably less-demanding audience.
How to design utility functions for safe AIs?
Make a utility function which will only emit positive values if the AI is disabled at the moment the solution to your precise problem is found. Ensure that the utility function emits smaller values for solutions which took longer, and higher values for worlds which are more similar to the world as it would have been without the AI interfering.
This will not create friendly AI, but an AI which tries to minimize its interference with the world. Depending on the weights applied to the three parts, it might spontaneously deactivate though.
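A minimal sketch of how the three parts might be combined, purely as an illustration of the comment above. The `World` fields (`solved_and_disabled`, `time_to_solution`, `similarity`) and the weights are hypothetical names invented here, not anything specified in the original suggestion:

```python
from dataclasses import dataclass

@dataclass
class World:
    solved_and_disabled: bool   # AI was off at the moment the solution was found
    time_to_solution: float     # how long the solution took
    similarity: float           # in [0, 1]: resemblance to the counterfactual no-AI world

def utility(world: World, w_time: float = 1.0, w_similarity: float = 1.0) -> float:
    """Toy utility in the spirit of the comment above: reward only worlds in
    which the AI is disabled once the problem is solved, penalise slow
    solutions, and reward minimal interference with the world."""
    if not world.solved_and_disabled:
        return 0.0  # no positive value unless the AI ends up disabled
    # Faster solutions and less interference both score higher.
    return w_time / (1.0 + world.time_to_solution) + w_similarity * world.similarity

# A quick, low-interference run beats a slow, disruptive one; a run where the
# AI never shuts itself down earns nothing.
print(utility(World(True, 2.0, 0.9)))    # ~1.23
print(utility(World(True, 50.0, 0.2)))   # ~0.22
print(utility(World(False, 1.0, 1.0)))   # 0.0
```

As the comment notes, the relative weights matter: if the shutdown and non-interference terms dominate, the maximising policy may simply be to deactivate without solving anything.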
New rationalist blog: "gullibility is bad for you":
I don't know about the rest of the audience, but I'd really appreciate a worldbuilding writeup, or maybe even just a glossary, explaining the cultural/technological backdrop of 3WC in more detail than the story provides.
There are some worlds for which I have devised huge cultural, technological, and historical backdrops but this is not one of them.