While I find I have benefitted a great deal from reading posts on OB/LW, I also feel that, given the intellectual abilities of the people involved, the site does not function as an optimally effective way to acquire the art of rationality. I agree that the wiki is a good step in the right direction, but if one of the main goals of LW is to train people to think rationally, I think LW could do more to provide resources for allowing people to bootstrap themselves up from wherever they are to master levels of rationality.
So I ask: what are the optimal software, methods, educational tools, problem sets, etc. that the community could provide to help people notice and root out the biases operating in their thinking? The answer may be sources already extant, but I have a proposal.
Despite being a regular reader of OB/LW, I still feel like a novice at the art of rationality. I realize that contributing one's ideas is an effective way to correct one's thinking, but I often feel as though I have all these intellectual sticking points which could be rooted out quite efficiently--if only the proper tools were available. As far as my own learning methods go, assuming a realistic application of curren...
The writer of this essay (seen on Reddit) is a true practical rationalist and a role model for all of us.
It's not just because she made a good decision and didn't get emotionally worked up. She was able to look behind the human level and all its status and blame games, see her husband as a victim of impersonal mental forces (I don't know if she knows evo psych, but she certainly has an intuitive grasp of some of the consequences), and use her understanding to get what she wants. And she does it not in a manipulative or evil kind of way, but out of love and a desire to hold her family together.
This has probably been requested before, and maybe I'm requesting it in the wrong place, but... Dear LW Powers-That-Be, any chance of a Preview facility for comments? It seems like I edit virtually every comment I make, straight after posting, for typos, missing words, etc. I find the input format awkward to proofread.
I recently had an idea that seemed interesting enough to post here: "Shut Up and Multiply!", the video game
The basic idea of this game is that before each level you are told some probabilities, and then when the level starts you need to use these probabilities in real time to achieve the best expected outcome in a given situation.
The first example I thought of is a level where people are drowning, and you need to choose who to save first, or possibly which method to use to try to save them, in order to maximize the total number of people saved.
D...
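A minimal sketch of the expected-value logic such a level might run on (the rescue options, probabilities, and group sizes below are made-up placeholders, not part of the original idea):

```python
# Toy expected-value calculation for a rescue level: each option is a
# (success probability, people saved if successful) pair, and the
# "shut up and multiply" choice is the one with the highest expected saves.

rescue_options = {
    "swim to the nearest group": (0.9, 2),      # high chance, few people
    "row the boat to the far group": (0.5, 6),  # lower chance, more people
    "throw the single life ring": (0.99, 1),
}

def expected_saved(option):
    p_success, group_size = rescue_options[option]
    return p_success * group_size

best = max(rescue_options, key=expected_saved)
for name in rescue_options:
    print(f"{name}: expected {expected_saved(name):.2f} people saved")
print("Best choice:", best)
```

The game's score for a level could then be the gap between the player's real-time choice and this expected-value optimum.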
This (IIRC) imported Overcoming Bias post has mangled text encoding ("shĹnen anime than shĹjo", including a high control character; the structure suggests that this is UTF-8 data reinterpreted as some other encoding, then converted to HTML character references). This suggests that there may be a general problem, in which case all the imported OB posts should be fixed en masse.
This comment doesn't really go anywhere, just some vague thoughts on fun. I've been reading A Theory of Fun For Game Design. It's not very good, but it has some interesting bits (have you noticed that when you jump in different videogames, you stay in the air for the same length of time? Apparently game developers all converged on an air time that feels natural, by trial and error). At one point the author asserts that having to think things through consciously is boring, but learning and using unconscious skills is fun. So a novice chess player gets bor...
The Unpleasant Truth Party Game
I wanted to make this idea a new post, but apparently I need karma for that. So I'll just put it here:
The aim is to come up with sentences that are informative, true and maximally offensive. Each of the participants comes up with a sentence. The other participants rate the sentence for two values, how offensive it is on a scale from 0 (perfectly inoffensive) to 1 (the most unspeakable thing imaginable), and how informative it is from 0 (complete gibberish or an utterly obvious untruth) to 1 (immensely precise and true beyond ...
Imagine you find a magic lamp. You polish it and, as expected, a genie pops out. However, it's a special kind of genie and instead of offering you three wishes it offers to make you an expert in anything, equal to the greatest mind working in that field today, instantly and with no effort on your part. You only get to choose one subject area, with "subject area" defined as anything offered as a degree by a respectable university. Also if you try to trick the genie he'll kick you in the nads*.
So if you could learn anything, what would you learn?
*T...
On the subject of advice to novices, I wanted to share a bit I got out of Understanding Uncertainty. This is going to seem painfully simple to a seasoned Bayesian, but it's not meant for you. Rather, it's intended for someone who has never made a probability estimate before. Even when a person has just learned about the Bayesian view of probability and understands what a probability estimate is, actually translating beliefs into numerical estimates can still seem weird and difficult.
The book's advice is to use the standard balls-in-an-urn model to get an intuit...
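Since the passage above is cut off, here is a rough sketch of the kind of exercise the urn model suggests (this is my reconstruction, not the book's own wording):

```python
import random

# The urn intuition: a belief you hold with probability p should feel like
# betting on drawing a red ball from an urn in which a fraction p of the
# balls are red. Simulating the urn makes the abstract number concrete.

def fraction_red_drawn(p_red, n_draws=10_000):
    """Simulate n_draws draws (with replacement) from an urn with fraction p_red red balls."""
    return sum(random.random() < p_red for _ in range(n_draws)) / n_draws

# If "it will rain tomorrow" feels exactly as likely as drawing red from a
# 70%-red urn, then 0.7 is your probability estimate for rain.
print(fraction_red_drawn(0.7))   # ~0.7, give or take sampling noise
```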
A very common belief here is that most human behaviour is based on Paleolithic genes, and only trivial variations are cultural (memetic), coming from fresh genes, or from some other sources.
But how strong is the evidence for Paleogenes vs memes vs fresh genes (vs everything else)?
Fresh genes are easy to test - different populations would have different levels of such genes, so we could test for that.
An obvious problem with Paleogenes is that there aren't really that many genes to work with. Also, do we know of any genetic variations that alter these behavio...
I'm curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:
But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don't know. It's an open problem. Try not to go funny in the head about it.
Fair enough. But around the same time, Eliezer ...
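(For readers who haven't met the phrase: the "integral over squared moduli" is just the Born rule itself, stated below; the open problem is why nature uses it, not what it says.)

```latex
% The Born rule: the probability density of observing outcome x is the
% squared modulus of the amplitude, normalized over all outcomes.
P(x) \;=\; \frac{|\psi(x)|^{2}}{\int |\psi(x')|^{2}\,\mathrm{d}x'}
```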
Warning: this is pure speculation and might not make any sense at all. :-)
So, let's suppose PCT is by and large an accurate model of human behavior. Behavior, then, is a by-product of the difference between the reference signal and the perception signal. What we experience as doing is generated by first setting some high-level reference signal, which then unfolds as perception signals to control systems at lower levels and so on, until it arrives at the muscular level.
This whole process takes a certain amount of time, especially when the reference signal is modified at a ...
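A minimal sketch of a single PCT-style control loop (the gain, the one-level structure, and the toy environment here are my illustrative assumptions; real PCT models stack many such loops hierarchically):

```python
# One PCT-style control unit: behavior (the output) is driven by the error
# between a reference signal and the current perception of the controlled variable.

def control_step(reference, perception, gain=0.5):
    """Return an output proportional to the reference/perception discrepancy."""
    error = reference - perception
    return gain * error

# Toy environment: the output feeds back into the perceived variable with a
# one-step delay, which is roughly the lag the comment above points at.
perception = 0.0
reference = 10.0
for t in range(10):
    output = control_step(reference, perception)
    perception += output          # the environment responds to the behavior
    print(f"t={t}: perception={perception:.2f}")
```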
What strategies do you people who aren't me have to detect lies? And by 'people who aren't me' I mean verbal people.
In order to understand what people are saying, even to parse sentences, I have to build a bit of a model of personality/motivation. This means I comprehend that one is building oneself up before I can even know what one thinks I should think highly of one for. The structure of dark arts is visible before the contents of the message: repetition of 'facts' in the absence of evidence, comparing someone I don't like and someone one doesn't want me to ...
I would be interested to see a top-level post in which the community agrees on which specific biases are more detrimental than others and, as a result, more important to eliminate.
For example: the community might agree, with a high level of agreement, that eliminating confirmation bias would significantly improve their lives, while not agreeing to the same degree on the necessity of eliminating information bias.
This would help narrow down the more significant biases such that we could focus on tests and games which would help us eliminate these bi...
What happens if you run a mind under fully homomorphic encryption that theoretically could be decrypted but never is, and then throw away the mind's result and the key?
Edit: Homomorphic, not holomorphic. Thanks, Douglas_Knight.
I was just reading the comments on The Strangest Thing an AI Could Tell You and saw a couple of references to the infamous AI Box Experiments. Which caused me to realize that I hadn't seen anything else related to them for months at least.
So I ask: have any more of these games been played? Or have any more details been released about the games known to have occurred?
Is it okay to be completely off-topic in an open thread?
I found something fascinating not too long ago.
A number of days ago I was arguing with AngryParsley about how to value future actions; I thought it was obvious one should maximize the total utility over all people the action affected, while he thought it equally self-evident that one should maximize average utility instead. When I went to look, I couldn't see any posts on LW or OB on this topic.
(I pointed out that this view would favor worlds ruled by a solitary, but happy, dictator over populous messy worlds whose average just happens to work out to be a little less than a dictator's might be; he poin...
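A toy comparison of the two criteria, with made-up numbers, just to make the disagreement concrete:

```python
# Two hypothetical worlds: a tiny one with a single very happy inhabitant,
# and a large, messier one. Total and average utilitarianism rank them oppositely.

dictator_world = [100]            # one person at utility 100
messy_world = [90] * 1000         # a thousand people at utility 90

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

print(total(dictator_world), total(messy_world))      # 100 vs 90000: messy world wins
print(average(dictator_world), average(messy_world))  # 100.0 vs 90.0: dictator world wins
```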
One thing that I've been wondering about (but not enough to turn it into a proper thread) is how to talk about consequentialist morality. Deontologists can use thought experiments, because they're all about rules, and getting rid of unnecessary real world context makes it easier for them.
Consequentialists cannot use tricks like that: when asked whether it's OK to torture someone in a "ticking bomb" scenario, answering that the real world doesn't work like that (because of the possibility of mistakes, the question of how likely torture is to actually work, slippery slopes, the potential abuse of torturing power once granted, etc.) is a perfectly valid reply.
So if we cannot really use thought experiments, how are we supposed to talk about it?
So, I'm reading over Creating Friendly AI, when I come across this:
"Um, full disclosure: I hate moral relativism with a fiery vengeance . . ."
I think, Whaat? Whaaat? Whaaaaat? Is that Eliezer Yudkowsky saying that? Is Eliezer Yudkowsky claiming that moral propositions are in fact properties of the universe itself and not merely of human minds?
The three explanations, any of which I'd like to see, are that Eliezer Yudkowsky isn't using "moral relativism" to mean what I think it means, that Eliezer Yudkowsky no longer believes this, and that...
I have a question for lesswrong readers. Please excuse any awkwardness in phrasing or diction--I am not formally trained in philosophy. What do you consider to be the "self"? Your physical body, your subconscious and conscious processes combined, consciousness, or something else? Also, do you consider your "past selves" and "future selves" to be part of a whole with your "present self," and to what extent? For an example of why the distinction might be important, let's say that one night, you sleepwalk and steal a th...
Due to an unfortunate accident with a particle accelerator, you are transformed into a mid-level deity and the rest of the human race is wiped out. Experimenting with your divinity, you find you have impressive though limited levels of control over the world, but not enough finesse or knowledge to rewire the minds of intelligent creatures or create new ones.
With humanity gone, you discover that the only intelligent race left in the universe is the Pebble Sorters.
Do you use your newfound powers to feed starving Pebblesorters, free their slaves, slay their t...
Is the Blue Brain Project an existential risk? Henry Markram, its leader, claims that a full model of a human brain will be constructed within 10 years. Once that's done it will be relatively simple to educate it. And as the successful completion of the project depends on gathering a lot of information about how the brain works, we can expect that information to be available to the artificial human, which in principle permits controlled self-modification.
Where can I read more about perceptual control theory? I'd like a description of reshuffling more detailed than "stuff gets reshuffled".
A very common belief here is that most human behaviour is based on Paleolithic genes, and only trivial variations are cultural (memetic), coming from fresh genes, or from some other sources.
...
If a preference for large breasts were genetic, surely there would be a family somewhere with some mutation that makes them prefer small breasts instead. Do we have any evidence of that?
Maybe there are also different ideas of what is and isn't a trivial variation.
Here's an interesting article I just found through HN: http://www.dragosroua.com/training-your-focus/
I was just thinking: this site is the result of splitting off from overcomingbias.com earlier this year. With its new format and functionality, comments and posts get ratings. But all of Eliezer Yudkowsky's posts from before the split don't have ratings comparable to more recent posts, because that would require people to go back through the old posts and mod them up.
Some people have done so, but not enough that their ratings accurately compare with more recent top-level posts.
I suggest that everyone take the time to go back to the Eliezer_Yudkowsky top-lev...
I don't know if it is appropriate to even post this, but I didn't find a single thread which talks about the kind of music people in this forum listen to. Has it ever happened that you have used rationality to decide the kind of music you should be listening to? Like all the other things, even listening to music needs "training" (of the ears, in this case). Music is an art form, so can it be quantified? One might get the same satisfaction listening to MJ or Pat Metheny. But if it happens that you have to choose only two records to listen to for the rest of your life, can rationality help there?
Here's our place to discuss Less Wrong topics that have not appeared in recent posts. If something gets a lot of discussion feel free to convert it into an independent post.