I have poked around here on and off but someone recently led me back to the site. After taking the long break, I am ready to jump back into the sequences but have a favor to ask. Is it okay if I post my thoughts on each as I read them in the comments for that post? I don't know what impact that would have on any feeds, threads or whatnot. Things have changed a lot since I was last active.
To be extremely clear, these aren't going to be inherently nice. They won't be inherently un-nice either, but it helps me to process something by responding to it critically. This means challenging it; squeezing it; banging my head against it. I will get a lot of it wrong. My questions may not make sense. I could start mucking up the place with untrained thoughts.
So, being wary of fools, I give you a chance to just nip it all in the bud if you guys have moved on from the Old Posts. I left off somewhere around Fake Morality so... I still have a lot of work ahead of me. If it would help to post a few and then get feedback, that works too.
Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.
A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those that missed them, other articles include Bead Jars and The Lifespan Dilemma.
Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently the answer isn't obvious. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.
Note: This is a description pieced together many, many years after my younger self subconsciously created it. This is part of my explanation of how I ended up me. I highly doubt all of this was as neatly defined as I present it to you here. Just know: The me in this post is me between the age of self-awareness and 17 years old. I am currently 25.
An action-based belief system asks what to do when given a specific scenario. The input is Perceived Reality and the output is an Action. Most of my old belief system was built with such beliefs. A quick example: If the stop light is red, stop before the intersection.
These beliefs form a network of really complicated chains of conditionals (a toy version in code follows the list):
- If the stop light is red
- And you are not stopped
- Stop in the next available space before the intersection
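A minimal sketch of such a chain in Python, just to make the input/output shape concrete (the dictionary keys and rule text are mine, purely illustrative):

```python
def act(perceived_reality):
    """Action-based belief: map Perceived Reality (input) to an Action (output)."""
    if perceived_reality["stop_light"] == "red" and not perceived_reality["stopped"]:
        return "stop in the next available space before the intersection"
    return "carry on"

# A red light while still moving triggers the stop rule.
print(act({"stop_light": "red", "stopped": False}))
```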
Illusions are cool. They make me think something is happening when it isn't. When offered the classic checker-shadow illusion, I wonder at the colors of squares A and B. How weird, bizarre, and incredible.
Today I looked at the illusion and thought, "Why do I keep thinking A and B are different colors? Obviously, something is wrong with how I am thinking about colors." I am being stupid when I look at this illusion and interpret the data in a way that determines distinct colors. My expectations of reality and the information being transmitted and received are not lining up. If they were, the illusion wouldn't be an illusion.
The number 2 is prime; the number 6 is not. What about the number 1? A prime is defined as a natural number with exactly two divisors. 1 is an illusory prime if you use a poor definition such as, "A prime is a number that is only divisible by itself and 1." Building on such bad assumptions can produce all sorts of weird results, much like dividing by 0 can make it look like 2 = 1. What a tricky illusion!
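To make the two definitions concrete, here is a sketch in Python (my illustration; the post only states the definitions in prose):

```python
def divisor_count(n):
    """Count the positive divisors of a natural number n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    """Good definition: a natural number with exactly two divisors."""
    return divisor_count(n) == 2

def looks_prime(n):
    """Poor definition: 'only divisible by itself and 1'. For n = 1,
    'itself' and '1' are the same divisor, so 1 slips through."""
    return n >= 1 and all(n % d != 0 for d in range(2, n))

print(is_prime(1), looks_prime(1))  # False True -- the illusory prime
print(is_prime(2), looks_prime(2))  # True True
print(is_prime(6), looks_prime(6))  # False False
```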
An optical illusion is only bizarre if you are making a bad assumption about how your visual system is supposed to be working. It is a flaw in the Map, not the Territory. I should stop thinking that the visual system is reporting RGB-style colors. It isn't. And, now that I know this, I am suddenly curious about what it is reporting. I have dropped a bad belief and am looking for a replacement. In this case, my visual system is tracking something else entirely. Now that I have the right answer, this optical illusion should become as uninteresting as questioning whether 1 is prime. It should stop being weird, bizarre, and incredible. It merely highlights an obvious reality.
A fun game you can play on LessWrong is to stop just as you are about to click "comment" and make a prediction for how much karma your comment will receive within the next week. This will provide some quick feedback about how well your karma predictors are working. This exercise will let you know if something is broken. A simpler version is to pick from these three distinct outcomes: Positive karma, 0 karma, negative karma.
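If you want to keep score, the bookkeeping is trivial. A minimal sketch in Python, with made-up records (LessWrong offers no such feed; you would log these by hand):

```python
# Each record: (predicted outcome, actual karma one week later),
# using the three outcomes above: positive, zero, negative.
predictions = [
    ("positive", 7),
    ("zero", 0),
    ("positive", -2),
    ("negative", -1),
]

def outcome(karma):
    return "positive" if karma > 0 else "negative" if karma < 0 else "zero"

hits = sum(1 for predicted, actual in predictions if predicted == outcome(actual))
print(f"{hits}/{len(predictions)} predictions correct")  # 3/4 on this toy data
```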
What other predictors are this easy to test? Likely candidates match one or more of the following criteria:
- Something we do on a regular (probably daily) basis
- An action that has a clear starting point
- Produces quick, quantifiable feedback (e.g. karma, which is a basic number)
- An action that is extremely malleable so we can take our feedback, make quick adjustments, and run through the whole process again
- An ulterior goal other than merely testing our predictors so we don't get bored (e.g. commenting at LessWrong, which offers communication and learning as ulterior goals)
- Something with a "sticky" history so we can get a good glimpse of our progress over time
I have a terrifying confession to make: I believe in God.
This post has three prongs:
First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, "If someone holds a belief that is irrational, should they be fully ousted from the community?" I can see a handful of answers to this question and a few of them are discussed below.
Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the rationality of a particular irrational person is not completely settled once their irrationality has been exposed. They may be underneath the sanity waterline, but there are multiple levels of rationality hell, and some are deeper than others. This part discusses one way to view irrational people in a manner that encourages growth.
Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?
Alice must answer the multiple-choice question, "What color is the ball?" The two choices are "Red" and "Blue." Alice has no relevant memories of The Ball other than knowing that it exists. She cannot see The Ball or interact with it in any way; she cannot do anything but think until she answers the question.
In an independent scenario, Bob has the same question but Bob has two memories of The Ball. In one of the memories, The Ball is red. In the other memory, The Ball is blue. There are no "timestamps" associated with the memories and no way of determining if one came before the other. Bob just has two memories and he, somehow, knows the memories are of the same ball.
If you were Alice, what would you do?
If you were Bob, what would you do?
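For what it is worth, one way to formalize the symmetry (my reading; the post leaves the question open): neither agent has evidence favoring a color. Alice has nothing at all, and Bob's two memories are exactly balanced, so the principle of indifference hands both of them the same assignment:

```latex
\[
P(\text{Red}) = P(\text{Blue}) = \tfrac{1}{2}
\]
```

The interesting residue is whether Bob's conflicting memories should move him at all, or whether they cancel down to Alice's blank slate.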
While we are all here together to crunch on problems, let's aim higher than thinking up solutions and then hunting for problems to match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy problems. "How do I become less wrong?" is not a problem that can be clearly defined. As such, it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
Playing to learn
I like losing. I don't even think that losing is necessarily evil. Personally, I believe this has less to do with a desire to lose and more to do with curiosity about the game-space.
Technically, my goals are probably shifted into some form of meta-winning — I like to understand winning or non-winning moves, strategies, and tactics. Actually winning is icing on the cake. The cake is learning as much as I can about whatever subject in which I am competing. I can do that if I win; I can do that if I lose.
I still prefer winning and I want to win and I play to win, but I also like losing. When I dive into a competition I will like the outcome. No matter what happens I will be happy because I will either (a) win or (b) lose and satiate my curiosity. Of course, learning is also possible while watching someone else lose and this generally makes winning more valuable than losing (I can watch them lose). It also provides a solid reason to watch and study other people play (or play myself and watch me "lose").
The catch is that the valuable knowledge contained within winning has diminishing returns. When I fight I either (a) win or (b) lose and, as a completely separate event, (c) may have an interesting match to study. Ideally I get (a) and (c), but the odds of (c) drop the more I dominate, because my opponents keep losing in a known fashion (to an "old" method of winning). (c) should always be found next to (b): if there is a reason I lost, I should learn it, since had I already known it, I should not have lost. Because of this, (c) offsets the negative of (b), and losing is valuable. This makes both winning and losing worth the effort. When I lose, I win.
Personally, I find (c) so valuable that I start getting bored when I no longer see anything to learn. If I keep winning over and over and never learn anything from the contest, I have to find someone stronger to play or start losing creatively so that I can start learning again. Both of these solutions set up scenarios where I am increasing my chances of losing. Mathematically, this starts to make sense when the value of the knowledge gained, minus the penalty of losing, is greater than the value of a win that teaches nothing (c - b > a). My hunches tell me that I value winning too little and curiosity is starting to curb my desire to win. I am not playing to win; I am playing to learn.
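Spelling out that inequality (the labels are my reading; the post leaves them implicit): let a be the value of a win that teaches nothing, b the penalty of a loss, and c the value of the knowledge a loss yields. Courting losses is then worth it when

```latex
\[
\underbrace{c - b}_{\text{lose, but learn}} \;>\; \underbrace{a}_{\text{win, learn nothing}}
\]
```

which is exactly the condition under which seeking stronger opponents beats farming easy wins.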