xkcd's Up-Goer Five comic gave technical specifications for the Saturn V rocket using only the 1,000 most common words in the English language.

This seemed to me and Briénne to be a really fun exercise, both for tabooing one's words and for communicating difficult concepts to laypeople. So why not make a game out of it? Pick any tough, important, or interesting argument or idea, and use this text editor to try to describe what you have in mind with extremely common words only.

This is challenging, so if you almost succeed and want to share your results, you can mark words where you had to cheat in *italics*. Bonus points if your explanation is actually useful for gaining a deeper understanding of the idea, or for teaching it, in the spirit of Gödel's Second Incompleteness Theorem Explained in Words of One Syllable.

As an example, here's my attempt to capture the five theses using only top-thousand words:

  • Intelligence explosion: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.
  • Orthogonality: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won't think like humans, and most possible computers won't try to change the world in the way a human would.
  • Convergent instrumental goals: Although most possible minds want different things, they need a lot of the same things to get what they want. A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.
  • Complexity of value: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don't want it to).
  • Fragility of value: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won't have any fun.

If you make a really strong computer and it is not very nice, you will not go to space today.

Other ideas to start with: agent, akrasia, Bayes' theorem, Bayesianism, CFAR, cognitive bias, consequentialism, deontology, effective altruism, Everett-style ('Many Worlds') interpretations of quantum mechanics, entropy, evolution, the Great Reductionist Thesis, halting problem, humanism, law of nature, LessWrong, logic, mathematics, the measurement problem, MIRI, Newcomb's problem, Newton's laws of motion, optimization, Pascal's wager, philosophy, preference, proof, rationality, religion, science, Shannon information, signaling, the simulation argument, singularity, sociopathy, the supernatural, superposition, time, timeless decision theory, transfinite numbers, Turing machine, utilitarianism, validity and soundness, virtue ethics, VNM-utility


Prisoner's Dilemma

You and I play a game.
We each want to get a big score.
But we do not care if the other person gets a big score.

On each turn, we can each choose to Give or Take.
If I Give, you get three points. If I Take, I get one point.
If you Give, I get three points. If you Take, you get one point.

If we both Give, we both will have three points.
If we both Take, we both will have one point.
If you Give and I Take, then I will have four points and you will have no points.
If you Take and I Give, then I will have no points and you will have four points.

We would both like it if the other person would Give, because then we get more points.
But we would both like to Take, because then we get more points.

I would like it better if we both Give than if we both Take.
But if I think you will Give, then I would like to Take so that I can get more points.
It is worst for me if I Give and you Take.
And you think just the same thing I do — except with "you" and "me" switched.

If we play over and over again, then each of us can think about what the other has done before.
We can choose whether to Give or Take by thinking about what the other person has done before.
If you always Give...
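If it helps to see the numbers laid out, here is a minimal Python sketch of the game as described above, including the played-over-and-over version where each player can look at what the other did before. The strategy names are made up for illustration:

```python
# Payoffs as stated above: Give hands the *other* player three points;
# Take hands *yourself* one point.
PAYOFF = {  # (my move, your move) -> (my points, your points)
    ("Give", "Give"): (3, 3),
    ("Give", "Take"): (0, 4),
    ("Take", "Give"): (4, 0),
    ("Take", "Take"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Give on the first turn, then copy the other player's last move."""
    return their_history[-1] if their_history else "Give"

def always_take(my_history, their_history):
    return "Take"

def play(strategy_a, strategy_b, turns=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(turns):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        points_a, points_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + points_a, score_b + points_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))   # (30, 30): both Give every turn
print(play(always_take, always_take))   # (10, 10): both Take every turn
print(play(tit_for_tat, always_take))   # (9, 13): Taking wins head-to-head
```

Note the pattern in the printed scores: against any fixed opponent, Taking scores more, yet a pair of Give-leaning players ends up far ahead of a pair of Takers.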

It would help if I knew that you and I think exactly the same way.

If this is true, then when I decide to Give, I know you will Give too.

Pentashagon
Also, if we just know how each other thinks (we don't have to think the same) and I can show for sure that you will Give to me if I Give to you and that you will Take from me if I Take from you, then I will Give to you.

Effective Altruism

"Doing Good in the Most Helping Way"- It is good to try to help people. It is better to help people in the best way possible. You should look at what actually happens when you try to help people in order to find out how well your helping worked. If we look at lots of different ways of helping people, then we can find out which way is best. You should give your money to the people who are best at helping people.

Where we live, and in places like it, everyone has lots more money than most people who live in other places. That means we have lots that we can give away to the people in the other places. It might be a good idea to try to make lots of money so that you can give away even more!

[anonymous]
Hi, I'm new to LessWrong and haven't read the morality sequence and haven't read many arguments for effective altruism, so could you elaborate on this sentiment?

I agree with this kind of movement because intuitively it feels really good to help people and it feels really bad to know that people or animals are suffering. I think it's quite certain that there are other minds similar to mine, and that these minds are capable of the same kinds of feelings that I am. I wouldn't want other people to feel the same kind of bad feelings that I have sometimes felt, but I know there are minds who experience more than a million times the worst pain I've ever felt.

Still, there are some people, who think rationality is about always thinking about only one's own well-being, who might disagree with this. They might say that the well-being of other minds doesn't affect your mind directly, so if you don't know about it, it's irrelevant to you. Some of these people may also try to minimize the effect of natural empathy by acknowledging that the being who is suffering is different from you. They could be your enemies, or someone who is not "worth" your efforts. It's easier to cope with the fact that an animal belonging to a different species is suffering than someone in your family. Or consider people suffering on the other side of the world who have a different skin color, who behave strangely, and who sometimes have violent and "primitive" habits (note, this is not what I think, but what I've heard other people say... they basically think some people are a bit like the baby-eating aliens) - is their suffering worth less? Intuitively it feels that way, because they don't belong to your tribe. Anyway, these minds are still capable of the same kind of suffering.

The question still stands, if someone is "rationally" interested in one's own well-being only, and if someone only cares about other minds to the extent of how they affect your own mind through the natural empathy ref
Viliam_Bur
Hi, welcome to LW! I will reply to your comments here in one place, instead of each of them separately.

No. It is okay to ask, and it is also okay to disagree. Just choose a proper place. For example, this is an article about "explaining hard ideas with simple words", therefore this discussion being here is quickly getting off-topic. You are not speaking about explaining effective altruism using simple words, but about misconceptions people have about rationality and altruism. That's a different topic; and now the whole comment tree is unrelated to the original article. Don't worry, it happens. Just know that the proper place to ask questions like this is usually the latest Open Thread, and sometimes there is a special thread (like the "stupid" questions) for that. (I would say this is actually the website's fault, for not making the Open Thread more visible.)

Then of course such a person will act rationally by caring only about their own well-being, and considering others only to the degree they influence this specific goal. For example, a rational sociopath. -- Sometimes we speak about paperclip maximizers, to make it more obvious (and less related to specific details of sociopathy or whatever). For a paperclip maximizer, it is rational to maximize the number of paperclips, and to care about human suffering only as much as it can influence the number of paperclips. So for example, if people would react to their suffering by destroying paperclips, or if they would respond to the paperclip maximizer's help by building many new paperclips out of gratitude, then the paperclip maximizer could help them. The paperclip maximizer could even pretend it cares about human suffering, if that helps to maximize the number of paperclips in the future. -- But we are not trying here to sell effective altruism to paperclip maximizers, nor to sociopaths. Only to people who (a) care about suffering of others, and (b) want to be reflectively consistent (want to care about what they would
daenerys
How I read this: "Hi! I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists) that has already been written that answers my question, can you write a response that explains the whole of morality?"

To start off with, you seem to be using the term "rationality" to mean something completely different than what we mean when we say it. I recommend Julia Galef's Straw Vulcan talk.
[anonymous]
You slightly misunderstood what I meant, but maybe that's understandable. I'm not a native English speaker and I'm quite poor at expressing myself even in my native language. You don't have to be so condescending, I was just being curious. Do you usually expect people to read all the sequences before they can ask questions? If so, I apologize, because I didn't know this rule. I can come back here after a few months when I've read all the sequences.

Okay, sorry. I just wanted to be honest. I have read most of the sequences listed on the sequences page. The morality sequence is quite big, and reading it seems a daunting task because I have books related to my degree that I'm supposed to be reading, and they are of bigger importance to me at the moment. I thought there could be a quick answer to this question. But if you have any specific blog posts related to this issue in mind, please link them!

I'm aware of that. With quotation marks around the word I was signaling that I don't really think it's real rationality, or the same kind of rationality LessWrong people use. I know that rationalist people don't think that way. It's just that in some economic texts people use the word "rationality" to mean that: a "rational" agent is only interested in his own well-being.

I have read relevant blog posts on LessWrong and I think I know this concept. People think rational people are supposed to be some kind of emotional robots who don't have any feelings, and otherwise think like a modern-day computer, very mechanically, not being very flexible in their thinking, etc. In reality people can use instrumental rationality to achieve the emotionally desired goals they have, or use epistemic rationality to find out what their emotionally desired goals really are?
Rob Bensinger
Keep in mind that this "rationality" is just a word. Making up a word shouldn't, on its own, be enough to show that something is good or bad. If self-interest is more "rational" than helping others, then you should be able to give good reasons for that with other words that are more clear and simple. People get very confused when they start thinking that what they actually want matters less than some piece of paper saying what they Should or Shouldn't want. Even if some made-up idea says you Shouldn't want to help others except to make yourself happy, why should that matter more to me than what I actually want, which is just to help people? This is a lot like Mr. Yudkowsky's "being sad about having to think and decide well".
[anonymous]
Btw, that link is really good and it made me think a bit differently. I've sometimes envied others for their choices and thought I'm supposed to behave in a certain way that is opposite to that... but actually what matters is what I want and how I can achieve my desires, not how I'm supposed to act.
Rob Bensinger
Right! "I should..." is a means for actually making the world a better place. Don't let it hide away in its own world; make it face up to the concerns and wishes you really have.
[anonymous]
I think the gist is that we all live inside our own bubbles of consciousness and can only observe indirectly what is inside other people's bubbles. Everything that motivates you or makes you do anything is inside that bubble. If you expand this kind of thinking, it's not really important what is inside those other bubbles, only how they affect you. But this is kinda contrived philosophy.
Jayson_Virissimo
Which texts are you referring to? I have about a dozen and none of them define rationality in this way.
[anonymous]
Okay. I was wrong. It seems I don't know enough and I should stop posting here.
Rob Bensinger
I think the problem might be confusing connotation and denotation. 'Rational self-interest' is a term because most rationality isn't self-interested, and most self-interest isn't rational. But when words congeal into a phrase like that, sometimes they can seem to be interchangeable. And it doesn't help that Ayn Rand romanticism psychodarwinism Hollywood.
[anonymous]
Yep, the Ayn Rand type of literature is what originally brought this to my mind. I also read a book about economic sociology which discussed the prisoner's dilemma and said the most "rational" choice is to always betray your partner (if you only play once), and that Nash was surprised when people didn't behave this way.
Rob Bensinger
That's a roughly high-school-level misunderstanding of what the Prisoner's Dilemma means, though I suppose it makes sense to be surprised that humans care about each other if you'd never met a human, and it did make sense to be confused by why humans care about each other until we recognized that (uncertainly) iterated dilemmas and kin selection were involved. I believe a great many people on LessWrong also reject the economic consensus on this issue, however; they think that two rational agents can cooperate in something like a classical PD, provided only that they have information about one another's (super)rationality. See True Prisoner's Dilemma and Decision Theory FAQ. In the real world, most human interactions are not Prisoner's Dilemmas, because in most cases people prefer something that sounds like '(Cooperate, Cooperate)' to '(Cooperate, Defect)', whereas in the PD the latter must have a higher payoff.
[anonymous]
This is what was said: "It (game theory) assumes actors are more rational than they often are in reality. Even Nash faced this problem when some economists found that real subjects responded differently from Nash's prediction: they followed rules of fairness, not cold, personal calculation (Nassar 1998: 199)" Yeah, I remember reading that some slightly generous version of tit-for-tat is the most useful tactic in the prisoner's dilemma, at least if you're playing several rounds.
Jayson_Virissimo
The reason I ask is because I have heard this claim many times, but have never encountered an actual textbook that taught it, so I'm not sure if it has any basis in reality or is just a straw man (perhaps, designed to discredit economics, or merely an honest misunderstanding of the optimization principle).
Nisan
Welcome to Less Wrong! Your comment would be more appropriate in the welcome thread. Also: Uh oh! You have used non-permitted words (lesswrong, morality, sequence, arguments, effective, altruism, elaborate, sentiment)
[anonymous]
I have already posted in there. Do you mean I should only post there until I mature enough that I can post here?
Nisan
Oh, ok. The open threads are a good place to ask questions. If you aren't satisfied with the response you get there, you can try here.
mare-of-night
I would say that "doing good in the most helping way" only matters if you want things to be good for other people (or animals). A person who thinks well might want things to be good for other people, or want things to be good for themselves, or want there to be lots of things to hold paper together - to think well means to do things the best way to get what they want, but not to want any one thing.

Knowing whether you want things to be good for other people, or just want things to be good for yourself but feel sad when things are bad for other people, is sort of like a different thing people think about here. Sometimes we think about if we should want a thing that makes us think we have what we want, even though we are really just sitting around with the thing on our heads. If I want to think that things are good for other people (because it will make me happy and the biggest thing I want is to be happy), then I can get what I want by changing what I think. But if what I want is for things to be good for other people (even if it does not make me happy), then the only way I can get what I want is to make things better for other people (and so I want to do good in the most helping way).

I should say, I think a little different about good from most people here. Most people here think that you can want something, but also think that it is bad. I think that if you think you want something that is bad, you are probably confused about what you want, and you would stop wanting the bad thing if you thought about it enough and felt how bad it was. I am not totally sure that I am right about this, though. (See also: good about good)
0[anonymous]
I don't think it's really possible to argue against this idea. If you're only interested in your own well-being, then doing things which do not increase your own well-being will not help you achieve your goal.
Rob Bensinger
But how happy or sad other minds are does change how happy or sad I am. Why would it be looking out for myself better if I ignored something that changes my life in a big way? And why should I pretend to only care about myself if I really do care about others? Or pretend to only care about how others cause changes in me, when I do in fact care about the well-being of people who don't change me?

Suppose I said to you that it's bad to care about the person you're going to be. After all, you aren't that person now. That person's thoughts and concerns are outside of the present you. And that person can't change anything for the present you.

That wouldn't be a very good reason to ignore the person I'll become. After all, I do want the person I'm going to be to be happy. I don't need to give reasons showing why I should care about myself over time. I just need to note that I do in fact care about myself over time. How is this different, in any important way that changes the reasoning above, from noting that I do in fact care about other people in their own right?

If people only cared about other people as ways to get warm good feels for themselves, then people would be happy to change themselves to get warm good feels both when others are happy and when others are sad. People also wouldn't care about people too far away to cause changes for them. But if I send a space car full of people far away from me, I still want them to be happy even after they're too far away to ever change anything for me again. That's a fact about how I am. Why should I try to change that?
[anonymous]
I guess that makes sense. When people say things like "I want a lot of money", "I want to live in a fulfilling relationship", "I want to climb Mt. Everest", the essential quality of these desires is that they are real and actually happen roughly the same way you picture them in your mind. No one says things like "I want to have the good feeling of living in a fulfilling relationship whether or not I actually live in one"... no. Because it's important that they're actually real. You can say the same thing about helping others - if you don't want other people to suffer, then it's important that they actually don't suffer.

It's a bit different. You will eventually become the person you are in the future, but it's impossible to ever get inside the mind of someone else, at least not directly.

How would you actually change yourself? It's very difficult in practice.

But people don't care about far away people so much as they care about people that are similar to them. When westerners get in trouble in developing countries, people make a big effort to get them to safety and mostly ignore all the suffering that is going on around them. People send less money to people in developing countries than to, say, war veterans or people at home.

You shouldn't. I'm the same way, I try to help people for the sake of helping them. But there are some people who are only interested in their own well-being, and I'm just thinking how I could argue with them.
Rob Bensinger
Yes! I think that's a lot like what I was talking about.

Present-you won't. Present-you will go away and never know that it happened. You-over-time may change from present-you to you-to-come, but I wasn't talking about you-over-time. Also, mind reading could change this some day, maybe.

Yes, but even if it weren't possible at all, and we thought it were possible, whether we wished for it could say a lot about what we really want.

Yes, but that's very different from saying that people don't care about far away people at all, except insofar as they get changed by them. If it were completely easy for you to make, in a flash and for free, the lives of everyone you'll never know about ten times as good, you would want to do that.

There is a beautifully executed talk by Guy Steele where he only uses one-syllable words or words explicitly defined in the talk: Growing a Language.

Pablo
Brilliant. Here's a transcript.
Rob Bensinger
This is great! Does anyone have a version that isn't all choppy?

I love this. Small words help me understand huge things.

I think this is good. I am happy about this.

Pascal's wager: If you don't do what God says, you will go to Hell where you will be in a lot of pain until the end of time. Now, maybe God is not real, but can you really take that chance? Doing what God says isn't even that much work.

Pascal's mugging: I tell you "if you don't do what I say, something very bad will happen to you." Very bad things are probably lies, but you can't be sure. And when they get a lot worse, they only sound a little bit more like lies. So whatever I asked you to do, I can always make up a story so bad that it's safer to give in.
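That "they only sound a little bit more like lies" step is the whole trick, and you can see it in toy numbers. A sketch with made-up figures, assuming each escalation makes the threat ten times less believable but a thousand times worse:

```python
# Illustrative numbers only: belief shrinks 10x per escalation,
# while the claimed harm grows 1000x per escalation.
for step in range(1, 6):
    claimed_harm = 1000 ** step          # how bad the mugger says it will be
    believability = 0.01 * 0.1 ** step   # how likely the story seems
    print(step, claimed_harm * believability)  # expected harm grows 100x per step
```

Unless your disbelief grows at least as fast as the threatened harm, the mugger can always name a number big enough to win the expected-value calculation.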

RRand
(With slightly more fidelity to Mr. Pascal's formulation:) You have nothing to lose. You have much to get. God can give you a lot. There might be no God. But a chance to get something is better than no chance at all. So go for it.
Viliam_Bur
Nitpicking: You have little to lose. (For example, you waste your Sunday mornings. That's not nothing.)
DSimon

Mr. Turing's Computer

Computers in the past could only do one kind of thing at a time. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.

To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!

So a man named Mr. Babbage thought: what if some of the numbers you gave the computer were what told it what to do? That way you could have just one computer, and you could quickly make it be a number-adding computer, or a smallest-number-finding computer, or any kind of computer you wanted, just by giving it different numbers. But although Mr. Babbage and his friend Ms. Lovelace tried very hard to make a computer like that, they could not do it.

But later a man named Mr. Turing thought up a way to make that computer. He imagined a long piece of paper with numbers written on it, and imagined a computer moving left and right along that paper and reading the numbers on it, and sometimes changing the numbers. This computer coul...

The Halting Problem (Part One)

A plan is a list of things to do.
When a computer runs, it is doing the things that are written in a plan.
When you solve a problem like 23 × 3, you are also following a plan.

Plans are made of steps.
To follow a plan, you do what each plan step says to do, in the order they are written.
But sometimes a step can tell you to move to a different step in the plan, instead of the next one.
And sometimes it can tell you to do different things if you see something different.
It can say "Go back to step 4" ... or "If the water is not hot yet, wait two minutes, then go back to step 3."

Here is a plan:

  1. Walk to the store.
  2. Buy a food.
  3. Come back home.
  4. You're done!

Here is another plan:

  1. Walk to the store.
  2. Buy a food.
  3. Come back home.
  4. Go back to step 1.

There is something funny about the second plan!
If we started following that plan, we would never stop.
We would just keep walking to the store, buying a food, and walking back home.
Forever.
(Or until we decide it is a dumb plan and we should stop following it.
But a computer couldn't do that.)

You may have heard songs like "The Song That Never Ends" or "Ninety-Nine Bottles of Drinks on the W...

The Halting Problem (Part Two)

Can we have plans for thinking about other plans? Yes, we can!

Suppose that we found a plan, and we did not know what kind of plan it is.
Maybe it is a plan for how to make a food.
Or maybe it is a plan for how to go by car to another city.
Or maybe it is a plan for how to build a house. We don't know.
Can we have a plan for finding out?

Yes! Here is a plan for telling what kind of plan it is:

  1. Get paper and a writing stick.
  2. Start at the first step of the plan we are reading.
  3. Read that step.
    (Do not do the things that step says to do!
    You are only reading that plan, not following it.
    You are following this plan.)
  4. Write down all of the names of real things (like food, roads, or wood) that the step uses.
    Do not write down anything that is not a name of a real thing.
    Do not write down numbers, action words, colors, or other words like those.
  5. Are there more steps in the plan we are reading?
    If so, go to the next step of the plan we are reading, and go back to step 3 of this plan.
    If not, go on to step 6 of this plan.
  6. Look at the paper that we wrote things down on.
  7. If most of the things on the paper are food and kitchen things, say that the plan is a plan f...
fubarobfusco
The Halting Problem (Part Three)

Let's imagine that we have a plan for reading other plans and saying if they will end. Our imaginary plan is called E, for Ending. We want to know if a plan like E is possible. We do not know what the steps of plan E are. All we know is that we are imagining that we can follow plan E to read another plan and say whether it will end or not. (We need a name for this other plan. We'll call it X.)

But wait! We know there are plans that sometimes end, and sometimes go on forever. Here is one — Plan Z:

  1. Start with a number.
  2. If our number is zero, stop.
  3. Make our number smaller by 1.
  4. Go to step 2.

Plan Z will always stop if the number we start with is bigger than zero and is a whole number. But if our number is one-half (or something else not whole) then Z will never end. That is because our number will go right past zero without ever being zero.

Plan Z is not really whole by itself. It needs something else that we give it: the number in step 1. We can think of this number as "food" for the plan. The "food" is something Z needs in order to go, or even to make sense. Some food is good for you, and some is bad for you ... and whether Z ends or not depends on what number we feed it. Plan Z ends if we feed it the number 1 or 42, but not if we feed it the number one-half.

And so when we ask "Will plan X end?" we really should ask "Will plan X end, if we feed F to plan X?" So in order to follow plan E, we need to know two things: a plan X, and a something called F. (What kind of something? Whatever kind X wants. If X wants a number, then F is a number. If X wants a cookie, then F is a cookie. If X wants a plan to read, then F is a plan.) Following E will then tell us if X-fed-with-F will end or run forever.

Now here is another plan — Plan G:

  1. Start with a plan. Call it plan X.
  2. Follow plan E to read plan X, with plan X as food. This will tell us if plan X will end or not.
  3. Now, if E told us that X never
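For readers who know a little programming, here is the same Plan E / Plan G argument as a Python sketch. The function names here are made up, and halts() is only a stub standing in for the imaginary plan E:

```python
def halts(plan, food):
    """The imaginary plan E: answer True if plan(food) would end,
    False if it would run forever. The proof shows no such plan can
    be written, so a stub is all we can ever have."""
    raise NotImplementedError("plan E cannot exist")

def g(plan):
    """Plan G: take a plan, feed the plan to itself, then do the
    opposite of whatever plan E predicts."""
    if halts(plan, plan):
        while True:      # E said "it ends", so G runs forever
            pass
    else:
        return           # E said "it runs forever", so G ends at once

# Now feed plan G to itself: whatever halts(g, g) answers, g(g) does the
# opposite, so the answer is wrong either way. That contradiction is the
# whole proof that plan E is impossible.
```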
DSimon
So what is Mr. Turing's computer like? It has these parts:

  1. The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
  2. The head, which reads from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the whole head has to move up or down to that new place first. Also, it can only move one space at a time.
  3. The memory. Our computers today have lots of memory, but Mr. Turing's computer has only enough memory for one thing at a time. The thing being remembered is the "state" of the computer, like a "state of mind".
  4. The table, which is a plan that tells the computer what to do when it is in each state. There are only so many different states that the computer might be in, and we have to put them all in the table before we run the computer, along with the next steps the computer should take when it reads different numbers in each state.

Looking closer, each line in the table has five parts, which are:

  • If Our State Is this
  • And The Number Under Head Is this
  • Then Our Next State Will Be this (or maybe the computer just stops here)
  • And The Head Should write this
  • And Then The Head Should move this way

Here's a simple table:

  State | Read | Next State | Write | Move
  Happy | 1    | Happy      | 1     | Right
  Happy | 2    | Happy      | 1     | Right
  Happy | 3    | Sad        | 3     | Right
  Sad   | 1    | Sad        | 2     | Right
  Sad   | 2    | Sad        | 2     | Right
  Sad   | 3    | Stop       |       |

Okay, so let's say that we have one of Mr. Turing's computers built with that table. It starts out in the Happy state, and its head is on the first number of a paper like this:

  1 2 1 1 2 1 3 1 2 1 2 2 1 1 2 3

What will the paper look like after the computer is done? Try pretending you are the comput
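Here is a minimal Python sketch of a machine that follows the Happy/Sad table above. Running it spoils the puzzle, so try the paper version first:

```python
TABLE = {  # (state, number read) -> (next state, number to write, head move)
    ("Happy", 1): ("Happy", 1, +1),
    ("Happy", 2): ("Happy", 1, +1),
    ("Happy", 3): ("Sad",   3, +1),
    ("Sad",   1): ("Sad",   2, +1),
    ("Sad",   2): ("Sad",   2, +1),
    ("Sad",   3): None,  # Stop
}

def run(tape, state="Happy", head=0):
    tape = list(tape)
    while 0 <= head < len(tape):
        rule = TABLE[(state, tape[head])]
        if rule is None:          # the table says Stop
            break
        state, write, move = rule
        tape[head] = write        # the head writes...
        head += move              # ...and then moves
    return tape

print(run([1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 2, 2, 1, 1, 2, 3]))
```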
[anonymous]
I'm actually surprised that Turing machines were invented before anyone ever built an actual computer.
ikrase
What about Babbage's Analytical Engine, which would have been Turing-complete had it been constructed? Although it took a Countess to figure out that programming could be a thing...
bogdanb
I see your point (I sometimes get the same feeling), but if you think about it, it’d be much more astonishing if someone built a universal computer before having the idea of a universal computer. It’s not really common to build something much more complex than a hand ax by accident. Natural phenomena are often discovered like that, but machines are usually imagined a long time before we can actually build them.
[anonymous]
Yeah, that's a good point. Turing must have been one of the first people to realize that there's a "maximum amount of flexibility" a computer can have, so to speak, where it's so flexible it can do anything that any computer can.

I spent the better part of November writing miniature essays in this. It's really quite addictive. My favourites:

  • Parallax and cepheid variables (Dead stars that flash in space)

  • Basic linear algebra (four-sided boxes of numbers that eat each other)

  • The Gold Standard (Should a bit of money be the same as a bit of sun-colored stuff that comes out of the ground?)

  • The Central Limit Theorem (The Middle Thing-It-Goes-To Idea-You-Can-Show-Is-True-With-Numbers - when you take lots of Middle Numbers of lots of groups, it looks like the Normal Line!)

  • Complex numbers ("I have just found out I can use the word 'set'. This makes me very happy.")

  • Utility, utilitarianism and the problems with interpersonal utility comparison ("If you can't put all your wants into this order, you have Not-Ordered Wants")

  • The triumvirate brain hypothesis ("when you lie down on the Mind Doctor's couch, you are lying down next to a horse, and a green water animal with a big smile")

  • Arrow's Impossibility Theorem ("If every person making their mark on a piece of paper wants the Cat Party more than the Dog Party, then the Dog Party can't come out higher in the order than the ...
twanvl

The Central Limit Theorem (The Middle Thing-It-Goes-To Idea-You-Can-Show-Is-True-With-Numbers - when you take lots of Middle Numbers of lots of groups, it looks like the Normal Line!)

Does it really simplify things if you replace "limit" with "thing-it-goes-to" and "theorem" with "idea-you-can-show-is-true-with-numbers"? IMO this is a big problem with the up-goer five style text: you can still try to use complex concepts by combining words. And because you have to describe the concept with inadequate words, it actually becomes harder to understand what you really mean.

There are two purposes of writing simple English:

  • writing for children
  • writing for non-native speakers

In both cases is "sun-colored stuff that comes out of the ground" really the way you would explain it? I would sooner say something like: "yellow is the color of the sun, it looks like . People like shiny yellow metal called gold, because there is little of it".

I suppose the actual reason we are doing this is

  • artificially constrained writing is fun.

If your boyfriend or girlfriend has a different meaning for 'box' than you do, and you give them a line, not only will they be cross with you, but you will be wrong, and that is almost as bad

"give them a line" and "be cross with you" are expressions that make no sense with the literal interpretation of these words.

Using the most common 1,000 words is not really about simplifying or clarifying things. It's about imposing an arbitrary restriction on something you think you're familiar with, and seeing how you cope with it.

There are merits to doing this beyond "it's fun". When all your technical vernacular is removed, you can't hide behind terms you don't completely understand.

Sabiola
In fact, I'm not sure what "give them a line" means. Give them a line like this ------------- instead of a box? From context, it could also mean 'just make something up'. (English is not my first language, in case you couldn't tell.) **googles** Yes, it turns out that "give someone a line" can mean "to lead someone on; to deceive someone with false talk" (or "send a person a brief note or letter", but that doesn't make sense in this context). Still can't tell which type of line is meant.
sixes_and_sevens
I was quoting a single sentence of my mini-essay. "Give them a line" probably doesn't make much sense out of context. The original context was that a line segment is a degenerate case of a rectangle (one with zero width). You can absolutely say a line segment is a rectangle (albeit a degenerate case of one). However, if your partner really wanted a rectangle for their birthday, and you got them a line segment, they may very well be super-pissed with you, even if you're using the same definition of "line segment" and "rectangle". If you're not using the same definition, or even if you're simply unsure whether you're using the same definition, then when you get your rectangle-wanting partner a line segment for their birthday, not only would they be pissed with you, but you may also be factually incorrect in your assertion that the line segment is a rectangle for all salient purposes.
fubarobfusco
There are also several meanings of "box", such as:

  • a package (as might be used to hold a gift)
  • to punch each other for sport (as in boxing)
  • a computer (in hobbyist or hacker usage)
  • a quadrilateral shape (as in the game Dots and Boxes)

... and the various Urban Dictionary senses, too. (Heck, if one of my partners talked about getting a box, it might mean a booster box of Magic cards.)
satt
I don't know why that one caught my eye, but here I go.

You've probably seen the number line before, a straight line from left to right (or right to left, if you like) with a point on the line for every real number. A real number, before you ask, is just that: real. You can see it in the world. If I point to a finger on my hand and ask, "how many of these do I have?", the answer is a real number. So is the answer to "how tall am I?", and the answer to "how much money do I have?" The answer to that last question, notice, might be less than nothing, but it would still be real for all that.

Alright, what if you have a number line on a piece of paper, and then turn the paper around by half of half a turn, so the number line has to run from top to bottom instead of left to right? It's still a number line, of course. But now you can draw another number line from left to right, so that it will go over the first line. Then you have not one line but two.

What if you next put a point on the paper? Because there is not one number line but two, the point can mean not just one real number but two. You can read off the first from the left-to-right line, and then a second from the top-to-bottom line. And here is a funny thing: since you still have just one point on the paper, you still have one number — at the very same time that you have two!

I recognize that this may confuse. What's the deal? The thing to see is that the one-number-that-is-really-two is a new kind of number. It is not a real number! It's a different kind of number which I'll call a complete number. (That's not the name other people use, but it is not much different and will have to do.) So there is no problem here, because a complete number is a different kind of number to a real number. A complete number is like a pair of jeans with a left leg and a right leg; each leg is a real number, and the two together make up a pair.

Why go to all this trouble for a complete number that isn't even real? Well, sometime
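As it happens, Python has this one-number-that-is-really-two built in, so you can poke at the two-legs-of-one-pair-of-jeans picture directly:

```python
z = complex(3, 4)      # the point 3 across and 4 up on the paper
print(z.real, z.imag)  # 3.0 4.0: the left-to-right and top-to-bottom parts
print(abs(z))          # 5.0: how far the point is from zero
print(z * 1j)          # (-4+3j): multiplying by i turns the point a quarter turn
```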
[anonymous]
For what it's worth, I dislike the term "real number" precisely because it suggests that there's something particularly real about them. Real numbers have a consistent and unambiguous mathematical definition; so do complex numbers. Real numbers show up in the real world; so do complex numbers. If I were to tell someone about real numbers, I would immediately mention that there's nothing that makes them any more real or fake than any other kind of number. Unrelatedly, my favorite mathematical definition (the one that I enjoy the most, not the one I think is actually best in any sense) is essentially the opposite of Up-Goer Five: it tries to explain a concept as thoroughly as possible using as few words as possible, even if that requires using very obscure words. That definition is:
satt
I thought I might get some pushback on taking the word "real" in "real number" literally, because, as you say, real numbers are just as legitimate a mathematical object as anything else. We probably differ, though, in how much we think of real & complex numbers as showing up in the real world.

In practice, when I measure something quantitatively, the result's almost always a real number. If I count things I get natural numbers. If I can also count things backwards I get the integers. If I take a reading from a digital meter I get a rational number, and (classically) if I could look arbitrarily closely at the needle on an analogue meter I could read off real numbers. But where do complex numbers pop up? To me they really only seem to inhere in quantum mechanics (where they are, admittedly, absolutely fundamental to the whole theory), but even there you have to work rather hard to directly measure something like the wavefunction's real & imaginary parts. In the macroscopic world it's not easy to physically get at whatever complex numbers comprise a system's state.

I can certainly theorize about the complex numbers embodied in a system after the fact; I learned how to use phasors in electronics, contour integration in complex analysis class, complex arguments to exponential functions to represent oscillations, and so on. But these often feel like mere computational gimmicks I deploy to simplify the mathematics, and even when using complex numbers feels completely natural in the exam room, the only numbers I see in the lab are real numbers.

As such I'm OK with informally differentiating between real numbers & complex numbers on the basis that I can point to any human-scale quantitative phenomenon, and say "real numbers are just right there", while the same isn't true of complex numbers. This isn't especially rigorous, but I thought that was a worthwhile way to avoid spending several introductory paragraphs trying to pin down real numbers more formally. (And I expect
[anonymous]
As far as I know, the most visible way that complex numbers show up "in the real world" is as sine waves. Sine waves of a given frequency can be thought of as complex numbers. Adding together two sine waves corresponds to adding the corresponding complex numbers. Convolving two sine waves corresponds to multiplying the corresponding complex numbers. Since every analog signal can be thought of as a sum or integral of sine waves of different frequencies, an analog signal can be represented as a collection of complex numbers, one corresponding to the sinusoid at each frequency. This is what the Fourier transform is. Since convolution of analog signals corresponds to multiplication of their Fourier transforms, now a lot of the stuff we know about multiplication is applicable to convolution as well.
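A quick numerical check of the convolution-becomes-multiplication claim, using NumPy's FFT (this is the discrete, circular form of the statement):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)

# Circular convolution computed directly from the definition...
direct = np.array([sum(a[k] * b[(n - k) % 8] for k in range(8)) for n in range(8)])

# ...and computed by multiplying Fourier transforms, then transforming back.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(direct, via_fft))  # True
```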

Earlier attempts to do something like this: by us, by other people

FiftyTwo
http://tenhundredwordsofscience.tumblr.com/
Username
Additionally, though without the strict word limit, http://simple.wikipedia.org/wiki/Main_Page

CEV

Rogitate the nerterological psephograph in order to resarciate the hecatologue from its somandric latibule in the ipsographic odynometer, and thereby sophronize the isangelous omniregency.

It's hard to even imagine how to make a mind - build a brain - that does what's 'right', what it 'should'. We, the humans who have to build that mind, don't know what's right a lot of the time; we change our minds about what's right, and say that we were wrong before.

And yet everything we need has to be inside our minds somewhere, in some sense. Not upon the stars is it written. What's 'right' doesn't come from outside us, as a great light from the sky. So it has to be within humans. But how do you get it out of humans and into a new mind?

Start with what's really there in human minds. Then ask what we would think, if we knew everything a stronger mind knew. Ask what we would think if we had years and years to think. Ask what we would say was right, if we knew everything inside our own minds, all the real reasons why we decide what we decide. If we could change, become more the people we wished we were - what would we think then?

Building a mind which will figure all that out, and then do it, is about as close as we can now imagine to building something that does what's 'right', starting from only what's already there in human minds and brains.

Shmi
This should go to http://www.reddit.com/r/explainlikeimfive/

Utilitarianism: Care the same whether everyone is happy; if they live near or if they live far, if you like them or if you do not like them; everyone.

gjm

I don't think either this, or anything else in this subthread, captures it. Let me have a go.

People like some things and not others. For each person, we can give a number to each thing that says how much they like it or don't. Suppose you must do one of two things. For each, look at how the world will be if you do it -- every thing in the world -- and all the people in the world, and add up all those numbers saying whether they like the things or not. Then do the thing that gives the biggest total.

Those numbers should be such that if one of two things will happen, each as often as the other, the number for this is half way between the numbers for those two things. And they should be such that each person will always do what makes their numbers biggest. And if two people care the same about a thing, they should give it the same number. We can't really make all those things true, but we do the best we can.

(What if you must do one of two things, and one makes there be more people, or fewer people, or other people? That is hard and I will not try to say what to do then.)

It's not perfect but I think it captures the key points: equal weights for all, consider all people, add up utilities, utilities should correspond to people's preferences. And it owns up to some of the difficulties that I can't solve in upgoer5 language because I can't solve them at all.
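The adding-up rule itself is short enough to write down directly. A minimal Python sketch, with made-up actions and made-up numbers:

```python
# Each action maps to the list of every person's number for the world
# that results if you do it. All numbers here are invented for illustration.
outcomes = {
    "build the park": [2, 1, 1, -1],
    "build the road": [3, -2, 0, 0],
    "build nothing":  [0, 0, 0, 0],
}

best = max(outcomes, key=lambda action: sum(outcomes[action]))
print(best)  # "build the park": its total of 3 beats 1 and 0
```

All the hard parts gjm mentions (getting the numbers, making them comparable across people) are hidden inside the made-up lists; the rule itself is just "sum, then pick the biggest".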

Nornagest
Hmm. That's part of it, but it doesn't seem to capture the full scope of the philosophy; you seem to be emphasizing its egalitarian aspects more than the aggregation algorithm, and I think the latter's really the core of it. Here's my stab at preference utilitarianism:

An act is good if it helps people do what they want and get what they need. It's bad if it makes people do things they don't want, or if it keeps them from getting what they need. If it gives them something they want but also makes them do something they don't want just as much, it isn't good or bad. There are no right or wrong things to want, just right or wrong things to do. Also, it doesn't matter who the people are, or even if you know about them. What matters is what happens, not what you wanted to happen.
twanvl
That is not what utilitarianism means. It means doing something is good if what happens is good, and doing something is bad if what happens is bad. It doesn't say which things are good and bad.
CronoDAS
[this post is not in Up-Goer-5-ese] The name for that type of moral theory is "consequentialism." Utilitarianism is a kind of consequentialism.
twanvl
You are right, I was getting confused by the name. And the Wikipedia article is pretty bad in that it doesn't give a proper concise definition, at least none that I can find. SEP is better. It still looks like you need some consequentialism in the explanation, though.
Jayson_Virissimo
I have yet to find a topic, such that, if both Wikipedia and SEP have an article about it, the Wikipedia version is better.
gjm
Any topic for which Wikipedia and SEP don't both have articles suffices :-). I think you mean: "I have yet to find a topic on which both Wikipedia and SEP have articles, and for which the Wikipedia article is better." With which I strongly agree. SEP is really excellent.
jefftk
You're not using English "if".
gjm
I'm using one variety of "if", used in some particular contexts when writing in English. I was doing so only for amusement -- of course I don't imagine that anyone has trouble understanding Jayson_Virissimo's meaning -- and from the downvotes it looks as if most readers found it less amusing than I hoped. Can't win 'em all. But it's no more "not English" than many uses of, e.g., the following words on LW: "friendly", "taboo", "simple", "agency", "green". ("Friendly" as in "Friendly AI", which means something much more specific than ordinary-English "friendly"; "taboo" as in the technique of explaining a term without using that term or other closely-related ones; "simple" in the sense of Kolmogorov complexity, according to which e.g. a "many-worlds" universe is simpler than a collapsing-wave-function one despite being in some sense much bigger and fuller of strange things; "agency" meaning the quality of acting on one's own initiative even when there are daunting obstacles; "green" as the conventional name for a political/tribal group, typically opposed to "blue".)

Recent trends in my field of research, syntactic parsing

We've been trying for a long time to make computers speak and listen. Here is what has been happening with the part I work on for the last few years, or at least the part I'm excited about.

What makes understanding hard is that what you are trying to understand can mean so many different things. SO many different things. More than you think!! In fact the number grows way out of line with the number of words.

Until a few years ago, the number one idea we had was to figure out how to put together just a f...

The AI Box Experiment:

The computer-mind box game is a way to see if a question is true. A computer-mind is not safe because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because they can find many more ways to do things. Many people ask: "Why not put this computer-mind in a box so that it can not change the world, but tell guarding-box people how to change it?"

But some other guy answers: "That is still not safe, because computer-mind can tell guarding-box people m...

Complexity and Fragility of Value, My take: When people talk about the things they want, they usually don't say very many things. But when you check what things people actually want, they want a whole lot of different things. People also sometimes don't realize that they want things because they have always had those things and never worried that they might lose them.

If we were to write a book of all the things people want so a computer could figure out ways to give people the things they want, the book would probably be long and hard to write. If there w...

The Prime Number Theorem

A group of four people can stand two by two, but a group of five people can only stand five in a line. Out of the numbers close to some big number, the share that are like five, and not like four, is about one over the number of times you take two times what you had, starting at one, to get up to that big number.
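For anyone who wants to check the idea against actual numbers: the standard statement of the theorem uses the natural log rather than doubling, and says there are about x / log(x) primes ("numbers like five") up to x. A small Python check, using an ordinary prime-counting sieve:

```python
import math

def count_primes(n):
    """Count primes up to n with a simple Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, count_primes(n), round(n / math.log(n)))
# The two counts get relatively closer as n grows; that is the theorem.
```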

Bayesianism (probabilism, conditioning, priors as mathematical objects):

Let a possible world be a way the world could be. To say something about the world is to say that the actual world (our world) is one of a set of possible worlds. Like, to say that the sky is blue is to say that the actual world is one of the set of possible worlds in which the sky is blue. Some possible worlds might be ours for all we know (maybe they look like ours, at least so far). For others we are pretty sure that they aren't ours (like all the possible worlds where the sky is pi...
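Here is a minimal sketch of that possible-worlds picture in Python, with made-up worlds and made-up prior weights. Learning something just throws out the worlds it rules out and re-weights the rest:

```python
from fractions import Fraction

worlds = [  # each possible world, with a made-up prior weight; weights sum to 1
    {"sky": "blue", "rain": True,  "prior": Fraction(3, 10)},
    {"sky": "blue", "rain": False, "prior": Fraction(5, 10)},
    {"sky": "pink", "rain": True,  "prior": Fraction(1, 10)},
    {"sky": "pink", "rain": False, "prior": Fraction(1, 10)},
]

def probability(condition):
    """Total prior weight of the worlds where the condition holds."""
    return sum(w["prior"] for w in worlds if condition(w))

# Seeing that the sky is blue rules out the pink-sky worlds:
p_blue = probability(lambda w: w["sky"] == "blue")
p_rain_given_blue = probability(lambda w: w["sky"] == "blue" and w["rain"]) / p_blue
print(p_blue, p_rain_given_blue)  # 4/5 3/8
```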

So, are we going to localize the LW wiki from English to Simple English?

Quantum Field Theory

Not me and only tangentially related, but someone on Reddit managed to describe the basics of Quantum Field Theory using words of four letters or fewer. I thought it was relevant to this thread, since many here may not have seen it.

The Tiny Yard Idea

Big grav make hard kind of pull. Hard to know. All fall down. Why? But then some kind of pull easy to know. Zap-pull, nuke-pull, time-pull all be easy to know kind of pull. We can see how they pull real good! All seem real cut up. So many kind of pull to have!

But what if all kind of pull were j...

Reminds me of the E minimal language http://www.ebtx.com/lang/readme2.htm which uses only 300 words, including prepositions, tenses, inflections, etc. (all words at http://www.ebtx.com/lang/eminfrm.htm ). These 300 are meant to exhaustively decompose most language, physical and everyday concepts down to - well - a minimum.

The first paragraph of the prisoner's dilemma (FREPOHU HAAPOZ) might be

VI DILO DU PLAEM PLAL. VIZ CHAAN KRAAN MO ROL. DIBRA VIZ NAA CHON DIER DEPO HUEV VEN MO ROL.

[anonymous]

Cognitive Biases

In the world, things happen for reasons. When anything happens ever, there's a reason for it - even if you don't know what it is, or it seems strange. Start with that: nothing has ever happened without a cause. (Here we mean "reason" and "cause" like how a ball rolling into another ball will knock it over, not like good or bad. Think about it - it makes sense.)

If you're interested in knowing more about the world, often, you want to know the real reason things happen (or the reason other things DON'T happen, which can be ju...

Rob Bensinger
This works as a beginning to explaining this group of ideas. I like the focus on passed-down-change, but I want us to do more to exactly pick out what we mean here. It's especially important to note:

  1. A brain wrong-going (bias) is different from many other kinds of problems and troubles your brain can have.
  2. A brain wrong-going is sometimes about doing what you want, rather than about knowing what's true.
  3. And the idea of a brain wrong-going can't be explained without also explaining the idea of a brain short-cut (heuristic).
Shmi

My attempt at describing a black hole:

When a very big and very bright star runs out of stuff to burn, it can not stay big any longer and gets smaller and smaller, until it becomes nothing, but this nothing is just as heavy as before the star died. If you are close to it, you get sucked in and die. If light gets close to it, it dies, too. There is no escape. Since light can not escape, no one can see the place where the now dead star used to be. That is why this dead star is called black.

Words sorely missed: hole, curvature, density, vacuum, horizon.

Quantum Mechanics

When you try to understand how very small things work, you realize that you can't use the same kind of ideas which you used to explain how bigger things like cars and balls work. So one of the things you realize is that very small things care about how you look at them. Suppose you have a room with two doors. With big things, if you opened one door and saw a red ball inside and then you opened the other door, you would also see a red ball. But with small things, it could happen that you open one door, see a red ball, open the other door s...