Why are people "put off by rationality"?

3 adamzerner 05 August 2014 06:15PM

I was reading reviews of HPMOR on Goodreads and I noticed that the people who didn't like the book were essentially "put off by the rationality". They thought Harry was arrogant and condescending.

Then I was thinking, a lot of people are "put off by rationality" for similar reasons. What a shame. There's a lot of value in spreading rationality, and this seems to be a big obstacle in doing so.

Any thoughts on how to make people less "put off by rationality"? I think the core issues are:

  1. In some cases, people think it's rude to suggest to someone that they're wrong. (I have a vague idea of when, but am having trouble articulating it. Can anyone articulate this well?) Edit: EY has articulated (part of?) what I'm getting at. He calls it the status slapdown emotion.
  2. People pattern-match the tone to "smart aleck"?

What do rationalists think about the afterlife?

-16 adamzerner 13 May 2014 09:46PM

I've read a fair amount on Less Wrong and can't recall much said about the plausibility of some sort of afterlife. What do you guys think about it? Is there some sort of consensus?

Here's my take:

  • Rationality is all about using the past to make predictions about the future.
  • "What happens to our consciousness when we die?" (may not be worded precisely, but hopefully you know what I mean).
  • We have some data on what preconditions seem to produce consciousness (i.e. neuronal firing). However, this is just data on the preconditions that seem to produce consciousness that can and does communicate or demonstrate itself to us.
  • Can we say that a different set of preconditions doesn't produce consciousness? I personally don't see reason to believe this. I see 3 possibilities that we don't have reason to reject, because we have no data on them. I'm still confused and not too confident in this belief though.
  • Possibility 1) Maybe the 'other' conscious beings don't want to communicate their consciousness to us.
  • Possibility 2) Maybe the 'other' conscious beings can't communicate their consciousness to us ever.
  • Possibility 3) Maybe the 'other' conscious beings can't communicate their consciousness to us given our level of technology.
  • And finally, since we have no data, what can we say about the likelihood of our consciousness returning/remaining after we die? I would say the chances are 50/50. For something you have no data on, any outcome is equally likely. (This feels like something that must have been talked about before, so side-question: is this logic sound?)

Edit: People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this. My point in this post is that I don't see why we have reason to reject the 3 possibilities above. If you reject the idea that consciousness could reside outside of the brain, please explain why.

A medium for more rational discussion

10 adamzerner 24 February 2014 05:20PM

It would be cool if online discussions allowed you to 1) declare your claims, 2) declare how your claims depend on each other (i.e. make a dependency tree), 3) discuss the claims, and 4) update the status of each claim by saying whether or not you agree with it, and using something like the text shorthand for uncertainty to say how confident you are in your agreement/disagreement.

I think that mapping out these things visually would allow for more productive conversation. And it would also allow newcomers to the discussion to quickly and easily get up to date, rather than having to sift through tons of comments. On this note, there should also probably be something like an answer wiki for each claim to summarize the arguments and say what the consensus is.

I get the feeling that it should be flexible though. That probably means that it should be accompanied by the normal commenting system. Sometimes you don't actually know what your claims are, but need to "talk it out" in order to figure out what they are. Sometimes you don't really know how they depend on each other. And sometimes you have something tangential to say (on that note, there should probably be an area for tangential comments, or at least a way to flag them as tangential).
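A minimal sketch of what the claim structure described above might look like. Everything here is an illustrative assumption, not a spec: the `Claim` class, the confidence-weighted `consensus` score, and the 0.5 threshold for the green/red coloring are all made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    depends_on: list = field(default_factory=list)  # child claims this one rests on
    votes: list = field(default_factory=list)       # (agree: bool, confidence: 0..1) pairs

    def consensus(self) -> float:
        """Net agreement weighted by stated confidence, in [-1, 1]."""
        if not self.votes:
            return 0.0
        signed = [conf if agree else -conf for agree, conf in self.votes]
        return sum(signed) / len(signed)

    def color(self, threshold: float = 0.5) -> str:
        """Green for strong agreement, red for strong disagreement."""
        c = self.consensus()
        if c >= threshold:
            return "green"
        if c <= -threshold:
            return "red"
        return "neutral"

# A parent claim resting on a child claim, with three confidence-weighted votes.
child = Claim("Cramming is suboptimal for retention")
parent = Claim("Deadlines should be loose", depends_on=[child])
parent.votes = [(True, 0.9), (True, 0.8), (False, 0.1)]
print(parent.consensus())  # ≈ 0.53
print(parent.color())      # green
```

The `depends_on` links are what would let a reader walk the dependency tree and see which disputed child claims a parent claim hinges on.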

As far as who would be interested in this: obviously the Less Wrong community would be, and I think that there are definitely some other online communities that would too (Hacker News, some subreddits...).

Also, this may be speculative, but I would hope that it would develop a reputation as the most effective way to have a productive discussion. So much so that people would start saying, "go outline your argument on [name]". Maybe there'd even be pressure for politicians to do this. If so, then I think this could put pressure on society to be more rational.

What do you guys think?

 

EDIT: If anyone is actually interested in building this, you definitely have my permission (don't worry about "stealing the idea"). I want to build it, but 1) I don't think I'm a good enough programmer yet, and 2) I'm busy with my startup.

EDIT: Another idea: if you think that a statement commits an established fallacy, then you should be able to flag it (like this). And if enough other people agree, then the statement is underlined or highlighted or something. The advantage to this is that it makes the discussion less "bulky". A simple version of this would be flagging things as less than DH6. But there are obviously a bunch of other things worth flagging that Eliezer has talked about in the sequences that are pretty non-controversial.

EDIT: Here is a rough mockup of how it would look. Notes: 

- The claims should show how many votes of agreement/disagreement they got. Probably using text shorthand for uncertainty.

- The claims should be colored green if there is a lot of agreement, and red if there is a lot of disagreement.

- See edit above. Commenting in the discussion should be like this. And you should be able to flag statements as fallacious in a similar way. If there is enough agreement about the flag, the statement should be underlined in red or something.

Is love a good idea?

1 adamzerner 22 February 2014 06:59AM

I've searched around on LW for this question, and haven't seen it brought up. Which surprises me, because I think it's an important question.

I'm honestly not sure what I think. On one hand, love clearly leads to an element of happiness when done properly. This seems to be inescapable, probably because it's encoded in our DNA or something. But on the other hand, there are two things that really make me question whether or not love is a good idea.

1) I have a very reductionist viewpoint, on everything. So I always ask myself, "What am I really trying to optimize here, and what is the best way to optimize it?". When I think about it, I come to the conclusion that I'm always trying to optimize my happiness. The answer to the question of, "why does this matter?" is always, "because it makes me happy". So then, the idea of love bothers me, because you sort of throw rational thinking out the window, stop asking why something actually matters, and just decide that this significant other intrinsically matters to you. I question whether this type of thinking is optimal, and personally, whether or not I'm even capable of it.

2) It seems so obsessive, and I question whether or not it makes sense to obsess so much over one thing. This article actually explores the brain chemicals involved in love, and suggests that the chemicals are similar to those that appear in OCD.

Finally, there's the issue of permanence. Not all love is intended to be permanent, but a lot of the time it is. How can you commit to something so permanently? This makes me think of the mind projection fallacy. Perhaps people commit it with love. They think that the object of their desire is intrinsically desirable, when in fact it is the properties of this object that make it desirable. These properties are far from permanent (I'd go as far as to say that they're volatile, at least if you take the long view). So how does it make sense to commit to something so permanently?

So my take is that there is probably a form of love that is rational to take. Something along the lines of enjoying each other's company, and caring for one another and stuff, but not being blindly committed to one another, and being honest about the fact that you wouldn't do anything for one another, and will in fact probably grow apart at some point.

What do you guys think? 

Rethinking Education

2 adamzerner 15 February 2014 05:22AM

Problems

Problems have bottlenecks. To solve problems, you need to overcome each bottleneck. If you fail to overcome just one bottleneck, the problem will go unsolved, and your effort will have been fruitless.

In reality, it’s a little bit more complicated than that. Some bottlenecks are tighter than others, and some progress might leak through, but it usually isn’t anything notable.

 

Education

There is a lot wrong with education. Attempts are being made to improve it, but they're glossing over important bottlenecks. Consequently, progress is slowly dripping through. I think it'd be a better use of our time to think through each bottleneck and how it can be addressed.

I have a theory of how we can overcome enough bottlenecks such that progress will pour through, instead of drip through.

Consider how we learn. Say that you want to learn parent concept A. To do this, you'll need to understand a bunch of child concepts first: A1…An.

My groundbreaking idea: make sure that students know A1…An before teaching them A.

https://www.dropbox.com/s/4gnwamufalg5gqo/learning.jpg

The bottlenecks to understanding A are A1…An. Some of these bottlenecks are tighter than others, and in reality, there are constraints on our ability to teach, so it’s probably best to focus on the tighter bottlenecks. Regardless, this is the approach we’ll need to take if we want to truly change education.

 

How would this work?

1) Create a dependency tree.

2) Explain each cell in the tree.

3) Devise a test of understanding for each cell in the tree.

4) Teach accordingly.
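The four steps above can be sketched as a toy program. This is only an illustration under stated assumptions: the dependency tree is stored as a dict from each concept to its child concepts, and the concept names and the `mastered` set are placeholders.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Step 1: the dependency tree, mapping each concept to the child
# concepts it requires (names here are placeholders).
tree = {
    "A":  {"A1", "A2"},  # to understand A you first need A1 and A2
    "A1": set(),
    "A2": {"A3"},
    "A3": set(),
}

# Steps 2-4 reduce to: teach, children before parents, only the
# concepts whose prerequisites the student has already mastered.
def ready_to_learn(tree, mastered):
    """Concepts not yet mastered whose child concepts are all mastered."""
    return {
        concept
        for concept, children in tree.items()
        if concept not in mastered and children <= mastered
    }

print(sorted(ready_to_learn(tree, mastered={"A1"})))  # ['A3']

# A full child-before-parent teaching order:
order = list(TopologicalSorter(tree).static_order())
print(order)  # e.g. ['A1', 'A3', 'A2', 'A']
```

The test of understanding for each cell would decide when a concept moves into `mastered`, which is what makes the progression mastery-based rather than time-based.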

 

Where does our system fail us?

  • When you’re in class and the teacher is explaining A when you still don’t get, say, A2 and A5.
  • When you’re in class and the teacher is explaining A, when she never thought to explain A2 and A5.
  • When you’re reading the textbook and you’re confused, but you don’t even know what child concepts you’re confused about.
  • When you memorize for the test/assignment instead of properly filling out your dependency tree.
  • When being too far ahead or behind the class leads to a lack of motivation.
  • When lack of interest in the material leads to lack of motivation.
  • When physical distractions divert your attention (tired, uncomfortable, hungry…).

 

 

My proposal

I propose that we pool all of our resources and make a perfect educational web app. It would have the dependency trees, have explanations for each cell in each tree, and have a test of understanding for each cell in each tree. It would test the user to establish what it is that he does and doesn’t know, and would proceed with lessons accordingly.

In other words, usage of this web app would be mastery-based: you’d only proceed to a parent concept when you’ve mastered the child concepts.

 

Motivation

Motivation would be another thing to optimize.

One way to do this would be to teach things to students at the right times. Lack of interest is often due to lack of understanding of child concepts, and thus lack of appreciation for the beauty and significance of a parent concept. By teaching things to students when they’re able to appreciate them, we could increase students’ motivation. 

Another way to optimize motivation would be to do a better job of teaching students things that are useful to them (or things that are likely to be useful to them). In today’s system, students are often forced to memorize lots of details that are unlikely to ever be useful to them.

By making teaching more effective, I think motivation will naturally increase as well (it’ll eliminate the lack of motivation that comes with the frustration of bad teaching).

 

Pooling of resources

The pooling of resources to create this web app is analogous to how resources were pooled for Christopher Nolan to make a really cool movie. When you pool resources, a lot more becomes possible. When you don’t pool resources, the product often sucks. Imagine what would happen if you tried to reproduce Batman at a local high school. This is analogous to what we’re trying to do with education now.

 

How would this look?

I’m not quite sure. Technically, kids could just sit at home on their computers and work through the lessons that the web app gives them… but I sense that that wouldn’t be such a good idea. It’d probably be best to require kids to go to a “school-like institution”. Kids could work through the lessons by themselves, ask each other for help, work together on projects, compete with each other on projects etc.

 

Certificates

I envision that credentials would be certificate-based. You’d get smaller certificates that indicate that you have mastered a certain subject. Today, the credentials you get are for passing a grade, or passing a class, or getting a degree. They’re too big and inflexible. For example, maybe the plant unit in intro to biology isn’t necessary for you. Smaller certificates allow for more flexibility.

 

Deadlines

Deadlines are a tough issue. If they exist, there’s a possibility that you have to cram to meet the deadline, and cramming isn’t optimal for learning. However, if they don’t exist, students probably won’t have the incentive to learn. For this reason, I think that they probably do have to exist.

My first thought is that deadlines should be personalized. For example, if I moved 50 steps and the deadline was at 100 steps, the next deadline should be based on where I am now (step 50), not where the deadline was (step 100).

My second thought is that deadlines should be rather loose, because I think that flexibility and personalization are important, and that deadlines sacrifice those things.

My third thought, is that students should be given credit for going faster. In our one-size-fits-all system now, you can’t get credit for moving faster than your class. I think that if you want to work harder and make faster progress, you should be able to and you should be given credentials for the knowledge that you’ve acquired. Given the chance, I think that many students would do this. I think this would allow students to really thrive and pursue their interests.

 

Tutoring

I think that it’d be a good idea to require tutoring. Say, in order to get a certificate, after passing the tests, you’d have to tutor for x hours.

Tutoring helps you to master the concept, because having to explain something will expose the holes in your understanding. See The Feynman Technique.

Tutoring allows for social interaction, which is important.

 

Social Atmosphere

The social atmosphere in these “schools” would also be something to optimize. It's not something that people think too much about, but it has a huge impact on how people develop, and thus on how society develops.

I’m not sure exactly what would be best, but I have a few thoughts:

The idea of social value is horrible. In schools today, you grow up caring way too much about how you look, who you’re friends with, how athletic you are, how smart you are, how much success you have with the opposite sex… how “good” you are. This bleeds into our society, and does a lot to cause unhappiness. It should be avoided, if possible.

Relationships are based largely on repeated, unplanned interactions + an environment that encourages you to let your guard down. I think that schools should actively provide these situations to students, and should allow you to experience these situations with a variety of types of people (right now you only get these repeated, unplanned interactions with the cohort of students you happen to be with, which limits you in a lot of ways).

 

Rationality

I propose that rationality be a core part of the curriculum (the benefits of making people better at reasoning would trickle down into many aspects of life). I think that this should be done in two ways: the first is by teaching the ideas of rationality, and the second is by using them.

The ideas of rationality can be found right here.

After the ideas are taught, they should be practiced. The best way that I could think of to do this is to have kids write and critique essays (writing is just thought on paper, and it’s often easier to argue in writing than it is in verbal conversation). Students could pick a topic that they want to talk about, make claims, and argue for them. And then they could read each other’s essays, and point out what they think are mistakes in each other’s reasoning (this should all be supervised by a teacher, who should probably be more of a benevolent dictator, and who should also contribute points to the discussions).

I think that some competition and social pressure could be useful too; maybe it’d be a good idea to divide students into classes, where the most insightful points are voted upon, and the number of mistakes committed would be tallied and posted.

 

Writing

Right now, essays in schools are a joke. No one takes them seriously. Students b.s. them, and teachers barely read them and hardly give any feedback. They’re also almost always about English literature, which sends a bad message to kids about what an essay really is. Good writing isn’t taught or practiced, and it should be.

 

Levels of Action

Certain levels of action have impacts that are orders of magnitude bigger than others. I think that improving education this much would be a high level action, and have many positive effects that’ll trickle down into many aspects of society. I’ll let you speculate on what they are.

How to illustrate that society is mostly irrational, and how rationality would be beneficial

-2 adamzerner 14 February 2014 06:16AM

Does anyone know of a good article that illustrates how society is generally irrational, and how making society more rational would have huge benefits, because it'd be a very high level action?

I'm writing an essay about how to improve education, and one of my proposals is that a core part of the curriculum should be rationality. I believe that doing this would have huge benefits to society, and want to explain why I think this, but I'm having trouble. Any thoughts?

Edit: Part of Raising the Sanity Waterline talks about common ways in which people are irrational. However, they're all links to longer Less Wrong articles. Preferably, I'd like to illustrate it in a few sentences/paragraphs.

How big of an impact would cleaner political debates have on society?

4 adamzerner 06 February 2014 12:24AM

See this Newsroom clip.

Basically, their news network is trying to change the way political debates work by having the moderator force the candidates to answer the questions that are asked of them, not interrupt each other, justify arguments that are based on obvious falsehoods etc.

How big of a positive impact do you guys think that this would have on society?

My initial thoughts are that it would be huge. It would lead to better politicians, which would be a high-level action. The positive effects would trickle down into many aspects of our society.

The question then becomes, "can we make this happen?". I don't see a way right now, but the idea has enough upside to me that I keep it in the back of my mind in case I come up with a plausible way of implementing the change.

Thoughts?

Salary or startup? How do-gooders can gain more from risky careers

5 adamzerner 05 February 2014 10:54PM

 

See http://80000hours.org/blog/12-salary-or-startup-how-do-gooders-can-gain-more-from-risky-careers.

The expected value of risky careers like startups is often much higher than that of less risky careers. However, this is more than offset by people's risk aversion due to diminishing marginal utility. But... if you're an effective altruist, the money you make doesn't have diminishing marginal utility. So... it seems that risky careers like startups are a good choice, if you're trying to maximize your positive impact.
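A toy calculation can make the argument concrete. The dollar figures below are my own assumptions (not from the linked post), and log utility is just one standard stand-in for diminishing marginal utility:

```python
import math

# Hypothetical payoffs in $M: a safe career pays 3 for sure; a startup
# pays 20 with probability 0.2, else 0.5.
p = 0.2
ev_startup = p * 20 + (1 - p) * 0.5
ev_salary = 3.0
print(ev_startup)  # 4.4, higher expected money than the safe career

# An individual with diminishing marginal utility (log utility here)
# prefers the safe career despite its lower expected value:
u_startup = p * math.log(20) + (1 - p) * math.log(0.5)
u_salary = math.log(ev_salary)
print(u_startup < u_salary)  # True

# An effective altruist donating the money has roughly linear utility,
# so the plain expected-value comparison applies and the startup wins.
print(ev_startup > ev_salary)  # True
```

The same structure holds for any numbers with this shape: concavity of the individual's utility penalizes the risky option, while the donor's roughly linear utility does not.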

What do you guys think?

Why don't more rationalists start startups?

-3 adamzerner 20 January 2014 07:29AM

My motivation behind this post stems from Aumann's agreement theorem. It seems that my opinions on startups differ from most of the rationality community, so I want to share my thoughts, and hear your thoughts, so we could reach a better conclusion.

I think that if you're smart and hard working, there's a pretty good chance that you'll achieve financial independence within a decade of the beginning of your journey to start a startup. And that's my conservative estimate.

"Achieve financial independence" only scratches the surface of the benefits of succeeding with a startup. If you're an altruist, you'll get to help a lot of other people too. And making millions of dollars will also allow you the leverage you need to make riskier investments with much higher expected values, allowing you to grow your money quickly so you could do more good.

A lot of this is predicated on my belief that you have a good chance at succeeding if you're smart and hardworking, so let me explain why I think this.


 

Along the lines of reductionism, "success with a startup" is an outcome (I guess we could define success as a $5-10M exit in under 10 years). And outcomes consist of their components. My argument consists of breaking the main outcome into its components, and then arguing that the components are all likely enough for the main outcome to be likely.

I think that the 4 components are:

  1. Devise an idea for a product that creates demand.
  2. Build it.
  3. Market and sell it.
  4. Things run smoothly (some might call this luck).
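The decomposition can be put into numbers. The per-component probabilities below are purely illustrative assumptions, and treating the components as independent is a simplification:

```python
# Illustrative per-component success probabilities (assumptions, not data).
components = {
    "idea": 0.95,
    "build": 0.95,
    "sell": 0.95,
    "smooth": 0.95,
}

# If success requires every component, and the components are treated as
# roughly independent (a simplifying assumption), probabilities multiply:
p_success = 1.0
for p in components.values():
    p_success *= p

print(round(p_success, 2))  # 0.81
```

Note how quickly the product decays: dropping any single component from 0.95 to 0.5 pulls the overall figure below 0.43, so the argument hinges on every component being individually very likely.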

The Idea

Your idea has to be for a product or service (I'll just say product to keep things simple) that creates demand, and can be met profitably. In other words, make something people want (this article spells it out pretty well).

What could go wrong?

  • Failure to think specifically about benefits. These articles explain what I mean by this better than I could.
  • Failure to understand customers. To put yourself in their minds and understand what it is that they do and don't want. This is distinct from the first bullet point. You could have a specific benefit in mind, but be wrong about whether it's something your customer really wants (or about how badly they want it).
  • Failure to research competitors. Maybe you came up with a great idea, but it turns out that it exists already.

The big issue here is the first bullet point. As spelled out by Eliezer's article, people are horrible at thinking specifically about the benefits that their idea will bring customers. They're horrible at moving down the ladder of abstraction. They think more along the lines of "we connect people" instead of "we let you talk to your friends". Even applicants to YC (probably the best startup accelerator in the world) suffer from this problem immensely. I think that this problem is the single biggest cause of failure for startups. (They say that 90% of startups fail? Well, >99% of people can't think concretely.) However, I think that it's something that could be avoided with willpower, reading the Less Wrong sequences, and taking some time to practice your new habit.

The second bullet point shouldn't be too hard, once your thinking becomes specific. And the third one is mostly a matter of taking a few days to do some research.


Build It

What I mean by 'build it' is pretty straightforward: take that idea you had, and make it real.

What could go wrong?
  • Our society doesn't have the technological or scientific progress necessary to build the product. For example, I have an idea for a machine that teleports you from one place to another. Unfortunately, we as a society aren't at a point where someone could build that.
  • You personally don't have the skills to build it.
  • You don't work hard enough. Maybe you try, and find that you don't have the willpower. Maybe you try, find that you do have the willpower, but realize that the amount of work it takes isn't worth it to you.
  • You can't find people with the skills to work on it with you (cofounders).
  • You can't raise money from investors to hire people to help you build it.
  • The people you work with/hire aren't good enough to build the product you envisioned.

There are probably other things that could go wrong that I can't think of, but I think this is enough to work with for now.

First bullet point: you really just have to avoid unfeasible ideas. Doesn't sound too hard. I guess this could be a problem for someone at the forefront of their field, trying to push the boundaries, but who makes an error in judging what's buildable. However, I think that there are plenty of ideas that don't run this risk.

Second bullet point: if you don't have the skills, then get them. There are plenty of resources available to learn. For one, it only takes a couple of months to get the skills you'll need to build a decent website. Or you could invest more time to study something like engineering or design, which will increase your options of what ideas you could build.

Third bullet point: if you don't have willpower, it'll be pretty tough to succeed. Possible, but pretty tough. I don't recommend trying.

Fourth bullet point: that's just another thing that limits the ideas you could build successfully. Some ideas you can't build without a cofounder or cofounders, and some you can. Finding a cofounder shouldn't be too difficult, though.

Fifth bullet point: this is actually a tough one. A lot of ideas will require at least seed funding (tens/hundreds of thousands of dollars) to build. There are definitely a bunch of ideas that you could build without any investment, but they're the minority. So let's say you have an idea that does require investment, but you're having trouble raising money (which I think would be understandable). Basically, I'd say that you should focus on peeling away the layers of risk. By doing that, reading up on fundraising, and using Angel List, I think you'd have a pretty good shot at raising the money you need. Still though, I think not being able to find an investor is a legitimate risk.

Sixth bullet point: I've never hired anyone before, but it doesn't seem that hard. Doing a good job optimizing your hires seems like something you'd have to be skilled at, but satisficing to the point that they could do a sufficient job building the product you envision seems to be something that any reasonable person can do.


Market and Sell It

Once you think up your product and build it, you then have to sell it to your customers. This means reaching them, convincing them, and distributing to them.

What could go wrong?
  • You're unable to communicate clearly to your customers what benefits they'll be receiving if they use your product.
  • You're unable to persuade them. (There are other elements to persuasion aside from clear communication).
  • You didn't reach enough people. Maybe you didn't advertise enough. Maybe you thought word would spread, and it didn't.
  • You're having distribution problems (delivering the product to your customer).
  • PR problems. Something goes wrong and you obtain a bad reputation.

First bullet point: see The Idea.

Second bullet point: First of all, read that book (Influence by Robert Cialdini). I'm no expert on persuasion, but I think taking a little time to read a few books would make you sufficiently good at it. And it's not that hard to persuade people when you've got a product that they love.

Third bullet point: I'm no expert on this either. However, I do hear that internet ads nowadays make it pretty easy and affordable to reach a targeted, good-sized audience. Also, as always for things you don't know too much about, read up on it and educate yourself. I don't know enough about this to argue it well, and I don't feel too strongly about it, but I get the sense that this is unlikely to prevent success. Doing this stuff seems like it'd be sufficient.

Fourth bullet point: I don't know much about distribution. It seems that distribution is really only a problem for certain types of businesses. For them, I guess that's something you have to take into account before you go forth with an idea. Otherwise, it doesn't seem like too big a deal.

Fifth bullet point: I guess this is something that could kill a business. To a reasonable person though, it doesn't seem like too big a risk.


Things Running Smoothly

Obviously, crazy things could happen. However, they don't seem too likely.

What could go wrong?
  • Legal issues (current). Maybe you did something illegal and didn't realize it (ex. copyright infringement), and sanctions or a lawsuit killed your startup.
  • Legal issues (future). Maybe new laws were enacted that killed your startup.
  • Something in your personal life goes wrong that requires you to quit.
  • Your competitors innovate and beat you out. Or a big company decides to enter the market, and crushes you.
  • Scientific findings lead to your product being obsolete.
  • Macroeconomic conditions change, which somehow leads to people not wanting your product.
  • Political/social conditions lead to people not wanting your product.

Most of these seem like they have pretty low probabilities of happening. Low enough that they don't influence the overall likelihood of success too much. Especially if you're doing something that genuinely helps people (if so, it's less likely that things like legal/economic/political/social changes will end up hurting you).

Regarding competitors beating you out: that sounds like a big risk, but it actually doesn't happen as often as you'd think. You'd think that if a startup comes across an innovative idea, big companies that are hundreds or thousands of times the size of that startup would just copy the idea and execute it themselves, given that the big company has so many resources. Somehow that doesn't happen too often. Big companies just seem slow to adapt. By the time they react, the startup usually has momentum, which often causes the big company to acquire the startup, or lose market share. So based on my understanding of what actually tends to happen, this risk seems to be something to note, but not something to really worry about (see lesson #4).


Conclusion

Given all of this, I think that if you're smart and hard working, you should have *at least* an 80-90% chance of succeeding at a startup. Again... you have to think about what specific benefits your idea provides... you have to map out how it'll be built, and work hard at doing so... and you have to read up on marketing, and work hard at it. As I argue above, the components all seem very doable, and thus the parent outcome seems very achievable.

I really mean for this article to be a starting point for discussion. I think that if we outline the components and discuss each one, we'll make a lot of progress in coming to an agreement. So let me know which components you think I omitted, and which components you think I'm mistaken about.


PS: A lot of people seem to disregard startups as something they don't know much about, and aren't too interested in. Why? Success = millions of dollars. Aren't you curious as to how likely that success is? If there's an outcome you desire, shouldn't you be interested in how achievable it is?
