Hi! Do you read the LessWrong website but haven't commented yet (or not very much)? Are you a bit scared of the harsh community, or do you feel that questions which are new and interesting to you could be old and boring to the older members?

This is the place for new members to work up their courage and ask what they've wanted to ask. Or just to say hi.

The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can't).

Newbies, welcome!

 

The long version:

 

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must first confirm your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way. There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!


I've been lurking for a very long time, more than six years I think. Lots of sentences come to mind when I think, "Why haven't I posted anything before?" Here are a few:

  1. "LessWrong forum is just like any other forum" Well my sample size is low, but... I don't care what you tell yourselves; what I observe is people constantly talking past each other. And if, in reading an article or comment, a possible comment comes to mind; hot-on-its-heels is the thought that there isn't really any point in posting it, because the replies would all be of a type drawn from some annoying archetypes, like (A) I agree! But I have nothing to add. (B) You're wrong. Let me Proceed to Misrepresent in some way. (The Misrepresentation is "guaranteed" to be unclear because it insists on starting on its own terms and not mine). And if I yet start a good-natured chain of comments, suddenly I find myself talking about the other person's stuff and not the ideas that motivated my original comment. Which I probably wouldn't have commented on for its own sake. And as soon as a comment has one reply, people stop thinking about it as an entity in its own right. Don't you dare dismiss these thoughts so quickly!

  2. "It's been done. Well, mostly." Eliezer wrote so many general, good, posts - where do I even find a topic sufficiently different that I'm not, for the most part, retreading the ground? Posts by other people are just completely different: instead of the post having a constructive logic of its own, references are given. The type of evidence given by the references is of a totally different sort. Instead of being about rationality, such posts seem to be about "random topic 101"? Ok this isn't very clear.

  3. So few comments are intelligible. What are these people even arguing about? It's not clear! How can one comment on this in good faith? Note that the posts you observe are, therefore, more likely to come from people who are not stopped from posting by having unclear or absurd interpretations of parent comments.

LessWrong seems like it should be a good place. The Sequences are a fantastic foundation, but there's so little else! I'm subjectively pretty sure that E.Y. thinks LessWrong is failing. Of course one may hope "for the frail shrub to start growing".

In the hope that some people read (and continue to read) this charitably, let me continue. Consider the following in the welcome thread itself.

" We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. "

Err, what? Why on Earth should I immediately give you my life story? Or even: do these questions make sense? "What I value?" On a rationalist forum, you are expecting the English language to contain short sentences that encode a good meaning of "value"? Yeah, yeah, whatever. Taking a breath... Why do you even want to know?

How about just, "you can post who you are, what you're doing ... found us here, if you want."

I should not have to post such things to get basic community acceptance. And you have no right to be so interested in someone who has, as yet, not contributed to LessWrong at all! Surely THAT is the measure of worth within the LessWrong site. Questions about things not related to LessWrong should be "expensive" to ask, at least requiring a personal comment.

Oh, I think I initially found lesswrong via Bostrom via looking at the self-sampling hypothesis? It's kind of hazy.

I don't think anyone is suggesting that you "have to post such things to get basic community acceptance". Only that a thread in which newcomers do so might be a welcoming place, especially for newcomers who for whatever reason find LW intimidating. It seems clear that that isn't your situation; you are probably not the target audience for the proposal.

(Which doesn't mean you wouldn't be welcome in a welcome/newbie thread. Just that you probably wouldn't get as much out of it as some other people.)

And, er, welcome to Less Wrong :-).

Yeah, hi :-) . Well, technically I didn't say that anyone WAS suggesting it. I like your interpretation much better of course! And there could be people who respond well to the "we'd love to know -" formulation. Apparently I don't! I tried to give you a vague idea of why I felt that way at least.

Since I've got to offer something, try this paragraph:

It seems a little weird to expect a newcomer to adapt to lesswrong by having a special thread, where nothing really unique to lesswrong is mentioned. That other guy before me in the thread seems to have instinctively talked about only lesswrong-related things in his experience. But perhaps you can only expect that to happen with people who ALREADY know something about lesswrong - proper lurkers rather than true newcomers? So maybe there should be something like a newbie-thread-for-one-of-the-core-sequences, where the older members would try to coach the newcomer on how the words should be read - because we all know that there are people who read one of Eliezer's posts and immediately proceed to misinterpret something badly without realising it? And... that sounds very close to the "questions which are new and interesting to you but could be old and boring for the older members"...

You've just been treated to: me working out the kinks I felt in the welcome page. I guess it was already doing what I wanted, and I'm not adding anything really new. Weird.

You know, I actually do have a question. I've never felt like I really understand what a utility function is supposed to be doing. Wait, that's not right. More like, I've never felt like I understand how much of the utility function formalism is inevitable, versus how much is a hypothetical model. There are days when I feel it's being defined implicitly in a way that means you can always talk about things using it, and there are days when I'm worried that it might not be the right definition to use. Does that make sense? Can anyone help me with this (badly-phrased) question?

It seems a little weird to expect a newcomer to adapt to lesswrong by having a special thread, where nothing really unique to lesswrong is mentioned.

I don't think the point of the special thread is so much to teach people LW-specific things to enable them to participate, as to overcome shyness and intimidation and the like. That's a problem people have everywhere, and doesn't call for anything LW-specific (except in so far as the people here are unusual, which they might be). In some cases, a newcomer's shyness and intimidation might be because they feel they don't know or understand something, and they could ask about that -- but, again, similar things could happen anywhere and any LW-specific-ness would come out of the specific questions people ask.

I've never felt like I understand how much of the utility function formalism is inevitable, versus how much is a hypothetical model.

So there's a theorem that says that under certain circumstances an agent either (behaves exactly as if it) has a utility function and tries to maximize its expected value, or is vulnerable to certain kinds of undesirable outcome. So, e.g., if you're trying to build an AI that you trust with superhuman power then you might want it to have a utility function.
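(To make that concrete, here's a minimal sketch, with made-up outcomes, utilities, and probabilities, of what "maximizing expected utility" means operationally: assign a number to each outcome, then rank gambles by their probability-weighted average.)

```python
# Toy illustration of expected-utility maximization; all numbers are invented.
# A vNM-style agent ranks gambles by the probability-weighted utility of outcomes.

utility = {"house_A": 10.0, "house_B": 7.0, "no_house": 0.0}  # hypothetical utilities

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utility[outcome] for p, outcome in lottery)

gamble = [(0.5, "house_A"), (0.5, "no_house")]  # 50/50 shot at the best outcome
sure_thing = [(1.0, "house_B")]                 # a certain, middling outcome

print(expected_utility(gamble))      # 5.0
print(expected_utility(sure_thing))  # 7.0 -- this agent takes the sure thing
```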

But humans certainly don't behave exactly as if we have utility functions, at least not sensible ones. It's often easy to get someone to switch between preferring A over B and preferring B over A just by changing the words you use to describe A and B, for instance; and when trying to make difficult decisions, most people don't do anything much like an expected-utility calculation.

And the vNM theorem, unsurprisingly, makes a bunch of technical assumptions that don't necessarily apply to real people in the real world -- and, further, to get from "if you don't do X you will run into trouble Y" to "you should do X" you need to know that the adverse consequences of doing X aren't worse than Y, which for resource-limited agents like us they might be. (Indeed, doing X might be simply impossible for us, or for whatever other agents we're considering; e.g., if you care about the welfare of people 100 years from now, evaluating your "utility function"'s expectation would require making detailed probabilistic predictions about all the ways the world could be 100 years from now; good luck with that!)

It's fairly common in these parts to talk as if people have utility functions -- to say "my utility function has a large term for such-and-such", etc. I take that to be shorthand for something more like "in some circumstances, understood from context, my behaviour crudely resembles that of an agent whose utility function has such-and-such properties". Anyone talking about humans' utility functions and expecting much more precision than that is probably fooling themselves.

Does that help at all, or am I just telling you things you already understand well?

Thanks for that explanation of utility functions, gjm, and thanks to protostar for asking the question. I've been struggling with the same issue, and nothing I've read seems to hold up when I try to apply it to a concrete use case.

What do you think about trying to build a utility TABLE for major, point-in-time life decisions, though, like buying a home or choosing a spouse?

P.S. I'd upvote your response to protostar, but I can't seem to make that happen.

I can't seem to make that happen.

You need, I think, 10 points before you're allowed to upvote or downvote anything. The intention is to make it a little harder to make fake accounts and upvote your own posts or downvote your enemies'. (Unfortunately it hasn't made it hard enough to stop the disgruntled user who's downvoting almost everything I -- and a few other people -- post, sometimes multiple times with lots of sockpuppet accounts.)

build a utility table

I'm not sure exactly what you mean by a utility table, but here is an example of one LWer doing something a bit like a net utility calculation to decide between two houses.

One piece of advice I've seen in several places is that when you have a big and difficult decision to make you should write down your leading options and the major advantages and disadvantages of each. How much further gain (if any) there is from quantifying them and actually doing calculations, I don't know; my gut says probably usually not much, but my gut is often wrong.

The words "utility function" here are usually used in two quite different meanings.

In meaning 1 they are specific and refer to the VNM utility function which gjm talked about. However, as he correctly mentioned, "humans certainly don't behave exactly as if we have utility functions, at least not sensible ones". Note: specifically VNM utility functions.

In meaning 2, these words are non-specific and refer to an abstract concept of what you would want and would choose if given the opportunity. For example, if you want to talk about incentives but do not care about what precisely would incentivise an agent, you might abstract his actual desires out of the picture and talk about his utility in general. This utility is not the VNM utility. It's just a convenient placeholder, a variable whose value we (usually) do not need.

nothing I've read seems to hold up when I try to apply it to a concrete use case.

That's because humans don't have VNM utility functions and even if they did, you wouldn't be able to calculate your own on the fly.

trying to build a utility TABLE for major, point-in-time life decisions

What would it look like?

In the home purchase decision use case, I'm currently working with a "utility table" where the columns list serious home purchase candidates, and one column is reserved for my current home as a baseline. (The theory there is that I know what my current home feels like, so I can map abstract attribute scores to a tangible example. Also, if a candidate new home fails to score better overall than my current home, there's no sense in moving.)

The rows in the utility table list various functions or services that a home with its land might perform and various attributes related to those functions. Examples:

  • Number of bedrooms (related to family size & home uses like office, library)
  • Floor plan features (count number of desirable features from list - walk-in closet in MBR, indoor laundry, etc)
  • Interior steps (related to wheelchair friendliness)
  • Exterior steps (ditto)
  • Roof shape (related to leak risk/water damage & mold repair costs, also roof replacement frequency)
  • Exterior material (related to termite risk/repair costs, earthquake risk)
  • Size of yard (related to maintenance costs, uses such as entertaining, horses, home business)
  • Slope, elevation, sun exposure, wind exposure factors
  • Social location factors (distance to work, distance to favorite grocery store, walkable to public transit, etc)
  • Wildfire risk zone
  • FEMA flood risk zone
  • Number of access/evacuation routes
  • Price
  • Square footage
  • Cost per square foot
  • Total housing cost per month
  • Etc

Each of these gets a set of possible values defined, and the possible values are then ranked from 1 to n, where 1 is less desirable and n is more desirable. A rank of 0 is assigned to outright aversive conditions such as being located in a high wildfire risk zone or in a FEMA 100-year flood zone or a multi-story home (your rankings will vary). I then normalize the rank scores for each row to a value between zero and 1.

One squirrelly feature of my system is that some of the row score ranks are not predefined but dynamic. By that I mean that the actual base value before scoring -- such as the price of the house -- is rank ordered across all the columns rather than placed in a standard price interval that is given a rank relative to other price intervals. This means that the ranks assigned to each of the home candidates can change when a new home is added to the table. (And yes, I have to renormalize when this happens, because n increments by 1.)

Then I sum up the scores for each candidate, plus my baseline existing home, and see which one wins.
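For concreteness, here's a minimal sketch of that scoring scheme; the attributes, ranks, and candidate homes below are hypothetical stand-ins, not my actual table:

```python
# Sketch of the "utility table": normalized rank scores per attribute, summed
# per candidate. All attributes, ranks, and homes here are invented examples.

candidates = {
    "current_home": {"bedrooms": 3, "flood_zone": False, "price": 450_000},
    "house_A":      {"bedrooms": 4, "flood_zone": False, "price": 520_000},
    "house_B":      {"bedrooms": 2, "flood_zone": True,  "price": 380_000},
}

# Static row: map each value to a rank 1..n (0 = outright aversive),
# then normalize by n so every row contributes at most 1.
def bedroom_score(n_beds):
    ranks = {2: 1, 3: 2, 4: 3}       # more bedrooms preferred, in this example
    return ranks.get(n_beds, 0) / 3  # normalize by n = 3

def flood_score(in_zone):
    return 0.0 if in_zone else 1.0   # rank 0 for an aversive condition

# "Dynamic" row: price is rank-ordered across the current candidates, so adding
# a new house changes the ranks and forces renormalization (n increments by 1).
def price_scores(cands):
    ordered = sorted(cands, key=lambda c: cands[c]["price"], reverse=True)
    n = len(ordered)                 # cheapest house gets the top rank n
    return {c: (i + 1) / n for i, c in enumerate(ordered)}

def total_scores(cands):
    prices = price_scores(cands)
    return {c: bedroom_score(v["bedrooms"]) + flood_score(v["flood_zone"])
               + prices[c]
            for c, v in cands.items()}

print(total_scores(candidates))  # a candidate must beat "current_home" to matter
```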

It all sounds logical enough, but unfortunately it's not as easy as it sounds. It's hard to optimize across all possible home choices in practice, because candidate homes have a basically random arrival rate on the market and they can't all be compared at once. You can't even wait to make pairwise comparisons, because -- at least in Southern California -- any reasonably affordable, acceptable home is likely to be snapped up for cash by an investor or flipper within days of coming on the market unless you make an offer first, right then.

Another problem with the serial arrival rate of candidate homes is that you can get fixated on the first house you see on Zillow or the first house your real estate agent trots out for you to visit. I've got hacks outside of the utility table (as I'm calling it) for getting around that tendency, but I want the utility table to work as a tool for preventing fixation as well.

Just trying to create a utility table has helped me tremendously with figuring out what I want and don't want in a home. That exercise, when combined with looking at real homes, also taught me that most things I thought were absolutes on paper actually were not, once I got to looking at real houses and real yards and real neighborhoods and experiencing what the tradeoffs felt like to occupy. The combination of experience and analysis has been a good tool for updating my perceptions as well as my utility table. Which is why I think this might be a useful tool: it gives me a method of recording past experience for use in making rapid but accurate judgments on subsequent, serially presented, one-time opportunities to make a home purchase.

But I've also had a lot of trouble making the scoring work. First I tried to weight each row by how important I considered it to be, but that made things too easy to cheat on.

Then I tried to weight rows by the probability of experiencing the lifestyle function or cost or risk involved. For example, I sleep every night and don't travel much, so a functional bedroom matters basically 99.9% of the time. The risk of wildfire, on the other hand, is lower, but how do I calculate it? This county averages about 8 major fires a year, but what base do I use to convert that to a percentage? Divide 365 days per year into 8 fire events per year, or into 8 times the average duration in days of the fires? Or should I count the number of homes burned per year as a percentage of all homes in the county? Those latter statistics are not easily obtained, unlike the count of fires per year. Plus, I plan to live here 30 years, and here has more fires than elsewhere, while sleeping probability is unaffected by location. How do I account for that? And then there's the fact that only one fire would be catastrophic and possibly life-ending, while one night of bad sleep can be shrugged off. In the end, I couldn't find any combination of probabilities and costs that was commensurable across all the rows and not subject to cheating.
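(One standard way to frame the wildfire row, for what it's worth, is as a per-household annual probability and an expected annual loss. The sketch below uses invented placeholder numbers, not real county statistics, and, as the paragraph above notes, it still doesn't capture the catastrophic, possibly life-ending nature of the outcome.)

```python
# Sketch: converting counts into a per-household base rate and expected loss.
# Every number here is a hypothetical placeholder, not a real county statistic.

homes_in_county = 300_000
homes_burned_per_year = 150   # the hard-to-obtain statistic; fires/year alone isn't enough
p_burn_per_year = homes_burned_per_year / homes_in_county     # 0.0005

# Chance of at least one fire loss over a planned 30-year occupancy:
p_burn_30yr = 1 - (1 - p_burn_per_year) ** 30                 # ~0.015

# Annualized expected loss, for comparison against rows with frequent small stakes:
home_value = 500_000
expected_annual_loss = p_burn_per_year * home_value           # $250/year

print(p_burn_per_year, round(p_burn_30yr, 4), expected_annual_loss)
```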

I also tried giving negative numbers to aversive situations like high wildfire risk and FEMA flood zones, but all that did was make an always crummy but safe house and an always spectacular but occasionally life-threatening house look exactly the same. This just didn't feel right to me.

So I ended up just taking the total unweighted but normalized rank scores for each house, and supplementing that with a separate count of the negatives. That gives me two ways to score the candidate homes, and if the same house wins on both measures, I consider that an indicator of reliability.

By keeping score on all the homes I seriously consider making an offer on, I think I can make a pretty good serial judgment on the current candidate even if I can't optimize concurrently. Or so I believe.

Is this a reasonable approach? I doubt there's anything Bayesian whatsoever about it, but I really don't care as long as the method is reasonable and doesn't have any obvious self-deception in it.

made things too easy to cheat on

What do you mean "cheat"? Presumably you want to buy a house you like, not just the one that checks the most boxes in a spreadsheet.

So I ended up just taking the total unweighted but normalized rank scores for each house, and supplementing that with a separate count of the negatives

That doesn't look like a reasonable procedure to me. So whether a house has exterior steps gets to be as important as the price? One of the reasons such utility tables have limited utility is precisely the weights: they are hard to specify, but naive approaches like making everything equal-weighted don't look likely to lead to good outcomes.

Effectively you need to figure out the trade-offs involved (e.g. "am I willing to pay $20K more for a bigger yard? How about $40K?") and equal weights for ranks are rather unhelpful.

I agree that making a list of things you need and value in a house is a very useful exercise. But you can't get to the level of completeness needed to make the whole thing work the way you want it to work. You mention updating this table on the basis of your perceptions and experience, but if your ranks are equal-weighted anyway, what do you update?

With respect to the houses serially appearing before you, a simplified abstraction of this problem has an optimal solution.

Thanks much for the link to the Secretary Problem solution. That will serve perfectly. Even if I don't know the total number of houses that will be candidates for serious consideration, I do know there's an average, which is (IIRC) six houses visited before a purchase.

As for cheating ... what I mean by that is deluding myself about some aspects of the property I'm looking at so that I believe "this is the one" and make an offer just to stop the emotional turmoil of changing homes and spending a zillion dollars that I don't happen to possess. "Home sweet home" and "escape the evil debt trap" are memes at war in my head, and I will do things like hallucinate room dimensions that accommodate my furniture rather than admit to myself that an otherwise workable floor plan in a newly gutted and beautifully renovated yet affordable home is too dang small and located in a declining neighborhood. I take a tape measure and grid paper with me to balk the room size cheat. But I also refer to the table, which requires me to check for FEMA flood zone location. This particular candidate home was in a FEMA 100-year flood zone, and the then-undeveloped site had in fact been flooded in 1952. That fact was enough to snap me out of my delusion. At that point the condition of the neighboring homes became salient.

The extent to which self-delusion and trickery are entwined in everyday thought is terribly disheartening, if you want to know the truth.

On weighting my functional criteria based on dollars, the real estate market has worked out a marvelous short circuit for rationality. Houses are no longer assessed for value based on an individual home's actual functional specifications. The quantity of land matters (so yard size matters to price). Otherwise, overwhelmingly, residential properties are valued for sale and for mortgages based on recent sales of "comparable" homes. "Comparable" means "the same square footage and same number of bedrooms and baths within half a mile of your candidate home." The two homes can otherwise be completely dissimilar, but will nevertheless be considered "comparable". No amount of improvements to the house or yard will change the most recent sale price of the other homes in the neighborhood. What this means is that sale prices are just for generic shelter plus the land, where the land is most of the value and neighborhood location is most of the land value. So the price of the home you're looking at is not really very closely tied to anything you might value about the home. This makes it very difficult to come up with a reasonable market price for, say, an indoor laundry versus laundry facilities in the garage. It's certainly beyond my meager capacity to calibrate the value of home amenities based on dollars.

I'm told it wasn't this way in the 1950s, but given the history of land scams in the U.S., which go all the way back to colonial land scams in Virginia, I have my doubts that prices for real estate were ever rational.

But I'll try to find something for weights. Back to the drawing board. And thanks for your help.

The extent to which self-delusion and trickery are entwined in everyday thought is terribly disheartening, if you want to know the truth.

In some areas that's not terrible. The thing is, if you're building a bridge you want that bridge to not fall down and that will or will not happen regardless of your illusions, delusions, and sense of accomplishment. However if you're picking something to make you happy, this no longer applies. Now your perception matters.

Let's say you are looking at a house that checks off all the checkboxes, but on an instinctual, irrational level you just hate it. Maybe there's something about the proportions, maybe there's some barely noticeable smell, maybe there's nothing at all you can articulate, but your gut is very clearly telling you NO.

Do not buy this house.

The reverse (your gut is telling you YES) is iffier for reasons you're well aware of. However my point is still valid -- when doing or buying things (at least partially) for the experience they will give you, you need to accommodate your perceptions and self-delusions, if only because they play a role in keeping you happy.

Houses are no longer assessed for value based on an individual home's actual functional specifications.

Um, not sure about that. See, you can assess anything you want but you still need a buyer. You still need someone to come and say "This is what I will part with all my savings and get into debt for". No one obligates you to buy a house which is priced "fairly" on comparables but does not satisfy you.

Markets are generally quite good at sorting these things out and the real estate market is not sufficiently screwed up to break this, I think.

Secretary Problem [...] will serve perfectly

Beware! The optimal solution depends a lot on the exact problem statement. The goal in the SP is to maximize the probability that you end up with the best available option, and it assumes you're perfectly indifferent among all the other possible outcomes.

That Wikipedia page discusses one variant, where each candidate has a score chosen uniformly at random between 0 and 1, and all you learn about each candidate is whether it's the best so far. Your goal is to maximize your score. With that modification, the optimal strategy turns out to be to switch from "observe" to "accept next best-so-far" much sooner than with the original SP -- after about sqrt(n) candidates.

Your actual situation when buying a house is quite different from either of these. You might want to hack up a little computer program that simulates a toy version of the house-buying process, and experiment with strategies.
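Something like this minimal sketch, say, with uniform random scores standing in for house quality; it compares the classic n/e cutoff against the roughly sqrt(n) cutoff mentioned above (both are "observe, then take the next best-so-far" rules):

```python
import math
import random

# Toy simulation of "look, then leap" strategies: reject the first `cutoff`
# candidates, then accept the first one that beats everything seen so far.

def run(n, cutoff, trials=100_000):
    best_count, score_sum = 0, 0.0
    for _ in range(trials):
        scores = [random.random() for _ in range(n)]
        seen_best = max(scores[:cutoff]) if cutoff else -1.0
        # Take the first later candidate beating the observation phase;
        # if none does, you're stuck with the last one.
        pick = next((s for s in scores[cutoff:] if s > seen_best), scores[-1])
        best_count += (pick == max(scores))
        score_sum += pick
    return best_count / trials, score_sum / trials

n = 20  # say, twenty houses come on the market during your search
for cutoff in (round(n / math.e), round(math.sqrt(n))):
    p_best, mean_score = run(n, cutoff)
    print(f"cutoff={cutoff}: P(picked the best)={p_best:.3f}, mean score={mean_score:.3f}")
```

The n/e rule should win on "probability of getting the very best," while the earlier sqrt(n) cutoff should win on average score, matching the distinction drawn above.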

And as soon as a comment has one reply, people stop thinking about it as an entity in its own right.

Yeah, I know the feeling. Or when a comment or two below an article drag the whole discussion in a completely different direction. But as you say, it's "just like any other forum". How could this be prevented? Replying before reading other comments has a high risk of repeating what someone else said. Having the discipline to read the original comment again and try to see it with fresh eyes is difficult.

Instead of being about rationality, such posts seem to be about "random topic 101"?

There are topics that Eliezer described pretty well. Not saying that useful stuff cannot be added, but the lowest-hanging fruit has probably already been picked. But there are also areas that Eliezer did not describe although he considered them important. Quoting from Go Forth and Create the Art!:

defeating akrasia and coordinating groups (...) And then there's training, teaching, verification, and becoming a proper experimental science based on that. And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...

Some of these things were addressed. There are about a dozen articles on procrastination, and we have the Less Wrong Study Hall. CFAR is working on the rationality curriculum, although I would like to see much more visible output.

I think we are quite weak at developing the introductory literature, and at public relations in general. I don't feel we have much to offer a mildly interested outsider to make them more interested. A link to the Sequences e-book and... what is the next step? Telling them to come here and procrastinate by reading our debates? I don't know the next step myself, other than "invent your own project, possibly in cooperation with other people you found through LW".

I feel that a fully mature rationalist community would offer the newbie rationalists some more guidance. So here is the opportunity for those who want to see the community grow: to find out what kind of guidance it would be, and to provide it. Are we going to find smart people and teach them math? Teach existing scientists how to understand and use p-values properly? Or organize Procrastinators Anonymous meetups? Make a website debunking frequent irrational claims? Support startups in exchange for a pledge to donate to MIRI?

Why on Earth should I immediately give you my life story?

I'm pretty sure that's supposed to be a conversational starter. Feel free to keep any secrets you want.

Hi there! I didn't sign up before because this community tends to say what I want to say most of the time anyway, and because signup hurdles are a thing and the lack of OpenID support frustrates me.

I've been reading LW intermittently for about one and a half years now; while integrating these concepts into my life is something I tend to find hard, I have picked some of them up, specifically anchoring effects and improving my ability to spot "the better action". It's still hard to actually take such actions; I'll find myself coming up with a better plan of action and then executing the inferior plan anyway.

I've been horrified at a few of my past mistakes; one of them was accidental p-hacking. (Long story!)

One of the things I had to do for my college degree was performing research. I picked a topic (learning things) and got asked to focus on a key area (I picked best instructional method for learning how to play a game). We had to use two data collection methods; I wanted to do an experiment because that was cool, and I added a survey because if I'm going to have to ask lots of people to do something for me, I might as well ask those same people to do something else. Basically I'm lazy.

My experiment consisted of a few levels (15) in which you have to move a white box to various shapes by dragging it about. I had noticed that teaching research focused on "reading", "doing", "listening" and "seeing" types (I forget the specific words; something about kinesthetic, auditory, visual... learning). So I translated those to "written text", "imagery", "sounds and spoken text", and "interactivity" to model the reading, seeing, listening and doing respectively.

Then I made each level test a combination of learning methods. First "learning by doing" only. Here's a box. Here's a green circle. Here's a red star. Go.

Most people passed in 5 seconds or in 1 minute. This was after I added a dotted background, so that you'd see a clear white box and not a black rectangle, and the text "this is level 1, experiment!". Some people would think the page was still loading without this text. I didn't include the playtesters in the research result data.

After that it showed you 4 colored shapes with an arrow underneath, and a "next" button below them. Hitting next moves you to level 2, where a white box is in the center of the screen, surrounded by various colored shapes. Dragging the white box over the wrong shape sends you back to the screen with the 4 colored shapes and the arrow. This was supposed to be "imagery".

The next screen after that was an audio icon and a "next" button. I had recorded myself naming various colored shapes, so at this screen people heard something like "black circle, red triangle, blue star, green square". The idea being that you'd have to remember various instructions and act upon them. Hitting the next button brings you to the surrounded white box again. Each level had a different distribution of shapes, to prevent memorizing the locations.

Then the 4th text level was just text instructions ("drag the white box over the green circle, then the red star ...")

Then after that came combinations: voiced text, text where I had put the shapes in images on the screen as well, shapes plus a voice saying what they were... For interactivity, I skipped the instruction screen and just went with text appearing in the center of the screen, which changed when you performed the correct action (else the level reset). This was to simulate tutorials like "press C to crouch" whenever you hit the first crouch obstacle.

I had recorded the time spent on the instruction screen, the total time for each level, and, per attempt, the time between each progress step and failure. So: 1.03 seconds to touch the first shape, 0.7 to touch the second, 0.3 to touch a third, wrong one; then 0.5 to touch the first, 0.4 to touch the second, 0.8 to touch the third, and 1.0 to touch the fourth, completing the level.

The idea was that I could use this to see how "efficient" people were at understanding the instructions, both in speed and correctness.

(FYI, N=75 or so, out of a gaming forum with 700 users)

Then I committed my grave sin: I took the data, took Excel's "correlate" function, and basically compared various columns until I got something with a nice R. This after trying a few things I had thought I would find and seeing non-interesting results.

I "found" that apparently showing text and images in interactive form "learn as you go" was best - audio didn't help much, it was too slow. Interactivity works as a force multiplier and does poorly on its own.

But these findings are likely to be totally bogus because, well, I basically compared statistics until I found something with a low chance of occurring randomly.
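(To see why, here's a small simulation with dimensions roughly matching the N above: scanning many pure-noise column pairs reliably turns up an impressive-looking correlation.)

```python
import random
import statistics

# Demonstration of the multiple-comparisons trap: with enough column pairs,
# pure noise yields a "nice R". Dimensions roughly follow the study above.

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
n_subjects, n_columns = 75, 20
data = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_columns)]

strongest = max(abs(pearson_r(data[i], data[j]))
                for i in range(n_columns) for j in range(i + 1, n_columns))
n_pairs = n_columns * (n_columns - 1) // 2
print(f"strongest |r| among {n_pairs} pure-noise pairs: {strongest:.2f}")
# Around |r| = 0.3 is typical here, which is exactly why "include
# everything you checked" was the right complaint.
```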

... What scares me is not that I did this. What scares me is that I turned it in, got told off for "not including everything I checked", thought this was a stupid complaint (because look, I found a correlation!), voiced said opinion, and still got a passing grade (7/10) anyway. And then thought, "Look, I am a fancy researcher."

I could dig it up if people were interested - the experiment is in English, the research paper is in Dutch, and the data is in an SQL database somewhere.


This is probably a really long post now, so I'll write more if needed instead of turning this into a task to be pushed down todo lists forever.

I believe the whole idea of learning styles is pseudoscience, so your not finding more correlations could actually be the correct answer... which almost no one cares about, because negative findings are boring.

Publication bias is probably an even greater sin than p-hacking, because in theory any study that found some result using p-hacking could be followed by a few failed attempts at replication. Except that those failed attempts at replication usually don't get published.

The idea of learning styles as "fits better to a specific person" wasn't interesting to me; instead I took it as inspiration for a natural division of "ways people could learn this thing in general".

As for publication bias, I don't think anyone published their research. ... but if there had been a really interesting result, I bet someone would have tried to get their research published somehow.

Hello friends! I have been orbiting around effective altruism and rationality ever since a friend sent me a weird Harry Potter fanfiction back in high school. I started going to Seattle EA meetings on and off a couple years ago, and have since read a bunch of blogs, made friends who were into existential risk, started my own blog, graduated college, and moved to Seattle.

I went to EA Global this summer, attend and occasionally help organize Seattle EA/rationality events, and work in a bacteriophage lab. I plan on studying international security and biodefense. I recently got back from a trip to the Bay Area, that gaping void in our coastline that all local EA group leaders are eventually sucked into, and was lucky to escape with my life.

I'm gray on the LessWrong slack, and I also have a real name. I had a LW account back early in college that I used for a couple months, but then I got significantly more entangled in the community, heard about the LW revitalization, and wanted a clean break - so here we are. In very recent news, I'm pleased to announce in celebration of finding the welcome thread, I'm making a welcome post.

I wasn't sure if it would be tacky to directly link my blog here, so I put it in my profile instead. :)

Areas of expertise, or at least interest: Microbiology, existential risk, animal ethics and welfare, group social norms, EA in general.

Some things I've been thinking about lately include:

  • How to give my System 1 a visceral sense of what "humanity winning" looks like
  • What mental effects hormonal birth control might have
  • Which invertebrates might be able to feel pain
  • What an alternate system of taxonomy based on convergent evolution, rather than phylogeny, would look like
  • How to start a useful career in biorisk/biodefense

I discovered SSC and LW a couple of months ago, from (I think) a Startpage search which led me to Scott's lengthy article on IQ. I only browsed for a while, but last night I rediscovered this after reading Doing Good Better and going to the EA website. I remember CFAR from a Secular Student Alliance conference two years ago.

I like Scott's writing, but I have no hard science training unfortunately.

I have realized that I've become rather used to my comfort zone, and have sort of let my innate intelligence stagnate, when I like to think it still has room to grow. I had psychological testing six years ago that put my IQ at 131, which, if I interpret the survey results correctly, puts me near the bottom of this community? Despite that, I find the philosophical elements of Yudkowsky fascinating (not so much the more mathematical stuff). At least, this site has made me sit at a computer longer than I'm accustomed to.

It seems from EY's writing that LW wanted to be a homogeneous community of like-minded (in both senses) people, but I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals. Because that changes how one writes. Or is there a tacit resignation that more average people just won't care or grok it; that smarter individuals should focus on their own personal growth and happiness? But then I remember Scott's writing and seeming compassion, and also the percentage of users who are social-democratic, so it seems like there would be higher demand for actually communicating with the outgroup.

I entered the humanities because I wanted to be a professor, and I like to write and I like foreign languages; I didn't think I would be interested in heavier things (I took some psychology and philosophy as a postbac), but now I'm far enough into my MA that I'm not sure I could get into an additional Master's program in something meaty and then pursue a better, more intellectually stimulating career.

Ultimately I just want to teach and "help" people. So, that's where I'm at. I read/skimmed DGB yesterday in one sitting while in the middle of yet another existential depression that my shrink thinks was caused by going off an opioid. I can't remember the last time I consumed a book in one sitting.

This was longer than I intended. Thank you.

Welcome to Less Wrong!

I think a properly tested IQ of 131 would put you more or less in the middle for the LW community. (It would put you a little below the average self-report for LW members who have had proper professional IQ tests done, but I would guess that having such tests done correlates with higher IQ in this community. And, alas, people sometimes make mistakes, or report things that flatter them while ignoring things that don't, or just flat-out lie, and all those things will introduce a bit of upward bias in the results.)

An IQ of 131 would also put you solidly in the region where, for most things less outrageously IQ-heavy than, say, theoretical physics, IQ is unlikely to be what limits you.

I think there's a reasonable amount of rationalist outreach going on.

  • You already mentioned CFAR, which does it on a relatively large scale, formally, for money.
  • One LW member, Gleb Tsipursky, has an organization called "Intentional Insights" whose stated goal is to spread rationality to more "ordinary" people.
    • (Gleb and his organization are rather controversial around here for various reasons, and in particular there is not widespread agreement that they are actually doing any good, but they're certainly doing outreach.)
  • EY wrote a big Harry Potter fanfic to bring rationalist ideas (and Less Wrong, and MIRI) to the attention of a wider audience.
  • Various rationalists have blogs with an outreachy component. For instance, Put a Number on it!.
  • As any religious fanatic will tell you, the most effective outreach is often done at an individual level. There are, e.g., plenty of rationalists on Facebook just being conspicuously reasonable. I'm sure most rationalists' friends are far from being a random sample of "average individuals", of course.

Are you still on the path to becoming a professor? It seems to me that being a professor in any field has to score pretty highly on the "intellectually stimulating career" metric.

Thank you (for the information)!

Yeah, I had a psychologist do a full battery of tests to determine if I did indeed have ADD. (Isn't it funny how regular physicians can just prescribe you drugs as a kid for behavioral/mental conditions?!)

I feel like I have heard of the Harry Potter fanfic before, also oddly enough tied to my memory of the SSA conference where CFAR had a table... Hmm.

As far as professorships go, I study German where any tenure-track job will have dozens upon dozens of applicants. I also study Classics. I'm more interested in education in general and pedagogy, and actually being in the classroom. I used to be a stage actor, and I always liked giving in-class presentations, and people tell me I am preternaturally talented at that.

It's intellectually stimulating half the time; the other half, when I'm reading turgid academic prose, is when I'm not sure whether what I enjoy writing is actually publishable and would make a difference. I know 80,000 Hours talks about how the job doesn't have to provide meaning, but I think I would prefer that whatever I do for 40, 50, 60 hours a week indeed provide it. For example, I looked into App Academy, and I know Buck is a member here, but I'm not sure I could spend my work life sitting down and looking at a computer screen, though that's just a personal preference of course (even considering that I could make way more money than by being a professor and be able to donate much more).

Basically my concern is that the way we raise and educate children is simply blind inheritance: a vicious cycle of parents punishing children and teachers punishing students because that's what happened to them. The fact that we still have classrooms where rows of desks face a teacher at the front, preserving an environment that has existed for centuries, is so absurd to me. We accept these traditions and don't stop to think, "hey, maybe we could do this differently."

You probably already know it, but just to be sure: there are alternative approaches to teaching, e.g. Montessori education. But it seems that most of the education system just continues by inertia. So, a few people do stop and think how to do things differently; it's just that the majority ignores them.

Right, that's a good example. And then the normal people stigmatize that sort of thing, as if Montessori kids are weird.

Sometimes I suspect that the teaching profession may attract the wrong kind of people. (Speaking about the elementary and high schools. Universities are a bit different, e.g. they do research, they deal with adults, etc.)

When you think about it, a teacher is a servant of the government, nominated to impart the cultural wisdom to children. Think about what psychological type this job description would attract most. To put it mildly, probably not the "open to experience" ones. (In the youngest classes, it also gets mixed with the "loves little children" ones.)

I was a teacher for a short while, and I remember how shocked my students were when I answered one of their questions with "I don't know". It was like I had broken some taboo. I asked: "Guys, you asked me something which is outside the scope of the lesson, outside the scope of what is taught at this school, so it's not incompetence on my part not to know it. And it's impossible to know everything, even within the subject one teaches. So if you ask me a question and I don't know the answer, what exactly did you expect me to do?" After a while the students concluded that they would expect me to just make something up, because in their experience that's what an ordinary teacher would do. It's not that they would prefer to get a bullshit answer, but they had accepted "teachers being unable to admit not knowing something" as a perfectly normal part of the world.

Now try to take this kind of person and make them admit that, essentially, they were doing their whole job wrong. That many things they believe necessary are actually harmful, that a large part of their "knowledge about teaching" is actually a myth, and that the part that isn't a myth is probably still somewhat exaggerated and dogmatized. They are not going to take it well.

Now think about the people above them on the power ladder. The school inspection consists of former teachers, probably the most dogmatic of them, who no longer even have the feedback that comes from actually teaching the kids. My short experience with them suggests they are completely insane. They are the ones who will take a stopwatch, measure how many minutes of the lesson you spent doing "teamwork", and judge the whole lesson by this number alone, ignoring everything else. (Unless, instead of "teamwork", their momentary obsession happens to be something else.) And the layer above them, the bureaucrats in the department of education, are not even teachers; they don't know fuck about anything, and they merely create more paperwork for everyone else, based on the recently popular buzzwords. The whole system is insane.

(This description is based on my country, maybe it is slightly less insane at other places.)

I think the most important part of rationality is doing the basic stuff consistently. Things like noticing the problem that needs to be solved and actually spending five minutes trying to solve it, instead of just running on the autopilot. At some level of IQ, having the right character traits (or habits, which can be trained) could provide more added value than extra IQ points; and I believe you are already there.

I find the philosophical elements of Yudkowsky fascinating

Does it also make you actually do something in your life differently? Otherwise it's merely "insight porn". (This is not a criticism aimed specifically at you; I suspect this is how most readers of this website use it.)

I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals. Because that changes how one writes.

I think the main problem is that we don't actually know how to make people more rational. Well, CFAR is doing some lessons, trying to measure the impact on their students and adjusting the lessons accordingly; so they probably already do have some partial results at the moment. That is not a simple task; to compare, teaching critical thinking at universities actually does not increase the critical thinking abilities of the students.

So, at this moment we want to attract people who have a chance of contributing meaningfully to the development of the Art of how to make people more rational. And then, when we have the Art, we can approach average people and apply it to them.

"to compare, teaching critical thinking at universities actually does not increase the critical thinking abilities of the students"

That's sad to hear.

Thank you for the advice. My primary concern is definitely to establish more rational habits. And then also to learn how to better learn.

Just like the Sequences say somewhere, putting a label "cold" on a refrigerator will not actually make it cold. Similarly, calling a lesson "critical thinking" does not do anything per se.

When I studied psychology, we had a lesson called "logic". It was completely unconnected to anything else; all I remember is drawing truth tables for boolean expressions "A and B", "A or B", "A implies B", "not A", and filling them with ones and zeroes. If you were able to fill in the table correctly for a complex expression, you passed. It was a completely mechanical activity; no one understood why the hell we were doing it. So, I guess that kind of lesson didn't actually make anyone more "logical".

Instead, we could have spent the time learning about cognitive biases, even the trivial ones, and how they apply to the specific stuff we study. For example, psychologists are prone to seeing "A and B" and concluding "A implies B" if it fits their prejudice. Just one lesson that gave you a dozen examples of "A and B", where you had to write "maybe A causes B, or maybe B causes A, or maybe some unknown C causes both A and B, or maybe it's just a coincidence", would probably be more useful than the whole semester of "logic"; it could be an antidote against all that "computer games cause violence / sexism" stuff, if anyone remembered the exercise.

But even when you teach cognitive biases, people are likely to apply them selectively, to the stuff they want to disbelieve. I am already tired of seeing people abuse Popper this way (for example, any probabilistic hypothesis can be dismissed as "not falsifiable" and therefore "not scientific"), and I don't want to give them even more ammunition.

I suspect that on some level this is an emotional decision -- either you truly care about what is true and what is bullshit, or you prefer to seem clever and be popular. A university lesson cannot really change that.

puts me near the bottom of this community?

No, I don't think so. Self-reported IQs from a self-selected group have a bias. I'll let you guess in which direction :-)

I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals

There's Gleb Tsipursky and his Intentional Insights, but from my point of view this whole endeavour looks quite unfortunate. YMMV, of course.

"No, I don't think so. Self-reported IQs from a self-selected group have a bias. I'll let you guess in which direction :-)"

Of course, but I would have expected a site that helps its members to "Overcome Bias" to provide more trustworthy data! :)

"More trustworthy" != trustworthy.

Haha, yes indeed.

It seems from EY's writing that LW wanted to be a homogeneous community of like-minded (in both senses) people, but I am curious to what extent rationalists engage in outreach (other than CFAR I guess) towards more average individuals. Because that changes how one writes. Or is there a tacit resignation that more average people just won't care or grok it?

When trying to influence people on a meaningful level, it's seldom useful to simply try to address the average person.

There are people in this community who do outreach. Gleb does outreach via http://intentionalinsights.org/. http://www.clearerthinking.org does a bit of outreach that's near to this community. James Miller has his podcast.

In general there is also a need for research. CFAR doesn't see its mission primarily as outreach but primarily as developing a new way to do rationality. The mission of this website is "refining the art of rationality".

There's no inherent reason to do outreach and develop new ideas at the same time. Both are worthy causes, and idea development isn't just about focusing on one's own growth and happiness.

A lot of the energy that goes into compassionate outreach also goes into EA rather than rationality as such.

Good points. I guess the reason I'm ultimately interested in education is that these individual inclinations begin early, and one can foster them or beat them out, as with curiosity. I can see why outreach to adults would be more difficult. And of course, if a child benefits from an EA intervention, they might become more interested in their own education if they have rationalist role models, and so on and so on, until they discover rationality of their own accord.

I guess the reason I'm ultimately interested in education is that these individual inclinations begin early, and one can foster them or beat them out, as with curiosity.

It's not easy to provide education for children if neither the government nor their parents want it.

At the moment there are no rationality interventions with a solid enough evidence base to prove they work, which is what would make it easy to pitch them to the school system. The first step is to create effective interventions.

There's nothing to be gained by holding classes where children are taught the names of the logical fallacies. There's no evidence that it helps. Pushing such classes would be pushing an ideology while ignoring the core of what rationality is actually about.

It's not easy to provide education for children if neither the government nor their parents want it.

When I think about the incentives of most stakeholders in the education system, I get this:

  • teachers -- a job with long-term stability (pretty much keep doing the same thing for decades)

  • students -- most of them do very little and yet get certificates of smartness

  • parents -- free babysitting until the child is 18 (depends on the country)

  • politicians -- can keep "reforming" the system every year and impress voters

Seems to me that most people are happy with how the system works now.

politicians -- can keep "reforming" the system every year and impress voters

I think you have a bad model of politicians if you model them primarily as wanting to impress voters.

One of the reasons for centralized testing, for example, is that it makes it easier for employers to evaluate applicants from different schools. As a result, they lobby for standardized testing and get it.

Teachers unions are politically strong.

Politicians are generally concerned about unemployment and want the education system to teach skills that allow students to get jobs. In the UK they lately also talk about something they call happiness.

parents -- free babysitting until the child is 18 (depends on the country)

Parents also care substantially about their children getting into a good college.

I guess a lot of what I wrote is country-specific; I was thinking about Slovakia, where employers do not care about the specific college, only about whether you have one or not. Not sure why, but that's how it works.

And pretty much anyone can get into some college, so the only obstacles are either being somehow insane, or coming from a family so poor that even if college is free, you simply cannot afford a few more years without income. So "having a college education" is a proxy for "not being poor or insane", which of course is horrible classism. Somehow the citizens of a country that regularly has a majority of communists in parliament don't mind this at all.

So the current situation here is that elementary and high schools don't matter at all -- because unless you are very poor or insane, you will get into some college, and for most people it doesn't matter which one -- so the usual complaints about schools are along the lines of "too much homework" or "too difficult lessons". On the other hand, people notice that young people with a university education are somehow much less impressive than they used to be a decade or two ago. But almost no one can connect the dots. So the politicians here make Brownian-motion "reforms" of education: one year they remove some part of math education, the next year they put it back, yet another year they shift some math from one grade to another. Each time they tell the media how this reform will fix the problems with education.

Sorry, it's a stupid country with stupid voters, and I am getting more and more disappointed every year.

it's a stupid country with stupid voters

Sorry to disappoint you, but it's not Slovaks, it's humans.

Imagine someone of average intelligence. Now consider that fully one half of the country's population is below the median intelligence -- that is, stupider than the person you just imagined...

That's where cultural habits make a big difference. In some places the stupid people follow relatively good heuristics, in some places they follow relatively bad ones.

Culture is important, yes, but the usual argument is that it's institutions which matter. The most prominent advocate of this approach is probably Daron Acemoglu, see e.g. this or his book.

From your description of Slovakian politics, it seems like the actors are poorly coordinated. Maybe there's room for a liquid-democracy-based political party?

Hi! Male, 30-something, bean-counting, sports-watching, alcohol-drinking, right-of-center normie here. Been lurking on LW for several years. I possess an insight-porn level of interest in and real-life application of AR. Slow day at the office so I thought I would say "hi". It's possible that I may never post on this board again, in which case, it's been nice knowing you. xx

I'm here because of the AI and SETI info and debates.

I am throwing links into the open threads that have some info related to these things, and am interested in seeing whether other discussions develop around them.

Pretty sure that there aren't any biological aliens out there at this time, and am pretty spooked by the idea of a machine intelligence running around the galaxy. Looking at some of those dense star clusters, it occurs to me that those would be the only places to put a civ that was protected from hyper-velocity attacks. Am kinda concerned that the way we find out we aren't alone is a bunch of rocks coming in at a hefty chunk of "c".

Have read most of the background sequences by EY, and most of the discussion posts each week, but don't like getting caught up in arguments that are all about syntax.

Comments use Markdown, not HTML. The Show Help button at the lower right of a comment box will give you details. As I recall, two spaces at the end of a line produce a hard return -- it's been a while since I had to wrestle with how a list or a poem would appear.
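For example (going from memory, so treat the exact rule as an assumption), raw comment text like this -- with "··" standing in for two real trailing spaces, which are otherwise invisible --

    Roses are red··
    Violets are blue

should render as two separate lines; without the trailing spaces, Markdown joins them into a single paragraph.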

Articles use HTML, sort of. There's an HTML button at the top. If you copy and paste from a word processor, you might get inconvenient formatting.

I don't like this system, but the only thing worse than this system is having to try to guess how it works.

[comment score below threshold] -- I think that's anything with -3 karma. You can see the article/thread for free by clicking on the link, but if you reply to anything on that thread, you lose 5 points.

[continue this thread] -- LW has limited nesting of comments, so this link will start a new window. You will lose your pink borders on the old window if you left-click, so I recommend using a new tab for continuing threads. You get the option of a new tab by right-clicking on the link.

Something that wasn't clear to me after looking around a bit: it seems the recent comments bar at the right is cached, and I saw some comments with a pink border. Does the pink border mean they're new?

Comments with a pink border are new since the last time you (that is, your account) refreshed the page. They might be years old, but they're new to you.

This adds so much more to my LW experience. Reading open threads just became doable, rather than an exercise in trying to remember which parts of the discussion I've already seen and which I haven't. ... Although I'm not seeing everything with a pink border whenever I look at an old page, so I think that part of the explanation is false. That, or there is a bug somewhere...

The first time you look at a page (no matter how old it is), you don't get any pink borders.

Hey! My name's Jared and I'm a senior in high school. I guess I started being a "rationalist" a couple months ago (or a bit more) when I started looking at the list of cognitive biases on Wikipedia. I've tried very hard to mitigate almost all of them as much as I can and I plan on furthering myself down this path. I've read a lot of the sequences on here and I like to read a lot of rationalwiki and I also try to get information from many different sources.

As for my views, I am first a rationalist, and I make sure I am open to changing my mind about ANYTHING, because reality doesn't change based on your ability to stomach it.

As for labels, I'm vegan (or at least strict vegetarian), anarcho-communist (something around the range of left libertarian), agnostic (not in the sense that I'm on the fence but that I'm sure that we don't know - so militant agnostic lol).

My first main question is: since you guys are rationalists, why aren't you vegetarian or vegan? The percentage of vegetarians on sites like LessWrong and RationalWiki is hardly higher than among the general public. I would think that, being rationalists, you would understand vegetarianism or veganism and go for it for sure. Am I missing something? Because this actually blows my mind. If you oppose it, I really want to hear some arguments, because I've never heard a single even somewhat convincing one, and I've argued with oh so many people about it. Obviously the goal of veganism is to lessen suffering, not end it, etc.

But yeah hey!

Hi Jared! Your question about vegetarianism is an interesting one, and I'll give a couple of responses because I'm not sure exactly what direction you're coming from.

I think there's a strong rationalist argument in favor of limiting consumption of meat, especially red meat, on both health and environmental grounds. These issues get more mixed when you look at moderate consumption of chicken or fish. Fish especially is the best available source of healthy fats, so leaving it out entirely is a big trade-off, and the environmental impact of fishing varies a great deal by species, wild vs. farmed, and even the fishing method. Veganism gives relatively small environmental gains over vegetarianism, and is generally considered a loss in terms of health.

When you look at animal suffering, things get a lot more speculative. Clearly you can't treat a chicken's suffering the same as a human's, but how many chickens does it take to be equivalent to a human? At what point is a chicken's life not worth living? This quickly bogs down in questions of the repugnant conclusion, a standard paradox in utilitarianism. Although I have seen no thorough analysis of the topic, my sense is that:

1) Scaling of moral value is probably more-than-linear with brain mass (that is, you are worth more than the ~300 chickens it would take to equal your gray matter), but I can't be much more precise than that.

2) Most of the world's neurons are in wild invertebrates (http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html), which argues against focusing specially on domesticated vertebrates.

3) Effort expended to reduce animal suffering is largely self-contained -- that is, if you choose not to eat a chicken, you probably reduce the number of factory-farmed chickens by about one, with no longer-term effects. Effort to help humans, on the other hand, often has a difficult-to-estimate multiplier from follow-on effects. See here for more on this argument: http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/
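For concreteness, here is the kind of back-of-the-envelope arithmetic behind point 1, as a minimal sketch; the brain masses are rough figures I'm assuming for illustration, and the quadratic curve is just one example of "more-than-linear":

    # Rough brain masses, assumed for illustration only
    human_brain_g = 1350
    chicken_brain_g = 4

    linear = human_brain_g / chicken_brain_g
    print(f"linear scaling:    {linear:,.0f} chickens per human")       # ~340
    print(f"quadratic scaling: {linear ** 2:,.0f} chickens per human")  # ~114,000

Linear scaling by mass already puts the number in the same ballpark as the ~300 figure; any more-than-linear scaling pushes it much higher.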

The upshot is that when you make any significant investment in animal welfare, including vegetarianism and especially veganism, you should consider the opportunity costs. If it makes your life more difficult and reduces the amount of good you can do in other ways, it may not be worth it.

Personally, I used to be a pescetarian and would consider doing it again, depending on the people around me. Trying to do it in my current circumstances would cause more hassle than I think it's worth (having to ask people for separate meals, not participating in group activities, etc.). If you know a lot of other vegetarians, there may be no social cost, or even some social benefit. But don't assume that's the case for everyone.

Thank you for the polite and formal response! I understand what you're saying about the chicken and fish. Pescetarian is much better than just eating all the red meat you can get your hands on.

When you look at animal suffering, things get a lot more speculative. [...]

Now I understand what you're saying about animal suffering, but I'd like to add some things. If you don't eat many chickens or many cows, then you can save more than one, because you're consistently abstaining from meat consumption. It's also not about making the long-term effects on your own; it's about contributing so that something like factory farming can be changed into something more sustainable, more environmentally friendly, and more responsive to animal concerns, once more people boycott meat. Even if you were to compare gray matter, you have to weigh the animal's death against the human's quite minor pleasure, which could have been obtained just as easily by eating/doing something else.

If it makes your life more difficult and reduces the amount of good you can do in other ways, it may not be worth it.

For you, does it really make life more difficult? From my personal experience, and from hearing about others', the only hard part is the process of changing. It's only difficult in certain situations because of society, and the point of boycotting is to change society so that it gets easier, alongside the other benefits.

Thanks again for responding!

factory farming can be changed into something more sustainable

It's sustainable in the sense that we can keep doing it for a very long time.

more environmentally friendly,

This may be more what you were talking about.

Hi Jared! I don't remember the statistics, but here are a few hypotheses:

  • There is usually a distribution with a few "hardcore" members and many lukewarm ones. In a statistic that includes all of them, the behavior of the hardcore members can easily disappear.

  • Many people here eat some kind of paleo diet, which (if we ignore the animal suffering, and look at the health benefits of eating a lot of vegetables) is almost as good as vegetarianism. Possibly a paleo person eating meat with mostly unprocessed vegetables has a healthier diet than a vegan who gets most of their food cooked. For some people, vegetarianism or veganism may seem low status, and paleo high status (simply because it is relatively new).

  • Or maybe it's just that food doesn't get as high a priority as e.g. education, making money, or exercise, so people focus their attention on other things.

  • Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

In my bubble of local hardcore aspiring rationalists, vegetarianism or veganism is almost the norm. (Generally, I would suspect that the hardcore ones go either vegetarian or vegan or paleo.)

There is usually a distribution with a few "hardcore" members and many lukewarm ones. In a statistic that includes all of them, the behavior of the hardcore members can easily disappear.

Could you explain this in more depth? I'm failing to grasp it completely. I apologize.

if we ignore the animal suffering

Why would we do that?

Or maybe it's just that food doesn't get as high a priority as e.g. education, making money, or exercise, so people focus their attention on other things.

I guess, but you can usually focus on multiple things at once, and most people have certain causes they subscribe to.

Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

Really? Why not though? All humans, excluding sociopaths, have empathy. I'll admit I see this a bit though.

In my bubble of local hardcore aspiring rationalists, vegetarianism or veganism is almost the norm.

Oh, hmm I guess I just missed it.

Thank you for your response and your hypotheses! These responses are great compared to the usual yelling match ... anywhere else.

Could you explain this in more depth

In general, imagine that you have a website about "X" (whether X is rationality or StarCraft; the mechanism is the same). Quite likely, the distribution of people who visit the website (let's assume the days of Less Wrong's highest glory) will be something like this:

10 people who are quite obsessed with "X" (people who dramatically changed their lives after doing some strategic thinking; or people who participate successfully in StarCraft competitions).

100 people who are moderately interested in "X" (people who read some parts of the Sequences and perhaps changed a habit or two; or people who once in a while play StarCraft with their friends).

1000 people who are merely interested in "X" as a topic of conversation (people who read Dan Ariely and Malcolm Gladwell, and mostly read Less Wrong to find cool things they could mention in a debate on similar topics; or people who sometimes watch a StarCraft video on YouTube, but haven't actually played it in months).

Now you do a survey on whether the readers of the website somehow differ from the general population. I would expect that the 10 obsessed ones do, but the 1000 recreational readers don't. If you put them all in the same category, the obsessed ones make up only about 1% of it, so whatever their special traits are, they will disappear in the whole.

For example (completely made-up numbers here), let's assume that an average person has a 1% probability of becoming a vegetarian, the 1000 recreational LW readers also have a 1% probability, the 100 moderate LW readers have a 2% probability, and the hardcore ones have a 20% probability (which would be a huge difference compared with the average population). Add them all together and you have 1110 people, of whom 0.01 × 1000 + 0.02 × 100 + 0.2 × 10 = 14 are vegetarians; that means 1.26% of LW readers -- almost the same as the 1% of the general population.
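The same back-of-the-envelope calculation, spelled out in a few lines of Python (all the probabilities are the made-up numbers above):

    # (number of readers, probability of being vegetarian) for each tier;
    # the probabilities are made up, as in the example above
    tiers = [
        (1000, 0.01),  # recreational readers: same as the general population
        (100,  0.02),  # moderately interested readers
        (10,   0.20),  # hardcore readers
    ]

    total = sum(n for n, _ in tiers)      # 1110 readers
    veg = sum(n * p for n, p in tiers)    # 10 + 2 + 2 = 14 vegetarians
    print(f"{veg:.0f} of {total} readers = {veg / total:.2%}")
    # -> 14 of 1110 readers = 1.26%

The hardcore tier could be twenty times more vegetarian than average, and the survey total would barely move.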

This is further complicated by the fact that you can easily select professional StarCraft players (e.g. by asking whether they participated in some competition, and what their ranking is), but it's more difficult to tell who is a "hardcore rationalist". Spending a lot of time debating on LW (which pretty much guarantees high karma) or having read the whole Sequences doesn't necessarily mean anything. But this now feels like talking about "true Scotsmen". Also, there are various status reasons why people may or may not want to identify as "rationalists".

just because people know something is the right thing to do, it doesn't mean they will automatically start doing it!

Really? Why not though?

That's kinda one of the central points of this website. Humans are not automatically strategic, because evolution merely made us execute adaptations, some of which were designed to impress other people rather than to actually change things.

People are stupid, including the smartest ones. Including you and me. Research this thoroughly and cry in despair... then realize you have something to protect, stand up and become stronger. (If these links are new for you, you may want to read the LW Sequences.)

Just look at yourself -- are you doing literally the best thing you could do (with the resources you have)? If not, how large is the difference between what you are actually doing and literally the best thing you could do? For myself, the answer is quite depressing. Considering that, why should I expect other people to do better?

In my bubble of local

I guess I just missed it.

Statistically, you are quite likely to be in a different part of the planet, so it's quite easy to miss my local group. ;) Maybe finding the LW meetup nearest to your place could help you find someone like that. (But even within a meetup I would expect that only a few people really try to improve their reasoning, and most are there mostly for social reasons. That's okay, as long as you can identify the hardcore ones.)

These responses are great compared to the usual yelling match ... anywhere else.

Oh, I remember this feeling when I found LW!

Thank you for such a clear response and the additional info! :) I have read most of the sequences but some of those links are new to me.

Or, most obviously -- just because people know something is the right thing to do, it doesn't mean they will automatically start doing it! Not even if they identify as "rationalists".

Really? Why not though?

http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/

These responses are great compared to the usual yelling match

Welcome!

if we ignore the animal suffering

Why would we do that?

If my worldview was, "animals are inferior and their suffering is irrelevant".

If my worldview was, "animals are inferior and their suffering is irrelevant".

Wouldn't that be an irrational 'axiom' to start from, though? Maybe the inferior part works, but you can't just say their suffering is irrelevant. If you go off the basis that humans matter just because, then that's a case of special pleading -- saying humans are better because they are human. Their suffering may be less, but it isn't irrelevant, because they can suffer.

If my worldview was, "animals are inferior and their suffering is irrelevant".

Wouldn't that be an irrational 'axiom' to start from, though?

you can't just say their suffering is irrelevant.

Why?

If you go off the basis that humans matter

Do humans matter? Why do humans matter? I think you might be leaping to a conclusion or a few here.


Hello everyone. I slowly became entangled in rationality after stumbling across the site when I was quite young, looking for information about cognitive biases and logical fallacies to use in my speech and debate club. This played a minor role in my deconversion, and I've been poking around the website and the #lesswrong IRC ever since. (Some of you know me as Madplatypus.) After moving to Seattle I became much more heavily involved, because the community here is the best in all sorts of ways.

I'm still young, and hunting for the best opportunities to meet my goals of becoming the best person I can be, protecting and growing my expanding circles of loyalty, and ensuring humanity has a glorious future. Yes, I already know about 80,000 Hours.

I'm interested in finding mentors/building networks beyond Seattle/finding new friends so send me a message and let's talk!

I'm interested in talking about: Virtue ethics, Historical Models, Introspection, Better Life Plans, Oratory, Psychology, Geopolitics, Self-Education, Mental Movements, Phenomenology, Metaphor, Martial Arts, Poetry, and ways of thinking about ethics that aren't horrendously simplified. And more!

I'm busy catching up on some more technical fields like mathematics, programming, and information security, but my passions are generally humanistic.

I want you to tell me about: Your passions/drive, the phenomenology of music, your metaphors of mind, unusual things you find valuable, and social constructs you think should be instantiated.

What I love about the Rationality community: Intentional community building, a focus on clear thinking, and the beautiful combination of people who generate lots of crazy hypotheses and people who knock them down.

What I dislike: Getting told to "Go read X" in response to some disagreement I have with rationalist canon. Chances are, I already have read X. People who critique old philosophy which they have not read. Ethical systems which render humans as a fungible moral asset and abstract individual interests away from their reasoning.

Osthanes is a mythical figure in Greek magical pseudepigrapha, who was held to be the first disciple of Zarathustra. It was held that Zarathustra invented magic, and Osthanes brought it to Greece where it was written down for the first time.

[This comment is no longer endorsed by its author]

About voting -- how does voting on the open thread post itself work? The post is pretty much always the same, so why does it get voted up anyway? Is it about the quality of the comments?

Some people thank the person who posts the open thread. It's a community responsibility to keep posting it; no one is in charge, but it has kept on happening for a long time now.