Proposed rewrites of LW home page, about page, and FAQ
Proposed rewrites can be found here. Please suggest specific improvements in the comments!
Although long-time Less Wrong users don't pay much attention to the home page, about page, and FAQ, I suspect new users pay lots of attention to them. A few times, elsewhere on the internet, I've seen people describe impressions of Less Wrong that seemed gleaned primarily from these pages. They made generalizations about Less Wrong that didn't seem true to me, but that might appear true if all one did was read the about page and FAQ.
The about page, in particular, is called out to every new visitor. Try visiting Less Wrong in incognito mode or private browsing (i.e. without your current cookies) to see what I'm referring to.
But the current set of "newcomer pages" isn't very good, in my opinion:
- Text is duplicated between the home page and the about page. There's plenty to say and link to without repeating ourselves.
- The first paragraph of the home page text has four links to Wikipedia articles and none to Less Wrong posts. These may be very good Wikipedia articles, but I tend to think that linking to actual Less Wrong posts is generally a better way to communicate what kind of site Less Wrong is than linking to Wikipedia.
- The home page text also makes references to the blog, discussion section, and meetups, which are already highlighted plenty in the brain image.
- I think the primary purpose of the about page should be to describe and link to lots of interesting Less Wrong posts. I think reading posts is probably the best way to figure out what Less Wrong is about. If the smorgasbord of posts linked to from the about page is sufficiently varied and high-quality, I think that most users will be able to find at least a couple of posts they really like. Right now this purpose isn't given much real estate. There is a sentence starting with the words "If you want a sampling of the content on the main blog...", but this sentence does little to describe the posts it links to aside from providing a few related keywords.
- There's also a lot of instruction on the about page regarding how to do basic stuff like create posts. Facebook and YouTube don't seem to think it's necessary to provide instructions on how to do basic stuff, so I don't think we need it either. (Just in case, though, it's mostly still all there in my rewrite of the FAQ.)
- Some of the answers in the FAQ make us look very closed-minded (when in fact we're only a little closed-minded). See Why is almost everyone here an atheist? and Why do you all agree on so much? Am I joining a cult?. I think it's possible to answer these questions in a way that's less obnoxious and gives a more accurate impression of what LW is like: 1, 2.
- I tried to link to various posts that are explicitly targeted at newcomers, like "What I've Learned from Less Wrong" and "What is Bayesianism?", but that weren't being shown on the existing newcomer pages.
- I put a lot more stuff in the FAQ, on the theory that a long FAQ doesn't hurt much since folks can just read the answers to the questions that interest them.
- I deliberately avoided looking at the existing pages at first when writing my alternatives, to avoid contamination. My thinking was that being different for its own sake was good if we could reliably figure out which version was better in each case (e.g. overcome status quo bias). Please comment on nitty-gritty differences between the two versions, e.g. if you think I left an important sentence from the originals out or if one of the posts I linked to seems rather weak.
I certainly don't claim to speak for all Less Wrong users. If you have any thoughts, please comment here, send me a private message, or log in to the wiki and edit the candidate pages directly.
I'm especially interested in getting feedback on the FAQ, because I took the liberty of codifying some social norms that were previously implicit: see the section Site Etiquette and Social Norms, especially the bits about Discussion vs Main, politics, and "if you never get voted down, you're not posting enough".
If you think I codified the social norms incorrectly, or you've been thinking they really should be different, please comment! The FAQ seems like a good way to broadcast preferred norms, so I suspect this is an ideal thread to discuss them.
If you've got a suggested change that's nontrivial, I encourage you to create a poll for it here using comments as poll options or HonoreDB's system.
[Link] Holistic learning ebook
This ebook is kind of dopey, but it's one of the few resources I've seen where someone who's reasonably good at learning stuff tries to dissect and communicate the mental mechanisms they use for learning:
http://www.scotthyoung.com/blog/Programs/HolisticLearningEBook.pdf
Here's a quick summary.
- You can learn things faster and better by improving the strategies you use for learning stuff.
- "Holistic" learning is opposed to "rote" learning. Holistic learners make lots of connections between different things they learn, and between things they learn and things that are personally relevant to them. An example might be this diagram of various concepts in electrostatics, which I no longer know how to interpret. Another example might be me remembering about that diagram when reading the book.
- Holistic learners understand concepts in many different ways in order to really "get" them. They focus on building mental models instead of memorizing facts or procedures.
- If you understand a body of knowledge well enough, and forget a specific thing, you should be able to reconstruct your understanding of it based on related things you understand.
- The book uses "model" to mean something specific you understand particularly well that you can explain other things in terms of. For example, your "model" of a subspace (in linear algebra) might be a plane cutting through 3d space. Not all subspaces are planes, but thinking of a plane could be a way to quickly preload a bunch of relevant concepts into your head.
- To learn holistically:
- "Visceralize" concepts by summarizing them with a specific image, sound, feeling, and/or texture. Example: when learning programming, think of an array as a bunch of colored cubes suspended along a cord.
- Use metaphors to understand things better. Whenever you learn something new, try to figure out what it reminds you of. If it's something from a totally unrelated domain, that's great.
- Explore your understanding network, ideally by solving problems, in order to fix glitches in your understanding and refresh it.
- Holistic learning works great for some subjects, like science and math, but it's not as good for others, like history and law. It also helps less with concrete skills, like playing golf.
The author sells various information & coaching products in this vein, but as far as I can tell the ebook I linked to is the only free one: http://www.scotthyoung.com/lmslvidcourse/2.html. (If anyone pays for any of these, they should summarize them (to understand them better) and post the summaries to LW ;].) I'm definitely interested in hearing about other resources people know of on the mechanics of learning.
Someone once told me that if you're a grad student studying under a Nobel laureate, you're much more likely to later win the Nobel yourself. (I just searched the internet for evidence regarding this claim and couldn't find any, so I'm now less confident in it.) This claim suggests that doing good research is learnable.
The person who told me this thought these research skills couldn't be described with words, and could only be transmitted through actual research partnerships. I think it's more likely that they can be described with words, but no Nobel laureate has bothered to sit down and write a book called "How I Do Research". (Please leave a comment if you know of a book like this!)
Even if your fluid intelligence is static and difficult to improve, that doesn't prevent you from improving the mental algorithms and habits you use to accomplish tasks.
Brainstorming additional AI risk reduction ideas
It looks as though lukeprog has finished his series on how to purchase AI risk reduction. But the ideas lukeprog shares are not the only available strategies. Can Less Wrong come up with more?
A summary of recommendations from Exploring the Idea Space Efficiently:
- Deliberately avoid exposing yourself to existing lines of thought on how to solve a problem. (The idea here is to defeat anchoring and the availability heuristic.) So don't review lukeprog's series or read the comments on this thread before generating ideas.
- Start by identifying broad categories where ideas might be found. If you're trying to think of calculus word problems, your broad categories might be "jobs, personal life, the natural world, engineering, other".
- With these initial broad categories, try to include all the categories that might contain a solution and none that will not.
- Then generate subcategories. Subcategories of "jobs" might include "agriculture, teaching, customer service, manufacturing, research, IT, other". You're also encouraged to generate subsubcategories and so on.
- Spend more time on those categories that seem promising.
- You may wish to map your categories and subcategories on a piece of paper.
If you're strictly a lurker, you can send your best ideas to lukeprog anonymously using his feedback box. Or send them to me anonymously using my feedback box so I can post them here and get all your karma.
Thread Usage
Please reply here if you wish to comment on the idea of this thread.
You're encouraged to discuss the ideas of others in addition to coming up with your own ideas.
If you split your ideas into individual comments, they can be voted on individually and you will probably increase your karma haul.
Marketplace Transactions Open Thread
Social scientists think humans operate under one of two different sets of norms, depending on the circumstances: "market norms" or "social norms". The basic idea is that when exchanging money for goods and services formally, it's considered okay to be much more calculating and self-interested than when exchanging favors with friends informally. You can read this blog post by Dan Ariely for more.
It's often considered rude to introduce market norms in an area where they don't traditionally apply, for example by charging money for your presence at a barbecue.
This is a thread where it's okay to talk about trading money for goods and services with other Less Wrong users, which might otherwise be considered rude because you'd be inappropriately introducing market norms. Things you're encouraged to do include:
- Post your resume
- Advertise a product sold by you or your company
- Advertise a service provided by you or your company
- Advertise an open position working for you or your company
The argument for having a thread like this is as follows. Less Wrong users have a variety of goals they wish to accomplish. Some of these goals involve engaging in marketplace transactions. It's plausible that a thread facilitating marketplace transactions between LW users will produce at least as much collective goal accomplishment per unit of attention consumed as a traditional Less Wrong thread.
Anecdotally it seems that introducing market norms takes a certain amount of chutzpah. For example, apparently it takes a certain kind of person to actually be able to name a dollar figure in a sales conversation, and that's why you need a professional salesperson to come along with a sales engineer when selling a technical product. One LWer friend of mine struggled for a while before she was able to get herself to charge money for talk therapy she had been providing to friends for free.
To combat this, please consider voting up folks who post in this thread. They likely overcame some akrasia in promoting their offer.
To discuss the concept of this thread, as opposed to advertising a transaction you wish to engage in, please reply to this comment.
Expertise and advice
In his essay "How The Type Of Advice Someone Can Give Can Change Over Time", the author of SucceedSocially.com writes:
...as someone starts to become proficient in a field, they can start to take all the basic little steps for granted. With time they may even start to forget what it was like to be a beginner, or lose touch with how it felt to not have certain abilities. I've noticed this happening myself.
...
Once someone internalizes the basics and moves into more advanced levels of skill, I've noticed they can want to focus on the bigger picture. They'll want to figure out a handful of profound, succinct principles that tie all the advice about a field together.
...
I think beginners often need that drilled down, specific information though. Sometimes they need to learn some tactics as well before talk of larger strategies can really sink in. I also think if an advice giver gets too broad and abstract, his ideas can become vague to the point of being unhelpful.
If you're trying to learn something, it's natural to try to find the best people in the field to learn from. And if you're thinking of sharing your thoughts on something, it's natural to think twice unless you have a proven track record of accomplishments in that area.
PSA: Learn to code
Presumably you read Less Wrong because you're interested in thinking better.
If so, you might be interested in another opportunity to improve the quality of your thinking: learn to code.
- Employment. You probably don't need a computer science degree. I know of two Less Wrong users who learned to program after college and got jobs at Silicon Valley startups with just a project or two on their resume. (MBlume and FrankAdamek.) See Advice on Getting a Software Job by Tom McCabe for more on this possibility.
- Productivity software. Writing your own is much nicer than using stuff made by other people in my experience. The reason there are so many to-do list applications is because everyone's needs are different. If you use the terminal as your interface, it doesn't take much effort to write this stuff; you'll spend most of your time figuring out what you want it to do. (Terminal + cron on Linux with JSON log files has worked great for my needs.)
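To make the terminal + JSON approach concrete, here is a minimal sketch of a to-do logger along those lines. It's a toy illustration, not the exact setup described above: the filename `todo_log.json` and the one-JSON-object-per-line layout are my own assumptions.

```python
import json
import sys
import time
from pathlib import Path

# Hypothetical log file; one JSON object per line is easy to grep and parse.
LOG = Path("todo_log.json")

def add(task):
    """Append a task to the log with a timestamp."""
    entry = {"task": task, "done": False, "ts": time.time()}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def pending():
    """Return the tasks not yet marked done, in the order they were added."""
    if not LOG.exists():
        return []
    entries = [json.loads(line) for line in LOG.open()]
    return [e["task"] for e in entries if not e["done"]]

if __name__ == "__main__":
    # Usage: `todo.py buy milk` adds a task; no arguments just lists tasks.
    if len(sys.argv) > 1:
        add(" ".join(sys.argv[1:]))
    for task in pending():
        print("-", task)
```

Pair a script like this with a cron entry that mails or prints the pending list each morning, and most of the remaining work really is deciding what you want the tool to do.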
Starting Out
I recommend trying one of these interactive tutorials right now to get a quick feel for what programming is like.
After you do that, here are some freely available materials for studying programming:
- Learn Python the Hard Way. I like Zed's philosophy of having you type a lot of code, and apparently I'm not the only one. (Other books in the Hard Way series.)
- Eloquent JavaScript. No installation needed for this one, and the exercises are nicely interspersed with the text.
- Think Python. More of a computer science focus. ("Computer science" refers to more abstract, less applied aspects of programming.)
- Codecademy (uses JavaScript). Makes use of gamification-type incentives. Just don't lose sight of the fact that programming can be fun without them.
- Hackety Hack (uses Ruby). Might be especially good for younger folks.
- How to Design Programs. This book uses an elegant, quirky, somewhat impractical language called Scheme, and emphasizes a disciplined approach to programming. Maybe that will appeal to you. Structure and Interpretation of Computer Programs is a tougher, more computer science heavy book that also uses Scheme. You should probably have a good understanding of programming with recursive functions before tackling it.
Knowledge value = knowledge quality × domain importance
Months ago, my roommate and I were discussing someone who had tried to replicate Seth Roberts' butter mind self-experiment. My roommate seemed to be making almost no inference from the person's self-reports, because they weren't part of a scientific study.
But knowledge does not come in two grades, "scientific" and "useless". Anecdotes do count as evidence; they are just weak evidence. And well-designed scientific studies constitute stronger evidence than poorly designed studies. There's a continuum of knowledge quality.
Knowing that humans are biased should make us take their stories and ad hoc inferences less seriously, but not discard them altogether.
There exist some domains where most of our knowledge is fairly low-quality. But that doesn't mean they're not worth studying, if the value of information in the domain is high.
For example, a friend of mine read a bunch of books on negotiation and says this is the best one. Flipping through my copy, it looks like the author is mostly just enumerating his own thoughts, stories, and theories. So one might be tempted to discard the book entirely because it isn't very scientific.
But that would be a mistake. If a smart person thinks about something for a while and comes to a conclusion, that's decent-quality evidence that the conclusion is correct. (If you disagree with me on this point, why do you think about things?)
And the value of information in the domain of negotiation can be very high: If you're a professional, being able to negotiate your salary better can net you hundreds of thousands over the course of a career. (Anchoring means your salary next year will probably just be an incremental raise from your salary last year, so starting salary is very important.)
Similarly, this self-help book is about as dopey and unscientific as they come. But doing one of the exercises from it years ago destroyed a large insecurity of mine that I was only peripherally aware of. So I probably got more out of it in instrumental terms than I would've gotten out of a chemistry textbook.
In general, self-improvement seems like a domain of really high importance that's unfortunately flooded with low-quality knowledge. If you invest two hours implementing some self-improvement scheme and find yourself operating 10% more effectively, you'll double your investment in just a week, assuming a 40 hour work week. (ALERT: this seems like a really important point! I'd write an entire post about it, but I'm not sure what else there is to say.)
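The arithmetic behind that break-even claim can be checked in a few lines, using the numbers from the paragraph above (a toy calculation, nothing more):

```python
# Numbers from the text: invest 2 hours in a self-improvement scheme,
# gain 10% effectiveness, work a 40-hour week.
hours_invested = 2
work_week_hours = 40
effectiveness_gain = 0.10

# Extra "effective hours" produced each week by the 10% boost.
extra_output_per_week = work_week_hours * effectiveness_gain  # 4 hours

# Weeks until you've gotten back double what you put in.
weeks_to_double = 2 * hours_invested / extra_output_per_week
print(weeks_to_double)  # 1.0 -> the investment doubles within one week
```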
Here are some free self-improvement resources where the knowledge quality seems at least middling: For people who feel like failures. For students. For mathematicians. Productivity and general ass kicking (web implementation for that last idea). Even more ass kicking ideas that you might have seen already.
Rationality anecdotes for the homepage?
In the comments for The Cognitive Science of Rationality, Spurlock said
The beginning of this post (the list of concrete, powerful, real/realistic, and avoidable cases of irrationality in action), is probably the best introduction to x-rationality I've read yet. I can easily imagine it hooking lots of potential readers that our previous attempts at introduction (our home page, the "welcome to LW" posts, etc) wouldn't.
In fact, I'd nominate some version of that text as our new home page text, perhaps just changing out the last couple sentences to something that encompasses more of LW in general (rather than cogsci specifically). I mean this as a serious actionable suggestion.
There are a couple of problems with using the specific anecdotes from the post:
- It would make the beginning of the post seem boring for anyone who had read the homepage.
- There has been discussion on LW that the sunk cost fallacy may not be much of a fallacy in practice, and commenters on the post were also skeptical of the rare disease example.
Simple but important ideas
Important ideas don't always require long explanations. Here's a famous example:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make.
This is a single paragraph on the third page of a 50-page report. Maybe someone who's good at parsing 60s-era academic English can tell us if the rest is any good.
It seems like anyone who has an idea they want people to take seriously has to write a bunch about it. This is most apparent in popular nonfiction books, which are often bloated far beyond what it takes to communicate the core ideas.
To correct this "presentation length bias", we can fight it from both ends:
- Remember that important ideas don't have to be in an important place, be said by an important person, or be an important length.
- Alert readers to important ideas that don't look important (e.g. "This is a simple idea, but it seems important:"). Do this especially if it's someone else's idea, since people are going to be reluctant to label their own ideas as important.
6 Tips for Productive Arguments
We've all had arguments that seemed like a complete waste of time in retrospect. But at the same time, arguments (between scientists, policy analysts, and others) play a critical part in moving society forward. You can imagine how lousy things would be if no one ever engaged those who disagreed with them.
This is a list of tips for having "productive" arguments. For the purposes of this list, "productive" means improving the accuracy of at least one person's views on some important topic. By this definition, arguments where no one changes their mind are unproductive. So are arguments about unimportant topics like which Pink Floyd album is the best.
Why do we want productive arguments? Same reason we want Wikipedia: so people are more knowledgeable. And just like the case of Wikipedia, there is a strong selfish imperative here: arguing can make you more knowledgeable, if you're willing to change your mind when another arguer has better points.