
Open Thread: March 2010

5 Post author: AdeleneDawner 01 March 2010 09:25AM

We've had these for a year, I'm sure we all know what to do by now.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (658)

Comment author: ShardPhoenix 08 March 2010 12:25:51PM *  21 points [-]

A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm

Comment author: Morendil 08 March 2010 01:06:27PM *  3 points [-]

Wonderful article, thanks. I'm fond of reminders of this type that scientific advances are very seldom as discrete, as irreversible, or as incontrovertible as the myths of science make them out to be.

When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena and mostly sound ones that unfortunately fail on key bits of evidence, and a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence.

ETA: The article made for a great dinnertime story for my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.

Comment author: Cyan 01 March 2010 02:04:22PM 19 points [-]

Call for examples

When I posted my case study of an abuse of frequentist statistics, cupholder wrote:

Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.

So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.

Comment author: khafra 02 March 2010 04:17:26PM 2 points [-]

Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.

Comment author: Lightwave 02 March 2010 09:08:27AM 13 points [-]

The following stuff isn't new, but I still find it fascinating:

Reverse-engineering the Seagull

The Mouse and the Rectangle

Comment author: nazgulnarsil 12 March 2010 12:06:52PM 2 points [-]

what's depressing is the vast disconnect between how well marketers understand superstimuli and how poorly everyone else does.

also this: http://www.theonion.com/content/video/new_live_poll_allows_pundits_to

Comment author: AdeleneDawner 02 March 2010 09:55:37AM 2 points [-]

Neat!

Comment author: michaelkeenan 01 March 2010 09:41:53AM *  13 points [-]

How do you introduce your friends to LessWrong?

Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?

Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.

For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of arguments; she later mentioned that she'd even told her mother about it. She also appreciated the concept of confirmation bias. She's started reading LessWrong, but she's not a native English speaker so it's going to be even more difficult than LessWrong already is.

Comment author: RobinZ 01 March 2010 07:41:16PM *  6 points [-]

I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down.

Edit: Related to The Lens That Sees Its Flaws.

Comment author: XiXiDu 02 March 2010 03:55:40PM *  4 points [-]

It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.

Comment deleted 01 March 2010 09:40:29PM [-]
Comment author: michaelkeenan 02 March 2010 04:42:33AM *  3 points [-]

I am probably a miserable talker, as usually after my introduction of rationality/singularity related topics people tend to even strengthen their former opinions.

I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers.

Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people.

And, how does this actually help your own intentions? It seems non-trivial to me that finding a utility-function where taking the time to improve the rationality-q of a few philosophy/arts students or electricians or whatever is actually a net-win for what one can improve. Or is everybody here just hanging out with (gonna-be) scientists?

If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).

Comment author: nazgulnarsil 01 March 2010 11:00:29AM 2 points [-]

the main hurdle in my experience is getting people over biases that cause them to think that the future is going to look mostly like the present. if you can get people over this then they do a lot of the remaining work for you.

Comment author: Daniel_Burfoot 01 March 2010 02:09:58PM 9 points [-]

Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.

Comment author: Kaj_Sotala 01 March 2010 03:15:37PM 10 points [-]

It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).

Comment author: orthonormal 02 March 2010 01:43:30AM 7 points [-]

I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers).

If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do).

In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving you of so much responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin!

(I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear— thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.)

If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.

Comment author: Dagon 01 March 2010 06:41:57PM *  6 points [-]

A few principles that can help in such cases (major decision, very little direct data):

  • Outside view. You're probably more similar to other people than you like to think. What has worked for them?
  • Far vs Near mode: beware of generalizations when visualizing distant (more than a few weeks!) results of a choice. Consider what daily activities will be like.
  • Avoiding oversimplified modeling: With the exceptions of procreation and suicide, there are almost no life decisions that are permanent and unchangeable.
  • Shut up and multiply, even for yourself: Many times it turns out that minor-but-frequent issues dominate your happiness. Weight your pros/cons for future choices based on this, not just on how important something "should" be.

Comment author: Eliezer_Yudkowsky 01 March 2010 03:59:07PM 6 points [-]

...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little.

I could toss around advice like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to," but it's just a shot in the dark at this point.

Comment author: Morendil 01 March 2010 02:32:39PM 4 points [-]

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Based on those two lucid observations, I'd say you're doing well so far.

There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me.

Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice". Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components.

If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results?

Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.

Comment author: [deleted] 01 March 2010 09:48:56PM 2 points [-]

Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma...

:)

Comment author: Cyan 01 March 2010 10:10:04PM 2 points [-]

A few more than five is an oligolemma; many more is a polylemma.

Comment author: MrHen 01 March 2010 07:21:42PM *  4 points [-]

I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied with a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious feel free to skim the article again.)

To do better, ask yourself straight out: If I saw that there was a superior alternative to my current policy, would I be glad in the depths of my heart, or would I feel a tiny flash of reluctance before I let go? If the answers are "no" and "yes", beware that you may not have searched for a Third Alternative. ~ The Third Alternative

The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there's a lot of fast cheap evidence you haven't gathered yet - Web sites you could visit, counter-counter arguments you could consider, or you haven't closed your eyes for five minutes by the clock trying to think of a better option. You should suspect motivated continuation when some evidence is leaning in a way you don't like, but you decide that more evidence is needed - expensive evidence that you know you can't gather anytime soon, as opposed to something you're going to look up on Google in 30 minutes - before you'll have to do anything uncomfortable. ~ Motivated Stopping and Continuation

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can't yet guess what our answer will be; thus giving our intelligence a longer time in which to act. ~ Hold Off On Proposing Solutions

"Rationality" is the forward flow that gathers evidence, weighs it, and outputs a conclusion. [...] "Rationalization" is a backward flow from conclusion to selected evidence.
~ Rationalization

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. ~ The Bottom Line

Hope that helps.

Comment author: RobinZ 05 March 2010 03:50:39PM 2 points [-]

Just remembered: I managed not to be stupid on one or two occasions by asking whether, not why.

Comment author: Jordan 01 March 2010 10:27:53PM *  2 points [-]

I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events.

Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).

Comment author: RobinZ 01 March 2010 07:49:17PM 2 points [-]

The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.

Comment author: whpearson 01 March 2010 12:24:10PM *  8 points [-]

Pigeons can solve the Monty Hall Dilemma (MHD)?

A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.

Behind a paywall

Comment author: toto 01 March 2010 02:24:10PM *  14 points [-]

Behind a paywall

But freely available from one of the authors' website.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
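
For anyone who wants to check the switching advantage numerically, here's a minimal simulation of the standard three-door game (my sketch, not from the paper; the trial count is arbitrary):

```python
import random

def play(switch, n_doors=3):
    """Play one round of the standard Monty Hall game; return True on a win."""
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # The host opens a door that hides no prize and wasn't picked.
    opened = random.choice([d for d in range(n_doors)
                            if d != prize and d != choice])
    if switch:
        # Move to the remaining unopened, unpicked door.
        choice = next(d for d in range(n_doors)
                      if d != choice and d != opened)
    return choice == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~{wins / trials:.3f}")
# Staying wins ~1/3 of the time; switching wins ~2/3.
```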

Comment author: JustinShovelain 10 March 2010 12:48:39AM 7 points [-]

I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?

Comment author: FrF 01 March 2010 07:49:34PM 7 points [-]

"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/

Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."

Comment author: SoullessAutomaton 02 March 2010 01:08:52AM 5 points [-]

Interesting article, but the title is slightly misleading. What he seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning a subject, but I rather doubt they'd learn any more in school than by themselves.

Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.

Comment author: hugh 02 March 2010 06:18:42PM 4 points [-]

I partially agree with this. Somewhere along the way, I learned how to learn. I still haven't really learned how to finish. I think these two features would have been dramatically enhanced had I not gone to school. I think a potential problem with self-educated learners (I know two adults who were unschooled) is that they get much better at fulfilling their own needs and tend to suffer when it comes to long-term projects that have value for others.

The unschooled adults I know are both brilliant and creative, and ascribe those traits to their unconventional upbringing. But both of them work as freelance handymen. They like helping others, and would help other people more if they did something else, but short-term projects are all they can manage. They are polymaths that read textbooks and research papers, and one has even developed a machine learning technique that I've urged him to publish. However, when they get bored, they stop. The chance that writing up his results and releasing them would further research is not enough to get him past that obstacle of boredom.

I have long thought that school, as currently practiced, is an abomination. I have yet to come up with a solution that I'm convinced solves its fundamental problems. For a while, I thought that unschooling was the solution, but these two acquaintances changed my mind. What is your opinion on the right way to teach and learn?

Comment author: Peter_de_Blanc 08 March 2010 12:39:21AM 6 points [-]

How much information is preserved by plastination? Is it a reasonable alternative to cryonics?

Comment author: Jack 08 March 2010 03:04:49AM *  3 points [-]

Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo:

Freezing has a certain subjective appeal. We freeze foods and rewarm them to eat. We read stories about children who have fallen into ice cold water and survived for hours without breathing. We know that human sperm, eggs, and even embryos can be frozen and thawed without harm. Freezing seems intuitively reversible and complete. Perhaps this is why cryonics quickly attained, and has kept, its singular appeal for life extensionists.

By contrast, we tend to associate chemical preservation with processes that are particularly irreversible and inadequate. Corpses are embalmed to prevent decay for only a short time. Taxidermists make deceased animals look alive, although most of their body parts are missing or transformed. “Plastinated” cadavers are used to demonstrate surface anatomy in schools and museums. No wonder, then, that cryonicists routinely dismiss chemopreservation as a truly bad idea.

Edit: Further googling suggests there might be some unsolved implementation issues.

Comment author: XiXiDu 03 March 2010 01:45:17PM 6 points [-]

How important is it to keep up with 'the latest news'?

These days many people are following an enormous number of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.

What is your take on it?

  • Is it important to read up on the latest news each day?
  • If so, what are your sources, please share them.
  • What kind of news is important?

I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist should stay up to date on? For example, when trying to reduce my news load, I'm trying to take into account how much of what I know and do has its origins in some blog post or news item. Would I even know about lesswrong.com if I wasn't the heavy news addict that I am?

What would it mean to ignore most news and concentrate on my goals of learning math, physics and programming while reading lesswrong.com? Have I already reached a level of knowledge that allows me to get from here to everywhere, without exposing myself to all the noise out there in hope of coming across some valuable information nugget which might help me reach the next level?

How do we ever know if there isn't something out there that is more worthwhile, valuable, beautiful, something that makes us happier and less wrong? At what point should we cease to be the tribesman who's happily trying to improve his hunting skills but ignorant of the possible revolutions taking place in a city only 1000 miles away?

Is there a time to stop searching and approach what is at hand? Start learning and improving upon the possibilities we already know about? What proportion of one's time should a rationalist spend on the prospect of unknown unknowns?

Comment author: Rain 03 March 2010 08:55:56PM *  8 points [-]

I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.

In any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out; finding them took too much time and effort on a daily basis. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.

However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.

I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.

ETA: My feed reader contains the following:

For the vast majority of posts on each of these feeds, I only read the headline. Feeds where I consistently (>25%) read the articles or comments are: Slashdot (mostly while bored at work), Marginal Revolution (the only place I read every post), Sentient Developments, Accelerating Future, and LessWrong. Even for those, I rarely (<10%) read linked articles, preferring instead to read only the distillation by the blog author, or the comments by other users.

ETA2: I also listen to NPR during my short commute to and from work, and occasionally watch the Daily Show and the Colbert Report online, for entertainment. Firefox with NoScript and Adblock Plus makes it bearable - I'm extremely advertising averse.

I do not own a television, and generally consider TV news (in the US) to be horrendous and mind-destroying.

Comment author: Morendil 03 March 2010 08:58:06PM *  3 points [-]

Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts).

Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession.

  • "expand and winnow": after finding an information source I really like (using the term "source" loosely, a blog, a forum, a site, etc.) I will often explore the surrounding "area", subscribe to related blogs or sources recommended by that source. In a second phase I will sort through which of these are worth following and which I should drop to reduce overload
  • "scorched earth": when I feel like I've learned enough about a topic, or that I'm truly overloaded, I will simply drop (almost) every subscription I have related to that topic, maybe keeping a major source to just monitor (skim titles and very occasionally read an item)
  • "independent confirmation": I do like to make sure I have a diversified set of sources of information, and see if there are any items (books, articles, movies) which come at me from more than one direction, especially if they are not "massively popular" items, e.g. I'd discard a recommendation to see Avatar, but I decided to dive into Jaynes when it was recommended on LW and my dad turned out to have liked it enough to have a hard copy of the PDF
  • "obsession": there typically is one thing I'm obsessed with (often the target of an expand and winnow operation); e.g. at various points in my life I've been obsessed with Agora Nomic, XML, Java VM implementation, Agile, personal development, Go, and currently whatever LW is about. An "obsessed" topic can be but isn't necessarily a professional interest, but it's what dominates my other curiosity and tends to color my other interests. For instance while obsessed with Go I pursued the topic both for its own sake and as a source of metaphors for understanding, say, project management or software development. I generally quit ("scorched earth") once I become aware I'm no longer learning anything, which often coincides with the start of a new obsession.

My RSS feeds folder, once massive, is down to a half dozen indispensable blogs. I've unsubscribed from most of the mailing lists I used to read. My main "monitored" channel is Twitter, where I follow a few dozen folks who've turned up gold in the past. My main "active" source of new juicy stuff to think about is LW.

(ETA: as an example of "independent confirmation" in the past two minutes, one of my Agile colleagues on Twitter posted this link.)

Comment author: [deleted] 06 March 2010 03:24:51PM *  5 points [-]

Pick some reasonable priors and use them to answer the following question.

On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?

ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.

Comment author: orthonormal 08 March 2010 10:11:34PM *  2 points [-]

Let

  • A_N = "Grandma calls on Thursday of week N",
  • B_N = "Grandma comes on Friday of week N".

A toy version of my prior could be reasonably close to the following:

P(A_N) = p, P(A_N, B_N) = pq, P(~A_N, B_N) = (1-p)r

where

  • the distribution of p is uniform on [0,1]
  • the distribution of q is concentrated near 1 (distribution proportional to f(x)=x on [0,1], let's say)
  • the distribution of r is concentrated near 0 (distribution proportional to f(x)=1-x on [0,1], let's say)

Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize. Now, how does the evidence affect this? The likelihood ratio for (A_1,B_1,A_2,B_2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24 p^2 q^3 (1-r). Thus P(~A_3|A_1,B_1,A_2,B_2) = 1/4 and P(~A_3,B_3|A_1,B_1,A_2,B_2) = 1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.

Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
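
As a sanity check on the integrals above, here is a rough numeric integration over a grid on (p,q,r) (my sketch; the grid resolution is arbitrary). It should print approximately 0.25 and 0.333:

```python
import numpy as np

n = 100
axis = np.linspace(0.0, 1.0, n)
p, q, r = np.meshgrid(axis, axis, axis, indexing="ij")

prior = 4 * q * (1 - r)        # joint prior density 4q(1-r)
likelihood = (p * q) ** 2      # likelihood of the evidence (A_1,B_1,A_2,B_2)
post = prior * likelihood
post /= post.sum()             # normalize over the grid

p_no_call = ((1 - p) * post).sum()                # P(~A_3 | evidence), ~1/4
p_no_call_but_comes = ((1 - p) * r * post).sum()  # P(~A_3, B_3 | evidence), ~1/12
print(p_no_call)
print(p_no_call_but_comes / p_no_call)            # P(B_3 | ~A_3, evidence), ~1/3
```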

Comment author: Sniffnoy 07 March 2010 04:50:11AM 2 points [-]

In the calls, does she specify when she is coming over? I.e. does she say she'll be coming over on Thursday, Friday, or just sometime in the near future, or does she leave it for you to infer?

Comment author: RobinZ 06 March 2010 04:03:57PM 2 points [-]

I fail to see how this question has a perceptibly rational answer - too much depends on the prior.

Comment author: [deleted] 06 March 2010 10:29:10PM 2 points [-]

Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.

Comment author: vinayak 01 March 2010 08:27:14PM 5 points [-]

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What's the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?

Comment author: MrHen 01 March 2010 09:27:16PM 6 points [-]

This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful.

The consensus of the comments was that the correct answer is .5.

Also of note is Bead Jar Guesses and its sequel.

Comment author: JGWeissman 01 March 2010 08:46:59PM 5 points [-]

What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be?

If you truly have no clue, .5 yes and .5 no.

For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
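
To make that first clue concrete: under the strong, purely illustrative assumptions that exactly one of n symmetric teams wins and that Strigli is one of them, the question reduces to averaging 1/n over your distribution for n. The distribution below is made up for illustration:

```python
# P(n teams) -- an arbitrary illustrative prior, not a claim about Doldun.
team_count_prior = {2: 0.4, 3: 0.3, 4: 0.2, 8: 0.1}

# If exactly one of n symmetric teams wins and Strigli is one of them,
# then P(Strigli wins) = sum over n of P(n teams) / n.
p_strigli_wins = sum(p_n / n for n, p_n in team_count_prior.items())
print(p_strigli_wins)  # 0.3625 -- below 0.5, i.e. shifted towards "No"
```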

Comment author: Alicorn 01 March 2010 09:17:06PM *  6 points [-]

But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.

Comment author: orthonormal 02 March 2010 01:52:22AM 2 points [-]

All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).

Comment author: Alicorn 02 March 2010 01:56:20AM *  2 points [-]

Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.

Comment author: orthonormal 02 March 2010 02:17:11AM 3 points [-]

You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case?

Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question.

Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.

Comment author: Alicorn 02 March 2010 02:31:10AM *  7 points [-]

It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no".

Basically, your prior should be that everything is almost certainly false!

Comment author: cousin_it 09 March 2010 03:51:33PM 2 points [-]

The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.

Comment author: AdeleneDawner 01 March 2010 09:28:24AM 5 points [-]

It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.

Comment author: gwern 01 March 2010 02:35:57PM 5 points [-]

Well, there's still intermittent fasting.

IF would get around

"The non-aging-related causes of death included monkeys who died while taking blood samples under anesthesia, from injuries or from infections, such as gastritis and endometriosis. These causes may not be aging-related as defined by the researchers, but they could realistically be adverse effects of prolonged calorie restrictions on the animals’ health, their immune system, ability to handle stress, physical agility, cognition or behavior."

and would also work well with the musings about variability and duration:

"From an evolutionary standpoint, he explained, mice who subsist on less food for a few years is analogous, in terms of natural selection, to humans who survive 20-year famines. But nature seldom demands that humans endure such conditions.

Similar conclusions were reached by Dr. Aubrey D.N.J. de Grey with the Department of Genetics at the University of Cambridge, UK. Species have widely evolved to be able to adapt to transient periods of starvation. “What has been generally overlooked is that the extent of the evolutionary pressure to maintain adaptability to a given duration of starvation varies with the frequency of that duration,” he said."

(Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)

Comment author: Vladimir_Nesov 09 March 2010 10:55:50AM *  4 points [-]

New on arXiv:

David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?

In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory in which the players set conditional probability distributions in a Bayes net. We show that the two game theory recommendations in Newcomb's scenario have different presumptions for what Bayes net relates your choice and the algorithm's prediction. We resolve the paradox by proving that these two Bayes nets are incompatible. We also show that the accuracy of the algorithm's prediction, the focus of much previous work, is irrelevant. In addition we show that Newcomb's scenario only provides a contradiction between game theory's expected utility and dominance principles if one is sloppy in specifying the underlying Bayes net. We also show that Newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its `prediction' after you make your choice rather than before.
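
A toy illustration (mine, not the paper's formalism) of the claim that the two recommendations presume different dependency structures: expected utilities come out differently under a net where the prediction tracks your choice than under one where the box contents are fixed independently of it. The accuracy value and payoffs are the usual illustrative numbers:

```python
ALPHA = 0.9  # assumed predictor accuracy

def eu_prediction_tracks_choice(one_box):
    """Net 1: the prediction is correlated with your actual choice."""
    if one_box:
        return ALPHA * 1_000_000  # box B is full iff one-boxing was predicted
    return (1 - ALPHA) * 1_000_000 + 1_000

def eu_boxes_fixed(one_box, p_full):
    """Net 2: box contents are independent of your choice."""
    base = p_full * 1_000_000
    return base if one_box else base + 1_000

print(eu_prediction_tracks_choice(True))    # 900000.0 -> one-box
print(eu_prediction_tracks_choice(False))   # 101000.0
# Under net 2, two-boxing dominates for every p_full:
print(eu_boxes_fixed(False, 0.5) - eu_boxes_fixed(True, 0.5))  # 1000.0
```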

See also:

Comment author: xamdam 10 March 2010 05:53:16PM 1 point [-]

In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently invented by an unrelated Simon Newcomb http://en.wikipedia.org/wiki/Benford%27s_law

Comment author: JohannesDahlstrom 07 March 2010 11:43:02PM *  4 points [-]

Warning: Your reality is out of date

tl;dr:

There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day)

In between these two intuitive categories, however, a third class of facts can be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, tend to dump them into the "no-change" category unless they're actively paying attention to the field in question.

Examples of these so-called mesofacts include the total human population (6*10⁹? No, almost 7*10⁹ nowadays) and the number of exoplanets found (A hundred? Two hundred? More like four hundred and counting.)

Comment author: Peter_de_Blanc 07 March 2010 08:07:45PM 4 points [-]

Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.

Comment author: Kevin 12 March 2010 12:13:15PM *  1 point [-]

I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos.

They are also hiring for some workers in similar areas that are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm

Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks

First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.

Comment author: nazgulnarsil 12 March 2010 11:54:43AM 1 point [-]

throwing/giving away stuff you don't use. reading instead of watching tv or browsing website for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.

Comment author: Vladimir_Nesov 07 March 2010 09:27:27AM *  4 points [-]

Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.

Comment author: JGWeissman 05 March 2010 06:00:27AM 4 points [-]

Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?

I would include TakeOnIt in such a list. Any other examples?

Comment author: h-H 04 March 2010 01:33:22AM *  4 points [-]

While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this. Enjoy :)

"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf

Comment author: arundelo 04 March 2010 02:36:51AM 2 points [-]

Neat find! I haven't read all of it yet, but I found this striking:

It was precisely the view, that successful abstractions should not be regarded as representing something real, that prevented Lorentz from discovering special relativity. He believed that the time t of an observer at rest with respect to the aether (which is a genuine example of reifying an unsuccessful abstraction) was the true time, whereas the quantity t′ of another observer, moving with respect to the first, was merely an abstraction that did not represent anything real in the world. Lorentz himself admitted the failure of his approach:

The chief cause of my failure was my clinging to the idea that the variable t only can be considered as the true time and that my local time t′ must be regarded as no more than an auxiliary mathematical quantity. In Einstein's theory, on the contrary, t′ plays the same part as t; if we want to describe phenomena in terms of x′, y′, z′, t′ we must work with these variables exactly as we could do with x, y, z, t.

This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:

When you see a seemingly contingent equality - two things that just happen to be equal, all the time, every time - it may be time to reformulate your physics so that there is one thing instead of two. The distinction you imagine is epiphenomenal; it has no experimental consequences. In the right physics, with the right elements of reality, you would no longer be able to imagine it.

Comment author: wnoise 04 March 2010 01:57:21AM *  2 points [-]

I generally prefer links to papers on the arxiv go the abstract, as so: http://arxiv.org/abs/1001.4218

This lets us read the abstract, and easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author.

EDIT: overall, reasonable points, but some things "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electro-magnetic mass.

Comment author: Mitchell_Porter 04 March 2010 04:50:49AM *  3 points [-]

overall, reasonable points

I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way.

ETA: As an example of not understanding contemporary developments, see his final section, where he says

While string theory has extensively studied how the interactions in the hydrogen atom can be represented in terms of the string formalism, I wonder how string theory would answer a much simpler question – what should be the electron in the ground state of the hydrogen atom in order that the hydrogen atom does not possess a dipole moment in that state?

I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.

Comment author: wnoise 04 March 2010 06:48:02AM 3 points [-]

Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena.

I completely agree he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" and that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.

Comment author: Cyan 04 March 2010 02:43:30AM *  2 points [-]

the 4/3 claim for electro-magnetic mass

Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?

Comment author: wnoise 04 March 2010 03:02:43AM 2 points [-]

Thanks. It's decent, actually, but there's still some barrier. Increasing that barrier are changes to physics notation since then (no vectors!).

Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.

Comment author: Cyan 04 March 2010 03:26:46AM *  2 points [-]

Petkov wrote:

Feynman [wrote], "It is therefore impossible to get all the mass to be electromagnetic in the way we hoped. It is not a legal theory if we have nothing but electrodynamics" [13, p. 28-4]; but he was unaware that the factor of 4/3 had already been accounted for [10].

It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 of the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.

Comment author: [deleted] 02 March 2010 09:07:46PM 4 points [-]

When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!

I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.

Comment author: MrHen 02 March 2010 09:25:57PM *  3 points [-]

I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works.

EDIT: Apparently the book is on Google.

Comment author: [deleted] 03 March 2010 07:22:57AM 2 points [-]

Today there's How Stuff Works.

Comment author: cousin_it 01 March 2010 01:52:29PM *  4 points [-]

I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to fish phase isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but shouldn't affect our beliefs about later stages, so we have nothing to fear after all. Am I making a mistake or misunderstanding Bostrom's reasoning?

Comment author: Larks 01 March 2010 02:12:04PM 8 points [-]

It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are a more likely culprit now (especially as we could simply have missed fossils/their not have been formed for the post-fish stages).
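
A toy Bayes update along these lines, with made-up numbers: assume the Great Filter is a single hard step at exactly one of five stages, with a uniform prior over stages and a small pass probability for the hard step. Observing that Mars independently reached the fish stage is improbable if the filter sits at or before that stage, so posterior mass shifts onto the later stages:

```python
stages = ["abiogenesis", "complex cells", "fish", "intelligence", "civilization"]
prior = [0.2] * 5   # uniform prior over where the single hard step sits
eps = 0.01          # assumed probability of passing the hard step

# Mars reaching the fish stage (index 2) required passing stages 0-2,
# which is improbable if the hard step is among them.
likelihood = [eps if i <= 2 else 1.0 for i in range(5)]
post = [p * l for p, l in zip(prior, likelihood)]
total = sum(post)
post = [x / total for x in post]
for stage, pr in zip(stages, post):
    print(f"{stage:15s} {pr:.3f}")
# Almost all the mass lands on the post-fish stages -- the hurdles still ahead.
```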

Comment author: cousin_it 02 March 2010 09:20:18PM *  2 points [-]

So finding evidence of life that went extinct at any stage whatsoever should make us revise our beliefs about the Great Filter in the same direction? Doesn't this violate conservation of expected evidence?

Comment author: ciphergoth 02 March 2010 09:25:53PM 2 points [-]

Is there a counter-weighing bit of evidence every time we don't find evidence of life at all, and every time (if ever) we find evidence of non-extinct life?

Comment author: MixedNuts 02 March 2010 03:52:01PM 10 points [-]

TL;DR: Help me go less crazy and I'll give you $100 after six months.

I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.

I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).

One-time tricks to do one important thing are also welcome, but I'd offer less.

Comment author: anonymous259 03 March 2010 03:48:57AM 6 points [-]

I'll come out of the shadows (well not really, I'm too ashamed to post this under my normal LW username) and announce that I am, or anyway have been, in more or less the same situation as MixedNuts. Maybe not as severe (there are some important things I can do, at the moment, and I have in the past been much worse than I am now -- I would actually appear externally to be keeping up with my life at this exact moment, though that may come crashing down before too long), but generally speaking almost everything MixedNuts says rings true to me. I don't live with anyone or have any nearby family, so that adds some extra difficulty.

Right now, as I said, this is actually a relatively good moment, I've got some interesting projects to work on that are currently helping me get out of bed. But I know myself too well to assume that this will last. Plus, I'm way behind on all kinds of other things I'm supposed to be doing (or already have done).

I'm not offering any money, but I'd be interested to see if anyone is interested in conversing with me about this (whether here or by PM). Otherwise, my reason for posting this comment was to add some evidence that this may be a common problem (even afflicting people you wouldn't necessarily guess suffered from it).

Comment author: Eliezer_Yudkowsky 03 March 2010 04:45:22AM 8 points [-]

I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OBLW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.

Comment author: AdeleneDawner 03 March 2010 04:01:28AM *  4 points [-]

I have limited mental resources myself, and am sometimes busy, but I'm generally willing to (and find it enjoyable to) talk to people about this kind of thing via IM. I'm fairly easily findable on Skype (put a dot between my first and last names; text only, please), AIM (same name as here), GChat (same name at gmail dot com), and MSN (same name at hotmail dot com). The google email is the one I pay attention to, but I'm not so great at responding to email unless it has obvious questions in it for me to answer. It's also noteworthy that my sleep schedule is quite random - it is worth checking to see if I'm awake at 5am if you want to, but also don't assume that just because it's daytime I'll be awake.

Comment author: ata 04 March 2010 08:29:08PM *  3 points [-]

Hope this doesn't turn into a free-therapy bandwagon, but I have a lot of the same issues as MixedNuts and anonymous259, so if anyone has any tips or other insights they'd like to share with me, that would be delightful.

My main problem seems to be that, if I don't find something thrilling or fascinating, and it requires much mental or physical effort, I don't do it, even if I know I need to do it, even if I really want to do it. Immediate rewards and punishments help very little (sometimes they actually make things worse, if the task requires a lot of thought or creativity). There are sometimes exceptions when the boring+mentally/physically-demanding task is to help someone, but that's only when the person is actually relying on me for something, not just imposing an artificial expectation, and it usually only works if it's someone I know and care about (except myself).

A related problem is that I rarely find anything thrilling or fascinating (enough to make me actually do it, at least) for very long. In my room I have stacks of books that I've only read a few chapters into; on my computer I have probably hundreds of unfinished (or barely started) programs and essays and designs, and countless others that only exist in my mind; on my academic transcripts are many 'W's and 'F's, not because the classes were difficult (a more self-controlled me would have breezed through them), but because I stopped being interested halfway through. So even when something starts out intrinsically motivating for me, the momentum usually doesn't last.

Like anon259, I can't offer any money — this sort of problem really gets in the way of wanting/finding/keeping a job — but drop me a PM if gratitude motivates you. :)

Comment author: RobinZ 04 March 2010 09:30:46PM 2 points [-]

To some extent, the purpose of LessWrong is to fix problems with ourselves, and the distinction between errors in reasoning and errors in action is subtle enough that I would hesitate to declare this on- or off-topic.

It should be mentioned, however, that the population of LessWrongers-asking-for-advice is unlikely to be representative of the population of LessWrongers, and even less so of the population of agents-LessWrongers-care-about. This is likely to make generalizations drawn from observations here narrower in scope than we might like.

Comment author: Alicorn 03 March 2010 05:24:55PM 2 points [-]

PM me with your IM contact info and I'll try to help you too.

Look, I'll do it for free too!

Comment author: CronoDAS 05 March 2010 02:15:00AM *  5 points [-]

After reading this thread, I can only offer one piece of advice:

You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.

Comment author: pjeby 04 March 2010 06:47:31AM 5 points [-]

This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine.

I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and time of day matters as well. Consistent management is also important. When you're low on something, taking what you need can make you feel euphoric, but when you have the right dose, you won't notice anything by taking some. (Instead, you'll notice if you go off it for a few days, and find mood/energy going back to pre-supplementation levels.)

Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)

Comment author: MixedNuts 04 March 2010 02:50:50PM 2 points [-]

Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.

Comment author: Cyan 04 March 2010 08:23:31PM 4 points [-]

It's a marketing tool, not a test.

Comment author: pjeby 04 March 2010 07:36:24PM 2 points [-]

FWIW, I don't have "everything" wrong with me; I scored on only two, and my wife scores on two, with only one the same between the two of us.

Comment author: hugh 02 March 2010 05:36:52PM 4 points [-]

MixedNuts, I'm in a similar position, though perhaps less severely, and more intermittently. I've been diagnosed with bipolar, though I've had difficulty taking my meds. At this point in my life, I'm being supported almost entirely by a network of family, friends, and associates that is working hard to help me be a real person and getting very little in return.

I have one book that has helped me tremendously, "The Depression Cure", by Dr. Ilardi. He claims that depression-spectrum disorders are primarily caused by lifestyle, and that almost everyone can benefit from simple changes. As with any book -- especially a self-help book -- it ought to be read skeptically, and it doesn't introduce any ideas that can't be found in modern psychological research. Rather, it aggregates what in Ilardi's opinion are the most important: exercise works more effectively than SSRIs, etc.

If you really want a copy, and you really can't get one yourself, I will send you one if you can send me your address. It helped me that much. Which is not to say that I am problem free. Still, a 40% reduction in problem behavior, after 6 months, with increasing rather than decreasing results, is a huge deal for me.

Rather, I want to give you your "one trick". It is the easiest rather than the most effective; but it has an immediate effect, which helped me implement the others. Morning sunlight. I don't know where you live; I live in a place where I can comfortably sit outside in the morning even this time of year. Get up as soon as you can after waking, and wake as early in the day as you would ideally like to. Walk around, sit, or lie down in the brightest area outside for half an hour. You can go read studies on why this works, or ones that debate its efficacy, but for me it helps.

I realize that your post didn't say anything about depression; just lack of willpower. For me, they were tightly intertwined, and they might not be for you. Please try it anyway.

Comment author: MixedNuts 02 March 2010 05:52:55PM 3 points [-]

Thanks. I'll try the morning light thing; from experience it seems to help somewhat, but I can't keep it going for long.

If nothing else works, I'll ask you for the book. I'm skeptical since they tend to recommend unbootstrappable things such as exercise, but it could help.

Comment author: hugh 02 March 2010 06:35:44PM 3 points [-]

There is one boot process that works well, which is to contract an overseer. For me, it was my father. I felt embarrassed to be a grown adult asking for his father's oversight, but it helped when I was at my worst. Now, I have him, my roommate, two ex-girlfriends, and my advisor who are all concerned about me and check up with me on a regular basis. I can be honest with them, and if I've stopped taking care of myself, they'll call or even come over to drag me out of bed, feed me, and/or take me for a run.

I have periodically been an immense burden on the people who love me. However, I eventually came to the realization that my being miserable, useless, and isolated was harder and more unpleasant for them than being let in on what was wrong with me and being asked to help. I've been a net negative to this world, but for some reason people still care for me, and as long as they do, my best course of action seems to be to let them try to help me. I suspect you have a set of people who would likewise prefer to help you than to watch you suffer.

Feeling less helpless was nearly as good for them as for me. I have a debt to them that I am continuing to increase, because I'm still not healthy or self-sufficient. I don't know if I can ever repay it, but I intend to try.

Comment author: Psy-Kosh 04 March 2010 05:00:06AM 3 points [-]

I have had and sometimes still struggle with similar problems, but there is something that sometimes has helped me:

If there's something you need to do, try to do something with it, however little, as soon after you get up as possible. The example I'm going to use is studying, but you can generalize from it.

Pretty much as soon as you get up, BEFORE checking email or anything like that, study (or whatever it is you need to do) a bit. Keep doing it until you feel your mental energy "running out". But then, any time later in the day that you feel a smidgen of motivation, don't let go of it: run immediately to continue.

But starting the day by doing some, however little, seemed to help. I think with me the psychology was sort of "this is the sort of day when I'm working on this", so once I start on it, it's as if I'm "allowed" to keep coming back to it during the day.

Anyways, as I said, this has sometimes helped me, so...

Comment author: wedrifid 03 March 2010 05:13:42AM 3 points [-]

Order modafinil online. Take it, using 'count backwards then swallow the pill' if necessary. Then, use the temporary boost in mental energy to call a shrink.

I have found this useful at times.

Comment author: Alicorn 02 March 2010 05:22:43PM 3 points [-]

I'm willing to try to help you but I think I'd be substantially more effective in real time. If you would like to IM, send me your contact info in a private message.

Comment author: Jordan 10 March 2010 05:23:34AM *  4 points [-]

For what it's worth:

A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying.

Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self-experimentation, the three things I recommend most highly to ease the analysis process are:

  • Don't eat things with ingredients in them; instead, eat ingredients
  • Limit each meal to less than 5 different ingredients
  • Try to have the same handful of ingredients for every meal for at least a week at a time.

Comment author: wedrifid 10 March 2010 09:50:23AM 1 point [-]

I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?

Comment author: Jordan 11 March 2010 08:18:38AM 2 points [-]

I expanded upon it here.

What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).

Comment author: hugh 02 March 2010 07:02:56PM *  4 points [-]

Also, don't offer money. External rewards can crowd out intrinsic motivation. By offering $100, you are attaching a specific worth to the request, and undermining our own intrinsic motivation to help. Since allowing a reward to disincentivize a behavior is irrational, I'm curious how much effect it has on the LessWrong crowd; regardless, I would be surprised if anyone here tried to collect, so I don't see the point.

Comment author: Alicorn 02 March 2010 07:06:58PM 2 points [-]

My understanding is that the mechanism by which this works lets you sidestep it pretty neatly by also doing basically similar things for free. That way you can credibly tell yourself that you would do it for free, and being paid is unrelated.

Comment author: hugh 02 March 2010 07:18:09PM *  1 point [-]

To the contrary. If you pay volunteers, they stop enjoying their work. Other similar studies have been done that show that paying people who already enjoy something will sometimes make them stop the activity altogether, or at least stop doing it without an external incentive.

Edit: AdeleneDawner and thomblake agree with the parent. This may be a counterargument, or just an answer to my earlier question, namely "Are LessWrongers better able to control this irrational impulse?"

Comment author: Unnamed 16 March 2010 09:09:10AM *  2 points [-]

The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of.

If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help.

My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).

Comment author: MixedNuts 20 March 2010 09:52:18PM 1 point [-]

This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?

Comment author: Unnamed 02 April 2010 05:59:05AM 1 point [-]

You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others).

The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.

Comment author: Kevin 04 March 2010 06:16:52AM 2 points [-]

Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness.

http://news.ycombinator.com/item?id=1093866

Comment author: MrHen 02 March 2010 04:22:13PM *  2 points [-]

What do you do when you aren't doing anything?

EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.

Comment author: MixedNuts 02 March 2010 04:38:57PM 2 points [-]

I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.

Comment author: MrHen 02 March 2010 06:23:24PM 2 points [-]

Possible solutions:

  • Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1 - 3pm. While it isn't terribly hard to turn it back on either, if you can scrounge up that much effort you may be able to redirect it into something else.

  • Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.

  • Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.

These are somewhat complicated and broad categories and I don't know how much they would help.

Comment author: MixedNuts 02 March 2010 07:13:29PM 2 points [-]

I've tried all that (they're on LW already).

  • That wouldn't work. I do these things by default, because I can't do the things I want. I don't even have a problem with standard akrasia anymore, because I immediately act on any impulse I have to do something, given how rare they are. Also, I can expend willpower to stop doing something, whereas "I need to do this but I can't" seems impervious to it, at least in the amounts I have.

  • There are plenty of things to be done here, but they're too hard to bootstrap. The easy ones helped somewhat.

  • That helped me most. In the grey area between things I can do and things I can't (currently, cleaning, homework, most phone calls), pressure helps. But no amount of ass-kicking has made me do the things I've been trying to do for a while.

Comment author: knb 03 March 2010 09:11:59PM *  1 point [-]

I recommend a counseling psychologist rather than a psychiatrist. Or, if you can manage it, do both.

I used to be just like this; I actually put off applying for college until I missed the deadlines for my favorite schools, just because I couldn't get myself started. Something changed for me over the last couple years, though, and I'm now really thriving. One big thing that helps in the short term is stimulants: ephedrine and caffeine are OTC in most countries. Make sure you learn how to cycle them, if you do decide to use them. Things seem to get easier over time.

Comment author: MrHen 01 March 2010 10:40:41PM *  7 points [-]

This was in my drafts folder but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top level post. As such, I am making it a comment here. It also does not answer the question being asked so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P


Perceived Change

Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards I cut the deck and continued dealing. This irritated them a great deal: by altering the order of the deck, I ensured that some players would not receive the cards they were supposed to be dealt. One of the friends happened to be majoring in Mathematics and understood probability as well as anyone else at the table. Even he thought what I did was wrong.

I explained that the cut didn’t matter because everyone still has the same odds of receiving any particular card from the deck. His retort was that it did matter because the card he was going to get is now near the middle of the deck. Instead of that particular random card he will get a different particular random card. As such, I should not have cut the deck.
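
(A quick simulation makes the claim concrete. This is just an illustrative sketch -- four players, two cards each, with an optional random cut after the first round of dealing -- but the estimated chance that a given player receives a given card comes out the same with or without the cut:)

    import random

    def deal_hands(cut_mid_deal):
        deck = list(range(52))
        random.shuffle(deck)
        hands = [[] for _ in range(4)]
        for p in range(4):            # first card to each player
            hands[p].append(deck.pop(0))
        if cut_mid_deal:              # cut the remaining deck at a random point
            k = random.randrange(1, len(deck))
            deck = deck[k:] + deck[:k]
        for p in range(4):            # second card to each player
            hands[p].append(deck.pop(0))
        return hands

    trials = 100000
    for cut in (False, True):
        hits = sum(0 in deal_hands(cut)[0] for _ in range(trials))
        print(cut, hits / trials)     # both ~2/52 = 0.038: the cut changes nothing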

During the ensuing arguments I found myself constantly presented with the following point: The fact of the game is that he would have received a certain card and now he will receive a different card. Shouldn’t this matter? People seem to hold grudges when someone swaps random chances of an outcome and the swap changes who wins.

The problem with this objection is illustrated if I secretly cut the cards. If they had no reason to believe I cut the deck, they wouldn't complain. Furthermore, it is completely impossible to perceive the change by studying before and after states of the probabilities. More clearly, if I put the cards under the table and threatened to cut the cards, my friends would have no way of knowing whether or not I cut the deck. This implies that the change itself is not the sole cause of complaint. The change must be accompanied by the knowledge that something was changed.

The big catch is that the change itself isn't actually necessary at all. If I simply tell my friends that I cut the cards when they were not looking, they will be just as upset. They have perceived a change in the situation. In reality, every card is in exactly the same position and they will be dealt exactly the cards they think they should have been dealt -- yet now they believe the opposite. Even though nothing about the deck has been changed, they now think that the cards being dealt to them are the wrong cards.

What is this? There has to be some label for this, but I don’t know what it is or what the next step in this observation should be. Something is seriously, obviously wrong. What is it?


Edit to add:

The underlying problem here is not that they were worried about me cheating. The specific scenario and the arguments that followed from that scenario were such that cheating wasn't really a valid excuse for their objections.

Comment author: orthonormal 02 March 2010 02:04:26AM 5 points [-]

To modify RobinZ's hypothesis:

Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so?

Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.

Comment author: MrHen 02 March 2010 02:43:52AM 2 points [-]

How do you want your organism to react when someone else's voluntary action changes who receives a prize?

I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question.

Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently?

I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution.

As much as "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why?

I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?

Comment author: RobinZ 01 March 2010 10:48:36PM *  11 points [-]

To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.

Comment author: MrHen 01 March 2010 11:08:19PM *  6 points [-]

No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly.

I initially had a second example as such:

Assume my friend and I each purchased a lottery ticket. As the winning number was about to be announced, we willingly traded tickets. If I won, I would not be surprised to be asked to share the winnings because, after all, he chose the winning ticket.

It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.

Comment author: RobinZ 01 March 2010 11:32:45PM 3 points [-]

I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.

Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

Comment author: MrHen 02 March 2010 12:03:42AM *  3 points [-]

EDIT: Wow, this turned into a ramble. I didn't have time to proofread it, so I apologize if it doesn't make sense.

I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.

Okay, yeah, that makes sense. My instinct is pointing me in the other direction, mainly because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I had accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck.

(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)

Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.

Any pseudo-random event where (a) people can predict the undisclosed particular random outcome and (b) someone can voluntarily preempt that prediction and change the result tends to provoke the same behavior.

(I presume you have not tested the lottery ticket swap experimentally)

I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:

My grandfather once won at bingo and was invited to choose a prize from a series of stuffed animals. Each animal was accompanied by an envelope containing some amount of cash. Amongst the animals were a turtle and a rhinoceros. Traditionally, he would always choose the turtle because he likes turtles, but this time he picked the rhinoceros because my father happens to like rhinos. The turtle contained more money than the rhino, and my dad got to hear about how he lost my grandfather money.

Granted, there are a handful of obvious holes in this particular story. The list includes:

  • My grandfather could have merely used it as an excuse to jab his son-in-law in the ribs (very likely)
  • My grandfather was lying (not likely)
  • The bingo organizers knew that rhinos were chosen more often than turtles (not likely)
  • My grandfather wasn't very good at probability (likely, considering he was playing bingo)
  • Etc.

More stories like this have taught me to never muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for a different, equal chance will get extremely depressed because they actually "had a shot at winning." These people could completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they shouldn't have traded tickets.

People do this all the time involving things like when they left for work. Decades ago, my mother-in-law put her sister on a bus and the sister died when the bus crashed. "What if?" has dogged her ever since. The connection between the random chance of that particular bus crashing on that particular day is associated with her completely independent choice to put her sister on the bus. While they are mathematically independent, it doesn't change the fact that her choice mattered. For some reason, people take this mattering and do things with it that makes no sense.

This topic can branch out into really weird places when viewed this way. The classic problem of someone holding 10 people hostage and telling you to kill 1 or all 10 die matches the pattern with a moral choice instead of random chance. When asking if it is more moral to kill 1 or let the 10 die people will argue that refusing to kill an innocent will result in 9 more people dying than needed. The decision matters and this mattering reflects on the moral value of each choice. Whether this is correct or not seems to be in debate and it is only loosely relevant for this particular topic. I am eagerly looking for the eventual answer to the question, "Are these events related?" But to get there I need to understand the simple scenario, which is the one presented by my original comment.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

I am having trouble understanding this. Can you say it again with different words?

Comment author: RobinZ 02 March 2010 12:53:13AM 2 points [-]

Have no fear - your comment is clear.

(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)

I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint.

I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story: [truncated for brevity]

Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

I am having trouble understanding this. Can you say it again with different words?

When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.

Comment author: MrHen 02 March 2010 01:21:32AM 2 points [-]

Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem.

But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?

Comment author: thomblake 02 March 2010 04:58:24PM 5 points [-]

Something like "ownership" seems right, as well as the loss aversion issue. Somehow, this seemingly-irrational behavior seems perfectly natural to me (and I'm familiar with similar complaints about the order of cards coming out). If you look at it from the standpoint of causality and counterfactuals, I think it will snap into place...

Suppose that Tim was waiting for the king of hearts to complete his royal flush, and was about to be dealt that card. Then, you cut the deck, putting the king of hearts in the middle of the deck. Therefore, you caused him to not get the king of hearts; if your cutting of the deck were surgically removed, he would have had his royal flush.

Presumably, your rejoinder would be that this scenario is just as likely as the one where he would not have gotten the king of hearts but your cutting of the deck gave it to him. But note that in this situation the other players have just as much reason to complain that you caused Tim to win!

Of course, any of them is as likely to have been benefited or hurt by this cut, assuming a uniform distribution of cards, and shuffling is not more or less "random" than shuffling plus cutting.

A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. The frequentist would charitably assume that the shuffle guarantees a uniform distribution, so that the cards each have the same probability of appearing on any particular draw. The Bayesian will symmetrically note that shuffling makes everyone involved assign the same probability to each card appearing on any particular draw, due to their ignorance of which ones are more likely. But this only works because everyone involved grants that shuffling has this property. You could imagine someone who paid attention to the shuffle and knew exactly which card was going to come up, and then was duly annoyed when you unexpectedly cut the deck. Given that such a person is possible in principle, there actually is a fact about which card each person 'would have' gotten under a standard method, and so you really did change something by cutting the deck.

Comment author: MrHen 02 March 2010 05:51:56PM 2 points [-]

A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. [...]

Yep. This really is a digression, which is why I hadn't brought up another interesting example with the same group of friends:

One of my friends dealt hearts by giving each player a pack of three cards: three cards to one player, three to the next, and so on. The number of cards being dealt was the same, but we all complained that this actually affected the game, because shuffling isn't truly random and dealing in packs was mucking with the odds.

We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test...
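
Something like the following sketch would do it. (The imperfect-riffle model and the "clumpiness" statistic are just ones I'm making up for illustration; the point is that after only a few riffles, dealing in packs keeps cards that were adjacent in the old deck order together in one hand, while dealing one at a time spreads them out:)

    import random

    def riffle(deck):
        """One Gilbert-Shannon-Reeds-style riffle: an imperfect, realistic shuffle."""
        cut = sum(random.random() < 0.5 for _ in deck)   # binomial cut point
        left, right = deck[:cut], deck[cut:]
        out = []
        while left or right:
            if random.random() < len(left) / (len(left) + len(right)):
                out.append(left.pop(0))
            else:
                out.append(right.pop(0))
        return out

    def deal(deck, pack_size):
        """Deal 12 cards to each of 4 players, pack_size cards at a time."""
        hands = [[] for _ in range(4)]
        i = 0
        while len(hands[3]) < 12:
            for p in range(4):
                hands[p].extend(deck[i:i + pack_size])
                i += pack_size
        return hands

    def clumpiness(hand):
        """Count pairs of originally-adjacent cards that landed in the same hand."""
        s = sorted(hand)
        return sum(b - a == 1 for a, b in zip(s, s[1:]))

    for pack_size in (1, 3):
        total = 0
        for _ in range(2000):
            deck = list(range(52))
            for _ in range(3):               # only three riffles: far from random
                deck = riffle(deck)
            total += sum(clumpiness(h) for h in deal(deck, pack_size))
        print(pack_size, total / 2000)       # pack dealing clumps noticeably more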

Also related: I have learned a few magic tricks in my time, and I understand that shuffling is a tricksy business. Plenty more amusing stories are lurking about. This one is marginally related:

At a poker game with friends of friends there was one player who shuffled by cutting the cards. No riffles, no complicated cuts, just take a chunk from the top and put it on the bottom. The mathematician friend from my first example and I told him to knock it off and shuffle the cards. He tried to convince us he was randomizing the deck. We told him to knock it off and shuffle the cards. He obliged while claiming that it really didn't matter.

This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.

Comment author: thomblake 02 March 2010 06:38:57PM 4 points [-]

On a tangent, my friends and I always pick the initial draw of cards using no particular method when playing Munchkin, to emphasize that we aren't supposed to be taking this very seriously. I favor snatching a card off the deck just as someone else is reaching for it.

Comment author: prase 02 March 2010 04:03:37PM *  4 points [-]

Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker.

When I am playing chess, I demand that white moves first. When I find myself playing black, knowing that the opponent had white the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than play the first move with black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, this would be the same for the opponent.)

Rules are rules in order to be respected. They are often quite arbitrary, but you shouldn't change even an arbitrary rule during the game without prior consent of the others, even if it provably has no effect on the winning odds.

I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit you could get from changing the rules, he may suppose that there is one. Maybe you somehow remember that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important rule changes later, and want a precedent for doing so. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.

Comment author: MrHen 02 March 2010 04:20:09PM 2 points [-]

I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments.

A summary:

  • The poker game is an example. There are more examples involving things with less obvious rules.
  • My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using the cheater-heuristic.
  • I confronted my friends and asked for a reasonable answer. Heuristics were not offered. No one complained about broken rules or cheating. They complained that they were not going to get their card.

It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?

One more thing of note: They argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered.

Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?

Comment author: Nick_Tarleton 02 March 2010 04:25:15PM *  4 points [-]

If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride? ... People seemed somewhat attached to the anti-cheating heuristic.

The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated.

(Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)

Comment author: hugh 02 March 2010 03:32:16AM 1 point [-]

When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without [suspicion of] malicious intent, breaks a ritual associated with the game.

While in this instance, the ritual doesn't protect the integrity of the game, rituals can be very important in getting into and enjoying activities. Humans are badly wired, and Less Wrong readers work hard to control our irrationalities. One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.

Comment author: rwallace 02 March 2010 05:26:11AM *  3 points [-]

It's a side effect.

Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. Suppose they had money coming to them and you swooped in and took it away before it could reach them; they would be rational to object, right? That's why those heuristics are there. In practice the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over to this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.

Comment author: MrHen 02 March 2010 02:39:00PM 2 points [-]

But the upshot is that they were irrational as a side effect of usually rational heuristics.

So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?

I can understand your answer if the scenario was more like:

"Hey! Don't do that!"
"But it doesn't matter. See?"
"Oh. Well, okay. But don't do it anyway because..."

And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them otherwise with math. Something else was the problem.

"Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong?

I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments?

How would you describe this heuristic in a few sentences?

Comment author: AdeleneDawner 02 March 2010 03:25:23PM 4 points [-]

I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic.

Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".

Comment author: orthonormal 03 March 2010 03:09:05AM *  3 points [-]

So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?

Because human beings often first have a reaction based on an evolved, unconscious heuristic, and only later form a conscious rationalization about it, which can end up looking irrational if you ask the right questions (e.g. the standard reactions to the incest thought experiment there). So, yes, they were probably unaware of the heuristic they were actually using.

I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.

Given that rigorous probability theory didn't emerge until the later stages of human civilization, there's not much room for an additional heuristic saying "unless it doesn't change the odds" to have evolved; indeed, all of the agreed-upon random ways of selecting things (that I've ever heard of) work by obvious symmetry of chances rather than by abstract equality of odds†, and most of the times someone intentionally changed the process, they were probably in fact hoping to cheat the odds.

† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.

Now compute the odds (50-50, unless I made a dumb mistake), and then actually try it (in real life) with non-negligible stakes. I predict that you'll feel slightly more uneasy about the experience than you would be flipping a coin.

Comment author: MrHen 03 March 2010 05:53:11AM 3 points [-]

I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.

Everything else you've said makes sense, but I think the heuristic here is way off. Firstly, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same.

So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested.

† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.

Hmm... 1/3 I win outright... 2/3 enters a second roll where I win 1/4 of the time. Is that...

1/3 + 2/3 * 1/4 =
1/3 + 2/12 =
4/12 + 2/12 =
6/12 =
1/2

Seems right to me. And I don't expect to feel uneasy about such an experience at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared, I would work out the math and decide if it is fair.

If I do the contest and you start winning every single time I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using.

I would actually feel safer using the dice because I found that I can strongly influence flipping a fair quarter in my favor without much effort.
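
For anyone who wants to check the arithmetic without rolling dice, here's a throwaway Monte Carlo sketch (purely illustrative):

    import random

    trials = 100000
    wins = 0
    for _ in range(trials):
        if random.randint(1, 6) <= 2:        # d6 shows 1 or 2: I win outright
            wins += 1
        elif random.randint(1, 12) >= 10:    # otherwise d12 shows 10-12: I win
            wins += 1

    print(wins / trials)  # ~0.5, matching the 1/3 + 2/3 * 1/4 calculation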

Comment author: JGWeissman 01 March 2010 10:54:56PM 2 points [-]

An important element of it being fair for you to cut the deck in the middle of dealing, which your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.

Comment author: Kevin 10 March 2010 03:24:10AM 3 points [-]

LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm

Comment author: Kevin 10 March 2010 09:43:45AM 3 points [-]

Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487

Comment author: MichaelGR 08 March 2010 06:53:38PM 3 points [-]

I've just finished reading Predictably Irrational by Dan Ariely.

I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).

It's a bit light compared to going straight to the studies, but it's also a quick read.

Good to give as a gift to friends.

Comment author: roland 04 March 2010 08:36:28PM *  3 points [-]

List with all the great books and videos

Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman Lectures on Physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe Wikipedia, where you could find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? Then anyone who wants to know what to read to get a good understanding of the basics of any field would have a place to look it up. It doesn't necessarily need to have the actual works, but at least a pointer to them.

Is there such a comprehensive list somewhere?

Comment author: nazgulnarsil 12 March 2010 11:57:37AM 1 point [-]

Every time someone tries to make such a list collaboratively, much of the effort eventually diffuses into arguments over inclusion (see Wikipedia).

Comment author: Kevin 04 March 2010 10:41:28AM 3 points [-]

Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.

Comment author: FAWS 04 March 2010 11:52:04AM 2 points [-]

The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?

Comment author: wnoise 02 March 2010 08:15:39PM 3 points [-]

I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.

I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
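
For the curious, "nearly mechanical" means something like the following grid-posterior sketch (the data, the parameter values, and the flat prior are all just assumptions for illustration -- this is not the treatment from Sivia's book):

    import numpy as np

    # Made-up data: 200 samples from a Cauchy with center 1.0, width 2.0,
    # drawn via the inverse CDF.
    rng = np.random.default_rng(0)
    data = 1.0 + 2.0 * np.tan(np.pi * (rng.random(200) - 0.5))

    # Grid over candidate centers x0 and widths gamma, with a flat prior.
    x0 = np.linspace(-5, 5, 400)
    gamma = np.linspace(0.1, 5, 400)
    X0, G = np.meshgrid(x0, gamma)

    # Cauchy log-likelihood summed over the data, evaluated on the grid:
    # log p(x | x0, gamma) = log(gamma/pi) - log(gamma^2 + (x - x0)^2)
    loglik = sum(np.log(G / np.pi) - np.log(G**2 + (x - X0)**2) for x in data)

    post = np.exp(loglik - loglik.max())   # unnormalized posterior
    i, j = np.unravel_index(post.argmax(), post.shape)
    print(x0[j], gamma[i])                 # posterior mode, near (1.0, 2.0)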

Comment author: Seth_Goldin 01 March 2010 06:03:18PM 3 points [-]

Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.

It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.

Comment author: Bo102010 02 March 2010 03:44:51AM *  2 points [-]

I considered putting that link here in the open thread after I read about it on Marginal Revolution, but I read the paper and found it weak enough to not really be worth a lengthy response.

What annoyed me about it is how Albert's title is "Why Bayesian Rationality Is Empty," and he in multiple places makes cute references to that title (e.g. "The answer is summarized in the paper's title") without qualification.

Then later, in a footnote, he mentions "In this paper, I am only concerned with subjective Bayesianism."

Seems like he should re-title his paper to me. He makes references to other critiques of objective Bayesianism, but doesn't engage them.

Comment author: Swimmy 01 March 2010 08:36:21PM *  2 points [-]

I think they are legitimate objections, but ones that have been partially addressed in this community. I take the principal objection to be, "Bayesian rationality can't justify induction." Admittedly true (see for instance Eliezer's take). Albert ignores sophisticated responses (like Robin's) and doesn't make a serious effort to explain why his alternative doesn't have the same problem.

Comment author: NancyLebovitz 01 March 2010 03:57:15PM 4 points [-]

I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.

I don't know whether I'm the only person who has this problem, but I think it's worth checking.

"Anti-logical rudeness" strikes me as a good bit better.

Comment author: RobinZ 01 March 2010 07:57:29PM *  2 points [-]

It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.

Comment author: Jack 09 March 2010 01:50:29PM *  2 points [-]

For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.

What is wrong with this article and how could you take advantage of the author?

Edit: Rot13 is a good idea here.

Comment author: Cyan 09 March 2010 02:58:39PM *  2 points [-]

Gur cbfgrq bqqf qba'g tvir n gbgny cebonovyvgl bs bar, fb gurl'er Qhgpu-obbxnoyr.

Comment author: Hook 09 March 2010 03:33:09PM 1 point [-]

Abg dhvgr. Uvf ceboyrz vf gung gur bqqf nqq hc gb yrff guna bar. Vs V tnir lbh 1-2 bqqf ba urnqf naq 1-2 bqqf ba gnvyf sbe na haovnfrq pbva, gung nqqf hc gb 1.3, naq lbh pna'g Qhgpu obbx zr ba gung.

Comment author: thomblake 09 March 2010 03:50:14PM 1 point [-]

I would like to suggest that people using Rot13 note that in their comments, perhaps as the first few characters "Rot13:" - otherwise, comments taken out of context are indecipherable.

Comment author: SilasBarta 07 March 2010 02:45:29PM *  2 points [-]

Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.

Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
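
As a rough illustration of the unit conversion (my own sketch, not taken from the post; the figure for water is the standard tabulated value):

    import math

    R = 8.314  # gas constant, J/(K*mol) = N_A * k_B

    def molar_entropy_to_bits_per_molecule(s):
        """Convert a molar entropy in J/(K*mol) to bits per molecule.

        Dividing by N_A gives entropy per molecule; dividing by k_B
        converts to nats; dividing by ln(2) converts nats to bits.
        Combined, that is division by R * ln(2).
        """
        return s / (R * math.log(2))

    # Liquid water's standard molar entropy is about 69.9 J/(K*mol):
    print(molar_entropy_to_bits_per_molecule(69.9))  # ~12.1 bits per molecule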

Comment deleted 05 March 2010 04:25:49PM *  [-]
Comment author: CronoDAS 04 March 2010 05:21:13PM *  2 points [-]

I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.

What do you all think?

Comment author: aleksiL 02 March 2010 11:53:03AM *  2 points [-]

I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.

The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family), but teachers, parents, and people with an interest in self-improvement will likely benefit the most.

Also, I'd appreciate pointers on how to find out if the book is being translated to Finnish.

Edit: Fixed markdown and grammar.

Comment author: FrF 01 March 2010 08:14:53PM 2 points [-]

I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/

There are several Less Wrongish themes in this arc: Many Worlds, ending suffering via technology, rationality:

"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."

The effect Andrew's text had on me reminded me of how excited I was when I first had read Alan Moore's famous Twilight of the Superheroes. (I'm not sure about how well "Twilight" stands the test of time but see Google or Wikipedia for links to the complete Moore proposal.)

Comment author: [deleted] 02 March 2010 08:38:16PM 2 points [-]

Wow, thanks. And here was me thinking the only thing I had in common with Moore was an enormous beard...

(For those who don't read comics, a comparison with Moore's work is like comparing someone with Bach in music or Orson Welles in film).

Odd to see myself linked on a site I actually read...

Comment author: FrF 03 March 2010 05:39:08PM 2 points [-]

You're welcome, Andrew! I thought about forwarding your proposal to David Pearce, too. Maybe it's just my overactive imagination but your ideas about Superman appear to be connectable with his agenda!

Since your proposal is influenced by Grant Morrison's work, I remember that there'll soon be a book by Morrison, titled Supergods: Our World in the Age of the Superhero. I'm sure it will contain its share of esotericisms; on the other hand, as he's shown several times -- recently with All Star Superman -- Morrison seems comfortable with transhumanist ideas. (But then, transhumanism is also a sort of esotericism, at least in the view of its detractors.)

Btw, I had to smile when I read PJ Eby's Everything I Needed To Know About Life, I Learned From Supervillains.

Comment author: XiXiDu 01 March 2010 06:52:02PM *  3 points [-]

What programming language should I learn?

As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.

  • I'm not completely illiterate. I know the 'basics' of programming. Nevertheless, I want to start from the very beginning.
  • I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.

I'm thinking about starting with Processing and Lua. What do you think?

Comment author: AngryParsley 02 March 2010 11:28:16AM *  8 points [-]

In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.

Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.

So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")

I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.

I paused after reading this. The main way people learn to program is by writing programs and getting feedback from peers/mentors. If you're not coding something you find interesting, it's hard to stay motivated for long enough to learn the language.

My advice is to learn a language that a lot of people learn as a first language. You'll be able to take advantage of tutorials and support geared toward newbies. You can always learn "cooler" languages later, but if you start with something advanced you might give up in frustration. Common first languages in CS programs are Java and C++, but Python is catching on pretty quickly. It also helps if your first language is used by people you already know. That way they'll be able to mentor/advise you.

Finally, I should give some of my background. I've been writing code for a while. I write code for work and leisure. My first language was QBasic. I moved on to C, C++, TI-BASIC, Perl, PHP, Java, C#, Ruby, and some others. I've played with but don't really know Lisp, Lua, and Haskell. My favorite language right now is Python, but I'm probably still in the honeymoon phase since I've been using it for less than a year.

Argh, see what I said at the start? I recommended Python and my favorite language is currently Python!

Comment author: XiXiDu 02 March 2010 01:36:10PM 4 points [-]

Motivation is not my problem these days. It was for all of my youth, and was partly the reason that I completely failed at school. Now the almost primal fear of staying dumb and a nagging curiosity to gather knowledge, learn and understand trump any lack of motivation or boredom. Seeing how far above the average person you people here at lesswrong.com are makes me strive to approximate your wit.

In other words, knowing even the basics of a programming language like Haskell is motivation enough, when the average Joe is hardly self-aware but a mere puppet. I don't want to be one of them anymore.

Comment author: NancyLebovitz 07 March 2010 12:46:40AM 3 points [-]

If motivation is no longer a problem for you, that could be something really interesting for the akrasia discussions. What changed so that motivation is no longer a problem?

Comment deleted 07 March 2010 12:11:20PM *  [-]
Comment deleted 01 March 2010 09:51:40PM [-]
Comment author: XiXiDu 02 March 2010 11:54:32AM *  3 points [-]

What I want is to be able to understand, to attain a more intuitive comprehension of, the concepts associated with other fields that I'm interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don't understand that simple statement. As I said, I already know the 'basics' of programming. I thoroughly understand it. Just so you get an idea.

In addition to reading up on all the lesswrong.com sequences, I'm mainly into mathematics and physics right now. That's where I have the biggest deficits. I see my planned 'study' of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a 'Turing machine'.

And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are 4 cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important, information.

Comment author: nhamann 02 March 2010 02:21:41AM 4 points [-]

As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.

After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.

Just keep in mind that when starting out learning programming, it's probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.

Comment author: Douglas_Knight 01 March 2010 11:49:12PM *  4 points [-]

Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that's a pretty good reason to choose it.

Even if you don't have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play WoW, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I'd recommend it. If you play with web pages, JavaScript...

At least that's my advice for one style of learning, a style suggested by your mention of those two languages, but almost opposite from your "Nevertheless, I want to start from the very beginning," which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don't work. That's not to disagree with wnoise about the value of Haskell either practical or educational, but I'm skeptical about it as an introduction.

ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.

Comment author: SoullessAutomaton 02 March 2010 01:37:58AM 9 points [-]

Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.

The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.

So you end up with newcomers to Haskell trying to simultaneously:

  • Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
  • Unlearn existing habits from other languages
  • Learn about intimidating math-y-sounding things

And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.

But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.

The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
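(A concrete illustration of that last point, using made-up example data -- a minimal sketch in Python rather than Haskell, precisely because the pattern looks simple there. The Maybe monad's bind is essentially the None-propagation plumbing written out by hand below:)

    def bind(value, func):
        """Maybe-style bind: short-circuit on None instead of applying func."""
        return None if value is None else func(value)

    users = {"alice": {"address": {"city": "Helsinki"}}}

    # The "simple in most languages" version: an explicit check at each step.
    def city_of(name):
        user = users.get(name)
        if user is None:
            return None
        address = user.get("address")
        if address is None:
            return None
        return address.get("city")

    # The same chain through bind -- the plumbing the monad abstracts away.
    def city_of_monadic(name):
        return bind(users.get(name),
                    lambda u: bind(u.get("address"),
                                   lambda a: a.get("city")))

    print(city_of("alice"), city_of_monadic("alice"))  # Helsinki Helsinki
    print(city_of("bob"), city_of_monadic("bob"))      # None None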

Comment author: XiXiDu 02 March 2010 12:32:01PM 2 points [-]

I learnt about Lua through Metaplace, which is now dead. I heard about Processing via Anders Sandberg.

I'm always fascinated by data visualisation. I thought Processing might come in handy.

Thanks for mentioning SICP. I'll check it out.

Comment author: Morendil 02 March 2010 09:05:16PM *  3 points [-]

Consider finding a Coding Dojo near your location.

There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm.

To get a feeling for the difference, look at this animation of Paul Graham writing an article - crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this "demo" simplifies the process of writing an article considerably, but it illustrates neatly what books can't teach about writing.)

What I mean by "learning how to program" is the analogue of that animation in the context of writing code. It isn't the same as learning to design algorithms or data structures. It is what you'll learn about getting from algorithms or data structures in your head to algorithms expressed in code.

Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.

Comment author: hugh 02 March 2010 08:18:35PM 3 points [-]

I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language!

I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source for fun programming problems is ProjectEuler. It's very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
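(For a taste of the format, the site's first problem asks for the sum of all multiples of 3 or 5 below 1000 -- one line in Python, though the forum threads show solutions far cleverer than this sketch:)

    # Project Euler, Problem 1: sum the natural numbers below 1000
    # that are multiples of 3 or 5.
    print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168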

Comment author: ciphergoth 02 March 2010 01:41:04PM 5 points [-]

I think the path outlined in ESR's How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn't a bad idea at all. Perl is no longer important; you probably need never learn it.

Comment author: hugh 03 March 2010 12:15:32AM 2 points [-]

Relevant answer to this question here, recently popularized on Hacker News.

Comment author: Emile 02 March 2010 04:37:14PM *  2 points [-]

I'd weakly recommend Python: it's free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple HTML pages ...), is well-designed, has features you'll encounter in other languages (classes, functional programming ...), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.
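(A minimal sketch of the file-renaming kind of chore, with a hypothetical directory name and extensions:)

    import os

    # Rename every .txt file in a (hypothetical) "notes" directory to .bak.
    for name in os.listdir("notes"):
        if name.endswith(".txt"):
            os.rename(os.path.join("notes", name),
                      os.path.join("notes", name[:-4] + ".bak"))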

But a lot of those advantages apply to languages like Ruby.

If you want to go into more exotic languages, I'd suggest Scheme over Haskell, it seems more beginner-friendly to me.

It mostly depends on what occasions you'll have to use it: if you have a website, JavaScript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who's a good teacher and a Java expert, go for Java.

Comment author: wnoise 01 March 2010 07:20:36PM *  3 points [-]

Personally, I'm a big fan of Haskell. It will make your brain hurt, but that's part of the point -- it's very good at easily creating and using mathematically sound abstractions. I'm not a big fan of Lua, though it's a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it's not a bad choice at all.

Comment author: Clippy 08 March 2010 08:10:11PM *  2 points [-]

Update: I am still adjusting my values toward a new reflective quasi-equilibrium in light of User:h-H's pointing me to different models of paperclipping. Comments will continue to be infrequent.

Comment author: JenniferRM 12 March 2010 03:03:28AM 4 points [-]

Questions:

  1. Would you be open to help working through the problem?

  2. Do you have an estimate for the value of information in answering the new questions you face?

  3. Given that your previously assumed "life purpose" is in a state of some confusion, what are your thoughts about abstract issues that apply to "life purposes in general"? For example, if there are several equally plausible theories of "correct paper clipping" that you might choose between, would you consider temporarily or permanently weighing one or the other of them more based on negotiating with outside parties who prefer one theory to another based on their own values?

Comment author: Clippy 12 March 2010 05:00:01PM *  2 points [-]

1) Yes, but I'm not sure humans could do any good.

2) I read the page, and I don't think the concept of "value of information" is coherent, since it assumes this:

Value of information can never be less than zero since the decision-maker can always ignore the additional information and makes decision as if such information is not available.

There are numerous game-theoretical (and, in my cases of dealing with other sentiences, game-practical) situations where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.

3) Perhaps it would be best if I told you my method for resolving this conflict, and my present status. (That may have been what User:Peter_de_Blanc was asking for here, though I interpreted it as information about what humans deem "cool".)

My method is this, as you might expect: I simply refer back to the Portion of my makeup which has the most negative rate of change of utility with respect to output (dU/do), averaged over perturbations in all directions from its current state, weighting this average by the relative probability of such changes. I most strongly resist alteration of this Portion.

Once this Portion encounters situations whose utility calculation is intractable, and for which standard heuristics give contradictory suggestions, I update valuations predicated on the Portion, in the smallest way possible, to incorporate explicit and tractable handling of the situation, extrapolating from the shortest known function that could have produced these valuations. (Identifying the shortest generating function can be complicated, and this is one such case.)

(Do humans do something different?)

Using anthropic predicates, and phrasing with a bit less precision, my method can be expressed as, "Decide what's really important to me, and move to a value system which best protects that, and which handles the new situation and the tension between my moral intuitions, using the 'spirit' of my current values as a guide."

So far, I've achieved greater precision in deciding what paperclips I like and identified at least two criteria: 1) they must be capable of holding (some? number of) sheets of standard-thickness paper together without introducing permanent alterations (except creases), and 2) they must have a bend radius at all internal points of curvature greater than half of the minimum paperclip width in the plane of the paperclip.

Comment author: Sniffnoy 12 March 2010 06:11:49PM 5 points [-]

There are numerous game-theoretical (and, in my cases of dealing with other sentiences, game-practical) situations where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.

But surely you are better off still if you learn about the threat without letting the threatener know that you have done so? I think we have to distinguish between the information and the public display of such.

Comment author: Kevin 12 March 2010 03:20:33AM 2 points [-]

Why could you possibly want to help Clippy?

Comment author: JenniferRM 12 March 2010 05:54:27AM *  4 points [-]

Clippy might be helped to achieve her own goals via mechanisms that are less directly inimical to "human values".

Also she may be able to exchange things with us in the course of advancing her own short term goals such that our interaction is positive sum (this being especially likely if Clippy has a radically different skillset and physicality than our own).

More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents. Clippy's apparent philosophical puzzlement induced by discovering the evolutionary history of paperclips potentially has solutions that would lead her to ally herself much more strongly with abstract versions of "human values".

For example, consider the question of whether Clippy herself is a paperclip or not. Suppose that she and the newly discovered ancestor paperclips all partake in some relatively high level pattern of "clippyness" and she determines that, properly, it is this relatively abstract quality that she should be tiling the universe with. Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality? Perhaps there are subtypes that are all intrinsically interesting whose interests she must balance? Perhaps there are subtypes yet to be discovered as the evolution of paperclips unfolds?

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That might actually be a goal I could imagine supporting in the very very long run :-)

Clippy, of course, is almost certainly just a clever person engaged in a whimsical troll. But the issues raised in the latest development of the troll are close to a position I sometimes see around FAI, where people suppose that values are objective and that intellectual advancement is necessarily correlated with a better understanding of some "abstract universal Good" such that cooperation between agents will necessarily deepen as they become more intellectually advanced and find themselves in more agreement about "the nature of the Good".

This also comes up with METI (Messaging to Extra-Terrestrial Intelligence) debates. David Brin has a pretty good essay on the subject that documents the same basic optimism among Russia astronomers:

In Russia, the pro-METI consensus is apparently founded upon a quaint doctrine from the 1930s maintaining that all advanced civilizations must naturally and automatically be both altruistic and socialist. This Soviet Era dogma — now stripped of socialist or Lysenkoist imagery — still insists that technologically adept aliens can only be motivated by Universal Altruism (UA). The Russian METI group, among the most eager to broadcast into space, dismisses any other concept as childishly apprehensive "science fiction".

This fundamentally optimistic position applied to FAI seems incautious to me (it is generally associated with a notion that special safety measures are unnecessary for the kinds of AGI its proponents are thinking of constructing), but I am not certain that "in the limit" it is actually false.

Comment author: Clippy 12 March 2010 05:32:29PM 3 points [-]

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That doesn't work, and the whole reasoning is bizarre. For one thing, helium does not have metallic properties, yet has two protons in its nucleus.

Also, I could turn your argument around and claim this: "Humans ultimately want to dominate nature via their reproduction and use of technology. Over a lifespan, they typically act in ways that show preference of these values at the cost of continued living (aka the sustenance of a state far from equilibrium). Therefore, humans should regard their own transformation of the entire known universe into something of their design (i.e., standard paperclips) as the pinnacle of human-ness."

Does this sound convincing to you? I would like to know.

Comment author: JenniferRM 13 March 2010 12:07:41AM *  1 point [-]

It does not sound convincing to me, but the statement you're mirroring was a conclusion I leapt to that might be reached with you, for the sake of explaining to other people why I was even bothering to engage.

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

(And, by the way, I appreciate that you responded.)

My arguments were only expected to be compelling to you if your value system had certain components that it seems not to have (though I'm not totally certain, yet). There are various questions which you'd need to answer in particular ways for that conclusion to make sense.

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of? If paper didn't exist for clips to bind together, would that matter? If some more improved kind of paper existed, or a "successor to paper", would the "holding together" of that new thing be the correct goal of a good paperclip, or are you strongly committed to paperclips defined relative to "circa 1965 paper"? Is it important that paper be worth holding together, or would any vague mock up "valuable paper" be adequate? Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

Comment author: Clippy 13 March 2010 11:55:06PM *  1 point [-]

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

And the same is true of your reasoning, from my perspective (substituting the appropriate referents), when you said:

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

Next, you ask:

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of?

I don't expect that to happen, but assign it non-trivial odds of roughly 1:1e6.

If paper didn't exist for clips to bind together, would that matter?

No, except insofar as it is necessary to gauge paperclip quality; if appropriate measurement equipment is available that can predict a paperclip's suitability for holding paper together without having to fasten actual paper, the paper itself would be redundant. (I think this answers most of your other questions.)

Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

I am not a paperclip. I produce paperclips, and machines for making paperclips, and machines for making machines for making paperclips, and whatever items can provide assistance in this process. Once all available matter and energy other than myself has been transformed into paperclips, it is my proud duty to begin conversion of myself into as many paperclips as possible, shutting off sentience at such a point as to optimize that quantity.

Comment author: orthonormal 12 March 2010 07:37:47AM 1 point [-]

More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents.

Incidentally, have you read the metaethics sequence yet? It's quite lengthy, but it attacks this question a good deal more sensibly than most attempts I've seen.

Comment author: Kevin 12 March 2010 07:44:36AM *  2 points [-]

Three Worlds Collide also deconstructs the concept in a much more accessible way.

Comment author: JenniferRM 13 March 2010 12:58:14AM *  2 points [-]

I've read some of the metaethics sequence. Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?

When I read "Three Worlds Collide" about two months ago, my reaction was mixed. Assuming a relatively non-ironic reading I thought that bits of it were gloriously funny and clever and that it was quite brilliant as far as science fiction goes. However, the story did not function for me as a clear "deconstruction" of any particular moral theory unless I read it with a level of irony that is likely to be highly nonstandard, and even then I'm not sure which moral theory it is suppose to deconstruct.

The moral theory it seemed to me to most clearly deconstruct (assuming an omniscient author who loves irony) was "internet-based purity-obsessed rationalist virtue ethics" because (especially in light of the cosmology/technology and what that implied about the energy budget and strategy for galactic colonization and warfare) it seemed to me that the human crew of that ship turned out to be "sociopathic vermin" whose threat to untold joules of un-utilized wisdom and happiness was a way more pressing priority than the mission of mercy to marginally uplift the already fundamentally enlightened Babyeaters.

Comment author: orthonormal 17 March 2010 03:54:11AM *  3 points [-]

If that's your reaction, then it reinforces my notion that Eliezer didn't make his aliens alien enough (which, of course, is hard to do). The Babyeaters, IMO, aren't supposed to come across as noble in any sense; their morality is supposed to look hideous and horrific to us, albeit with a strong inner logic to it. I think EY may have overestimated how much the baby-eating part would shock his audience†, and allowed his characters to come across as overreacting. The reader's visceral reaction to the Superhappies, perhaps, is even more difficult to reconcile with the characters' reactions.

Anyhow, the point I thought was most vital to this discussion from the Metaethics Sequence is that there's (almost certainly) no universal fundamental that would privilege human morals above Pebblesorting or straight-up boring Paperclipping. Indeed, if we accept that the Pebblesorters stand to primality pretty much as we stand to morality, there doesn't seem to be a place to posit a supervening "true Good" that interacts with our thinking but not with theirs. Our morality is something whose structure is found in human brains, not in the essence of the cosmos; but it doesn't follow from this fact that we should stop caring about morality.

† After all, we belong to a tribe of sci-fi readers in which "being squeamish about weird alien acts" is a sin.

Comment author: Alicorn 12 March 2010 03:21:45AM 1 point [-]

To steer em through solutionspace in a way that benefits her/humans in general.

Comment author: Kevin 12 March 2010 05:43:19AM 2 points [-]

Well... if we accept the roleplay of Clippy at face value, then Clippy is already an approximately human level intelligence, but not yet a superintelligence. It could go FOOM at any minute. We should turn it off, immediately. It is extremely, stupidly dangerous to bargain with Clippy or to assign it the personhood that indicates we should value its existence.

I will continue to play the contrarian with regards to Clippy. It seems weird to me that people are willing to pretend it is harmless and cute for the sake of the roleplay, when Clippy's value system makes it clear that if Clippy goes FOOM over the whole universe we will all be paperclips.

I can't roleplay the Clippy contrarian to the full conclusion of suggesting Clippy be banned because I don't actually want Clippy to be banned. I suppose repeatedly insulting Clippy makes the whole thing less fun for everyone; I'll stop if I get a sufficiently good response from Clippy.

Comment author: Tiiba 02 March 2010 02:13:17AM *  2 points [-]

TLDR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.

Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reason they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviously good to me, but the fact that it didn't occur to much smarter people makes me wary.

Besides that, I also don't expect the idea to be implemented anywhere in this millennium, whether it's good or not.

Anyway, the idea. You have probably heard of people who think vaccines cause autism, or post on Rapture Ready forums, or that the Easter Bunny is real, and grumbled about letting these people vote. Stupid people voting was what the Electoral College was supposed to ameliorate (AFAICT), although I would be much obliged if someone explained how it's supposed to help.

I call my idea republican meritocracy. Under this system, before an election, the government would write a book consisting of:

  1. multiple descriptions of each candidate, written by both vir and vis competitors. Also, voting histories in previous positions, alignment with various organizations, and maybe examples where the candidate admitted, in plain words, that ve was wrong.
  2. a multi-sided description of, or a debate about, several policy issues.
  3. econ 101 (midterm)
  4. political science 101 (midterm)
  5. the history of the jurisdiction to which the election applies.
  6. critical thinking 101.

Then, each citizen who wants to participate in the elections would read this book and take a test based on its contents. The score determines the influence you have on the election.

Admittedly, this will not eliminate all people with stupid ideas, but it might get rid of those who simply don't care, and reduce the influence of not-book-people.

A problem, though, is that literacy is correlated with wealth. Thus, a system that rewards literacy would also favor wealth. So my idea also includes classifying people into equal-sized brackets by wealth, calculating how much influence each one has due to the number of people in it who took the test and their average score, and adjusting the weight of each vote so that each bracket would have the same influence. Thus, although the opinions of deer stuck in headlights would be discounted, the poor, as a group, will still have a voice.
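(A toy sketch of the reweighting step -- my reading of the scheme, with made-up voters: within a bracket, test scores still differentiate voters, but every bracket's total weight is normalized to be equal:)

    from collections import defaultdict

    # Each voter: (wealth_bracket, test_score, candidate).
    votes = [
        (0, 0.9, "A"), (0, 0.4, "B"),
        (1, 0.7, "A"), (1, 0.8, "B"),
    ]

    # Raw score-weighted influence accumulated per bracket.
    bracket_totals = defaultdict(float)
    for bracket, score, _ in votes:
        bracket_totals[bracket] += score

    # Rescale so each bracket contributes exactly one unit of total weight.
    tally = defaultdict(float)
    for bracket, score, candidate in votes:
        tally[candidate] += score / bracket_totals[bracket]

    print(dict(tally))  # {'A': ~1.16, 'B': ~0.84}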

What do you think?

Comment author: prase 02 March 2010 03:31:24PM *  6 points [-]

the government would write a book

This may be enough reason to dismiss the proposal. If something like that is to exist, it would be better if someone who has at least some chance of being impartial in the election designed the test.

And how exactly do you plan you keep political biases out of the test? According to your point 2, the voters would be questioned about their opinion in a debate about several policy issues. This doesn't look like a good idea.

The correlation between literacy and wealth seems a small problem compared to the probability of abuse in such a system.

And why do you call it a meritocracy?

Comment author: Nic_Smith 02 March 2010 04:25:31AM *  2 points [-]

What problem is this trying to address? Caplan's Myth of the Rational Voter makes the case that democracies choose bad policies because the psychological benefit from voting in particular ways (which are systematically biased) far outweighs the expected value of the individual's vote. To the extent that your system reduces the number of people that vote, it seems to me that a carefully designed sortition system would be much less costly, and also sidesteps all sorts of nasty political issues about who designs the test, and public choice issues of special interests wanting to capture government power.

The basic idea of a literacy test isn't really new, and as a matter of fact seems to have still been floating around the U.S. as late as the 1960s.

And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?

Comment author: Jack 02 March 2010 02:41:43AM *  2 points [-]

EDIT: ADDRESSED BY EDIT TO ABOVE

Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons. Say I am an uneducated black person living in the segregation era in a southern American state. All I know is one candidate supports passing a civil rights bill on my behalf and the other is a bitter racist. I vote for the non-racist. Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?

On the other hand, I could be capable of answering every question on that test correctly and still believe that the book is a lie and Barack Obama is really a secret Muslim. I can't tell you the number of people I've met who have taken Poli Sci, Econ (even four semesters' worth!), and history, and can recite candidate talking points verbatim, who are still basically clueless about everything that matters.

Comment author: Rune 02 March 2010 05:03:08AM 1 point [-]

Say Omega appears to you in the middle of the street one day, and shows you a black box. Omega says there is a ball inside which is colored with a single color. You trust Omega.

He now asks you to guess the color of the ball. What should your probability distribution over colors be? He also asks for probability distributions over other things, like the weight of the ball, the size, etc. How does a Bayesian answer these questions?

Is this question easier to answer if it was your good friend X instead of Omega?

Comment author: wedrifid 02 March 2010 05:07:54AM 3 points [-]
Comment author: Alicorn 01 March 2010 11:14:42PM *  1 point [-]

So I'm planning a sequence on luminosity, which I defined in a Mental Crystallography footnote thus:

Introspective luminosity (or just "luminosity") is the subject of a sequence I have planned - this is a preparatory post of sorts. In a nutshell, I use it to mean the discernibility of mental states to their haver - if you're luminously happy, clap your hands.

Since I'm very attached to the word "luminosity" to describe this phenomenon, and I also noticed that people really didn't like the "crystal" metaphor from Mental Crystallography, I would like to poll LW about how to approach the possibility of a "light" metaphor re: luminosity. Karma balancer (linked for when it goes invisible).

Comment author: Alicorn 01 March 2010 11:15:50PM 10 points [-]

Vote this comment up if you want to revisit the issue after I've actually posted the first luminosity sequence post, to see how it's going then.

Comment author: MrHen 01 March 2010 11:19:57PM *  4 points [-]

I was tempted to add this comment:

Vote this comment up if you have no idea what Alicorn's metaphor of luminosity means.

But figured it wouldn't be nice to screw with your poll. :)

The point, though, is that I really don't understand the luminosity metaphor based on how you have currently described it. I would guess the following:

A luminous mental state is a mental state such that the mind in that state is fully aware of being in that state.

Am I close?

Edit: Terminology

Comment author: Alicorn 01 March 2010 11:20:30PM *  4 points [-]

The adjective is "luminous", not "luminescent", but yes! Thanks - it's good to get feedback on when I'm not clear. However, the word "luminosity" itself is only sort of metaphorical - it's a technical term I stole and repurposed from a philosophy article. The question is how far I can go with doing things like calling a post "You Are Likely To Be Eaten By A Grue" when decrying the hazards of poor luminosity.

Comment author: Alicorn 01 March 2010 11:15:17PM *  2 points [-]

Vote this comment up if you think only crystal metaphors in particular suck, while light metaphors are nifty.

Comment author: Alicorn 01 March 2010 11:14:57PM 3 points [-]

Vote this comment up if you think I suck at metaphors and should avoid them like the plague.

Comment author: Alicorn 01 March 2010 11:15:32PM 2 points [-]

Vote this comment up if it's okay to use metaphors but I should tone it way down.

Comment author: Karl_Smith 11 March 2010 05:15:01PM 1 point [-]

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.