Risto_Saarelma comments on Open Thread: March 4 - 10 - Less Wrong

Post author: Coscott 04 March 2014 03:55AM


Comment author: Viliam_Bur 05 March 2014 11:51:24AM  5 points

In the previous Open Thread NancyLebovitz posted an article about the living-Biblically-for-one-year guy deciding to try living one year rationally. Alicorn noticed that the article was from 2008, so the project was probably cancelled.

However, I was thinking: if someone tried to do this, what would be the best way to do it? (It's easy to imagine wrong ways: Hollywood rationality, etc.) We can assume that the person trying this experiment is not among the most rational people in the world, because those would already be too busy optimizing the universe and wouldn't have a year to spend on such an experiment. Also, they would probably already be living pretty rationally, so there would be no big change in their life, and therefore no interesting report. (Although participation in the experiment might create some extra incentive to behave rationally more consistently.) On the other hand, a person too irrational would not be able to perform the task successfully. So let's assume that the experimental subject is... maybe an average LW reader, or someone generally LW-compatible who hasn't found the website yet. (This also assumes that the LW model of rationality is approximately correct. Without this assumption it doesn't make much sense to discuss the best strategy here.)

So... let's suppose we have a volunteer who says: "I will try living the next year as rationally as possible, within my limits of course, so give me advice on how to do it best. (In exchange, I promise to keep logs and diaries and to publish the whole story, which could create some publicity for LW and CFAR.)" What advice would we give them?

A good piece of meta-advice would be to keep a feedback loop with other aspiring rationalists: not just take some initial advice, go away, return after one year with the report, and risk getting a "you completely misunderstood it" reaction. Instead, they should stay in contact; the question is merely how frequent and how detailed the optimal contact would be, to avoid wasting too much time in web discussions. I could imagine: asking specific difficult questions whenever necessary, and writing a detailed report every month, with plans for the following month, so people on LW could comment on the strategy. Of course, even this decision could be discussed on LW first.

Now this feels a bit like cheating. Are we trying to test what one person can achieve during a year of living rationally, or are we using the LW hive-mind to optimize the person? In other words, would the results of the experiment speak about the benefits of rationality for one person, or about the benefits of having the LW hive-mind available? Hmm... maybe there is actually no difference. It is rational to use the best tools available: the virtue of scholarship, optimizing our social environment, the munchkin attitude, etc. For a munchkin, there is no such thing as "cheating"; there is only more or less winning.

But the important question is what the goal of this experiment is. Is it optimizing one person's life? Or is it describing a strategy that dozens of other people may follow? Because if too many people decide to follow it, the LW hive-mind may be unable to provide quality advice to all of them. On the other hand, such an event might motivate the LW hive-mind to become stronger and invent more efficient ways of supporting aspiring rationalists.

Still, I guess some forms of cheating should be prohibited. For example, if a poor person volunteers for the project and some people from LW send them money, they could then rationalize it as winning by being rational, even if they did nothing else smart. ("What? In their situation it was rational to volunteer for the rationality experiment and ask people for money. It was a strategy that successfully increased their utility, and rationality by definition is winning.") On the other hand, if the person asks LW members for expert advice in a domain they didn't study, I think that is completely fair; that is what they could (and perhaps should) have done even without the experiment. So some kinds of support feel okay, and others do not. Maybe the proper question is: imagine that the day after the report is successfully published, 1000 more people want to try the same strategy. Would we feel that this contributed to our goal of raising the sanity waterline?

I also think that this kind of experiment would be fun, which is probably the main reason I describe it; but as a side effect, if successful, it could be great marketing material. What do you think? Is this "try one year of living as rationally as possible with the support of the LW hive-mind" experiment a good idea? Is anyone interested in volunteering? Are enough people interested in supporting them? (If yes, maybe we could launch the project on April 1st, April Fools' Day, because it's about all of us becoming less foolish, isn't it?)


Comment author: Risto_Saarelma 06 March 2014 12:39:19PM 1 point

Bad idea if you just go "live rationally", imo. I predict it would end up as one of three things: mostly useless cargo-cult behavior by a sufficiently incompetent-and-unaware-of-it participant; going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7; or doing the general wise-living thing that smart people with life experience often already do to the best of their ability, but which you can't really impart in an instruction manual without having the "smart" and "life experience" parts covered.

Might be salvageable if you narrowed it down a bit. Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this. I don't see why "my goals are doing exactly what I already do day in, day out, so I've already been living rationally all this time, thank you very much" would necessarily be incoherent, for example. So maybe go for success by society-wide measuring sticks, like impressive performance in standardised education and good income? A lot of people are doing that, but I'm not seeing much sentiment here for people trying to maximize their earning potential and professional placement as the end goal in life, though some do consider it instrumentally.

So maybe say the goal is to live the good life. Only it seems that the good life consists of goals that are often not quite accessible to the conscious mind, and of methods for pursuing them that can be quite elaborate and often need to be improvised on the spot.

Not to be all bleak and obscurantist, though: there is the Wissner-Gross entropy thing, which is a quite interesting idea for a universal goal heuristic, something like "maneuver to maximize your decision space". It's also pretty squarely in the not-yet-ready-for-human-consumption, will-drive-you-crazy-if-you-try-to-naively-apply-it-24/7 bin. And if you could actually codify how well someone is satisfying a goal like that, you'd probably be getting a PhD and a research job at Google, not running a forum challenge.
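To make the heuristic concrete, here is a toy sketch. It is entirely my own construction for illustration, not Wissner-Gross and Freer's actual causal-entropic-force formulation: a made-up gridworld agent that always steps to the cell from which the most distinct cells remain reachable within a few moves, a crude stand-in for "maximize your decision space".

```python
# Toy sketch of the "maximize your decision space" heuristic.
# Everything here is an illustrative assumption: a tiny gridworld, with
# "decision space" approximated as the number of distinct open cells
# reachable within HORIZON moves. The real Wissner-Gross/Freer proposal
# (causal entropic forces) is a far more involved physical formulation.

GRID = [
    "#######",
    "#.....#",
    "#.###.#",
    "#.....#",
    "#######",
]
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
HORIZON = 4

def open_cell(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def reachable(start, steps):
    """Distinct open cells reachable from `start` in at most `steps` moves."""
    frontier, seen = {start}, {start}
    for _ in range(steps):
        frontier = {
            (r + dr, c + dc)
            for (r, c) in frontier
            for (dr, dc) in MOVES
            if open_cell((r + dr, c + dc))
        } - seen
        seen |= frontier
    return seen

def best_move(pos):
    """Greedily step to the neighbour that keeps the most future options open."""
    r, c = pos
    options = [(r + dr, c + dc) for (dr, dc) in MOVES if open_cell((r + dr, c + dc))]
    return max(options, key=lambda p: len(reachable(p, HORIZON)))

pos = (1, 1)  # start in a corner of the open area
for _ in range(5):
    pos = best_move(pos)
    print(pos)  # favours positions from which many cells stay reachable
```

Even in this toy form, the greedy choice pulls the agent away from dead ends and toward open areas, which is the qualitative behavior the heuristic promises; codifying it for a human life is the part nobody knows how to do.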

Comment author: Viliam_Bur 06 March 2014 03:31:00PM  1 point

The participant could be observed by the LW community; something like a reality show. The costs of observation would have to be weighed, but I imagine the volunteer would provide (a minimal log-format sketch follows the list):

A short log every day. Such as: "Learned 30 new words with Anki. Met with a friend and discussed our plans; seems interested, but we didn't agree on anything specific. Exercise. Wrote two pages of my thesis; the last one needs a rewrite. Spent 3 hours on the internet." Not too detailed, so as not to waste too much time, but detailed enough to give an idea of the progress. (The log would be kept outside of LW, to reduce the volunteer's temptation to procrastinate here.)

A plan every week: what do I want to achieve, and what needs to be done? Something like a GTD plan with "next actions". What could go wrong, and how will I react? What do I want to avoid, and how? At the end of the week: a summary of what happened as expected, what went differently, and what lessons can be drawn. The LW hive-mind would discuss this, and the volunteer could decide to follow its suggestions.

Every month: a comment-sized report in the LW Group Rationality Diary, for the same reason other people write there: to encourage each other.
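To illustrate how lightweight the daily log could be, here is a minimal sketch. The format and the field names are my own invention for illustration, not any established LW or CFAR standard.

```python
# Minimal daily-log sketch for the experiment. All field names are
# illustrative assumptions, not an established format.
import datetime
import json

def log_entry(activities, wins, problems, hours_wasted):
    """One short record per day; terse enough not to become a chore."""
    return {
        "date": datetime.date.today().isoformat(),
        "activities": activities,   # e.g. "30 Anki words, exercise, 2 thesis pages"
        "wins": wins,               # what went better than usual
        "problems": problems,       # what to ask the LW hive-mind about
        "hours_wasted": hours_wasted,
    }

# Append one JSON line per day; a year of logs stays trivially greppable.
with open("rationality_log.jsonl", "a") as f:
    f.write(json.dumps(log_entry(
        "Learned 30 new words with Anki; wrote two thesis pages.",
        "Friend seems interested in the plan.",
        "Last thesis page needs a rewrite.",
        3.0,
    )) + "\n")
```

One line per day is deliberately cheap: the point is to keep the observation cost low enough that keeping the log never competes with doing the actual work.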

going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7

In this case I would recommend giving feedback: "I'm trying to do this, and it drives me crazy. Any advice? I spent five minutes thinking about it, and here are my ideas: X, Y, Z."

doing the general wise-living thing that smart people with life experience often already do to the best of their ability

This could probably be addressed by making a prediction at the beginning of the project. The volunteer would list the changes in the previous years, successes and failures, and extrapolate: "Using my previous years as an outside view, I predict that if I didn't participate in this experiment, I would probably do A, B, C." At the end of the project, the actual outcomes can be compared with the prediction.
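As a rough sketch of that comparison, one could fit a simple linear trend to some self-chosen yearly metric and check the experiment year against the extrapolated counterfactual. The metric and all numbers below are invented for illustration.

```python
# Outside-view baseline sketch: extrapolate a yearly metric from past
# years with ordinary least squares, then compare the experiment year
# against that counterfactual. The metric and numbers are invented.

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y ~ slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years  = [2010, 2011, 2012, 2013]
metric = [4, 5, 5, 6]          # e.g. self-rated productivity, pages written...
slope, intercept = linear_fit(years, metric)

predicted_2014 = slope * 2014 + intercept   # the "no experiment" counterfactual
actual_2014 = 8                              # measured at the end of the year
print(f"predicted {predicted_2014:.1f}, actual {actual_2014}")
# A large positive gap is weak evidence the experiment helped; one person
# over one year is far too little data for a strong conclusion either way.
```

The fit itself is trivial; the hard and honest part is picking the metric before the experiment starts, so the comparison can't be gamed afterwards.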

Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this.

Sure. The goals would be stated by the volunteer, either from the beginning, or at least at the end of the first month.

I don't see why "my goals are doing exactly what I already do day in, day out, so I've already been living rationally all this time, thank you very much" would necessarily be incoherent, for example.

It's perfectly okay. It just does not make sense for this specific person to participate in the experiment. The experiment is meant for people who are not in that situation.

Instead of trying to do the perfect thing immediately, I would recommend continuous improvement. Find the most painful problems and fix them first. Find the obvious mistakes and do better (not best, just better). Progress towards your current goals, but when you realize they were mistaken, improve them. If you think you couldn't make a big change, start with small changes, and once in a while reconsider your beliefs about the big change. The goal is not to be perfect, but to keep improving.

If at the end you are significantly better off than a prediction based on your past, that's a success. If as a side effect we get better experimental data, or if you can rewrite and publish your logs as an e-book to make extra money and advertise CFAR, that's even better. If you inspire a dozen other people, and if most of them also end up significantly better than the predictions based on their pasts, and if the improvement persists even after the experiments end, that would be completely awesome.

The decision of what counts as "better" is of course individual, but I hope there would be a strong correlation. (On the other hand, I would expect differing opinions on what is "best".)