A Day Without Defaults

30 katydee 20 October 2014 08:07AM

Author's note: this post was written on Sunday, Oct. 19th. Its sequel will be written on Sunday, Oct. 26th.

Last night, I went to bed content with a fun and eventful weekend gone by. This morning, I woke up, took a shower, did my morning exercises, and began eating breakfast before making the commute up to work.

At the breakfast table, though, I was surprised to learn that it was Sunday, not Monday. I had misremembered what day it was and in fact had an entire day ahead of me with nothing on the agenda. At first, this wasn't very interesting, but then I started thinking. What to do with an entirely free day, without any real routine?

I realized that I didn't particularly know what to do, so I decided that I would simply live a day without defaults. At each moment of the day, I would act only in accordance with my curiosity and genuine interest. If I noticed myself becoming bored, disinterested, or otherwise less than enthused about what was going on, I would stop doing it.

What I found was quite surprising. I spent much less time doing routine activities like reading the news and browsing discussion boards, and much more time doing things that I've "always wanted to get around to"-- meditation, trying out a new exercise routine, even just spending some time walking around outside and relaxing in the sun.

Further, this seemed to actually make me more productive. When I sat down to get some work done, it was because I was legitimately interested in finishing my work and curious whether a new method I had thought up would solve the problem. I was able to resolve something that had been annoying me for a while in much less time than I thought it would take.

By the end of the day, I started thinking "is there any reason that I don't spend every day like this?" As far as I can tell, there isn't really. I do have a few work tasks that I consider relatively uninteresting, but there are multiple solutions to that problem that I suspect I can implement relatively easily.

My plan is to spend the next week doing the same thing that I did today and then report back. I'm excited to let you all know what I find!

Cryonics in Europe?

16 roland 10 October 2014 02:58PM

What are the best options for cryonics in Europe?

AFAIK the best option is still to use one of the US providers (e.g. Alcor) and arrange for transportation. There is a problem with this, though, in that until you arrive in the US your body will be cooled with dry ice, which will cause huge ischemic damage.

Questions:

  1. How critical is the ischemic damage? If I interpret this comment by Eliezer correctly, we shouldn't worry about this damage if we consider future technology.
  2. Is there a way to have adequate cooling here in Europe until you arrive in the US for final storage?

There is also KrioRus, a Russian cryonics company; they seem to offer an option of cryo transportation, but I don't know how trustworthy they are.

Inferential silence

44 Kaj_Sotala 25 September 2013 12:45PM

Every now and then, I write an LW comment on some topic and feel that the contents of my comment pretty much settle the issue decisively. Instead, the comment seems to get ignored entirely - it either gets very few votes or none, nobody responds to it, and the discussion generally continues as if it had never been posted.

Similarly, every now and then I see somebody else make a post or comment that they clearly feel is decisive, but which doesn't seem very interesting to me. Either it seems to be saying something obvious, or I don't get its connection to the topic at hand in the first place.

This seems like it would be about inferential distance: either the writer doesn't know the things that make the reader experience the comment as uninteresting, or the reader doesn't know the things that make the writer experience the comment as interesting. So there's inferential silence - a sufficiently long inferential distance that a claim doesn't provoke even objections, just uncomprehending or indifferent silence.

But "explain your reasoning in more detail" doesn't seem like it would help with the issue. For one, we often don't know beforehand when people don't share our assumptions. Also, some of the comments or posts that seem to encounter this kind of a fate are already relatively long. For example, Wei Dai wondered why MIRI-affiliated people don't often respond to his posts that raise criticisms, and I essentially replied that I found the content of his post relatively obvious so didn't have much to say.

Perhaps people could more often explicitly comment if they notice that something that a poster seems to consider a big thing doesn't seem very interesting or meaningful to them, and briefly explain why? Even a sentence or two might be helpful for the original poster.

How to Run a Successful Less Wrong Meetup

55 Kaj_Sotala 12 June 2012 09:32PM

Always wanted to run a Less Wrong meetup, but been unsure of how? The How to Run a Successful Less Wrong Meetup booklet is here to help you!

The 33-page document draws from consultations with more than a dozen Less Wrong meetup group organizers. Stanislaw Boboryk created the document design. Luke provided direction, feedback, and initial research, and I did almost all the writing.

The booklet starts by providing some motivational suggestions on why you'd want to create a meetup in the first place, and then moves on to the subject of organizing your first one. Basics such as choosing a venue, making an announcement, and finding something to talk about once at the meetup are all covered. This section also discusses pioneering meetups in foreign cities and restarting inactive meetup groups.

For those who have already established a meetup group, the booklet offers suggestions on things such as attracting new members, maintaining a pleasant atmosphere, and dealing with conflicts within the group. The "How to Build Your Team of Heroes" section explains the roles that are useful for a meetup group to fill, ranging from visionaries to organizers.

If you're unsure of what exactly to do at meetups, the guide describes many options, from different types of discussions to nearly 20 different games and exercises. All the talk and philosophizing in the world won't do much good if you don't actually do things, so the booklet also discusses long-term projects that you can undertake. Some people attend meetups to just have fun and to be social, and others to improve themselves and the world. The booklet has been written to be useful for both kinds of people.

In order to inspire you and let you see what others have done, the booklet also has brief case studies and examples from real meetup groups around the world. You can find these sprinkled throughout the guide.

This is just the first version of the guide. We will continue working on it. If you find mistakes, or think that something is unclear, or would like to see some part expanded, or if you've got good advice you think should be included... please let me know! You can contact me at kaj.sotala@intelligence.org.

A large number of people have helped in various ways, and I hope that I've remembered to mention most of them in the acknowledgements. If you've contributed to the document but don't see your name mentioned, please send me a message and I'll have that fixed!

The booklet has been illustrated with pictures from various meetup groups. Meetup organizers sent me the pictures for this use, and I explicitly asked them to make sure that everyone in the photos was fine with it. Regardless, if there's a picture that you find objectionable, please contact me and I'll have it replaced with something else.

Bayesianism for humans: "probable enough"

38 BT_Uytya 02 September 2014 09:44PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before.
I like lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. The second penny is here.



"Probable enough"

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.


The Bayesian way of thinking introduced me to the idea of a "hypothesis which probably isn't true, but is probable enough to rise to the level of conscious attention" — in other words, to situations where P(H) is notable but less than 50%.

Looking back, I think that the notion of taking seriously something which you don't think is true was alien to me. Hence, everything was either probably true or probably false; things from the former category were over-confidently certain, and things from the latter category were barely worth thinking about.

This model was correct, but only in a formal sense.

Suppose you are living in Gotham, the city famous for its crime rate and its masked (and well-funded) vigilante, Batman. Recently you read The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, and according to some theories described there, Batman isn't good for Gotham at all.

Now you know, for example, Donald Black's theory that "crime is, from the point of view of the perpetrator, the pursuit of justice". You know about the idea that in order for the crime rate to drop, people should perceive their legal system as legitimate. You suspect that criminals beaten by Bats don't perceive the act as a fair and regular punishment for something bad, or as an attempt to defend others from injustice; instead the act is perceived as a round of bad luck. So the criminals are busy plotting their revenge, not internalizing civil norms.

You believe that if you send your copy of the book (with key passages highlighted) to a person connected to Batman, Batman will change his ways and Gotham will become much nicer in terms of homicide rate.

So you are trying to find out Batman's secret identity, and there are 17 possible suspects. Derek Powers looks like a good candidate: he is wealthy, and has a long history of secretly delegating illegal-violence-including tasks to his henchmen; however, his motivation is far from obvious. You estimate P(Derek Powers employs Batman) as 20%. You have very little information about other candidates, like Ferris Boyle, Bruce Wayne, Roland Daggett, Lucius Fox or Matches Malone, so you assign an equal 5% to everyone else.

In this case you should pick Derek Powers as your best guess when forced to name only one candidate (for example, if you are forced to send the book to someone today), but you should also be aware that your guess is 80% likely to be wrong. When making expected utility calculations, you should take Derek Powers more seriously than Lucius Fox, but only by 15 percentage points.

In other words, you should take the maximum a posteriori (MAP) hypothesis into account while not deluding yourself into thinking that you now understand everything or nothing at all. The Derek Powers hypothesis probably isn't true; but it is useful.
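The arithmetic of the example can be sketched in a few lines of Python, using the post's made-up numbers (the suspects beyond the five named ones are hypothetical placeholders):

```python
# The toy prior over 17 suspects: Derek Powers at 20%,
# the remaining 16 suspects at 5% each.
named = ["Ferris Boyle", "Bruce Wayne", "Roland Daggett",
         "Lucius Fox", "Matches Malone"]
placeholders = [f"Suspect {i}" for i in range(11)]  # hypothetical fillers

priors = {"Derek Powers": 0.20}
priors.update({name: 0.05 for name in named + placeholders})

assert abs(sum(priors.values()) - 1.0) < 1e-9  # 0.20 + 16 * 0.05 = 1

# Best single guess if forced to send the book today (the MAP hypothesis)...
map_suspect = max(priors, key=priors.get)
# ...which is nonetheless 80% likely to be wrong.
p_map_wrong = 1.0 - priors[map_suspect]
```

Derek Powers comes out as the maximum a posteriori pick with `p_map_wrong` at 0.8: worth acting on, not worth believing outright.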

Sometimes I find it easier to reframe the question from "what hypothesis is true?" to "what hypothesis is probable enough?". Now it's totally okay that your pet theory isn't probable, just probable enough, so doubt becomes easier. Also, you are aware that your pet theory is likely to be wrong (and this is nothing to be sad about), so the alternatives come to mind more naturally.

These "probable enough" hypothesis can serve as a very concise summaries of state of your knowledge when you simultaneously outline the general sort of evidence you've observed, and stress that you aren't really sure. I like to think about it like a rough, qualitative and more System1-friendly variant of Likelihood ratio sharing.

Planning Fallacy

The original explanation of the planning fallacy (proposed by Kahneman and Tversky) is that people focus on the most optimistic scenario when asked about a typical one (instead of trying to take an Outside View). If you keep the distinction between "probable" and "probable enough" in mind, you can see this claim in a new light.

Because the most optimistic scenario is the most probable and the most typical one, in a certain sense.

The illustration, with numbers pulled out of thin air, goes like this: so, you want to visit a museum.

The first thing you need to do is get dressed and take your keys and stuff. Usually (with 80% probability) you do this very quickly, but there is a weak possibility of your museum ticket having been devoured by an entropy monster living on your computer table.

The second thing is to catch the bus. Usually (p = 80%) the bus is on schedule, but sometimes it is too early or too late. After this, the bus could (20%) or could not (80%) get stuck in a traffic jam.

Finally, you need to find the museum building. You've been there once before, so you sort of remember your route, yet you could still get lost with 20% probability.

And there you have it: P(everything is fine) = 0.8^4 ≈ 40%, and the probability of every other scenario is about 10% or less. "Everything is fine" is probable enough, yet likely to be false. Supposedly, humans pick the MAP hypothesis and then forget about every other scenario in order to save computation.
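The museum example can be checked by enumerating all sixteen scenarios; a minimal sketch, assuming the post's thin-air numbers and treating the four steps as independent:

```python
from itertools import product

P_OK = 0.8  # each of the four steps goes fine with probability 80%
# Steps: get ready, bus on schedule, no traffic jam, find the building.

# Probability of every possible scenario (True = step went fine).
scenario_probs = {}
for outcome in product([True, False], repeat=4):
    p = 1.0
    for step_fine in outcome:
        p *= P_OK if step_fine else (1.0 - P_OK)
    scenario_probs[outcome] = p

all_fine = (True, True, True, True)
p_all_fine = scenario_probs[all_fine]  # 0.8**4, roughly 0.41

# "Everything is fine" is the single most probable (MAP) scenario,
# yet it is still more likely than not to be false.
most_probable = max(scenario_probs, key=scenario_probs.get)
```

Each specific one-hiccup scenario comes out at 0.2 × 0.8³ ≈ 0.10, so "everything is fine" is four times more probable than any rival, while still failing to reach 50%.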

Also, "everything is fine" is a good description of your plan. If your friend asks you, "so how are you planning to get to the museum?", and you answer "well, I catch the bus, get stuck in a traffic jam for 30 agonizing minutes, and then just walk from here", your friend is going  to get a completely wrong idea about dangers of your journey. So, in a certain sense, "everything is fine" is a typical scenario. 

Maybe it isn't the human inability to pick the most likely scenario that should be blamed. Maybe it is the false assumption that "most likely == likely to be correct" which contributes to this ubiquitous error.

In this case you would have been better off picking "something will go wrong, and I will be late" instead of "everything will be fine".

So, sometimes you are interested in the best specimen in your hypothesis space, sometimes you are interested in the most likely thing (however vague it might be), and sometimes there are no shortcuts, and you have to do an actual expected utility calculation.

One Year of Pomodoros

22 alexvermeer 01 January 2014 09:27PM

(Pomodoros have been talked about a bunch on LW. I, like elharo, first started using the technique after attending a CFAR workshop. Cross-posted from my blog.)

The pomodoro technique is, in short, starting a timer and doing 25 minutes of focused work on a single task without interruption, followed by a five minute break. Choose a new task, restart the timer, and repeat.
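The loop described above can be sketched as a tiny scheduler (a hypothetical illustration, not the app the author actually used):

```python
def pomodoro_schedule(tasks, work_min=25, break_min=5):
    """Build a (label, minutes) plan: one timed work block per task,
    each followed by a short break, as the technique prescribes."""
    plan = []
    for task in tasks:
        plan.append((f"work: {task}", work_min))
        plan.append(("break", break_min))
    return plan
```

Feeding each entry of `pomodoro_schedule(["draft report", "email"])` to a timer, one block at a time, is the whole technique.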

Throughout 2013 I used pomodoros to execute on pretty much all of my life projects, organized into the following categories:

  • work – at MIRI
  • bizdev – other income-generating projects
  • growth – personal development projects (e.g. reading books, taking notes, making Anki decks; monthly reviews)
  • misc – miscellaneous life maintenance projects (e.g. banking stuff, knocking off a bunch of small todo’s, house cleanup)
  • health – exercise projects (mostly climbing, some running, some misc other stuff)

The Result: 5,008 Pomodoros

The end result was 2,504 hours of recorded work—5,008 pomodoros in total: 

Stacked Pomodoros by Week in 2013

2013pomodoros

A summary, by category (with hours in brackets):

  • work – 2,457 (1,228.5h) – 47.3 (23.7h) avg/week
  • bizdev – 700 (350h) – 13.5 (6.7h) avg/week
  • growth – 996 (498h) – 19.2 (9.6h) avg/week
  • misc – 448 (224h) – 8.6 (4.3h) avg/week
  • health – 407 (203.5h) – 7.8 (3.9h) avg/week

Grand Total: 5,008 (2,504h) – 96.3 (48.2h) avg/week

My version of the pomodoro technique

To be clear, I didn’t use the pomodoro technique 100% faithfully. For certain things here, such as most Health (exercise) stuff, I never actually ran a pomodoro timer. But since I had a system for tracking where and how I spent my time, and since “claiming” all that time helped motivate me (e.g. to climb regularly), I included them.

Ways I deviate from the “true” pomodoro technique:

  • I don’t always take breaks. For example, if I do two pomodoros, get in the zone, and work for another two hours straight, I’d still record that as 6 pomodoros (3 hours) total.
  • I don’t always use a timer. Sometimes I just start working, remembering to take small intermittent breaks, and record the total time in pomodoros (4h of work = 8 pomodoros).
  • I don’t record interruptions. You’re supposed to track all internal and external interruptions, but I don’t bother with that. I merely try to remain conscious of interruptions and eliminate/avoid them as much as possible.
  • I don’t let interruptions cancel out pomodoros. Let’s say I work for fifteen minutes and someone comes in to chat about something important that’s been on their mind. I know that “a pomodoro is indivisible”, but screw it, I chat, and when the conversation ends I count a pomodoro after ten more minutes of work. Pomodoro blasphemy? Maybe.
  • I don’t always set targets. I don’t constantly set detailed pomodoro targets and track how many pomodoros were actually required. I only do this occasionally if I think my estimating ability is getting really off. I do set weekly pomodoro targets by category.

How did I track?

Near the end of 2012 I whipped up a simple web app that I use for tracking all of my pomodoros. Here’s a sample screenshot from a week from earlier this year:

pomodoro-tracker

Every pomodoro added is given a description, project, major area, and count. This way I can view all pomodoros by project, area, over a given date range, etc. (I’m pretty sure there are other apps out there that let you do basically the same thing, but I haven’t taken much time to explore them.)

Why I think it’s worked really well for me

Of all the productivity hacks I’ve tried over the last decade, the pomodoro technique was, hands down, the most effective for me. My thoughts on why it has worked so well:

  • It helps you start – start the timer and then just start working. You’ve already decided what to work on, so just start already.
  • It helps you focus on one thing at a time – work on only one thing and ignore everything else.
  • It helps you prioritize – look at your lists/projects/tasks/whatever, pick the most important thing to work on, and then just start already.
  • It helps create success spirals – when you have 5 successful pomodoros under your belt, it’s motivation to keep going.

In summary, if you haven’t yet, I highly recommend giving the pomodoro technique a try.

Extreme Rationality: It's Not That Great

140 Yvain 09 April 2009 02:44AM

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

continue reading »

Mandatory Secret Identities

28 Eliezer_Yudkowsky 08 April 2009 06:10PM

Previously in series: Whining-Based Communities

"But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy.  I expected much of them, and they came to expect much of themselves." —Jeffreyssai

Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...

...becoming a teacher and having their own martial arts dojo someday.

To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything.  Writers tend to look down on literary critics' understanding of the art form itself, for just this reason.  (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying "This wine has a great bouquet", and goes off to tell their students "You've got to make sure your wine has a great bouquet".  When the student asks, "How?  Does it have anything to do with grapes?" the critic replies disdainfully, "That's for grape-growers!  I teach wine.")

Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn.  You do that on Sundays, or full-time after you retire.

And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities.  They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors.  And to enforce this, I suggest the rule:

  Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))

That is, you can't respect someone as a rationality instructor, more than you would respect them if they were not rationality instructors.
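The rule is just a component-wise minimum; as a sketch, with respect measured on an arbitrary hypothetical scale:

```python
def rationality_respect(as_instructor, outside_the_conspiracy):
    """Eliezer's proposed cap: respect for someone as a rationality
    instructor cannot exceed the respect they would command
    if they were not a rationality instructor."""
    return min(as_instructor, outside_the_conspiracy)
```

A brilliant teacher with no outside accomplishments (9, 2) caps out at 2, while an accomplished practitioner who also teaches (7, 8) keeps their 7.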

continue reading »

Tell Culture

109 BrienneYudkowsky 18 January 2014 08:13PM

Followup to: Ask and Guess

Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes” or “no”.

Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.

The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".

The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".

Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior. 

But these are not the only two possibilities!

"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”

There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".

The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.

Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.

  • Guess: *You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot.* (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time…”) 
  • Ask: “Can we talk about this another time?”
  • Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."

Here are more examples from my own life:

  • "I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal." 
  • "I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen." 
  • "It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around." 

The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive aggressive or manipulative, much worse than the rude bluntness of mere Ask. It’s because Guess people aren’t expecting relentless truth-telling, which is exactly what’s necessary here.

If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take advantage of their reciprocity instincts, when in fact you’ll count them as having defected if they respond by stating a preference for protecting their own interests.

Tell culture is cooperation with open source codes.

This kind of trust does not develop overnight. Here is the most useful Tell tactic I know of for developing that trust with a native of Ask or Guess culture. It’s saved me sooooo much time and trouble, and I wish I’d thought of it earlier.

"I'm not asking because I expect you to say ‘yes’. I'm asking because I'm having trouble imagining the inside of your head, and I want to understand better. You are completely free to say ‘no’, or to tell me what you’re thinking right now, and I promise it will be fine." It is amazing how often people quickly stop looking shifty and say 'no' after this, or better yet begin to discuss further details.

Universal Fire

63 Eliezer_Yudkowsky 27 April 2007 09:15PM

In L. Sprague de Camp's fantasy story The Incomplete Enchanter (which set the mold for the many imitations that followed), the hero, Harold Shea, is transported from our own universe into the universe of Norse mythology.  This world is based on magic rather than technology; so naturally, when Our Hero tries to light a fire with a match brought along from Earth, the match fails to strike.

I realize it was only a fantasy story, but... how do I put this...

No.

continue reading »

View more: Prev | Next