Changes to my workflow

28 paulfchristiano 26 August 2014 05:29PM

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.

For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been further but much smaller improvements, though it is hard to tell (as opposed to the last changes, which were more clearly improvements).

Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days:

I now regularly divide my day into two halves and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them with an hour-long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime:

I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, Facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.

Similarly, I turned off the news feed in Facebook, which I found improved the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the news feed while sending messages over Facebook, which wasn't my favorite way to use up WasteNoTime minutes).

I also tried StayFocusd, but ended up adopting WasteNoTime because of its ability to set limits per half-day (via "At work" and "Not at work" timers) rather than per day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per-half-day timers are much more effective.

Email discipline:

I set Gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal Gmail interface, without being notified of new arrivals. I process the items with the label "In" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half-day. Each night I scan my email quickly for items that require urgent attention.

Todo lists / reminders:

I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.

I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is then copied and filed under a future day. If I feel like I remember a thing well, I file it far in the future; if I feel like I don't remember it well, I file it in the near future.
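This filing rule works like a crude spaced-repetition scheduler. A minimal sketch, with the caveat that the concrete intervals and the doubling rule are my assumptions — the post only distinguishes "far" from "near" future:

```python
import datetime

def refile(reminder, remembered_well, today, near_days=3, growth=2):
    """Copy a reminder under a future day after reviewing it.

    If it felt well-remembered, file it further out than last time
    (previous interval times `growth`); otherwise reset it to the
    near future. The exact numbers are assumptions -- the scheme only
    requires 'far' for well-remembered items and 'near' otherwise.
    """
    if remembered_well:
        interval = reminder.get("interval", near_days) * growth
    else:
        interval = near_days
    return {
        "text": reminder["text"],
        "interval": interval,
        "due": today + datetime.timedelta(days=interval),
    }

today = datetime.date(2014, 8, 26)
r = {"text": "If I agree to do something, set a 5-minute timer that night", "interval": 3}
r = refile(r, remembered_well=True, today=today)      # well remembered: filed 6 days out
r = refile(r, remembered_well=False, today=r["due"])  # forgotten: back to 3 days out
```

The point of the growth factor is that well-remembered items get reviewed less and less often, so the daily reminder list stays short.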

Over the last month most of these reminders have migrated to the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5-minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks which seems to be working well, though it is the newest and least tested part of the system.

Isolating "todos":

I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering them throughout the week.

I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.

Toggl:

I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, and I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but it ends up winning for me by making it very fast to start and switch timers, which is probably the most important criterion for me. It also offers reports that line up well with what I want to look at.

I find the main value adds from detailed time tracking are:

1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.

2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.

Reflection / improvement:

Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.

I have gone back and forth a lot about how much of my time should go into this sort of thing. My best guess is that the number should be higher.

Pomodoros:

I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered; it mostly just happened. I find explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).

For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.

Catch:

Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.

Beeminder:

I no longer use Beeminder. This again wasn't super-considered, though it was based on a very rough impression of the overhead being larger than the short-term gains. I think Beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old Beeminder goals.

Project outlines:

I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.

Randomized trials:

As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.

 

[LINK] Speed superintelligence?

36 Stuart_Armstrong 14 August 2014 03:57PM

From Toby Ord:

Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.

Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.

Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.

Questions on the human path and transhumanism.

-1 HopefullyCreative 12 August 2014 08:34PM

I had a waking nightmare. I know some of you reading this just went "Oh great, here we go..." but bear with me. I am a man who loves to create and build; it is what I have dedicated my life to. One day, because of the Less Wrong community, I was prompted to ask: "What if they are successful in creating an artificial general intelligence whose intellect dwarfs our own?"

My mind raced and imagined the creation of an artificial mind designed to be creative and subservient to man, but also to anticipate our needs and desires. In other words, I imagined current AGI engineers accomplishing the creation of the greatest thing ever. Of course this machine would see how we loathe tiresome, repetitive work, and would design and build a host of machines to do it for us. But then the horror of the implications of all this set in. The AGI will become smarter and smarter through its own engineering, and soon it will anticipate human needs and produce things no human being could dream of. Suddenly man has no work to do: there is no back-breaking labor to be done, nor even the glorious creative work of engineering, exploring and experimenting. Our army of AGIs has robbed us of all that.

At this moment I certainly must stress that this is not a statement amounting to "Let's not make AGI," for we all know AGI is coming. Then what is my point in expressing this? To lay out a train of thought that results in questions that have yet to be answered, in the hope that in-depth discussion may shed some light.

I realized that the only meaning left for man in a world run by AGI would be to order the AGI to make man himself better. Instead of focusing on having the AGI design a world for us, we should use that intellect, which unmodified we could never compare with, to design a means of putting us on its own level. In other words, the goal of creating an AGI should not be to create an AGI, but to make a tool so powerful that we can use it to make man better. Now, I'm quite certain the audience present here is well aware of transhumanism. However, there are some important questions to be answered on the subject:

Mechanical or biological modification? I know many would think, "Are you stupid?! Of course cybernetics would be better than genetic alteration!" Yet the balance of advantages is not as clear as one would think. Let's consider cybernetics for a moment: many implants would require maintenance, they would need to be designed and manufactured and would therefore be quite expensive, and they would need to be installed. Initially, possibly for decades, only the rich could afford such a thing, creating a titanic rift in power. This power gap would of course widen the already substantial resentment between regular folk and the rich, creating political and social uncertainty that we can ill afford in a world with the kind of destructive power nuclear arms present.

Genetic alteration comes with a whole new set of problems: a titanic realm of genetic variables in which tweaking one thing may unexpectedly alter and damage another. Research in this area could take much longer due to the experimentation required. The advantage, however, is that genetic alteration can be accomplished with the help of viruses in controlled environments. There would be no mechanic required to maintain the new being we have created, and if designed properly the modifications could be passed down to the next generation. So instead of having to pay to upgrade each successive generation, we only have to pay to upgrade a single generation. The rich would obviously still be the first to afford the procedure, but it could spread quickly across the globe, given its potentially lower cost once development costs have been seen to. The problem, though, is that we would be fundamentally and possibly irreversibly altering our genetic code. It's possible to keep a gene bank so we have a record of what we were, in the hope that we could undo the changes and revert if the worst happened, yet that is not the greatest problem with this path. We cannot even get the public to accept the concept of genetically altered crops; how can we get the world to accept its genes being altered? The instability created by pushing such a thing too hard, or the power gap between those who have upgraded and those who have not, could again cause instability that is globally dangerous.

So now I ask you, the audience: genetic or cybernetic? How would we solve the political problems associated with each? What are the problems with each?

Roles are Martial Arts for Agency

140 Eneasz 08 August 2014 03:53AM

A long time ago I thought that Martial Arts simply taught you how to fight – the right way to throw a punch, the best technique for blocking and countering an attack, etc. I thought training consisted of recognizing these attacks and choosing the correct responses more quickly, as well as simply faster/stronger physical execution of same. It was later that I learned that the entire purpose of martial arts is to train your body to react with minimal conscious deliberation, to remove “you” from the equation as much as possible.

The reason is of course that conscious thought is too slow. If you have to think about what you’re doing, you’ve already lost. It’s been said that if you had to think about walking to do it, you’d never make it across the room. Fighting is no different. (It isn’t just fighting either – anything that requires quick reaction suffers when exposed to conscious thought. I used to love Rock Band. One day when playing a particularly difficult guitar solo on expert I nailed 100%… except “I” didn’t do it at all. My eyes saw the notes, my hands executed them, and nowhere was I involved in the process. It was both exhilarating and creepy, and I basically dropped the game soon after.)

You’ve seen how long it takes a human to learn to walk effortlessly. That's a situation with a single constant force, an unmoving surface, no agents working against you, and minimal emotional agitation. No wonder it takes hundreds of hours, repeating the same basic movements over and over again, to attain even a basic level of martial mastery. To make your body react correctly without any thinking involved. When Neo says “I Know Kung Fu” he isn’t surprised that he now has knowledge he didn’t have before. He’s amazed that his body now reacts in the optimal manner when attacked without his involvement.

All of this is simply focusing on pure reaction time – it doesn’t even take into account the emotional terror of another human seeking to do violence to you. It doesn’t capture the indecision of how to respond, the paralysis of having to choose between outcomes which are all awful and you don’t know which will be worse, and the surge of hormones. The training of your body to respond without your involvement bypasses all of those obstacles as well.

This is the true strength of Martial Arts – eliminating your slow, conscious deliberation and acting while there is still time to do so.

Roles are the Martial Arts of Agency.

When one is well-trained in a certain Role, one defaults to certain prescribed actions immediately and confidently. I’ve acted as a guy standing around watching people faint in an overcrowded room, and I’ve acted as the guy telling people to clear the area. The difference was in one I had the role of Corporate Pleb, and the other I had the role of Guy Responsible For This Shit. You know the difference between the guy at the bar who breaks up a fight, and the guy who stands back and watches it happen? The former thinks of himself as the guy who stops fights. They could even be the same guy, on different nights. The role itself creates the actions, and it creates them as an immediate reflex. By the time corporate-me is done thinking “Huh, what’s this? Oh, this looks bad. Someone fainted? Wow, never seen that before. Damn, hope they’re OK. I should call 911.” enforcer-me has already yelled for the room to clear and whipped out a phone.

Roles are the difference between Hufflepuffs gawking when Neville tumbles off his broom (Protected), and Harry screaming “Wingardium Leviosa” (Protector). Draco insulted them afterwards, but it wasn’t a fair insult – they never had the slightest chance to react in time, given the role they were in. Roles are the difference between Minerva ordering Hagrid to stay with the children while she forms troll-hunting parties (Protector), and Harry standing around doing nothing while time slowly ticks away (Protected). Eventually he switched roles. But it took Agency to do so. It took time.

Agency is awesome. Half this site is devoted to becoming better at Agency. But Agency is slow. Roles allow real-time action under stress.

Agency has a place of course. Agency is what causes us to decide that Martial Arts training is important, that has us choose a Martial Art, and then continue to train month after month. Agency is what lets us decide which Roles we want to play, and practice the psychology and execution of those roles. But when the time for action is at hand, Agency is too slow. Ensure that you have trained enough for the next challenge, because it is the training that will see you through it, not your agenty conscious thinking.

 

As an aside, most major failures I’ve seen recently are when everyone assumed that someone else had the role of Guy In Charge If Shit Goes Down. I suggest that, in any gathering of rationalists, they begin the meeting by choosing one person to be Dictator In Extremis should something break. Doesn’t have to be the same person as whoever is leading. Would be best if it was someone comfortable in the role and/or with experience in it. But really there just needs to be one. Anyone.

cross-posted from my blog

Gaming Democracy

8 Froolow 30 July 2014 09:45AM

I live in the UK, which has a very similar voting structure to the US for the purposes of this article. Nevertheless, it may differ on the details, for which I am sorry. I also use a couple of real-life political examples which I hope are uncontroversial enough not to break the unofficial rules here. If they are not, I can change them, because this is a discussion of gaming democracy by exploiting swing seats to push rationalist causes.

Cory Doctorow writes in the Guardian about using Kickstarter-like thresholds to encourage voting for minority parties:

http://www.theguardian.com/technology/2014/jul/24/how-the-kickstarter-model-could-transform-uk-elections

He points out that nobody votes for minority parties because nobody else votes for them; if you waste your vote on Yellow, it is one fewer vote available to stop the hated Blue candidate from getting in by voting for the not-quite-so-bad Green. He argues that you could use the internet to inform people when some pre-set threshold had been triggered with respect to voting for a minor party, and thus encourage them to get out and vote. So for example if the margin of victory was 8000 votes and 9000 people agreed with the statement, “If more than 8000 people agree to this statement, then I will go to the polls on election day and vote for the minority Yellow party”, the minority Yellow party would win power even though none of the original 9000 participants would have voted for Yellow without the information-coordinating properties of the internet.

I’m not completely sure of the argument, but I looked into some of the numbers myself. There are 23 UK seats (roughly equivalent to Congressional Districts for US readers) with a margin of 500 votes or fewer. So to hold the balance of power in these seats you need to find either 500 non-voters who would be prepared to vote the way you tell them, or 250 voters with the same caveats (voters are worth twice as much as non-voters to the aspiring seat-swinger, since a vote taken from the Blues lowers the margin by one, and a vote given to the Greens lowers the margin by one, and every voter is entitled to both take a vote away from the party they are currently voting for and award a vote to any party of their choice). I’ll call the number of votes required to swing a seat the ‘effective voter’ count, which allows for the fact that some voters count for two.
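The 'effective voter' arithmetic above can be written out explicitly; a small sketch, containing nothing beyond the reasoning in that paragraph:

```python
def effective_voters_needed(margin, recruit_is_current_voter):
    """Votes needed to close a margin, per the 'effective voter' logic.

    A recruit who currently votes for the leading party counts double:
    their switch removes one vote from the leader AND adds one to the
    challenger. A recruited non-voter adds a vote but removes none,
    so they count once.
    """
    weight = 2 if recruit_is_current_voter else 1
    return -(-margin // weight)  # ceiling division for odd margins

# The 500-vote margin from the text:
assert effective_voters_needed(500, recruit_is_current_voter=False) == 500
assert effective_voters_needed(500, recruit_is_current_voter=True) == 250
```

In practice a bloc would be a mix of switchers and recruited non-voters, so the real 'effective voter' count sits between the two extremes.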

It doesn’t sound impossible to me to reach the effective voter count for some swing constituencies, given that even extremely obvious parody parties can often win back their deposit (500 actual votes, not even ‘effective votes’).

Doctorow wants to use the information-coordination system to help minority parties reach a wider audience. I think it could be used in a much more active way, to force policy promises on uncontroversial but low-status issues from potential future MPs. Let me take as an example ‘research funding for transhumanist causes’. Most people don’t know what transhumanism is, and most people who do know what it is don’t care. Most people who know what it is and care are basically in support of research into transhuman augmentations, but would definitely rank issues like the economy or defence as more important. There is a small constituency of people who oppose transhumanism outright, but they are not single-issue voters either by any means (I imagine opposing transhumanism is strongly correlated with a ‘traditional religious values’ cluster which includes opposing abortion, gay marriage and immigration). Politicians could therefore (almost) costlessly support a small amount of research funding for transhumanism, which would almost certainly be a sensible move when averaged across the whole country (either you discover something cool, in which case your population is made better off and your army more powerful, or you don’t, and in the worst case you get a decent multiplier effect to the economy from employing a load of materials scientists and bioengineers). However, we know that they won’t do this, because while the benefits to the country might be great, the minor cost of supporting a low-status (‘weird’) project is borne entirely by the individual politician. What I mean by this is that the politician will probably not lose any votes by publicly supporting transhumanism, but will lose status among their peers and will want to avoid this. There’s also a small risk of losing votes from the ‘traditional values’ cluster by supporting transhumanist causes, and no obvious demographic with whom supporting transhumanist causes gains votes.

This indicates to me that if enough pro-transhumanists successfully co-ordinated their action, they could bargain with the politicians standing for office. Let us say there are unequivocally enough transhumanists to meet the effective voter threshold for a particular constituency. One person could go round each transhumanist (maybe on that city’s subreddit) and get them to agree in principle to vote for whichever candidate will agree to always vote ‘Yes’ on research funding for transhumanist causes, up to a maximum of £1bn. Each transhumanist might have a weak preference for Blues vs Greens or vice versa, but the appeal is made to their sense of logic: each Blue vote is cancelled out by each Green vote, but each ‘transhumanist’ vote is a step closer to getting transhumanism properly funded, and transhumanism is more important than any marginal policy difference between the two parties. You then go to each candidate and present the evidence that the ‘transhumanist’ bloc has the power to swing the election and is well co-ordinated enough to vote as a bloc on election day. If both candidates agree that they will vote ‘Yes’ on the bills you decided on, then send round an electronic message saying – essentially – “Vote your conscience”. If one candidate says ‘Yes’ and the other ‘No’, send round a message saying “Vote Blue” (or Green). If both candidates say ‘No’, send a message saying “Vote for the Transhuman Party (which is me)” in the hope that you can demonstrate you really did hold the balance of power, to increase the weight of your negotiation in the future.
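The negotiation above reduces to a simple decision rule; a toy sketch (the party names are just the Blue/Green placeholders from the example):

```python
def bloc_instruction(pledges):
    """Map each candidate's pledge (True = agreed to vote 'Yes' on the
    bloc's issue) to the message sent round to the bloc, following the
    three cases described in the text."""
    agreed = [name for name, yes in pledges.items() if yes]
    if agreed and len(agreed) == len(pledges):
        return "Vote your conscience"       # both agreed: the bloc's work is done
    if len(agreed) == 1:
        return f"Vote {agreed[0]}"          # reward the one candidate who agreed
    # nobody agreed: run the bloc's own candidate to prove it exists
    return "Vote for the Transhuman Party (which is me)"

print(bloc_instruction({"Blue": True, "Green": True}))
print(bloc_instruction({"Blue": True, "Green": False}))
print(bloc_instruction({"Blue": False, "Green": False}))
```

The asymmetry is the whole point: candidates who pledge face no cost if both pledge, so the bloc's threat only ever has to be carried out against holdouts.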

If the candidate then goes back on their word, you slash and burn the constituency and make sure that no matter what the next candidate from that party promises, they lose. Also ensure that if that candidate ever stands in a marginal seat again, they lose (effectively ending their political career). This gives a strong incentive for MPs to vote the way they promised, and for parties to allow them to vote the way they promised.

Incidentally my preferred promise to extract from the candidates (and I don’t think this works in America) is to bring a bill with a particular wording if they win a Private Members’ Ballot (a system whereby junior members enter a lottery to see whose idea for a bill gets a ‘reading’ in the House of Commons, and hence a chance of becoming a law). For example, “This house would fund £1bn worth of transhumanism basic research over the next four years”. This is because it forces MPs to take a position on an issue they otherwise would not want to touch (because it is low-status) and one way out of this bind is to pretend the issue was high-status all along, which would be a good outcome for transhumanism as it means people might start funding it without the complicated information-coordination game I describe above.

One issue with this is that some groups – for example, Eurosceptics – are happy to single-issue vote already, and there are far more Eurosceptics than there are rationalists in the UK. A US equivalent – as far as I understand – might be gun rights activists; they will vote for whichever party deregulates guns furthest, regardless of any other policies it might have, and they are very numerous. This could be a problem, since a more numerous coalition will always beat a less numerous coalition at playing this information-coordination game.

The first response is that it might actually be OK if this occurs. Being a Eurosceptic in no way implies a particular position on transhumanist issues, so a politician could agree to the demands of the Eurosceptic bloc and the transhumanist bloc without issue. The numbers problem only occurs if a position on one issue automatically implies a position on another; it would arise here only if there were a large single-issue anti-transhumanist voting bloc, and there isn't one. There is a small problem if someone is both a Eurosceptic and a transhumanist, since you can only categorically agree to vote the way one bloc tells you, but this is a personal issue where you have to decide which issue is more important, not a problem with the system as it stands.

The second response is that this underestimates the difficulty of co-ordinating a vote in this way. For example, Eurosceptics – as a rule – will want to vote for the minority UKIP party to signal their affiliation with Eurosceptic issues. No matter what position the candidates agree to on Europe, UKIP will always be more extreme on European issues, since a candidate can only agree to policies mainstream enough that the vote-cost of agreeing to the policy publicly is less than the vote-gain of winning the Eurosceptic bloc. There will therefore be considerable temptation to defect and vote UKIP even after successfully extracting a policy pledge from a candidate, since the voter has a strong preference for UKIP over any other party. Transhumanists – it is hypothesised – have a stronger preference for marginal gains in transhumanist funding over any policy difference between the two major parties, so getting them to ‘hold their nose’ and vote for a candidate they would otherwise not want to is easier.

It is not just transhumanism that this vote-bloc scheme might work for, but transhumanism is certainly a good example. In my mind you could co-ordinate any issue where the proposed voting bloc is:

  1. Intelligent enough to understand why voting for a candidate you don’t like might result in outcomes you do like.
  2. Sufficiently politically unaffiliated that voting for a party they disapprove of is a realistic prospect (hence I’m picking issues young people care about, since they typically don’t vote).
  3. Sufficiently internet-savvy that coordinating by email / reddit is a realistic prospect.
  4. Unopposed by any similar-sized or larger group which fits the above three criteria.
  5. Cares more about this particular issue than any other issue which fits the above four criteria.

Some other good examples of this might be opposing homeopathy on the NHS, encouraging Effective Altruism in government foreign aid, spending a small portion of the Defence budget on FAI and so on.

Are there any glaring flaws I’ve missed?

Self-Congratulatory Rationalism

51 ChrisHallquist 01 March 2014 08:52AM

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.


Lifestyle interventions to increase longevity

120 RomeoStevens 28 February 2014 06:28AM

There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies contradicting older studies, telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff and go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.

Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them and concrete ways to implement them.


Caelum est Conterrens: I frankly don't see how this is a horror story

26 chaosmage 06 March 2013 10:31AM

So Eliezer said in his March 1st HPMOR progress report:

I recommend the recursive fanfic “Friendship is Optimal: Caelum est Conterrens” (Heaven Is Terrifying).  This is the first and only effective horror novel I have ever read, since unlike Lovecraft, it contains things I actually find scary.

So I read that and it was certainly very much worth reading - thanks for the recommendation! Obviously, the following contains spoilers.

I'm confused about how the story is supposed to be "terrifying". I rarely find any fiction scary, but I suspect that this is about something else: I didn't think Failed Utopia #4-2 was "failed" either and in Three Worlds Collide, I thought the choice of the "Normal" ending made a lot more sense than choosing the "True" ending. The Optimalverse seems to me a fantastically fortunate universe, pretty much the best universe mammals could ever hope to end up in, and I honestly don't see how it is a horror novel, at all.

So, apparently there's something I'm not getting. Something that makes an individual's hard-to-define "free choice" more valuable than her much-easier-to-define happiness. Something like a paranoid schizophrenic's right not to be treated.

So I'd like the dumb version please. What's terrifying about the Optimalverse?

Meta Decision Theory and Newcomb's Problem

5 wdmacaskill 05 March 2013 01:29AM

Hi all,

As part of my PhD I've written a paper developing a new approach to decision theory that I call Meta Decision Theory. The idea is that decision theory should take into account decision-theoretic uncertainty as well as empirical uncertainty, and that, once we acknowledge this, we can explain some puzzles to do with Newcomb problems and come up with new arguments to adjudicate the causal vs evidential debate. Nozick raised the idea of taking decision-theoretic uncertainty into account, but he did not defend it at length or discuss its implications.

I'm not yet happy to post this paper publicly, so I'll just write a short abstract of the paper below. However, I would appreciate written comments on the paper. If you'd like to read it and/or comment on it, please e-mail me at will dot crouch at 80000hours.org. And, of course, comments in the thread on the idea sketched below are also welcome.

 

Abstract

First, I show that our judgments concerning Newcomb problems are stakes-sensitive. By altering the relative amounts of value in the transparent box and the opaque box, one can construct situations in which one should clearly one-box, and one can construct situations in which one should clearly two-box. A plausible explanation of this phenomenon is that our intuitive judgments are sensitive to decision-theoretic uncertainty as well as empirical uncertainty: if the stakes are very high for evidential decision theory (EDT) but not for causal decision theory (CDT), then we go with EDT's recommendation, and vice versa for CDT over EDT.
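The stakes-sensitivity claim can be made concrete with a toy calculation. The sketch below is my own illustration, not the paper's formalism (which is not public): it assumes the meta-level value of an act is simply a credence-weighted average of each theory's expected value, and that values are comparable across theories. The probabilities and credences are made-up parameters.

```python
# Toy Newcomb setup. The opaque box contains `opaque` iff the predictor
# foresaw one-boxing; the transparent box always contains `transparent`.

def edt_ev(act, opaque, transparent, accuracy):
    # EDT conditions on the act: `accuracy` is P(prediction matches act).
    if act == "one-box":
        return accuracy * opaque
    return transparent + (1 - accuracy) * opaque

def cdt_ev(act, opaque, transparent, p_full):
    # CDT holds the prediction fixed: `p_full` is the causal probability
    # that the opaque box is already full, independent of the act.
    base = p_full * opaque
    return base if act == "one-box" else base + transparent

def meta_choice(opaque, transparent, accuracy=0.99, p_full=0.5, cred_edt=0.2):
    # Credence-weighted expected choiceworthiness across the two theories.
    def meta_ev(act):
        return (cred_edt * edt_ev(act, opaque, transparent, accuracy)
                + (1 - cred_edt) * cdt_ev(act, opaque, transparent, p_full))
    return max(("one-box", "two-box"), key=meta_ev)

# High stakes for EDT (huge opaque box): even a 0.2 credence in EDT
# tips the meta-level verdict to one-boxing.
print(meta_choice(opaque=1_000_000, transparent=1_000))  # one-box
# Low stakes for EDT: CDT's dominance reasoning carries the verdict.
print(meta_choice(opaque=2_000, transparent=1_000))      # two-box
```

Note that averaging values across theories presupposes intertheoretic value comparisons, which is itself one of the issues the regress objection in the paper would have to address.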

Second, I show that, if we 'go meta' and take decision-theoretic uncertainty into account, we can get the right answer in both the Smoking Lesion case and the Psychopath Button case.

Third, I distinguish Causal MDT (CMDT) and Evidential MDT (EMDT). I look at what I consider to be the two strongest arguments in favour of EDT, and show that these arguments do not work at the meta level. First, I consider the argument that EDT gets the right answer in certain cases. In response to this, I show that one only needs to have small credence in EDT in order to get the right answer in such cases. The second is the "Why Ain'cha Rich?" argument. In response to this, I give a case where EMDT recommends two-boxing, even though two-boxing has a lower average return than one-boxing.

Fourth, I respond to objections. First, I consider and reject alternative explanations of the stakes-sensitivity of our judgments about particular cases, including Nozick's explanation. Second, I consider the worry that 'going meta' leads one into a vicious regress. I accept that there is a regress, but argue that the regress is non-vicious.

In an appendix, I give an axiomatisation of CMDT.

What are your rules of thumb?

19 DataPacRat 15 February 2013 03:59PM

I'm not as smart as I like to think I am. Knowing that, I've gotten into a habit of trying to work out as many general principles as I can ahead of time, so that when I actually need to think of something, I've already done as much of the work as I can.

What are your most useful cached thoughts?

