
MarsColony_in10years comments on High Challenge - Less Wrong

Post author: Eliezer_Yudkowsky · 19 December 2008 12:51AM · 22 points





Comment author: MarsColony_in10years · 03 November 2015 09:07:09PM · 2 points

That provided me with some perspective. I'd only been thinking of cases where we impose limitations, such as those we use with alcohol and addictive drugs. But, as you point out, there are also regulations that push us toward immediate gratification rather than away from it. If, after much deliberation, we collectively decide that 99% of potential values are long-term, then perhaps we'd wind up abolishing most or all such regulations, on the assumption that most System 2 values would benefit.

However, at least some System 2 values are likely orthogonal to these sorts of motivators. For instance, perhaps NaNoWriMo participation would go down in a world with fewer social and economic safety nets, since many people would be struggling up Maslow's hierarchy of needs instead of writing. I'm not sure how large a fraction of System 2 values would be aided by negative reinforcement. A large number of people would abandon their long-term goals in order to remove the negative stimuli ASAP. If the shortest path to removing the stimuli gets them 90% of the way toward a goal, then I'd expect most people to go on to achieve the remaining 10%. But for goals that are orthogonal to pain and hunger, we might actually expect a lower rate of achievement.

If descriptive ethics research shows that System 2 preferences dominate, and if the majority of that weighted value is held back by safety nets, then it'll be time to start cutting through red tape. If System 2 preferences dominate but the majority of moral weight is instead supported by safety nets, then perhaps we need more cushions, or even a Basic Income. And if our considered preference is actually to "live in the moment" (System 1 preferences dominate), then perhaps we should optimize for wireheading, or whatever that utopia would look like.

More likely, this is an overly simplified model, and there are other concerns I'm not taking into account which may dominate the calculation. I completely missed the libertarian perspective, after all.