Comment author: MarsColony_in10years 05 November 2015 05:15:53PM 0 points [-]

So, all arguments that make the same predictions are extensionally equal, but they are not necessarily intensionally equal. From the Wikipedia page:

Consider the two functions f and g mapping from and to natural numbers, defined as follows:

  • To find f(n), first add 5 to n, then multiply by 2.

  • To find g(n), first multiply n by 2, then add 10.

These functions are extensionally equal; given the same input, both functions always produce the same value. But the definitions of the functions are not equal, and in that intensional sense the functions are not the same.
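In code, the distinction looks like this (a minimal sketch; f and g just transcribe the Wikipedia definitions):

```python
# f and g are defined differently (intensionally distinct),
# yet produce the same output for every input (extensionally equal).

def f(n):
    return (n + 5) * 2  # first add 5 to n, then multiply by 2

def g(n):
    return n * 2 + 10   # first multiply n by 2, then add 10

# No finite test proves equality over all naturals, but a spot-check
# over a large range is consistent with extensional equality:
assert all(f(n) == g(n) for n in range(10000))
```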

Comment author: Lumifer 03 November 2015 05:38:29PM *  2 points [-]

How can we build a society/world where there are strong optimization forces to enable people to choose System 2 preferences?

I think the real world qualifies quite well. People who listen to their System 2 achieve much more than people who are slaves to their System 1.

If you want stronger "optimization forces", take away the safety net. Hunger and pain are excellent incentives. Not many people would allow themselves to get addicted to WoW if it means they'll become homeless in a short while.

In response to comment by Lumifer on High Challenge
Comment author: MarsColony_in10years 03 November 2015 09:07:09PM 2 points [-]

That provided me with some perspective. I'd only been thinking of cases where we impose limitations, such as those we use with alcohol and addictive drugs. But, as you point out, there are also regulations that push us toward immediate gratification rather than away from it. If, after much deliberation, we collectively decide that 99% of potential value is long-term, then perhaps we'd wind up abolishing most or all such regulations, on the assumption that most System 2 values would benefit.

However, at least some System 2 values are likely orthogonal to these sorts of motivators. For instance, perhaps NaNoWriMo participation would go down in a world with fewer social and economic safety nets, since many people would be struggling up Maslow's hierarchy of needs instead of writing. I'm not sure how large a fraction of System 2 values would be aided by negative reinforcement. A large number of people would abandon their long-term goals in order to remove the negative stimuli ASAP. If the shortest path to removing the stimuli gets them 90% of the way toward a goal, then I'd expect most people to achieve the remaining 10%. But for goals that are orthogonal to pain and hunger, we might actually expect a lower rate of achievement.

If descriptive ethics research shows that System 2 preferences dominate, and the majority of that weighted value is held back by safety nets, then it'll be time to start cutting through red tape. If System 2 preferences dominate, but the majority of moral weight is supported by safety nets, then perhaps we need more cushions, or even a basic income. If our considered preference is actually to "live in the moment" (System 1 preferences dominate), then perhaps we should optimize for wireheading, or whatever that utopia would look like.

More likely, this is an overly simplified model, and there are other concerns that I'm not taking into account but which may dominate the calculation. I completely missed the libertarian perspective, after all.

In response to High Challenge
Comment author: Roland2 19 December 2008 05:12:43AM 6 points [-]

@ D. Alex: Some important reasons why the game is so pleasurable seem to be:

a) the ultimate goals are pretty clear (so unlike real life...)

b) the "measures of progress" are likewise clear

c) the rewards are clear

This looks like real life without the hard parts. Sure, that makes it more fun, but in the end will you feel rewarded? If you look back, now or in a few years, on the time spent playing, and consider what you could have achieved in real life if you had invested the same time in real challenges, how will you feel? From my own experience I can tell you that I regret every minute I wasted playing stupid games. Nowadays I still play chess occasionally to relax, but I'm successfully getting rid of that habit. I avoid overly immersive/addictive games like the plague.

In response to comment by Roland2 on High Challenge
Comment author: MarsColony_in10years 03 November 2015 05:19:55PM 0 points [-]

Sounds like WoW is optimized for System 1 pleasures, and you explicitly reject this. I think that brings up an important point: How can we build a society/world where there are strong optimization forces to enable people to choose System 2 preferences? Once such a world had iterated on itself for a couple of generations, what might it look like?

I don't think this would be a world with no WoW-like activities, because a world without any candy or simple pleasures strikes me as deeply lacking. My System 2 seems to place at least a little value on System 1 being happy. So I'd guess the world would just have far fewer such activities, and be structured in such a way as to make it easy to avoid choices we'd regret the next day.

If this turns out to be a physically impossible problem to overcome for some reason, then I could imagine a world with no System 1 pleasures, but such a world would be deeply lacking, even if that loss were more than made up for by gains in our System 2 values.

As a side note, it'd be an interesting question how much of the theoretical per capita maximum value falls into which category. An easier question is how much of our currently actualized value comes from immediate gratification. I'd expect that to be heavily biased toward System 1, since we suffer from akrasia, but it might still be informative.

Comment author: MarsColony_in10years 02 November 2015 05:29:18PM 2 points [-]

I've recently started using RSS feeds. Does anyone have LW-related feeds they'd recommend? Or for that matter, anything they'd recommend following which doesn't have an RSS feed?

Here's my short list so far, in case anyone else is interested:

  • Less Wrong Discussion

  • Less Wrong Main (i.e. promoted)

  • Slate Star Codex

  • Center for the Study of Existential Risk

  • Future of Life Institute [they have an RSS button, but it appears to be broken. They just updated their webpage, so I'll subscribe once there's something to subscribe to.]

  • Global Priorities Project

  • 80,000 Hours

  • SpaceX [an aerospace company, which Elon Musk refuses to take public until they've started a Mars colony]

These obviously have an x-risk focus, but feel free to share anything you think other Less Wrongers might be interested in, even if it doesn't sound like something I'd be interested in.

For anyone looking to start using RSS, I'd recommend the Bamboo Feed Reader extension for Firefox, after deleting all the default feeds. I started out using Sage as a feed aggregator, but didn't like the sidebar style or the tiled reader.

Comment author: turchin 29 October 2015 09:01:15AM 1 point [-]

"I'll grant that if we roll the dice enough times, the 1/100 cases will start to dominate, but we only have 2 categories of near misses. That doesn't seem like enough to let us assume a 1/100 ratio of catastrophes to near misses."

In this case the total probability of near misses will be something like (1/100 + 1/3000)/2 = almost 1/200. If we look into the nature of Cold War near misses, we can see that the 1/100 estimate is more probable. More research is needed to estimate which field is most comparable to the Cold War; probably it would be nuclear accidents at power stations. A large study on the topic is here: http://www-pub.iaea.org/MTCD/Publications/PDF/Pub1545_web.pdf But it doesn't give an exact estimate of the frequency of near misses, stating it only as "few to thousands." They define a near miss as a chain of events, and they also state that rising security measures help to reduce near-miss frequency.
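As a quick check of that arithmetic (a minimal sketch; weighting the two fields equally is the assumption in play):

```python
# Equal-weight average of the two assumed catastrophe-per-near-miss ratios.
p_high = 1 / 100   # field with the higher ratio (nuclear, per the comment)
p_low = 1 / 3000   # field with the lower ratio
p_avg = (p_high + p_low) / 2
print(p_avg, 1 / p_avg)  # ~0.00517, i.e. about 1 in 194 -- "almost 1/200"
```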

In my first link, the near-miss frequency is already aggregated: "Studies in several industries indicate that there are between 50 and 100 near misses for every accident. Also, data indicates that there are perhaps 100 erroneous acts or conditions for every near miss. This gives a total population of roughly 10,000 errors for every accident. Figure 1 illustrates the relationships between accidents, near misses and non-incidents." http://www.process-improvement-institute.com/_downloads/Gains_from_Getting_Near_Misses_Reported_website.pdf

Comment author: MarsColony_in10years 29 October 2015 03:39:48PM 0 points [-]

Ah, thanks for the explanation. I interpreted the statement as an attempt to demonstrate that (number of nuclear winters) / (number of near misses) = 1/100. You are actually asserting this instead, and using the statement to justify ignoring other categories of near misses, since the largest ratio will dominate. That's a completely reasonable approach.

I really wish there were a good way to estimate the accidents-per-near-miss ratio. Maybe medical mistakes? They have drastic consequences if you mess up, but involve a lot of routine paperwork. But this assumes that the dominant factor in the ratio is the severity of the consequences. (Probably a reasonable assumption: spikes on steering wheels make better drivers, and bumpers make less careful forklift operators.) I'll look into this when I get a chance.

Comment author: MarsColony_in10years 29 October 2015 05:54:02AM *  2 points [-]

Excellent start and setup, but I diverge from your line of thought here:

We will use a lower estimate of 1 in 100 for the ratio of near-miss to real case, because the type of phenomena for which the level of near-miss is very high will dominate the probability landscape. (For example, if an epidemic is catastrophic in 1 to 1000 cases, and for nuclear disasters the ratio is 1 to 100, the near miss in the nuclear field will dominate).

I'm not sure I buy this. We have two types of near misses (biological and nuclear). Suppose we construct some probability distribution for near-misses, ramping up around 1/100 and ramping back down at 1/1000. That's what we have to assume for any near-miss scenario, if we know nothing additional. I'll grant that if we roll the dice enough times, the 1/100 cases will start to dominate, but we only have 2 categories of near misses. That doesn't seem like enough to let us assume a 1/100 ratio of catastrophes to near misses.

Additionally, there does seem to be good reason to believe that the rate of near misses has gone down since the Cold War ended. (Although if any had happened, they'd likely still be classified.) That's not to say that our current low rate is a good indicator either. I would expect our probability of catastrophe to be dominated by the probability of WWIII or another cold war.

We had 2 world wars in the first 50 years of the last century, before nuclear deterrence substantially lowered the probability of a third. That's a base rate of 4 per century; if deterrence is a 10x reduction, we can expect 0.4 world wars per century, and if it's a 100x reduction, 0.04 per century. Multiply that by the probability of nuclear winter given WWIII to get the probability of disaster.

However, I suspect that another cold war is more likely. We spent ~44 of the past 70 years in the Cold War. If that's more or less typical, then on average we might expect to spend 63% of any given century in a cold war. This gives a rough range for the probability of armageddon:

  • 1 near miss per year of cold war * 63 cold-war years per century * 1 nuclear winter per 100 near misses = 63% chance of nuclear winter per century

  • 0.1 near misses per year of cold war * 63 cold-war years per century * 1 nuclear winter per 3000 near misses = 0.21% chance of nuclear winter per century

For the record, this range corresponds to a projected half-life of between roughly 1 century and a few hundred centuries. That's much broader than your 50-100 year prediction. I'm not even sure where to start guesstimating the risk of an engineered pandemic.
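Here's a minimal sketch of the same back-of-the-envelope calculation (the near-miss rates, the 1/100 and 1/3000 ratios, and the 44/70 cold-war fraction are the assumptions above; the half-life treats each century as an independent trial):

```python
import math

# Reproduces the bullet-point range above; all inputs are the
# assumptions stated in this comment, not independent estimates.
cold_war_years_per_century = (44 / 70) * 100  # ~63 years

scenarios = {
    "high": (1.0, 1 / 100),   # 1 near miss/yr of cold war, 1 winter per 100 near misses
    "low":  (0.1, 1 / 3000),  # 0.1 near misses/yr, 1 winter per 3000 near misses
}

for name, (misses_per_year, winters_per_miss) in scenarios.items():
    p_century = misses_per_year * cold_war_years_per_century * winters_per_miss
    # Centuries until the survival probability falls to 50%, treating
    # each century as an independent trial with probability p_century.
    half_life = math.log(0.5) / math.log(1 - p_century)
    print(f"{name}: {p_century:.2%} per century; half-life ~ {half_life:.1f} centuries")

# Output:
# high: 62.86% per century; half-life ~ 0.7 centuries
# low: 0.21% per century; half-life ~ 330.5 centuries
```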

In response to Ethical Injunctions
Comment author: RobinHanson 20 October 2008 11:38:14PM 24 points [-]

The problem here of course is how selective to be about rules to let into this protected level of "rules almost no one should think themselves clever enough to know when to violate." After all, your social training may well want you to include "Never question our noble leader" in that set. Many a Christian has been told the mysteries of God are so subtle that they shouldn't think themselves clever enough to know when they've found evidence that God isn't following a grand plan to make this the best of all possible worlds.

Comment author: MarsColony_in10years 28 October 2015 09:34:38PM 0 points [-]

The problem here of course is how selective to be about rules to let into this protected level

Couldn't this be determined experimentally? Ignore the last hundred years or so, or however much might bias our conclusions toward modern politics. Find a list of people who had a large counterfactual impact on history. Which rules led to desirable results?

For example, the trial of Socrates made him a martyr, significantly advancing his ideas. That's a couple of points for "die for the principle of the matter" as an ethical injunction. After Alexander the Great died, anti-Macedonian sentiment in Athens caused Aristotle to flee, saying "I will not allow the Athenians to sin twice against philosophy." Given this, perhaps Socrates's sacrifice didn't achieve as much as one might think, and we should update a bit in the opposite direction. Then again, Aristotle died a year later, having accomplished nothing noteworthy in that time.

In response to Ethical Injunctions
Comment author: MarsColony_in10years 28 October 2015 09:04:48PM 0 points [-]

All the happiness that the warm thought of an afterlife ever produced in humanity, has now been more than cancelled by the failure of humanity to institute systematic cryonic preservations after liquid nitrogen became cheap to manufacture. And I don't think that anyone ever had that sort of failure in mind as a possible blowup, when they said, "But we need religious beliefs to cushion the fear of death." That's what black swan bets are all about—the unexpected blowup.

That's a fantastic quote.

Comment author: MarsColony_in10years 28 October 2015 02:40:42AM 10 points [-]

Today, October 27th, is the 53rd anniversary of the day Vasili Arkhipov saved the world. I realize Petrov Day was only a month ago, and there was a post then. Although I appreciate our Petrov ceremony, I personally think Arkhipov had a larger counterfactual impact than Petrov (since nukes might not have been launched even if Petrov hadn't been on shift at the time), and so I'd like to remember Vasili Arkhipov as well.

Comment author: MarsColony_in10years 27 October 2015 09:25:41PM *  2 points [-]

Donald E. Brown's list of human universals is a list of psychological properties which are found so commonly that anthropologists don't report them.

I've looked for that link before and couldn't find it. It's closely related to Moral Foundations Theory, which is essentially six categories of moral features found in every culture.
