Comment author: wwa 28 February 2014 10:35:02PM 1 point

It seems to me that the odds look so grave to you because you gloss over several steps during this potential escalation.

Have a look at this: http://www.zerohedge.com/news/2014-02-28/ukraine-acting-president-says-russia-starts-aggression-against-country-russian-plane Original source: http://www.pravda.com.ua/rus/news/2014/02/28/7016674/

Posturing or not, if this info checks out, then by existing treaties the USA and UK are obliged to help Ukraine against Russia. You see, Ukraine gave up its nukes in exchange for safety guarantees from Russia, the UK and the USA.

http://en.wikipedia.org/wiki/Nuclear_weapons_and_Ukraine

Before voting on accession, Ukraine demanded from Russia, the USA, France and the United Kingdom a written statement that these powers undertook to extend security guarantees to Ukraine. Instead, security assurances to Ukraine (Ukraine published the documents as guarantees given to Ukraine[5]) were given on 5 December 1994 at a formal ceremony in Budapest (known as the Budapest Memorandum on Security Assurances[6]). They may be summarized as follows: Russia, the UK and the USA undertake to respect Ukraine's borders in accordance with the principles of the 1975 CSCE Final Act, to abstain from the use or threat of force against Ukraine, to support Ukraine where an attempt is made to place pressure on it by economic coercion, and to bring any incident of aggression by a nuclear power before the UN Security Council.

Of course, more likely than not we'll find out once again that treaties and words aren't worth anything unless you have the upper hand... but this looks scary enough to me.

Comment author: Will_Sawin 28 February 2014 10:51:04PM 1 point

None of those sound like they require military intervention?

Comment author: benkuhn 28 February 2014 12:28:29AM 4 points

For clarity: I don't trust Wiseman, since I've never read anything of his and my prior for pop-sci is low. Luke's endorsement is a positive update to his credibility.

Fully verifying is expensive, but spot-checking is cheap (e.g., this post took me about 10 minutes). Similarly, most people barely check GiveWell's research at all, but it still matters a lot that it's so transparent, because transparency is a hard-to-fake signal, and it facilitates spot-checking.

Re: music--it looks like you were referring to a different study on the benefits of listening to music than the one I found in Amazon's preview of Wiseman. "Listen to classical music <to reduce blood pressure when stressed>" would have been another high-VoI addition to the OP.

Further studies indicate that "self-selected relaxing music" has the same effect, and that it's probably mediated by general reduction of SNS arousal. This suggests that (a) if you're doing an SNS-heavy task, like difficult math, you may not want to listen to music at the same time; (b) anything else you would expect to move you around the autonomic spectrum should work the same way (e.g. meditation). On the other hand, neither of the studies asked subjects to do anything while listening to music, so it's unclear whether the effect would stay visible. A possibly interesting meta-analysis is here. If doing anything while listening to music makes the effect go away, then I would guess that meditation or the autonomic-spectrum navigation that CFAR teaches is a more efficient way to reduce blood pressure.

I don't know if Wiseman went into any of those in his book, but my take-away is to do some research before installing any new habit.

Comment author: Will_Sawin 28 February 2014 02:54:30PM 0 points

Difficult math is SNS-heavy?

In response to comment by satt on Polling Thread
Comment author: Gunnar_Zarncke 23 January 2014 09:59:27AM 1 point

Seems that we have learned that P(A&B)<=P(A).

But I wonder whether we have an anchoring problem here. I myself used round numbers, and I notice that the median is a round number and that the probabilities go down in steps of 0.05 (with the mean following suit almost linearly).

If anything, the compound probabilities should show a more or less geometric progression.
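In code, the contrast between a geometric progression and the linear steps an anchoring bias would produce might look like this (the per-conjunct probability 0.8 and the 0.05 step size are arbitrary illustrative values, not taken from the poll):

```python
# Sketch: compound probabilities P(A1 & ... & An) should decay geometrically.
# Assume (hypothetically) each extra conjunct independently holds with p = 0.8.
p = 0.8
geometric = [p ** n for n in range(1, 6)]
# Anchored-and-adjusted answers instead tend to step down linearly, e.g. by 0.05:
anchored = [0.8 - 0.05 * (n - 1) for n in range(1, 6)]

print(geometric)  # each term is a fixed *fraction* of the previous one
print(anchored)   # each term is a fixed *difference* from the previous one
```

If the poll answers look like the second list rather than the first, that is some evidence of anchoring.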

Anchoring to one of the values and then just roughly correcting for the difference in phrasing will not work (i.e., it adds no real precision).

Do I notice this correctly? Can this be fixed? How?

Comment author: Will_Sawin 24 January 2014 10:45:27AM 0 points

I rated the second question as more likely than the first because I think "most traits" means something different in the two questions.

Comment author: Eugine_Nier 21 January 2014 04:45:57AM -1 points

The OP says they will stop calling it immoral once they can afford it.

Comment author: Will_Sawin 22 January 2014 02:15:31AM 0 points

Only this particular thing.

Comment author: Kawoomba 16 January 2014 06:56:28PM 0 points

¿Qué?

Comment author: Will_Sawin 16 January 2014 10:50:58PM 0 points

That's what the Great Filter is, no?

Comment author: Kawoomba 16 January 2014 10:43:26AM 5 points

If I'm never remembered for anything else in the rationalosphere, I would like to be known as the creator of the term "playdoughmanning".

Please stop with the prismatticmanning of tortured neologisms. The ensuing syllabilistic explosion might pose a memetic hazard (Great Filter = Tower of Babble).

Comment author: Will_Sawin 16 January 2014 06:30:27PM 3 points

It would be amusing if the single primary reason that the universe is not buzzing with life and civilization is that any sufficiently advanced society develops terminology and jargon too complex to be comprehensible, and inevitably collapses because of that.

Comment author: shminux 16 January 2014 01:47:17AM 0 points

Hmm, most of this went way over my head, unfortunately. I have no problem understanding probability in statements like "There is a 0.1% chance of the twin prime conjecture being proven in 2014", because it is one of many similar statements that can be bet upon, with a well-calibrated predictor coming out ahead on average. Is the statement "the twin prime conjecture is true with 99% probability" a member of some set of statements a well calibrated agent can use to place bets and win?

Comment author: Will_Sawin 16 January 2014 02:03:26AM 0 points

For that purpose a better example is a computationally difficult statement, like "There are at least X twin primes below Y". We could place bets, and then acquire more computing power, and then resolve bets.
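A bet like that really can be settled mechanically once the computing power arrives; a naive sketch (the bound 100 is just an illustrative stand-in for Y):

```python
# Sketch: settling a bet of the form "there are at least X twin primes below Y"
# by brute force. Naive trial division, fine for small Y.
def twin_primes_below(y):
    """Count twin prime pairs (p, p + 2) with both members below y."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for p in range(2, y - 2) if is_prime(p) and is_prime(p + 2))

# Pairs below 100: (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73)
print(twin_primes_below(100))  # 8
```

With more computing power the same check extends to larger Y, which is what lets the bets be resolved later.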

The mathematical theory of statements like the twin primes conjecture should be essentially the same, but simpler.

Comment author: Manfred 15 January 2014 09:40:27PM 0 points

What is "the low computing power limit"? If our theories behave badly when you don't have computing power, that's unsurprising. Do you mean "the large computing power limit"?

Nope. The key point is that as computing power becomes lower, Abram's process allows more and more inconsistent models.

the probability of them appearing in the random process is supposed to be this ratio

The probability of a statement appearing first in the model-generating process is not equal to the probability that it's modeled by the end.

Comment author: Will_Sawin 16 January 2014 02:00:33AM 0 points

Nope. The key point is that as computing power becomes lower, Abram's process allows more and more inconsistent models.

So does every process.

The probability of a statement appearing first in the model-generating process is not equal to the probability that it's modeled by the end.

True. But for two very strong statements that contradict each other, there's a close relationship.

Comment author: Manfred 15 January 2014 06:47:03AM 0 points

His distribution also assigns a "reasonable probability" to statements like "the first 3^^^3 odd numbers are 'odd', then one isn't, then they go back to being 'odd'." In the low computing power limit, these are assigned very similar probabilities. Thus, if the first 3^^^3 odd numbers are 'odd', it's kind of a toss-up what the next one will be.

Do you disagree? If so, could you use math in explaining why?

Comment author: Will_Sawin 15 January 2014 02:40:52PM 0 points

What is "the low computing power limit"? If our theories behave badly when you don't have computing power, that's unsurprising. Do you mean "the large computing power limit"?

I think probability ( "the first 3^^^3 odd numbers are 'odd', then one isn't, then they go back to being 'odd'." ) / probability ("all odd numbers are 'odd'") is approximately 2^(length of 3^^^3) in Abram's system, because the probability of them appearing in the random process is supposed to be this ratio. I don't see anything about the random process that would make the first one more likely to be contradicted before being stated than the second.
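To illustrate just the length-penalty intuition behind that ratio (this is not Abram's actual process, only a toy prior in which a statement of length L is drawn with probability proportional to 2^-L):

```python
# Sketch: under a 2**(-length) prior, inserting an exception clause of
# k extra bits into a statement costs a factor of 2**(-k).
def length_prior_ratio(extra_bits):
    """P(statement with exception clause) / P(plain statement)."""
    return 2.0 ** -extra_bits

# "the first 3^^^3 odd numbers are 'odd', then one isn't, ..." differs from
# "all odd numbers are 'odd'" by a clause that must spell out 3^^^3, so the
# ratio is roughly 2**-(encoded length of that clause).
print(length_prior_ratio(10))  # 0.0009765625
```

The extra-bits value 10 is arbitrary; the point is only that the penalty is exponential in the length of the inserted clause.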

Comment author: Manfred 01 January 2014 03:00:11AM 0 points

Anyhow, the question is, why is throwing out non-90% models and trying again going to make [the probability that we assign true to P(x) using this random process] behave like we want the probability that P(x) is true to behave?

We can answer this with an analogy to updating on new information. If we have a probability distribution over models, and we learn that the correct model says that 90% of P(x) are true in some domain, what we do is we zero out the probability of all models where that's false, and normalize the remaining probabilities to get our new distribution. All this "output of the random process" stuff is really just describing a process that has some probability of outputting different models (that is, Abram's process outputs a model drawn from some distribution, and then we take the probability that P(x) is true to be the probability that the process outputs a model assigning true to P(x)).

So the way you do updating is you zero out the probability that this process outputs a model where the conditioned-upon information is false, and then you normalize the outputs so that the process outputs one of the remaining models with the same relative frequencies. This is the same behavior as updating a probability distribution.
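That zero-out-and-renormalize step can be sketched in a few lines (the model labels and weights below are invented toy values, not anything from Abram's construction):

```python
# Sketch: conditioning a distribution over models by zeroing out the models
# that contradict the new information and renormalizing the rest.
def condition(dist, consistent):
    """dist: {model: prob}. consistent: predicate saying whether a model
    is compatible with the conditioned-upon information."""
    kept = {m: p for m, p in dist.items() if consistent(m)}
    total = sum(kept.values())
    return {m: p / total for m, p in kept.items()}

# Toy models, labeled by the fraction of P(x) instances each calls true.
dist = {0.5: 0.2, 0.9: 0.3, 1.0: 0.5}
# Learn that at least 90% of P(x) are true:
posterior = condition(dist, lambda frac: frac >= 0.9)
print(posterior)  # roughly {0.9: 0.375, 1.0: 0.625}
```

The surviving models keep their relative frequencies, which is exactly the behavior of a Bayesian update on the distribution.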

--

One thing I think you might mean by "we want conditioning to be conditioning" is that you don't want to store a (literal or effective) distribution over models, and then condition by updating that distribution and recalculating the probability of a statement. You want to store the probability of statements, and condition by doing something to that probability. Like, P(A|B) = P(AB)/P(B).

I like the aesthetics of that too - my first suggestion for logical probability was based off of storing probabilities of statements, after all. But to make things behave at all correctly, you need more than just that, you also need to be able to talk about correlations between probabilities. The easiest way to represent that? Truth tables.

Comment author: Will_Sawin 15 January 2014 02:35:59PM 0 points

Yeah, updating probability distributions over models is believed to be good. The problem is that sometimes our probability distributions over models are wrong, as demonstrated by bad behavior when we update on certain info.

The kind of data that would make you want to zero out non-90% models is when you observe a bunch of random data points and 90% of them are true, but there are no other patterns you can detect.

The other problem is that updates can be hard to compute.
