In response to Say It Loud
Comment author: TruePath 14 February 2016 07:50:57PM 0 points [-]

Sorry, but you can't get around the fact that humans are not well equipped to compute probabilities. We can't even state what our priors are in any reasonable sense, much less compute exact probabilities.

As a result, using probabilities has come to be associated with having some kind of model. If you've never studied the question and are asked how likely you think it is that there are intelligent aliens, you say something like "I think it's quite likely". You only answer with a number if you've broken the question down into a model (chance life evolves * average time to evolve intelligence * chance of disaster...).

Thus, saying something like "70% chance" indicates to most people that you are claiming your knowledge is the result of some kind of detailed computation and can thus be seen as an attempt to claim authority. You can't change this rule on your own.

Thankfully, there are easy verbal alternatives: "Ehh, I guess I would give 3:1 odds on it" and many others. But chance/probability language isn't one of them.

Comment author: TruePath 11 October 2015 09:56:34PM 1 point [-]

Uhh, why not just accept that you aren't, and can never be, perfectly rational, and use those facts in positive ways?

Bubbles are psychologically comforting and help generate communities. Rationalist bubbling (which ironically includes the idea that they don't bubble) probably does more to build the community and correct other wrong beliefs than almost anything else.

Until and unless rationalists take over society, the best strategy is probably just to push for a bubble that actively encourages breaking other (non-rationalist) bubbles.

Comment author: Fluttershy 12 November 2014 06:12:03AM *  2 points [-]

This error has been corrected; thank you for pointing this out!

I actually did a bit more research, and it really seems like flu vaccine efficacy in healthy adults is more like 70% (and sometimes as high as 90%), despite the fact that the average efficacy of the vaccine throughout the population is around 60%. The reason that efficacy in healthy adults is so high, relative to the average efficacy, is that efficacy in the elderly is around 30-40%.

Also, note that about 42% of the US population gets flu shots in any given year. So, if 10% of people on average get the flu, and the vaccine is 60% efficacious throughout the population, then we can write the following equations, defining sick1 as the event in which a person who was vaccinated gets the flu, and sick2 as the event in which a person who was not vaccinated gets the flu:

0.42 x p(sick1) + 0.58 x p(sick2) = 0.10

p(sick1) = 0.60 x p(sick2)

Solving this system of equations, we get:

p(sick1) = 0.0721

p(sick2) = 0.120 (previous typo: had been written as 0.0120)

The practical implication of this is that the conservative analysis conducted in this report, and shown in Figure 1, assumes that around 5.7% (rather than a more realistic 10 or 12%) of the population will catch the flu in any given year.
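As a sanity check, the system above can be solved in a few lines; a minimal Python sketch using only the figures quoted in the comment (42% coverage, 10% average incidence, and a vaccinated infection rate equal to 60% of the unvaccinated rate):

```python
# Solve: 0.42*p_sick1 + 0.58*p_sick2 = 0.10, with p_sick1 = 0.60*p_sick2.
coverage = 0.42       # fraction of the US population vaccinated
avg_incidence = 0.10  # fraction of the population that gets the flu
ratio = 0.60          # assumed vaccinated rate as a fraction of the unvaccinated rate

# Substituting gives (coverage*ratio + (1 - coverage)) * p_sick2 = avg_incidence.
p_sick2 = avg_incidence / (coverage * ratio + (1 - coverage))
p_sick1 = ratio * p_sick2

print(round(p_sick1, 4))  # 0.0721
print(round(p_sick2, 4))  # 0.1202
```

This reproduces the 0.0721 and 0.120 figures above.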

Comment author: TruePath 14 November 2014 10:35:23PM *  0 points [-]

So the equations should be (using the definition of vaccine efficacy from Wikipedia):

.6 * p(sick2) = p(sick2) - p(sick1)

i.e., p(sick1) - .4 * p(sick2) = 0. Efficacy is the difference between the unvaccinated and vaccinated rates of infection, divided by the unvaccinated rate. You have to assume there is no selection effect in who gets the vaccine (that the vaccinated have the same flu risk pool as the general population, which is surely untrue) to get your assumption that

.42 * p(sick1) + .58 * p(sick2) = .1, i.e., p(sick1) + 1.38 * p(sick2) = .238.

Substituting p(sick1) = .4 * p(sick2):

1.78 * p(sick2) = .238

p(sick2) = .13 (weird, I'm getting a different result) and p(sick1) = .05.

Did I solve it wrong, or did you? I do math, so I can't actually manipulate numbers very well, but I'm not seeing the mistake.
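For what it's worth, the two comments appear to be solving different systems rather than making an arithmetic slip; a small Python sketch (coverage and incidence figures from the thread) comparing the two readings of "60% efficacy":

```python
coverage = 0.42       # fraction vaccinated
avg_incidence = 0.10  # average flu incidence

def solve(ratio):
    """Solve coverage*p1 + (1-coverage)*p2 = avg_incidence, given p1 = ratio*p2."""
    p2 = avg_incidence / (coverage * ratio + (1 - coverage))
    return ratio * p2, p2

# Parent comment's reading: the vaccinated rate is 60% of the unvaccinated rate.
p1_a, p2_a = solve(0.60)  # roughly 0.072 and 0.120

# Wikipedia's definition: efficacy = (p2 - p1)/p2 = 0.6, i.e. p1 = 0.4*p2.
p1_b, p2_b = solve(0.40)  # roughly 0.053 and 0.134
```

Neither calculation contains an error; they just interpret "60% efficacious" differently.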

Comment author: dspeyer 11 November 2014 08:18:00AM 10 points [-]

Omitting death seems like a big deal. Very crudely, it looks like p=10^-4. It's said that society values each life at $5M, so that's E=-$500 already, but each individual likely values their own life a bit higher.
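The arithmetic behind that estimate, as a one-line sketch (the p = 10^-4 and $5M figures are taken from the comment itself):

```python
p_death = 1e-4              # rough probability of dying of the flu, per the comment
value_per_life = 5_000_000  # society's assumed dollar value of a life

expected_loss = p_death * value_per_life
print(expected_loss)  # 500.0
```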

Comment author: TruePath 14 November 2014 10:12:34PM 1 point [-]

Not with respect to their revealed preferences for working in high-risk jobs, as I understand it. There are a bunch of economic papers on this, but the estimate was a surprisingly low number.

Comment author: TruePath 14 November 2014 08:01:20PM 0 points [-]

Well, it can't still be instrumental rationality anymore. I mean, suppose the value being minimized is overall suffering, and you are offered a one-time threat (with non-zero probability, and you know there are no other possible infinitary outcomes): if you don't believe some false claim X, God will create an infinite amount of suffering. You know, before choosing to believe the false claim, that no effect of believing it will increase expected suffering enough to overwhelm the harm of not believing it.


But the real rub is: what do you say about the situation where the rocks turn out to be people cleverly disguised as rocks? You still have every indication that, in convincing yourself, you are attempting to believe a false statement, but the statement is actually true.

Does the decision procedure which says whatever you want it to normally say, but makes a special exception that you can deceive yourself if (description of this situation which happens to identify it uniquely in the world) holds, count as worse?

In other words, is it a relation to truth that you demand? In that case the rule gets better whenever you add exceptions that happen (no matter how unlikely it is) to generate true and instrumentally useful beliefs in the actual world. Or is it some notion of responsiveness to the evidence?

If the latter, you seem to be committed to the existence of something like Carnap's logical probability, i.e., something deducible from pure reason that assigns priors to all possible theories of the world. This is a notoriously unsolvable problem (in the sense that no such assignment exists).

At the very least, can you state some formal conditions, constraining a rule for deciding between actions (or however you want to model it), that capture the constraint you want?

In response to 9/26 is Petrov Day
Comment author: CarlShulman 26 September 2007 07:06:36PM 7 points [-]

"I'm tempted to donate, to honor his deed." Presumably he has received some cash from the documentary, but the incentives created by his later life (and its publication) are horribly perverse.

It seems that this is right up the alley of the Nuclear Threat Initiative, which is supported by Warren Buffett and other donors. http://en.wikipedia.org/wiki/Nuclear_Threat_Initiative You could write a letter to them discussing the incentives and suggesting a prize for averting mega-disasters or existential risk, nominating Petrov for the first award.

Comment author: TruePath 14 November 2014 12:29:42PM 1 point [-]

Given that he would be dead otherwise (and the strong human survival drive) I don't see how the incentives are perverse.

I mean, making the incentives positive for pushing the button would require some really strong conditioning or threats of torture.

Comment author: Normal_Anomaly 28 January 2012 02:11:47AM 8 points [-]

Would discovering that a wavefunction collapse postulate exists be evidence for simulation? A simulation that actually computed all Everett branches would demand exponentially more resources, so a simulator would be more likely to prune branches either randomly (true or pseudo-) or according to some criterion.

Comment author: TruePath 22 June 2014 01:00:37AM 1 point [-]

No, since experientially we already know that we don't perceive the world as if all Everett branches are computed.

In other words, what is up for discovery is not 'not all Everett branches are fully realized'; from our apparent standpoint as inhabitants of a single branch, that is something we could never actually know. All we could discover is that the collapse of the wavefunction is observable inside our world.

In other words, nothing stops the aliens from simply not computing plenty of Everett branches while leaving no trace in our observables to tell us that only one branch is actually real.

Comment author: thakil 28 January 2012 10:35:57AM 3 points [-]

One thing that confuses me about these discussions, and I'm very willing to be shown where my reasoning is wrong, is that there seems to be an implicit assumption that the simulators must follow any of the rules they've imposed upon us. If I simulate a universe powered by the energy generated by avocados, would the avocado beings try to spot an avocado limit, or an order to the avocados?

A simulator could have a completely different understanding as to how the universe works.

I would guess the argument against this is: why else would we be simulated, if not to be a reflection of the universe above? I'm not sure I buy this, or that I'd necessarily assign a high probability to it.

Comment author: TruePath 22 June 2014 12:57:10AM 0 points [-]

I tried to avoid assuming this in the above discussion. You are correct that I do assume that the physics of the simulating world has two properties.

1) Effective physical computation (for the purposes of simulation) is the result of repeated, essentially finite decisions. In other words, the simulating world does not have access to an oracle that vastly aids in the computation of the simulated world: they aren't simulating us by merely measuring when atoms decay in their world, with those measurements just happening to describe a coherent, lawlike physical reality.

I don't think this is so much an assumption as a definition of what it means to be simulated. If the description of our universe is embedded in the natural laws of the simulating universe we aren't so much a simulation as just a tiny part of the simulating universe.

2) I do assume that serial computation is more difficult to perform than parallel computation, i.e., information can't be effectively transmitted infinitely fast in the simulating universe. "Effectively" is an important caveat there, since even a world with an infinite speed of light would ultimately have to rely on signals from sufficiently far off to avoid detection problems.

This is something that may well be true. Maybe it isn't. THAT IS WHY I DON'T CLAIM THESE CONSIDERATIONS CAN EVER GIVE US A STRONG REASON TO BELIEVE WE AREN'T A SIMULATION. I do think they could give us strong reasons to believe we are.

Comment author: Eliezer_Yudkowsky 17 June 2014 08:49:51PM 40 points [-]

"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.

But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.

Comment author: TruePath 22 June 2014 12:39:28AM -5 points [-]

"Good people are consequentialists, but virtue ethics is what works,"

To nitpick a little: I don't think consequentialism even allows one to coherently speak about good people, and it certainly doesn't show that consequentialists are such people (the standard example of the alien who tortures people whenever it finds consequentialists).

Moreover, I don't believe there is any sense in which one can show that people who aren't consequentialists are making some mistake, or even that people who value other consequences are. You tacitly admit this with your examples of paperclip-maximizing aliens, and I doubt you can coherently claim that those who assert that virtue ethics is objectively correct are any less rational than those who assert that consequentialism is.

You and I both judge non-consequentialists to be foolish, but we have to be careful to distinguish between simply strongly disapproving of their views and actually accusing them of irrationality. Indeed, the actions prescribed by any non-consequentialist moral theory are identical to those prescribed by some consequentialist theory (every possible choice pattern results in a different total world state, so you can always order the world states to give identical results to whatever moral theory you like).

Given this point, I think it is a little dangerous to speak to the meta-level. I mean, ideally one would simply say "I think hedonic (or whatever) consequentialism is objectively true, regardless of what is pragmatically useful." Unfortunately, it's very unclear what the 'truth' of consequentialism even consists in if those who follow a non-consequentialist moral theory aren't logically incorrect.

Pedantically speaking, it seems the best one can do is say that, when given the luxury of considering situations you aren't emotionally close to and have time to think about, you will apply consequentialist reasoning that values X to recommend actions to people, and that in such moods you strive to bind your future behavior to what that reasoning demands.

Of course that too is still not quite right. Even in a contemplative mood we rarely become totally selfless, and I doubt you (any more than I) actually strive to bind yourself so that, given the choice, you would torture and kill your loved ones to help n+1 strangers avoid the same fate (assuming those factors aren't relevant to the consequences you say you care about).

Overall it's all a big mess and I don't see any easy statements that are really correct.

Comment author: TruePath 19 March 2014 11:52:04AM 0 points [-]

I'd also like to point out the Cartesian barrier is actually probably a useful feature.

It's not objectively true in any sense, but the relation between external input, output, and effect is very, very different from that between internal input (changes to your memories, say), output, and effect. Indeed, I would suggest there was a very good reason that it took us so long to understand the brain: it would be just too difficult (and perhaps impossible) to do so at a direct level, the way we understand receptors being activated in our eyes (yes, all that visual crap we do is part of our understanding).

Take your example of a sensor aimed at the computer's memory circuit. Unlike almost every other situation, there are cases it can't check its hypothesis against, because such a check would be logically incoherent. In other words, certain theories (or at least representations of them) will be diagonalized against, because the very experiments you wish to run can't be carried out: the 'intention' itself modifies the memory cells in such a way as to make the experiment impossible.

In short, the one thing we do know is that assuming we are free to choose from a wide range of actions independently of the theory we are trying to test, and that how we came to choose an action is irrelevant, is an effective strategy for understanding the world. It worked for us.

Once the logic of decision making is tightly coupled with the observations themselves the problem gets much harder and may be insoluble from the inside, i.e., we may need to experiment on others and assume we are similar.
