Comment author: Will_Newsome 13 July 2013 12:15:59PM *  -2 points [-]

Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity.

Of course, there are also perspective-relative "highly probable" alternate explanations (other than sound reasoning) for non-Christians' belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also "human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency" would be a useful idea for (Christian-)hypothetical demons to promote.

Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it's in fact quite demonstrable that no such conspiracy could have existed; but then, it's hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. "The concept of 'evidence' had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at 'one level higher than you'." — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of LessWrong to avoid getting something-like-mind-killed about.

Comment author: benelliott 14 July 2013 02:36:11AM 1 point [-]

consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote

Why?

Comment author: drethelin 13 July 2013 04:04:29AM 15 points [-]

Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.

Comment author: benelliott 14 July 2013 02:30:35AM *  2 points [-]

It seems like a mess of tautologies and thought experiments

My own view is that this is precisely correct, and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don't suggests there is still work to be done.

Comment author: Kaj_Sotala 13 July 2013 06:56:30AM 3 points [-]

Would you have any specific example?

Comment author: benelliott 14 July 2013 02:22:54AM *  2 points [-]

I don't know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).

Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though the expectancy of the bet is zero. I would characterise risk-aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you're begging on the streets, not so much if you're a billionaire).
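
For concreteness, here is a minimal sketch (mine, with made-up wealth figures) of how a concave utility function, e.g. u(w) = log(w), makes declining a fair $1000 coin flip the expected-utility-maximising choice even though the bet's expected dollar value is zero:

```python
import math

def expected_utility(wealth, bet, u=math.log):
    """Expected utility of a fair coin flip for +/- `bet` dollars."""
    return 0.5 * u(wealth + bet) + 0.5 * u(wealth - bet)

wealth = 10_000   # hypothetical current wealth
bet = 1_000
print(expected_utility(wealth, bet))  # ~9.205
print(math.log(wealth))               # ~9.210 -> declining the bet is better
```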

However, something like the Allais paradox can be mathematically proven not to arise from any utility function, however non-linear, and therefore is not explainable by risk aversion. Uncertainty aversion is, roughly speaking, my name for whatever-it-is-that-causes-people-to-choose-irrationally-on-Allais. It seems to work by causing people to strongly prefer certain gains to high-probability gains, and much more weakly prefer high-probability gains to low-probability gains.
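
To make the "no utility function" claim concrete, here is a small sketch (mine, using the standard textbook Allais gambles rather than anything from this thread). With outcomes $0, $1M and $5M, normalise u($0) = 0 and u($1M) = 1 (assuming an increasing utility function) and scan over possible values of u($5M); no value makes the common pair of choices consistent with expected-utility maximisation:

```python
# Gamble 1A: $1M for sure       vs 1B: 89% $1M, 10% $5M, 1% $0
# Gamble 2A: 11% $1M, 89% $0    vs 2B: 10% $5M, 90% $0
# u5 below stands for u($5M), with u($0) = 0 and u($1M) = 1.

def prefers_1A(u5):
    # EU(1A) > EU(1B)  <=>  1 > 0.89*1 + 0.10*u5 + 0.01*0  <=>  u5 < 1.1
    return 1.0 > 0.89 + 0.10 * u5

def prefers_2B(u5):
    # EU(2B) > EU(2A)  <=>  0.10*u5 > 0.11*1  <=>  u5 > 1.1
    return 0.10 * u5 > 0.11

# The common Allais choices are 1A and 2B, but no u($5M) satisfies both:
print(any(prefers_1A(u5 / 100) and prefers_2B(u5 / 100) for u5 in range(1000)))
# -> False
```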

For the past few weeks I have been in an environment where casual betting for moderate-sized amounts ($1-2 on the low end, $100 on the high end) is common, and disentangling risk-aversion from uncertainty-aversion in my decision process has been a constant difficulty.

Comment author: gothgirl420666 13 July 2013 08:44:39PM *  3 points [-]

It's not obvious to me how tragedy of the commons/prisoner's dilemma is isomorphic to Newcomb's problem, but I definitely believe you that it could be. If TDT does in fact present a coherent solution to these types of problems, then I can easily see how it would be useful. I might try to read the pdf again sometime. Thanks.

Comment author: benelliott 14 July 2013 01:52:44AM 4 points [-]

They aren't isomorphic problems; however, it is the case that CDT two-boxes and defects while TDT one-boxes and co-operates (against some opponents).

Comment author: RomeoStevens 06 July 2013 10:26:07PM 12 points [-]

It's possible to write about characters cleverer than oneself by two means I can think of.

  1. having unlimited time to think about what your character arrives at in an instant

  2. getting multiple people to help with the above.

Comment author: benelliott 07 July 2013 02:18:23AM 5 points [-]

But at some point your character is going to think about something for more than an instant (if they don't, then I strongly contest that they are very intelligent). In the best-case scenario, it will take you a very long time to write this story, but I think there's some extent to which being more intelligent widens the range of thoughts you can ever think of.

Comment author: thomblake 03 July 2013 03:16:44PM 7 points [-]

What he means is that he wishes that books on memory charms fit that description - but in fact they're not guarded at all or even in the restricted section of the library.

Comment author: benelliott 04 July 2013 05:57:40AM 0 points [-]

That's clearly the first-level meaning. He's wondering whether there's a second meaning: a subtle hint that he has already done exactly that, maybe hoping that Harry will pick up on it and not saying it directly in case Dumbledore or someone else is listening, or maybe just a private joke.

Comment author: blacktrance 18 June 2013 07:44:29PM 0 points [-]

If you define "utility function" as "what agents maximize" then your above statement is true but tautological. If you define "utility function" as "an agent's relation between states of the world and that agent's hedons" then it's not true that you can only maximize your utility function.

Comment author: benelliott 18 June 2013 09:07:26PM 0 points [-]

I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively; I really don't see why a 'happiness function' would be even slightly interesting to decision theorists.

I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused.

However, I'm not particularly concerned about my statements being tautological; they were meant to be, since they are arguing against statements that are tautologically false.

Comment author: blacktrance 18 June 2013 07:43:53AM 1 point [-]

People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there was a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do - not because it's good for anyone, or because Divine Clippy would smite them for not doing it, just that morality demands that they do it. And so they choose to do it, even if they hate it.

Comment author: benelliott 18 June 2013 03:17:50PM 0 points [-]

In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smitten by Divine Clippy", and that maximising paperclips is an instrumental subgoal.

In many other cases, I would be happy to say that the person involved was simply not utilitarian, if their actions did not seem to maximise anything at all.

Comment author: patriota 19 May 2013 04:40:45AM 0 points [-]

The p-value for this problem is not 1/36. Notice that, we have the following two hypotheses, namely

H0: The Sun didn't explode, H1: The Sun exploded.

Then,

p-value = P("the machine returns yes", when the Sun didn't explode).

Now, note that the event

"the machine returns yes"

is equivalent to

"the neutrino detector measures the Sun exploding AND tells the true result" OR "the neutrino detector does not measure the Sun exploding AND lies to us".

Assuming that the dice throwing is independent of the neutrino detector measurement, we can compute the p-value. First define:

p0 = P("the neutrino detector measures the Sun exploding", when the Sun didn't explode),

then the p-value is

p-value = p0*(35/36) + (1-p0)*(1/36)

=> p-value = (1/36)(35p0 + 1 - p0)

=> p-value = (1/36)(1+34p0).

If p0 = 0, then we are considering that the detector machine will never measure that "the Sun just exploded". The value p0 is obviously incomputable; therefore, a classical statistician who knows how to compute a p-value would never say that the Sun just exploded. By the way, the cartoon is funny.
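
As a quick numerical check of the formula above (a small sketch, not a new derivation):

```python
# Check: p-value = p0*(35/36) + (1-p0)*(1/36) = (1 + 34*p0)/36.
def p_value(p0):
    """P(machine says 'yes' | the Sun didn't explode), for a given p0."""
    return p0 * (35 / 36) + (1 - p0) * (1 / 36)

for p0 in (0.0, 0.01, 0.5, 1.0):
    print(p0, p_value(p0), (1 + 34 * p0) / 36)  # the two expressions agree
# p0 = 0 gives 1/36 ~ 0.028; p0 = 1 gives 35/36 ~ 0.972
```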

Best regards, Alexandre Patriota.

Comment author: benelliott 16 June 2013 10:31:58AM 1 point [-]

(1/36)(1+34p0) is bounded below by 1/36; I think a classical statistician would be happy to say that the evidence has a p-value of 1/36 here. Same for any test where H_0 is a composite hypothesis: you just take the supremum.

A bigger problem with your argument is that it is a fully general counter-argument against frequentists ever concluding anything. All data has to be acquired before it can be analysed statistically, all methods of acquiring data have some probability of error (in the real world), and the probability of error is always 'unknowable', at least in the same sense that p0 is in your argument.

You might as well say that a classical statistician would not say the sun had exploded because he would be in a state of total Cartesian doubt about everything.

Comment author: CCC 11 June 2013 09:44:49AM 1 point [-]

Yes... at one million trials per run, you wouldn't expect much more than 20 flips in a run in any case. By my quick calculation, that should result in an average of around 3.64, with perhaps some variability due to a low-probability long string of, say, 30 heads turning up.

Yet you got an average of around 8. This suggests that a long chain of heads may be turning up slightly more often than random chance would suggest; that your RNG may be slightly biased towards long sequences.
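
(A rough sanity check on the "20 flips" figure, as a sketch that assumes each trial means flipping a fair coin until it comes up tails: the longest run of heads across N such trials is typically around log2(N).)

```python
import math
# Longest run of heads across N flip-until-tails trials is typically ~log2(N).
print(math.log2(1_000_000))  # ~19.93, so runs much beyond ~20 heads are rare
```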

Comment author: benelliott 13 June 2013 11:20:04AM 0 points [-]

So, I wrote a similar program to Phil's and got similar averages; here's a sample of 5 taken while writing this comment:

8.2 6.9 7.7 8.0 7.1

These look pretty similar to the numbers he's getting. Like Phil, I also get occasional results that deviate far from the mean, much more than you'd expect to happen with an approximately normally distributed variable.

I also wrote a program to test your hypothesis about the sequences being too long, running the same number of trials and seeing what the longest string of heads is; the results are:

19 22 18 25 23

Do these seem abnormal enough to explain the deviation, or is there a problem with your calculations?
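
For reference, here is a sketch of the longest-run check (my reconstruction, assuming a "trial" means flipping a fair coin until it comes up tails and that the quantity of interest is the longest string of heads across one million such trials):

```python
import random

def longest_heads_run(n_trials=1_000_000):
    """Longest string of heads seen across n_trials flip-until-tails trials."""
    longest = 0
    for _ in range(n_trials):
        heads = 0
        while random.random() < 0.5:  # heads with probability 1/2
            heads += 1
        longest = max(longest, heads)
    return longest

print([longest_heads_run() for _ in range(5)])  # e.g. values around 18-25
```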
