Comment author: Davidmanheim 28 June 2016 04:19:55PM 0 points [-]

Where will the Jul 3. Meetup be? (Same place?) Is there contact information available for when I try to show up?

Comment author: Davidmanheim 28 June 2016 04:17:45PM 0 points [-]

Any further meetups planned?

Comment author: Davidmanheim 24 June 2016 04:16:58AM 0 points [-]

Is Venkatesh Rao (ribbonfarm.com) a useful LW-adjacent twitter account?

Comment author: Davidmanheim 03 May 2016 09:35:34PM *  0 points [-]

A link to Tooby and Cosmides' paper cited in the intro: http://www.cep.ucsb.edu/papers/pfc92.pdf (Very long, but enlightening.)

Comment author: TheAncientGeek 28 January 2016 09:05:09AM 0 points [-]

It is not clearly the case that all probability is epistemic uncertainty. There is no valid argument that establishes that. There can be no armchair argument that establishes that, since the existence or otherwise of objective probability is a property of the universe, and has to be established by looking.

Comment author: Davidmanheim 01 February 2016 03:56:46PM 1 point [-]

OK. But, there is still some important epistemic uncertainty that people nonetheless treat as intrinsic, purely because derp.

Comment author: TheAncientGeek 26 January 2016 10:19:17AM 1 point [-]

Which is?

Comment author: Davidmanheim 27 January 2016 02:29:13AM 0 points [-]

That humans fall prey to the mind projection fallacy about much more consequential parts of what is clearly epistemic uncertainty.

Comment author: MrMind 26 January 2016 11:15:49AM 2 points [-]

Just a nitpick: not all interpretations treat quantum uncertainty as ontological. Many Worlds indeed says that it's just indexical (and so, epistemic) uncertainty.

Comment author: Davidmanheim 27 January 2016 02:27:38AM 0 points [-]

Yes, and that means we have epistemic uncertainty about whether there is ontological uncertainty at that level - but again, it's irrelevant to almost any question we would ask in decision making.

Comment author: TheAncientGeek 24 January 2016 09:12:20AM 1 point [-]

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory" - not that he's wrong, just that it's usually not a useful argument to have.

I don't know whether he is wrong in the sense that irreducible uncertainty exists in the territory, but the reasoning he uses to reach the conclusion is invalid.

Comment author: Davidmanheim 26 January 2016 02:43:14AM 0 points [-]

He's discussing a different point.

Comment author: TheMajor 23 January 2016 12:30:04PM *  2 points [-]

I am not convinced that there exists anything like aleatory uncertainty - even QM uncertainty lies in the map. Having said that I agree with your point: that this doesn't matter, and value of information is the relevant measure (which is clearly not binary).

Having read your response to Dagon I am now confused - you state that:

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory"

but above you only show the orthogonal point that allowing for irresolvable uncertainty can provide useful models, regardless of the existence of such uncertainty. If this is your main point (along with introducing the standard notation used in these models), how is this a contrast with uncertainty being in the map? Lots of good models have elements that cannot be found in real life, for example smooth surfaces, right angles or irreducible macroscopic building blocks.

Comment author: Davidmanheim 24 January 2016 02:47:27AM 0 points [-]

My main point was that it doesn't matter. Whether the irresolvable uncertainty exists in the territory isn't a question anyone can answer - I can only talk about my map.

Comment author: Dagon 22 January 2016 09:25:08PM 2 points [-]

I'd argue that there's a continuum of uncertainty from "already known" to "easily resolved" to "theoretically resolvable using current technology" to "theoretically resolvable with massive resources" to "theoretically resolvable by advanced civilizations".

There may or may not be an endpoint at "theoretically unknowable", but it doesn't matter. The point is that this isn't a binary distinction, and that categorizing by theory doesn't help us. The question for any decision theory is "what is the cost and precision that I can get with further modeling or measurements". Once the cost of information gathering is higher than the remaining risk due to uncertainty, you have to make the choice, and it's completely irrelevant how much of that remaining uncertainty is embedded in quantum unmeasurables and how much in simple failure to gather facts.
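Dagon's stopping rule, keep gathering information only while it is worth more than it costs, is the standard value-of-information calculation from decision theory. A minimal sketch follows; the two actions, two states, payoff numbers, and measurement cost are all hypothetical, chosen purely for illustration:

```python
# Hedged sketch: should we measure more before committing to an action?
# The setup (actions, states, losses, cost) is invented for illustration.

def expected_loss(action, p_state, loss):
    """Expected loss of an action under beliefs p_state (state -> probability)."""
    return sum(p * loss[(action, s)] for s, p in p_state.items())

def value_of_perfect_information(p_state, actions, loss):
    # Best we can do if forced to commit now, under current uncertainty.
    loss_now = min(expected_loss(a, p_state, loss) for a in actions)
    # Expected loss if we could learn the true state first, then choose.
    loss_informed = sum(
        p * min(loss[(a, s)] for a in actions)
        for s, p in p_state.items())
    return loss_now - loss_informed

# Hypothetical payoff table: losses for each (action, state) pair.
loss = {("act", "good"): 0, ("act", "bad"): 100,
        ("wait", "good"): 20, ("wait", "bad"): 20}
p_state = {"good": 0.7, "bad": 0.3}   # current (epistemic) beliefs
actions = ["act", "wait"]

voi = value_of_perfect_information(p_state, actions, loss)
cost_of_measurement = 15.0
print(voi)                            # worth of resolving the uncertainty
print(voi > cost_of_measurement)      # measure only if this is True
```

Note that nothing in the calculation asks whether the residual uncertainty is quantum unmeasurability or simple ignorance of facts; only the beliefs, losses, and measurement cost enter, which is exactly the point above.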

Comment author: Davidmanheim 22 January 2016 09:39:18PM *  0 points [-]

Absolutely - and that continuum is why I think that we should be OK with letting people call things "fundamental uncertainties." People in these circles spend a lot of time trying to define everything, or argue terminology, but we can get past that in order to note that for making most decisions, it's OK to treat some (not all) seemingly "random" things as fundamentally unknowable.

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory" - not that he's wrong, just that it's usually not a useful argument to have. Instead, as you note, we should ask about value of information and make the decision.
