Map:Territory :: Uncertainty:Randomness – but that doesn’t matter; value of information does.

6 Davidmanheim 22 January 2016 07:12PM

In risk modeling, there is a well-known distinction between aleatory and epistemic uncertainty, sometimes referred to, or thought of, as irreducible versus reducible uncertainty. Epistemic uncertainty exists in our map; as Eliezer put it, “The Bayesian says, ‘Uncertainty exists in the map, not in the territory.’” Aleatory uncertainty, however, exists in the territory. (Well, at least according to our map that uses quantum mechanics, per Bell’s theorem – like, say, the time at which a radioactive atom decays.) This is what people call quantum uncertainty, indeterminism, true randomness, or recently (and somewhat confusingly, to me) ontological randomness – referring to the fact that our ontology allows randomness, not that the ontology itself is in any way random. It may be better, in LessWrong terms, to think of uncertainty versus randomness – while being aware that the wider world refers to both as uncertainty. But does the distinction matter?

To clarify a key point, many facts that are treated as random, such as dice rolls, are actually mostly uncertain – in that with enough physics modeling and inputs, we could predict them. On the other hand, in chaotic systems, “true” quantum randomness can propagate upward into macro-level uncertainty. For example, a sphere of highly refined and shaped uranium that is *exactly* at the critical mass will set off a nuclear chain reaction, or not, based on the quantum physics of whether the neutrons from one of the first set of decays trigger a chain reaction – after enough atoms decay, the sphere drops below the critical mass and becomes increasingly unlikely to set one off. Of course, whether the sphere is above or below the critical mass (given its geometry, etc.) is a difficult-to-measure uncertainty, but it’s not aleatory – though some part of the question of whether it kills the person trying to measure whether it’s just above or just below the critical mass will be random – so maybe it’s not worth finding out. And that brings me to the key point.

In a large class of risk problems, there are factors treated as aleatory – but they may be epistemic, just at a level where finding the “true” factors and outcomes is prohibitively expensive. Potentially, the timing of an earthquake that will happen at some point in the future could be determined exactly via a simulation of the relevant data. Why is it considered aleatory by most risk analysts? Well, doing so might require a destructive, currently technologically impossible deconstruction of the entire earth – making the earthquake irrelevant. We would start with measurement of the position, density, and stress of each relatively macroscopic structure, and then perform a very large physics simulation of the earth as it had existed beforehand. (We have lots of silicon from deconstructing the earth, so I’ll just assume we can now build a big enough computer to simulate this.) Of course, this is not worthwhile – but doing so could potentially show that the actual aleatory uncertainty involved is negligible. Or it could show that we need to model the macroscopically chaotic system at such high fidelity that microscopic, fundamentally indeterminate factors actually matter – and it was truly aleatory uncertainty. (So we have epistemic uncertainty about whether it’s aleatory; if our map were of high enough fidelity, and were computable, we would know.)

It turns out that most of the time, for the types of problems being discussed, this distinction is irrelevant. If we know that the value of information needed to determine whether something is aleatory or epistemic is negative, we can treat the uncertainty as randomness. (And usually, we can figure this out with a quick order-of-magnitude calculation: the value of perfect information about which side the dice lands on in this game is estimated at $100, but building and testing / validating any model for predicting it would take me at least 10 hours, and my time is worth at least $25/hour – so the net value is negative.) But sometimes, slightly improved models and slightly better data are feasible – and then it’s worth checking whether there is some epistemic uncertainty that we can pay to reduce. In fact, for earthquakes, we’re doing that – we have monitoring systems that can give several minutes of warning, and geological models that can predict, to some degree of accuracy, the relative likelihood of different-sized quakes.
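The quick order-of-magnitude check above can be sketched in a few lines of Python. This is a minimal illustration, not a general value-of-information framework; the numbers are the ones from the dice example, and the function name is my own:

```python
def worth_reducing_uncertainty(value_of_perfect_info, hours_to_model, hourly_rate):
    """Back-of-the-envelope check: is buying information worth more than it costs?

    If not, treat the uncertain factor as if it were aleatory randomness.
    """
    cost_of_model = hours_to_model * hourly_rate
    return value_of_perfect_info > cost_of_model

# Dice-game numbers from the text: $100 value of perfect information,
# at least 10 hours of modeling at $25/hour, so a cost of $250.
print(worth_reducing_uncertainty(100, 10, 25))  # prints False: treat as random
```

If the stakes rose (say the same bet were worth $1,000), the same check would flip, which is exactly the point: the aleatory/epistemic label matters less than whether reducing the uncertainty pays for itself.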

So, in conclusion: most uncertainty is lack of resolution in our map, which we can call epistemic uncertainty. This is true even if lots of people call it “truly random” or irreducibly uncertain – or, if they are being fancy, aleatory uncertainty. Some of what we assume is uncertainty really is randomness. But lots of the epistemic uncertainty can be safely treated as aleatory randomness, and value of information is what actually makes the difference. And knowing the terminology used elsewhere can be helpful.

Comment author: Davidmanheim 14 January 2016 05:42:55AM 0 points [-]

Also, a little bit of recommended reading: What is EA? - http://www.effectivealtruism.org/about-ea What Evidence Filtered Evidence? - http://lesswrong.com/lw/jt/what_evidence_filtered_evidence/

Meetup : Finding Effective Altruism with Biased Inputs on Options - LA Rationality Weekly Meetup

1 Davidmanheim 14 January 2016 05:31AM

Discussion article for the meetup : Finding Effective Altruism with Biased Inputs on Options - LA Rationality Weekly Meetup

WHEN: 20 January 2016 07:00:00PM (-0800)

WHERE: 10850 West Pico Boulevard, Los Angeles, CA 90064 – Westside Pavilion, Upstairs Wine Bar (next to the movie theater)

We're going to be discussing the general question of how to use biased information to make rational decisions, but talk about the specific context of how to be an Effective Altruist doing so.

The various EA nonprofits each have a claim to effective altruism, and there is lots of uncertainty about which will end up being the most effective: we can give to AMF and save lives in the near future for around $1,000 a life, or try policy interventions with unknown effects, or perhaps we should try to prevent one of several potential tail risks that could destroy humanity in the near or far future. The experts in each area argue for their cause, and we'd love a clearer way to think about the options. Come join us as we try to find one!

Comment author: Davidmanheim 07 December 2015 05:45:09PM 0 points [-]

There are very few majors / areas of study where a single focus isn't significantly improved by a minor - and frequently, if it's not your major, Comp Sci is a great additional skillset. This is especially true if you need to take the credits anyway, and can choose between random courses or completing a minor with just a bit more work.

You want to do science? Almost no area doesn't need programming as well - it will help you get into grad school. You want to work in business? You'll spend half your day working on spreadsheets, and a CS background is invaluable for making that work better.

You want to do computer programming already? Great - what type? Because a minor elsewhere will be a bonus! Video games? Comp Sci + Graphic Design or Literature. Corporate work? Comp Sci + Business, Accounting, or Finance.

Comment author: Clarity 25 August 2015 11:08:25AM *  4 points [-]

I want existing friends to be more aware of my preferences and interests so that they'll be able to match and meet them more often (i.e., suggesting an activity I will like). I guess I can do that by constructing a profile with just likes and such. I will do that.

I also want to meet people who share those interests, so I will begin joining and participating in appropriate groups.

I also want to interact more with other rationalists, but from what I've seen nothing of merit comes from rationalist facebook groups (based on LessWrong associated groups).

Thanks for your extremely actionable suggestion and question. I don't know how you came up with the appropriate question to ask me!

Comment author: Davidmanheim 26 August 2015 05:29:47PM 0 points [-]

In consulting and in policy analysis, one of the first steps of problem solving is laying out the problem clearly.

As a start, I recommend Ken Watanabe's fun and readable "Problem Solving 101: A Simple Book for Smart People."

Comment author: Clarity 23 August 2015 12:39:26PM 1 point [-]

How should I use facebook, assuming I have a facebook but don't post anything, just message as of now?

Comment author: Davidmanheim 24 August 2015 11:57:46PM 2 points [-]

You need to be clearer about your goals.

Do you want more interaction with existing friends? Do you want to meet new people? Do you want an easier way to interact with other rationalists?

Comment author: ChristianKl 11 August 2015 06:53:50PM 1 point [-]

We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path.

Simply making random trades in a market where some participants are front-runners means that some of those trades are with front-runners, where you lose money.

I would call that systematically losing money. On the other hand, it doesn't give you the ability to forecast where you will lose the money, so as to make the opposite bet and win money.

Do you think our disagreement is about the way the EMH is defined or are you pointing to something more substantial?

Comment author: Davidmanheim 19 August 2015 11:47:27PM 0 points [-]

No, no disagreement about EMH, that's exactly the point.

Comment author: bbleeker 11 August 2015 08:50:44AM 5 points [-]

Anonymous voting is the default, and I always leave it on.

Comment author: Davidmanheim 19 August 2015 11:45:14PM -1 points [-]

I'd prefer to see accountability be a default, with anonymity whenever desired.

Comment author: ChristianKl 10 August 2015 06:30:39PM 2 points [-]

That leaves the question of whether that's okay or whether we should simply disable the account.


Comment author: Davidmanheim 11 August 2015 04:36:22AM 0 points [-]

It's interesting looking at the raw data breakdown of non-anonymous versus anonymous votes.

Comment author: Lumifer 11 August 2015 01:04:18AM 3 points [-]

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent.

Well, that looks like an "offering to buy a stock for $1 more than its current price" scenario. You can easily lose a lot of money by buying things at the offer and selling them at the bid :-)

But let's imagine a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid.

Assuming you can competently express a market view, can you systematically lose money by consistently taking the wrong side under EMH?

Comment author: Davidmanheim 11 August 2015 04:22:39AM 1 point [-]

Yes. Unless you think that all possible market information is already reflected in prices before it becomes available, someone makes money when information emerges and moves the market.
