What a reduction of "could" could look like

53 cousin_it 12 August 2010 05:41PM

By request of Blueberry and jimrandomh, here's an expanded repost of my comment, which was itself a repost of my email to decision-theory-workshop.

(Wait, I gotta take a breath now.)

A note on credit: I can only claim priority for the specific formalization offered here, which builds on Vladimir Nesov's idea of "ambient control", which builds on Wei Dai's idea of UDT, which builds on Eliezer's idea of TDT. I really, really hope to not offend anyone.

(Whew!)

Imagine a purely deterministic world containing a purely deterministic agent. To make it more precise, agent() is a Python function that returns an integer encoding an action, and world() is a Python function that calls agent() and returns the resulting utility value. The source code of both world() and agent() is accessible to agent(), so there's absolutely no uncertainty involved anywhere. Now we want to write an implementation of agent() that would "force" world() to return as high a value as possible, for a variety of different worlds and without foreknowledge of what world() looks like. So this framing of decision theory makes a subprogram try to "control" the output of a bigger program it's embedded in.
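To make the setup concrete, here is a minimal runnable instance. (This toy world and the constant agent are my own illustration, not part of the original formalization; the interesting versions of agent() reason about world()'s source code rather than hard-coding an action.)

```python
def agent():
    # A trivial agent: unconditionally choose action 1.
    return 1

def world():
    # A toy world: action 1 yields 10 utils, any other action yields 0.
    return 10 if agent() == 1 else 0

print(world())  # -> 10
```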

For example, here's Newcomb's Problem:

def world():
    box1 = 1000
    box2 = 0 if agent() == 2 else 1000000
    return box2 + (box1 if agent() == 2 else 0)


How to Measure Anything

50 lukeprog 07 August 2013 04:05AM

Douglas Hubbard’s How to Measure Anything is one of my favorite how-to books. I hope this summary inspires you to buy the book; it’s worth it.

The book opens:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

The sciences have many established measurement methods, so Hubbard’s book focuses on the measurement of “business intangibles” that are important for decision-making but tricky to measure: things like management effectiveness, the “flexibility” to create new products, the risk of bankruptcy, and public image.

 

Basic Ideas

A measurement is an observation that quantitatively reduces uncertainty. Measurements might not yield precise, certain judgments, but they do reduce your uncertainty.
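This definition can be made quantitative. A sketch using Shannon entropy (my choice of measure; Hubbard's book itself mostly works with 90% confidence intervals):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: a standard quantitative measure of uncertainty.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before a noisy measurement: four equally likely values of some quantity.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0 bits
# After: the observation ruled nothing out, but it shifted the odds.
print(entropy([0.7, 0.1, 0.1, 0.1]))      # fewer bits: uncertainty reduced
```

The second distribution is still "fuzzy", yet the measurement counts, because you know more than you did before.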

To be measured, the object of measurement must be described clearly, in terms of observables. A good way to clarify a vague object of measurement like “IT security” is to ask “What is IT security, and why do you care?” Such probing can reveal that “IT security” means things like a reduction in unauthorized intrusions and malware attacks, which the IT department cares about because these things result in lost productivity, fraud losses, and legal liabilities.

Uncertainty is the lack of certainty: the true outcome/state/value is not known.

Risk is a state of uncertainty in which some of the possibilities involve a loss.

Much pessimism about measurement comes from a lack of experience making measurements. Hubbard, who is far more experienced with measurement than his readers, says:

  1. Your problem is not as unique as you think.
  2. You have more data than you think.
  3. You need less data than you think.
  4. An adequate amount of new data is more accessible than you think.


Applied Information Economics

Hubbard calls his method “Applied Information Economics” (AIE). It consists of 5 steps:

  1. Define a decision problem and the relevant variables. (Start with the decision you need to make, then figure out which variables would make your decision easier if you had better estimates of their values.)
  2. Determine what you know. (Quantify your uncertainty about those variables in terms of ranges and probabilities.)
  3. Pick a variable, and compute the value of additional information for that variable. (Repeat until you find a variable with reasonably high information value. If no remaining variables have enough information value to justify the cost of measuring them, skip to step 5.)
  4. Apply the relevant measurement instrument(s) to the high-information-value variable. (Then go back to step 3.)
  5. Make a decision and act on it. (When you’ve done as much uncertainty reduction as is economically justified, it’s time to act!)

These steps are elaborated below.
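Step 3 is the mathematical heart of the method. As a hedged sketch (the project numbers and the uniform revenue range are invented purely for illustration), the expected value of perfect information (EVPI) for a single variable can be estimated by Monte Carlo:

```python
import random

def evpi(revenue_samples, cost):
    # Value if we could learn the true revenue before deciding:
    # we would launch only in the cases where revenue exceeds cost.
    with_info = sum(max(r - cost, 0) for r in revenue_samples) / len(revenue_samples)
    # Value of deciding now, under uncertainty:
    # launch only if expected revenue exceeds cost.
    mean_revenue = sum(revenue_samples) / len(revenue_samples)
    without_info = max(mean_revenue - cost, 0)
    return with_info - without_info

random.seed(0)
samples = [random.uniform(50_000, 200_000) for _ in range(100_000)]
print(evpi(samples, cost=150_000))  # positive: this variable is worth measuring
```

Step 4's "go back to step 3" loop then repeats this calculation for the remaining variables, measuring only while information value exceeds measurement cost.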


When (Not) To Use Probabilities

28 Eliezer_Yudkowsky 23 July 2008 10:58AM

Followup to: Should We Ban Physics?

It may come as a surprise to some readers of this blog, that I do not always advocate using probabilities.

Or rather, I don't always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute.  If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.

So sometimes you don't apply probability theory.  Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don't involve verbal probability assignments.

Not sure where a flying ball will land?  I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.


Train Philosophers with Pearl and Kahneman, not Plato and Kant

65 lukeprog 06 December 2012 12:42AM

Part of the sequence: Rationality and Philosophy

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.

Bertrand Russell

 

I've complained before that philosophy is a diseased discipline which spends far too much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn't know the slightest bit of 20th century science. Is that still the case?

You bet. There's some good philosophy out there, but much of it is bad enough to make CMU philosopher Clark Glymour suggest that on tight university budgets, philosophy departments could be defunded unless their work is useful to (cited by) scientists and engineers — just as his own work on causal Bayes nets is now widely used in artificial intelligence and other fields.

How did philosophy get this way? Russell's hypothesis is not too shabby. Check the syllabi of the undergraduate "intro to philosophy" classes at the world's top 5 U.S. philosophy departments — NYU, Rutgers, Princeton, Michigan Ann Arbor, and Harvard — and you'll find that they spend a lot of time with (1) old dead guys who were wrong about almost everything because they knew nothing of modern logic, probability theory, or science, and with (2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy. (I say more about the reasons for philosophy's degenerate state here.)

As the CEO of a philosophy/math/compsci research institute, I think many philosophical problems are important. But the field of philosophy doesn't seem to be very good at answering them. What can we do?

Why, come up with better philosophical methods, of course!

Scientific methods have improved over time, and so can philosophical methods. Here is the first of my recommendations...


Scholarship: How to Do It Efficiently

113 lukeprog 09 May 2011 10:05PM

Scholarship is an important virtue of rationality, but it can be costly. Its major costs are time and effort. Thus, if you can reduce the time and effort required for scholarship - if you can learn to do scholarship more efficiently - then scholarship will be worth your effort more often than it previously was.

As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly. I'll share my research habits with you now.

 

Review articles and textbooks are king

My first task is to find scholarly review (or 'survey') articles on my chosen topic from the past five years (the more recent, the better). A good review article provides:

  1. An overview of the subject matter of the field and the terms being used (for scholarly googling later).
  2. An overview of the open and solved problems in the field, and which researchers are working on them.
  3. Pointers to the key studies that give researchers their current understanding of the topic.

If you can find a recent scholarly edited volume of review articles on the topic, then you've hit the jackpot. (Edited volumes are better than single-author volumes, because when starting out you want to avoid reading only one particular researcher's perspective.) Examples from my own research of just this year include:

If the field is large enough, there may exist an edited 'Handbook' on the subject, which is basically just a very large scholarly edited volume of review articles. Examples: Oxford Handbook of Evolutionary Psychology (2007), Oxford Handbook of Positive Psychology (2009), Oxford Handbook of Philosophy and Neuroscience (2009), Handbook of Developmental Cognitive Neuroscience (2008), Oxford Handbook of Neuroethics (2011), Handbook of Relationship Initiation (2008), and Handbook of Implicit Social Cognition (2010). For the humanities, see the Blackwell Companions and Cambridge Companions.

If your questions are basic enough, a recent entry-level textbook on the subject may be just as good. Textbooks are basically book-length review articles written for undergrads. Textbooks I purchased this year include:

Use Google Books and Amazon's 'Look Inside' feature to see if the books appear to be of high quality, and likely to answer the questions you have. Also check the textbook recommendations here. You can save money by checking Library Genesis and library.nu for a PDF copy first, or by buying used books, or by buying ebook versions from Amazon, B&N, or Google.


Common sense as a prior

33 Nick_Beckstead 11 August 2013 06:18PM

Introduction

[I have edited the introduction of this post for increased clarity.]

This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:

Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.

The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following it.

The rough idea is to find a group of people who are trustworthy by clear and generally accepted indicators, and then use an impartial combination of the reasoning standards that they use when they are trying to have accurate views. I call this impartial combination elite common sense. I recommend using elite common sense as a prior in two senses. First, if you have no unusual information about a question, you should start with the same opinions as the broad coalition of trustworthy people would have. But their opinions are not the last word, and as you get more evidence, it can be reasonable to disagree. Second, a complete prior probability distribution specifies, for any possible set of evidence, what posterior probabilities you should have. In this deeper sense, I am not just recommending that you start with the same opinions as elite common sense, but also that you update in ways that elite common sense would agree are the right ways to update. In practice, we can’t specify the prior probability distribution of elite common sense or calculate the updates, so the framework is most useful from a conceptual perspective. It might also be useful to consider the output of this framework as one model in a larger model combination.

I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole, and in that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
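For the "impartial combination" that equal weight views describe, one simple scheme (my own illustration; the post does not commit to any specific combination rule) is linear opinion pooling, a weighted average of the individual subjective probabilities:

```python
def pool(distributions, weights=None):
    # Linear opinion pool: a weighted average of subjective probability
    # distributions, each given as a dict mapping outcomes to probabilities.
    if weights is None:
        weights = [1 / len(distributions)] * len(distributions)  # equal weight
    outcomes = distributions[0].keys()
    return {o: sum(w * d[o] for w, d in zip(weights, distributions))
            for o in outcomes}

alice = {"rain": 0.7, "dry": 0.3}
bob = {"rain": 0.3, "dry": 0.7}
print(pool([alice, bob]))  # rain and dry each pool to 0.5
```

Weighting an "elite" subset more heavily, rather than pooling all of humanity equally, corresponds to choosing non-uniform weights.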

I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.

In the rest of this post I will:

  1. Outline the framework and offer guidelines for applying it effectively. I explain why I favor relying on the epistemic standards of people who are trustworthy by clear indicators that many people would accept, why I favor paying more attention to what people think than why they say they think it (on the margin), and why I favor stress-testing critical assumptions by attempting to convince a broad coalition of trustworthy people to accept them.
  2. Offer some considerations in favor of using the framework.
  3. Respond to the objection that common sense is often wrong, the objection that the most successful people are very unconventional, and objections of the form “elite common sense is wrong about X and can’t be talked out of it.”
  4. Discuss some limitations of the framework and some areas where it might be further developed. I suspect it is weakest in cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit, and cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.


The Dark Arts: A Beginner's Guide

29 faul_sname 21 January 2012 07:05AM

 

The Dark Arts

So you've been reading this site and learning many valuable tools for becoming more rational. You're beginning to become irritated at the irrational behavior of the average person. You've noticed that many people refuse to accept even highly compelling arguments, even as they drink up the doctrine of their favorite religion/political party. What are you doing wrong?

As it turns out, it's less about what you're doing wrong than about what these highly influential groups are doing right. This is a brief intro to the Dark Arts, ranging from relatively harmless or even helpful techniques to truly dangerous ones. In this set of guidelines, I have used the example of Solar Suzy, a contractor for a company that sells solar panels, and Business Owner Bob, who runs an organic food store. You will quickly notice that Solar Suzy is not very ethical. This is not an accident: these techniques can very easily be used unethically. They're called the Dark Arts for a reason, and this example will make sure that's kept in mind.

The fact that a technique can be used to do bad things, however, doesn't mean we shouldn't learn the technique. These methods can be used when you don't have time to wait for someone to slowly change their mind, fighting every step of the way. Even if you plan to never use them, it is probably a good idea to be aware of what they look like.

 

The Rules

Be simple.

 

  • Use simple words. We're lazy. We don't like having to put in effort to understand something.
  • Be brief. Try to say your idea in 30 words or less.

 

 

Compare:

Bob: "I've run the numbers. It would take over 30 years for the solar panels to pay for themselves."

Suzy: "Despite the inherent economic disadvantage in the utilization of photovoltaic cells in preference to petroleum and coal, it may under particular circumstances benefit corporate entities insofar as the collective ethical standard of society values such a demonstration of conservationism."

Bob [baffled and a little annoyed]: ... Uhhhhh... Sure... I think I'm fine with what I have. Bye.

-----

Bob: "I've run the numbers. It would take over 30 years for the solar panels to pay for themselves."

Suzy: "Solar panels don't look cost-effective compared to fossil fuels, but sometimes the extra business a company gets for being 'green' makes them worthwhile."

Bob [interested, but skeptical]: Hmm. I hadn't thought of that. Is it really significant, though?

 

Use positive language.

 

  • When possible, use agreeable words. Imagine that you have to pay for each instance of 'no' type words.
  • If you can get your partner to agree to a chain of statements, they will be more likely to agree to the next one.

 

 

Now the conversation looks more like this:

Suzy: You are interested in solar panels, right?

Bob [wary]: That's correct.

Suzy [seriously]: Of course, we'll want to make sure it's actually worth it for your business.

Bob [more firmly]: Absolutely.

Suzy [after a slight, thoughtful pause]: Would you say that your average customer is concerned about the environment?

Bob: Yes, I think so.

Suzy [as if coming to a realization]: You could probably increase your business by advertising that you're going green.

Bob [thoughtful]: I see your point. That might well work.

Suzy [enthusiastic]: Great! Let's get you signed up.

Bob [wary again]: ...

 

Make sure your partner thinks you are like them.

 

  • Emphasize common opinions. Mock opinions you both disagree with, being careful to stay away from any ideas they might be sympathetic to. This establishes two things. First, it demonstrates that you have high enough status that you can afford to alienate others (but not them, of course). Second, it establishes a common ground.
  • Use the same sensory language as they use. If they talk about seeing patterns, incorporate the words 'look' and 'examine' in your conversation.

 

 

Create an external cognitive load on your partner.

 

  • Here we begin to wander into the darker arts. People can only do so much with their mind at one time. Anything you can do to add a cognitive burden will take away from their ability to reject ideas. Taking a walk will impose approximately the right amount of strain. This cognitive burden is the reason you want to keep your ideas simple. Simple ideas can still be absorbed with an external load.

 

 

Have your partner come up with the ideas whenever possible

 

  • A thought you come up with on your own (or feel like you came up with on your own) goes through far less filtering than an outside idea. Creating a cognitive burden can reduce this filter, but will not eliminate it entirely.

 

 

Incorporating what we know so far, Suzy's sales pitch begins to look more effective.

Suzy: Let's take a look around the area. I'd like to get a feel for the neighborhood.

Bob: Okay.

Suzy [spoken in a lower voice, as if sharing a personal secret]: Besides, formal meetings are always so uncomfortable.

Bob [laughing]: I know. There's a reason I got out of the corporate world.

[small talk]

Suzy [more seriously]: Well, I suppose we should probably get down to business. You were looking into solar panels for your business.

Bob [momentarily off-balance]: ...Yes.

Suzy: Of course, we'll need to make sure it's a good decision for you to install them.

Bob: Of course.

Suzy: Now, looking at your customers, I see a lot of signs of environmentalism.

Bob: Yes. Our customers tend to be the sort who care about our planet.

Suzy [noticing the organic food]: Wow, you really cater to their tastes.

Bob: I do my best. Many of my customers come here because we only buy from organic growers.

Suzy: So the solar panels wouldn't look out of place here.

Bob: Out of place? They would probably attract customers.

Suzy [surprised]: I suppose they would. That alone might well make them worth it.

[contract, hopefully]

 

"You're the sort of person who..."

This is one of the most potent and dangerous tools in your arsenal. If you say that you admire someone's open-mindedness, they will make an effort not to let you down. Your mind evaluates the truth of a statement by judging how easy it is to come up with examples of truth. If you phrase it vaguely enough and make it a compliment, you can convince anyone that they have just about any trait.

This is dangerous because the idea of what kind of person they are will stick with the target for long after your conversation is over. This is the darkest of the dark arts that I am familiar with. Use it sparingly, or if possible not at all. I have used it once that I remember, and that was in a rather extreme situation.

 

Can the Chain Still Hold You?

108 lukeprog 13 January 2012 01:28AM

Robert Sapolsky:

Baboons... literally have been the textbook example of a highly aggressive, male-dominated, hierarchical society. Because these animals hunt, because they live in these aggressive troupes on the Savannah... they have a constant baseline level of aggression which inevitably spills over into their social lives.

Scientists have never observed a baboon troupe that wasn't highly aggressive, and they have compelling reasons to think this is simply baboon nature, written into their genes. Inescapable.

Or at least, that was true until the 1980s, when Kenya experienced a tourism boom.

Sapolsky was a grad student, studying his first baboon troupe. A new tourist lodge was built at the edge of the forest where his baboons lived. The owners of the lodge dug a hole behind the lodge and dumped their trash there every morning, after which the males of several baboon troupes — including Sapolsky's — would fight over this pungent bounty.

Before too long, someone noticed the baboons didn't look too good. It turned out they had eaten some infected meat and developed tuberculosis, which kills baboons in weeks. Their hands rotted away, so they hobbled around on their elbows. Half the males in Sapolsky's troupe died.

This had a surprising effect. There was now almost no violence in the troupe. Males often reciprocated when females groomed them, and males even groomed other males. To a baboonologist, this was like watching Mike Tyson suddenly stop swinging in a heavyweight fight to start nuzzling Evander Holyfield. It never happened.


Value of Information: Four Examples

76 Vaniver 22 November 2011 11:02PM

Value of Information (VoI) is a concept from decision analysis: how much answering a question allows a decision-maker to improve its decision. Like opportunity cost, it's easy to define but often hard to internalize; and so instead of belaboring the definition let's look at some examples.
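Here is the definition in its simplest form (the bet and its numbers are my own toy example, not one of the post's): the value of perfect information about a binary event is the expected payoff of deciding after learning the outcome, minus the expected payoff of the best decision you can make now.

```python
def value_of_perfect_info(p, win, loss):
    # Best decision now: take the bet iff it has positive expected value
    # (0 represents the option of not betting at all).
    act_now = max(p * win - (1 - p) * loss, 0)
    # With perfect information, we bet exactly when we would win.
    with_info = p * win
    return with_info - act_now

# A 75% chance to win $100, else lose $100:
print(value_of_perfect_info(0.75, 100, 100))  # -> 25.0
```

Without information you bet anyway (expected value $50); an oracle's answer raises that to $75, so the answer is worth at most $25 to you.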


Bayes for Schizophrenics: Reasoning in Delusional Disorders

88 Yvain 13 August 2012 07:22PM

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.
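The Bayesian framing can be made concrete with the odds form of Bayes' theorem. (The numbers below are mine, purely to illustrate the mechanism: a hypothesis held with near-certainty, like the anosognosic's "my arm works", barely moves even under strong contrary evidence.)

```python
def posterior(prior, likelihood_ratio):
    # Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Suppose the evidence is 100x more likely if the arm is paralyzed than if
# it works (likelihood ratio 0.01 for the hypothesis "my arm works"):
print(posterior(0.9, 0.01))       # a normal prior is overturned (~0.08)
print(posterior(0.999999, 0.01))  # a pathologically rigid prior barely moves
```

On this toy model, a delusion behaves like a prior so extreme that no realistic amount of evidence can dislodge it.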

