GAZP vs. GLUT
Followup to: The Generalized Anti-Zombie Principle
In "The Unimagined Preposterousness of Zombies", Daniel Dennett says:
To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a "logically possible" but fantastic being — a descendent of Ned Block's Giant Lookup Table fantasy...
A Giant Lookup Table, in programmer's parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes the product each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you're going to reuse the function a lot and it doesn't have many possible inputs; or when clock cycles are cheap while you're initializing, but very expensive while executing.
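The trade-off above can be sketched in a few lines of Python (the function names are illustrative, not from any particular codebase):

```python
# Computing on demand: a little arithmetic on every call.
def multiply(a, b):
    return a * b

# The Giant Lookup Table: pay once at initialization for 10,000
# precomputed entries, then answer every call with a pure lookup.
GLUT = [[a * b for b in range(1, 101)] for a in range(1, 101)]

def multiply_glut(a, b):
    # No arithmetic at call time beyond index adjustment.
    return GLUT[a - 1][b - 1]

assert multiply_glut(37, 42) == multiply(37, 42) == 1554
```

Both functions are extensionally identical over their domain; the only difference is when the work happens.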
Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 × 10^585 entries.
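The arithmetic checks out directly: 850 word choices per slot, ten words per remark, twenty remarks per conversation gives 850^200 entries, and Python's arbitrary-precision integers can evaluate it exactly:

```python
# 850 choices per word slot, 10 words per remark, 20 remarks deep.
entries = 850 ** (10 * 20)

# Read off the scientific-notation form from the exact integer.
s = str(entries)
exponent = len(s) - 1           # order of magnitude: 585
mantissa = s[0] + "." + s[1]    # leading digits: "7.6"

print(mantissa, "x 10^", exponent)
```

So the count is about 7.6 × 10^585, comfortably beyond the roughly 10^80 atoms estimated to be in the observable universe.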
Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But "in principle", as philosophers are fond of saying, it could be done.
There just has to be something more, you know?
A non-materialist thought experiment.
Okay, so you don't exactly believe in the God of the Abrahamic scriptures verbatim who punishes and sets things on fire and lives in the sky. But still, there just has to be something more than just matter and energy, doesn't there? You just feel it. If you don't, try to remember when you did, or at least empathize with someone you know who does. After all, you have a mind, you think, you feel — you feel for crying out loud — and you must realize that can't be made entirely of things like carbon and hydrogen atoms, which are basically just dots with other dots swirling around them. Okay, maybe they're waves, but at least sometimes they act like dots. Start with a few swirling dots… now add more… keep going, until it equals love. It just doesn't seem to capture it.
In fact, now that you think about it, you know your mind exists. It's right there: it's you. Your "experiencing self". Maybe you call it a spirit or soul; I don't want to fix too rigid a description in case it wouldn't quite match your own. But cogito-ergo-sum, it's definitely there! By contrast, this particle business is just a mathematical concept — a very smart one, of course — thought of by scientists to explain and predict a bunch of carefully designed and important measurements. Yes, it does that extremely well, and you're not downplaying that. But that doesn't explain how you see blue, or taste strawberry — something you have direct access to. Particles might not even exist, if that means anything to say. It might just be that observation itself follows a mathematical pattern that we can understand better by visualizing dots and waves. They might not be real.
So actually, your mind or spirit — that thing you feel, that you — is a much more certain existent than scientific "matter". That must be something very important to understand! Certainly you can tell your mind has different parts to it: hearing, seeing, reasoning, moving, remembering, empathizing, picturing, yearning… When you think of all the things you can remember alone — or could remember — the complexity of all that data is mindbogglingly vast. Imagine the task of actually having to take it all apart and describe it completely… it could take aeons…
The Machine Learning Personality Test
You've probably heard of the Myers-Briggs personality test, which is a classification system of 16 different personality types based on the writings of Carl Jung, a man who believed that his library books sometimes spontaneously exploded. Its main advantage is that it manages to classify people without insulting them. (This is accomplished by confounding dimensions: instead of measuring one property of personality along one dimension, which leads to some scores being considered better than others, you subtract a measurement of one desirable property of personality from a measurement of another desirable property, and call the result one dimension.)
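The confounding trick is just a subtraction; a tiny sketch (the trait names and scores are hypothetical, for illustration only):

```python
# Hypothetical sketch: collapse two desirable traits into one axis,
# so neither end of the resulting scale reads as an insult.
def confounded_dimension(thinking_score, feeling_score):
    # Positive -> labeled a "T" type, negative -> labeled an "F" type.
    # Both labels sound like virtues; no one scores "low" on anything.
    return thinking_score - feeling_score

assert confounded_dimension(8, 3) == 5    # reads as "Thinking"
assert confounded_dimension(2, 9) == -7   # reads as "Feeling"
```

A one-trait scale would have a flattering end and an unflattering end; the difference of two flattering traits has neither.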
You've probably also heard of the MMPI, a test designed by giving lots of questions to mental patients and seeing which ones were answered differently by people with particular diagnoses. This is more like personality clustering for fault diagnosis than a personality test. You may find it useful if you're crazy. (One of the criticisms of this test is that religious people often test as psychotic: "Do you sometimes think someone else is directing your actions? Is someone else trying to plan events in your life?" Is that a bug, or a feature?)
You may have heard of the Personality Assessment Inventory, a test devised by listing things that psychotherapists thought were important, and trying to come up with questions to test them.
The Big 5 personality test is constructed in a well-motivated way, using factor analysis to try to discover from the data what the true dimensions of personality are.
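Factor analysis proper models per-question noise separately from the shared factors; as a rough sketch of the underlying idea, here is its simpler cousin, principal component analysis, recovering latent dimensions from synthetic questionnaire data (all sizes and names here are illustrative assumptions, not the Big 5 methodology itself):

```python
import numpy as np

# Synthetic data: 200 respondents answer 12 questions, but their
# answers are secretly driven by just 2 latent personality traits.
rng = np.random.default_rng(0)
n_people, n_questions, n_factors = 200, 12, 2

latent = rng.normal(size=(n_people, n_factors))        # true traits
loadings = rng.normal(size=(n_factors, n_questions))   # question mix
noise = 0.1 * rng.normal(size=(n_people, n_questions))
answers = latent @ loadings + noise

# PCA via SVD: how many dimensions carry the variance in the answers?
centered = answers - answers.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values ** 2

# Two latent traits generated the data, so the first two components
# should account for nearly all the variance; the rest is noise.
assert variance[:2].sum() / variance.sum() > 0.9
```

The Big 5 construction runs this logic in reverse: given only the answers, ask how many well-separated factors the variance supports.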
But these all work from the top down, looking at human behavior (answers), and trying to uncover latent factors farther down. I'm going to propose a personality system that instead starts from the very bottom of your hardware and leaves it to you to work your way up to the variables of interest: the Machine Learning Personality Test ("MLPT").
The Importance of Goodhart's Law
This article introduces Goodhart's law, gives a few examples, proposes an origin for the law, and lists a few general mitigations.
Goodhart's law states that once a social or economic measure is turned into a target for policy, it will lose any information content that had qualified it to play such a role in the first place (Wikipedia). The law is named for its originator, Charles Goodhart, a chief economic advisor to the Bank of England.
The much more famous Lucas critique is a relatively specific formulation of the same idea.
The most famous examples of Goodhart's law are the Soviet factories which, when given targets based on the number of nails produced, made many tiny, useless nails, and when given targets based on weight, made a few giant nails. Both number and weight correlated well with useful output before central planning; once they were made targets (at different times), they lost that value.
My Fundamental Question About Omega
Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.
A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those who missed them, other articles include Bead Jars and The Lifespan Dilemma.
Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.
Ethics has Evidence Too
A tenet of traditional rationality is that you can't learn much about the world from armchair theorizing. Theory must be epiphenomenal to observation -- our theories are functions that tell us what experiences we should anticipate, but we generate the theories from *past* experiences. And of course we update our theories on the basis of new experiences. Our theories respond to our evidence, usually not the other way around. We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning. Philosophy from Plato through Descartes to Kant is replete with failed examples of theorizing about the natural world on the basis of something other than empirical observation: Socrates thinks he has deduced that souls are immortal; Descartes thinks he has deduced that he is an immaterial mind, that he is immortal, that God exists, and that he can have secure knowledge of the external world; Kant thinks he has proven by pure reason the necessity of Newton's laws of motion.
These mistakes aren't just found in philosophy curricula. There is a long list of people who thought they could deduce Euclid's theorems as analytic or a priori knowledge. Epicycles were a response to new evidence but they weren't a response that truly privileged the evidence. Geocentric astronomers changed their theory *just enough* so that it would yield the right predictions instead of letting a new theory flow from the evidence. Same goes for pre-Einsteinian theories of light. Same goes for quantum mechanics. A kludge is a sign someone is privileging the hypothesis. It's the same way many of us think the Italian police changed their hypothesis explaining the murder of Meredith Kercher once it became clear Lumumba had an alibi and Rudy Guede's DNA and hand prints were found all over the crime scene. They just replaced Lumumba with Guede and left the rest of their theory unchanged even though there was no longer reason to include Knox and Sollecito in the explanation of the murder. These theories may make it over the bar of traditional rationality but they sail right under what Bayes theorem requires.
Most people here get this already and many probably understand it better than I do. But I think it needs to be brought up in the context of our ongoing discussion of normative ethics.
Unless we have reason to think about ethics differently, our normative theories should respond to evidence in the same way we expect our theories in other domains to respond to evidence. What are the experiences that we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions. When faced with certain situations in real life or in fiction we get strong impulses to react in certain ways, to praise some parties and condemn others. We feel guilt and sometimes pay amends. There are some actions which we have a visceral abhorrence of.
These reactions are for ethics what measurements of time and distance are for physics -- the evidence.
Deontology for Consequentialists
Consequentialists see morality through consequence-colored lenses. I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.
Consequentialism[1] is built around a group of variations on the following basic assumption:
- The rightness of something depends on what happens subsequently.
It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article. "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act[2] consequentialism". I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints. All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple". But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.
To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.
Deontology relies on features other than what happens after the act to judge it. This leaves facts about times prior to and during the act to determine whether it is right or wrong. These may include, but are not limited to:
- The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
- The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
- Historical facts (e.g. having made a promise, sworn a vow)
- Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
- Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
- The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
The Price of Integrity
Related Posts: Prices or Bindings?
On the evening of August 14th, 2006, a pair of Fox News journalists, Steve Centanni and Olaf Wiig, were seized by Islamic militants while on assignment in Gaza City. Nothing was heard of them for nine days, until a group calling themselves the Holy Jihad Brigades took credit for the kidnappings. They issued an ultimatum, demanding the release of Muslim prisoners from American jails within 72 hours. Their demands were not met.
But then a few days later the journalists were allowed to go free... but not before they’d been forced into converting to Islam at gunpoint, and had each videotaped a statement denouncing U.S. and Israeli foreign policy.
The war raged on.
A couple of kidnapped journalists is nothing new (certainly not three years after the fact), and aside from the happy ending this particular case wouldn't be worth mentioning if not for a unique twist that occurred after they returned home. A fellow Fox News contributor, Sandy Rios, openly criticized the two men; she said that no true Christian would convert – falsely or otherwise – merely because they were threatened with death. As she later explained to Bill Maher:*
The Proper Use of Humility
It is widely recognized that good science requires some kind of humility. What sort of humility is more controversial.
Consider the creationist who says: "But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded." Is this humility? The creationist practices a very selective underconfidence, refusing to integrate massive weights of evidence in favor of a conclusion he finds uncomfortable. I would say that whether you call this "humility" or not, it is the wrong step in the dance.
The persuasive power of false confessions
First paragraph from a Mind Hacks post:
The APS Observer magazine has a fantastic article on the power of false confessions to warp our perception of other evidence in a criminal case to the point where expert witnesses will change their judgements of unrelated evidence to make it fit the false admission of guilt.
The post and linked article are worth reading… and I don't have much to add.