(I'm catching up, so that's why this is posted so far after the original.)
When I attempted this exercise I tried to think of how I use the word "arbitrary" and came up with a definition along the lines of "something is arbitrary if its choice from a set makes no difference to the veracity of a particular statement." That is, "arbitrary" is a two-argument function, taking as input a choice and a statement; without a statement to evaluate against, calling something arbitrary just looks like membership to me.
But then I read on and realized that I was being too narrow in what I considered to be arbitrary. Perhaps from too much mathematical training, I didn't even think of the common use as described above. This is a subtle kind of error to watch out for: taking a technical term that happens to have the same spelling and pronunciation as a non-technical term and trying to apply the definition of the technical term back to the non-technical term. The effect is either that you confuse other people because you use a technical term that looks like a non-technical one, or you confuse yourself by misunderstanding what people mean when they use the term in a non-technical sense. This sort of thing becomes a bigger problem, I reckon, as you become more and more specialized in a field with lots of technical language.
Eliezer,
You know that you can't succeed without the math, and slowing down for posts like this is taking away 24 hours that might have been better used to save humanity. Not that this was a bad post, but I think you would be better off letting others write the fun posts unless you need to write a fun post to recover from teaching.
I agree that it makes no sense, but as I was writing the comment I figured I would take you down the wrong path of what someone might naively think and then correct it. Someone overly trained in logic and not in probability might reason that, since Raven(x)-->Black(x) being true gives P(B|R) = 1, the falsity of the reverse implication Black(x)-->Raven(x) must give P(R|B) = 0. But based on the comments above, maybe only an ancient Greek philosopher would be inclined to make such a mistake.
Hopefully not taking away anyone's fun here, but to reconcile Raven(x)->Black(x) but not vice versa, what this statement wants to say, letting P(R) and P(B) be the probabilities of raven and black, respectively, is P(R|B)=0 and P(B|R)=1, which gives us that
P(R|B) = 0  =>  P(RB)/P(B) = 0  =>  P(RB) = 0
and
P(B|R) = 1  =>  P(BR)/P(R) = 1  =>  P(BR) = P(R)
But of course this leads to a contradiction (since P(RB) = P(BR), we would need P(R) = 0), so it can't really be true that Black(x)-/->Raven(x), can it? Sure it can, because what is really meant by "does not imply" (-/->) is not P(R|B) = 0 but P(R|B) < 1. But in logic we often forget this because anything with a probability less than 1 is assigned a truth value of false.
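A quick sketch of this resolution in Python (the toy world and all names here are my own, purely for illustration): in any world where every raven is black but not everything black is a raven, P(B|R) comes out to exactly 1 while P(R|B) sits strictly between 0 and 1, rather than at 0.

```python
from fractions import Fraction

# A toy world: each object is (is_raven, is_black).
# Every raven is black, but some black things are not ravens.
world = [
    (True, True),    # a black raven
    (True, True),    # another black raven
    (False, True),   # a black shoe
    (False, False),  # a white swan
]

def prob(pred):
    """Probability of a predicate under a uniform draw from the world."""
    return Fraction(sum(1 for x in world if pred(x)), len(world))

p_r = prob(lambda x: x[0])            # P(Raven)
p_b = prob(lambda x: x[1])            # P(Black)
p_rb = prob(lambda x: x[0] and x[1])  # P(Raven & Black)

p_b_given_r = p_rb / p_r  # P(Black | Raven) = 1: Raven(x) --> Black(x)
p_r_given_b = p_rb / p_b  # P(Raven | Black) = 2/3: not 0, just less than 1

print(p_b_given_r, p_r_given_b)
```

The point of the exact fractions is just that "Black does not imply Raven" shows up as a conditional probability strictly below 1, never as a conditional probability of 0.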
Logic has its value, since sometimes you want to prove something is true 100% of the time, but this is generally only possible in pure mathematics. If you try to do it elsewhere you'll get exceptions (e.g. albino ravens). So leave logic to mathematicians; you should use Bayesian inference.
I believe you made a slight typo, Eli.
You said: "Since there's an "unusually high" probability for P(Z1Y2) - defined as a probability higher than the marginal probabilities would indicate by default - it follows that observing Z1 is evidence which increases the probability of Y2. And by a symmetrical argument, observing Y2 must favor Z1."
But I think what you meant was "Since there's an "unusually high" probability for P(Z1Y2) - defined as a probability higher than the marginal probabilities would indicate by default - it follows that observing Y2 is evidence which increases the probability of Z1. And by a symmetrical argument, observing Z1 must favor Y2."
Nothing you said was untrue, but the implication of what you wrote doesn't match up with the example you actually gave just above that text.
For those saying they have nothing to protect or still need to find something to protect, remember that you are human and, unless you have no natural family or reproductive ties, you always have the people you love to protect. It may seem counterintuitive if you've bought into Hollywood rationality, but love is a powerful motivational force. If you think that, in theory, being more rational is good, but don't see how you can effect greater rationality in your mind, consider the many benefits of your increased rationality (again, not Hollywood rationality, but rationality of the type Eliezer describes above).
In my case, I know I'm trying harder than ever to become a better person because of my wife. And when I do something that hurts her, my first thought is to figure out what is wrong with my thinking that led to this. My second is to find a better way to express my love, through increasing her happiness and enjoyment of life. And, realizing that the best thing I can do is shut up and multiply, I figure out how to change myself to be a better multiplier.
Am I right in thinking that you've now brought the OB audience to where you need them in order to start trying to talk about AI (or "optimizing processes" or whatever terminology is sufficiently abstract to prevent linguistically inferred misunderstanding)?
Let's suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1, then:
torture: 1*1 = 1 pp
For the speck in the eye case, suppose it causes the least amount of pain greater than no pain possible; denote this by e, and note that n = 3^^^3 people experience it. Then if e < 1/3^^^3
speck: 3^^^3 * e < 1 pp
and if e > 1/3^^^3
speck: 3^^^3 * e > 1 pp
So assuming our moral calculus is to always choose whichever option generates the least pp, we need only ask if e is greater than or less than 1/n.
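The decision rule above can be sketched in a few lines of Python (all names are mine, and since 3^^^3 is far too large to actually compute with, the examples use a stand-in n; exact fractions avoid any rounding at tiny values of e):

```python
from fractions import Fraction

def total_pp(p, n):
    """Total pain points: per-person pain p times number of people n."""
    return p * n

def choose(e, n):
    """Pick whichever option generates the fewest pain points.
    Torture: p = 1 for 1 person  -> 1 pp.
    Specks:  p = e for n people  -> n*e pp.
    """
    torture_pp = total_pp(Fraction(1), 1)
    speck_pp = total_pp(e, n)
    return "torture" if torture_pp < speck_pp else "specks"

# Stand-in for 3^^^3, which is not representable.
n = 10**9
print(choose(Fraction(1, 2 * n), n))  # e < 1/n, so the specks win
print(choose(Fraction(2, n), n))      # e > 1/n, so torture wins
```

As the comparison makes explicit, the whole question reduces to whether e is above or below 1/n.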
If you've been paying attention, I now have an out to give no answer: we don't know what e is, so I can't decide (at least not based on pp). But I'll go ahead and wager a guess. Since 1/3^^^3 is very small, I think it most likely that any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.
This doesn't feel like what, as a human, I would expect the answer to be. I want to say don't torture the poor guy; all the rest of us will suffer the speck so he need not be tortured. But I suspect this is human inability to deal with large numbers, because I think about how I would be willing to accept a speck so the guy wouldn't be tortured, since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net cost would be to distribute more pain with the specks than the torture ever would.
Weird how the human mind can find a logical answer and still expect a nonlogical answer to be the truth.
Between teaching mathematics to freshmen and spending most of my time learning mathematics, I've noticed this myself. When presented with a new result, the first inclination, especially depending on the authority of the source, is to believe it and figure there's a valid proof of it. But occasionally the teacher realizes that they made a mistake and may even scold the students for not noticing since it is incredibly obvious (e.g. changing something like ||z - z_0|| to ||z - z_1|| between steps, even though a few seconds thinking reveals it to be a typo rather than a mathematical insight).
Sometimes (and for a few lucky people, most of the time) individuals are in a mental state where they are actively thinking through everything being presented to them. For me, this happens a few times a semester in class, and almost always during meetings with my advisor. And occasionally I have a student who does it when I'm teaching. But in my experience this is a mentally exhausting task and often leaves you think-dead for a while afterwards (I find I can go about 40 minutes before I give out).
All this leads me to a conclusion, largely from my experience with what behavior produces what effects, that in mathematics the best way to teach is to assign problems and give students clues when they get stuck. The problems assigned, of course, should be ones that result in the student building up the mathematical theory. It's certainly more time consuming, but in the end more rewarding, in terms of both emotional satisfaction and understanding.
Interesting discussion.
Eli,
First, since no one has come out and said it yet (maybe it's just me), this post was kind of whiny. Maybe everyone else here is more in-tune with you (or living in your reality distortion field), but the writing felt like you were secretly trying to make yourself out to be a martyr, fishing for sympathy. Based on my knowledge of you from past interactions and your other writings I doubt this to be the case, but nonetheless it's the sense I got from your writing.
Second, I, too, have been through a similar experience. When I was younger, maybe around the age of 11 or 12, I can remember being able to step back from myself and see what I thought at the time was often the pointlessness of my own and others' actions. I'd say to myself "Why am I doing this? I don't want to do it and I don't know why I'm doing it." At this point I wasn't fully reflective, but was stepping back, looking in, and getting confused.
Over the next several years I worked to eliminate those things from myself which confused me. Initially I fought to remove anger and succeeded brilliantly so that to this day I still cannot get angry: frustrated and annoyed are as much as I can muster. Next it was other things, like "useless" emotions such as impatience and fear, and troublesome patterns of behavior, especially my OCD behavior patterns. Back then I blindly kept things like love, friendship, and sexual desire, having never been confused by them in the same way I was by anger, and tried to maintain things like a reluctance to change, foolishly believing that since adults didn't seem to change their minds very often or very far that this was a desirable state.
Shortly after I joined the sl4 mailing list, I experienced a breakthrough reading CFAI section 2 and woke up to myself. The best way I know to describe what happened to me is that I saw the territory for the first time and realized that all my life I had only been staring at maps. Not that I would have said that back then, but it was the watershed moment in my life when everything changed. I was no longer blind to certain emotions and behaviors, and for the first time I had the ability to reflect on essentially anything I wanted to within myself, up to the physical limitations of my brain.
A year or two later I started looking into the literature of cognitive science and came across a book that described the inner narrative all non-zombies experience as the result of part of the way the brain functions. Essentially it said that the brain functions like a machine, and around X milliseconds after your brain does something you experience it when a part of your brain processes signals coming to it from the rest of your brain into memories. This completed the opening of myself to reflection.
A couple years later, after having finally gotten on medication for my OCD and finding myself able to pull out all the junk from my brain that I could (although I still didn't know that much about heuristics and biases at that time, so I thought I was doing a lot better than I actually was), I started dating the girl who eventually became my wife. Up to this time my mental cleaning had gone on unopposed and, although I had gotten rid of a lot of what had been myself, I never felt like I was gone and needed to rebuild myself. In fact, I liked being empty! But then sometime after our first anniversary my then girlfriend started to express frustration, anger, and other emotions I hadn't known had been inside her. As it turned out, my emptiness was causing her pain. So I rebuilt myself to not be so empty so that I could better love her. It's something I still struggle with; for example, I try not to make jokes about things that most people take seriously but that I have a hard time taking seriously, because I distance myself through reflection.
That's where I stand today, partially rebuilt, not entirely human.