Comment author: Jadagul 25 January 2009 02:10:13PM 20 points

It occurred to me at some point that Fun Theory isn't just the correct reply to theodicy; it's also a critical component of any religious theodicy program. And it's one of the few ways I could conceive of someone providing major evidence of God's existence.

That is, I'm fairly confident that there is no god. But if I worked out a fairly complete version of Fun Theory, and it turned out that this really was the best of all possible worlds, I might have to change my mind.

Comment author: Jadagul 01 December 2008 05:24:07PM 2 points

I would agree with Karo, I think. I'm actually surprised by how accurate this list of predictions is; it's not at 50%, but I'm not sure why we would expect it to be with predictions this specific. (I'm not saying he was epistemically justified, just that he's more accurate than I would have expected.)

Following up on Eliezer's point, it seems like the core of his claims is:

1) Computers will become smaller and people will have access to them basically 24/7. If you remember that even my cell phone, which is a total piece of crap and cost $10, would look like a computer to someone from 1999, this seems fairly accurate.

2) Keyboards will cease to be the primary method of interaction with computers. We're getting there, slowly, between improved voice recognition and the growth of tablet PCs. But we're not there yet. I wouldn't be surprised at all if this were true by 2020 (I wouldn't be surprised if it weren't, either; I don't know how far off we are from speech recognition good enough that people can assume it will work).

3) People will start using computers for things that in 1999 had to be done in hard copy. This is starting to happen, but we're not there yet. Again, it wouldn't surprise me either way in 2020.

4) People will be able to use computers as their primary means of interaction with the world. Some basement-dwelling geeks like myself aside, not quite true. People like dealing with other people. I think this is the least likely to be true ten years from now.

In response to Magical Categories
Comment author: Jadagul 25 August 2008 09:45:53PM 3 points

Shane, the problem is that there are (for all practical purposes) infinitely many categories the Bayesian superintelligence could consider. They all "identify significant regularities in the environment" that "could potentially become useful." The problem is that we as the programmers don't know whether the category we're conditioning the superintelligence to care about is the category we want it to care about; this is especially true with messily-defined categories like "good" or "happy." What if we train it to do something that's just like good except it values animal welfare far more (or less) than our conception of good says it ought to? How long would it take for us to notice? What if the relevant circumstance didn't come up until after we'd released it?

Comment author: Jadagul 22 August 2008 07:50:20AM 0 points

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

Caledonian, I don't want to speak for Eliezer. But my contention, at least, is that the fundamental problem is insoluble. I claim, not that this particular castle has a solid foundation, but that there exist no solid foundations, and that anywhere you think you've found solid earth there's actually a cloud somewhere beneath it. The fact that you're reacting so strongly makes me think you're interpreting Eliezer as saying what I believe. Similarly,

Why should we care about a moral code that Eliezer has arbitrarily chosen to call right? What relevance does this have to anything?

There's no particular reason we should care about a moral code Eliezer has chosen. You should care about the moral code you have arbitrarily chosen. I claim, and I think Eliezer would too, that there will be a certain amount of overlap because you're both human (just as you both buy into Occam because you're both human). But we couldn't give, say, a pebblesorter any reason to care about Eliezer's moral code.

Larry D'ana: Is anyone who does not believe in universally compelling arguments a relativist?

Is anyone who does not believe that morality is ontologically primitive a relativist?

Yeah, pretty much.

If there are no universally compelling arguments, then there's no universally compelling moral code. Which means that whatever code compels you has to compel relative to who you are; thus it's a relativist position.

Eliezer tries to get around this by saying that he has this code he can state (to some low degree of precision), and everyone can objectively agree on whether or not some action comports with this code. Or at least that perfect Bayesian superintelligences could all agree. (I'm not entirely sold on that, but we'll stipulate). I claim, though, that this isn't the way most people (including most of us) use the words 'morality' and 'right'; I think that if you want your usage to comport with everyone else's, you would have to say that the pebblesorters have 'a' moral code, and that this moral code is "Stack pebbles in heaps whose sizes are prime numbers."

In other words, in general usage a moral code is a system of rules that compels an agent to action (and has a couple other properties I haven't figured out how to describe without self-reference). A moral absolutist claims that there exists such a system of rules that is rightly binding and compelling to all X, where X is usually some set like "all human beings" or "all self-aware agents." (Read e.g. Kant who claimed that the characteristic of a moral rule is that it is categorically binding on all rational minds). But Eliezer and I claim that there are no universally compelling arguments of any sort. Thus in particular there are no universally compelling injunctions to act, and thus no absolute moral code. Instead, the injunction to act that a particular agent finds compelling varies with the identity of the agent; thus 'morality' is relative to the agent. And thus I'm a moral relativist.

Now, it's possible that you could get away with restricting X to "human beings"; if you then claimed that humans had enough in common that the same moral code was compelling to all of them, you could plausibly reclaim moral objectivism. But I think that claim is clearly false; Eliezer seems to have rejected it (or at least refused to defend it) as well. So we don't get even that degree of objectivity; the details of each person's moral code depend on that person, and thus we have a relative standard. This is what has Caledonian's knickers in such a twist.

Kenny: exactly. That's why we're morally relative.

Comment author: Jadagul 21 August 2008 06:48:14AM 7 points [-]

Eliezer: Good post, as always. I'll repeat that I think you're closer to me in moral philosophy than anyone else I've talked to, with the probable exception of Richard Rorty, from whom I got many of my current views. (You might want to read Contingency, Irony, and Solidarity; it's short, and it talks about a lot of the stuff you deal with here.) That said, I disagree with you in two places. Reading your stuff and the other comments has helped me refine what I think; I'll try to state it here as clearly as possible.

1) I think that, as most people use the words, you're a moral relativist. I understand why you think you're not. But the way most people use the word, 'morality' would only apply to an argument that would persuade the ideal philosopher of perfect emptiness. You don't believe any such arguments exist; neither do I. Thus neither of us thinks that morality as it's commonly understood is a real phenomenon. Think of the priest in War of the Worlds who tried to talk to the aliens, explaining that since we're both rational beings/children of God, we can persuade them not to kill us because it's wrong. You say (as I understand you) that they would agree that it's wrong, and just not care, because wrong isn't necessarily something they care about. I have no problem with any claim you've made (well, that I've made on your behalf) here; but at this point the way you're using the word 'moral' isn't a way most people would use it. So you should use some other term altogether.

2) I like to maintain a clearer focus on the fact that, if you care about what's right, I care about what's right_1, which is very similar to but not the same as what's right. Mainly because it helps me to remember there are some things I'm just not going to convince other people of (e.g. I don't think I could convince the Pope that God doesn't exist. There's no fact pattern that's wholly inconsistent with the property god_exists, and the Pope has that buried deep enough in his priors that I don't think it's possible to root it out). But (as of reading your comment on yesterday's post) I don't think we disagree on the substance, just on the emphasis.

Thanks for an engaging series of posts; as I said, I think you're the closest or second-closest I've ever come across to someone sharing my meta-ethics.

Comment author: Jadagul 21 August 2008 06:44:22AM 0 points

Ah, thanks Eliezer, that comment explains a lot. I think I mostly agree with you, then. I suspect (on little evidence) that each one of us would, extrapolated, wind up at his own attractor (or at least at a sparsely populated one). But I have no real evidence for this, and I can't imagine off the top of my head how I would find it (nor how I would find contradictory evidence), and since I'm not trying to build fAI I don't need to care. But what you've just sketched out is basically the reason I think we can still have coherent moral arguments; our attractors have enough in common that many arguments I would find morally compelling, you would also find morally compelling (as in, most of us have different values but we (almost) all agree that the random slaughter of innocent three-year-olds is bad). Thanks for clearing that up.

Comment author: Jadagul 20 August 2008 05:14:32AM 0 points

Especially given that exposure to different fact patterns could push you in different directions. E.g. suppose right now I try to do what is right_1 (subscripts on everything to avoid the appearance of a claim to universality). Now, suppose that if I experience fact pattern facts_1 I conclude that it is right_1 to modify my 'moral theory' to right_2. But if I experience fact pattern facts_2 I conclude that it is right_1 to modify to right_3.

Now, that's all well and good. Eliezer would have no problem with that, as long as the diagram commutes: that is, if it's true that ( if I've experienced facts_1 and moved to right_2, and then I experience facts_2, I will move to right_4), it must also be true that ( if I've experienced facts_2 and moved to right_3, and then experience facts_1, I will move to right_4).

I suppose that at least in some cases this is true, but I see no reason why in all cases it ought to be. Especially if you allow human cognitive biases to influence the proceedings; but even if you don't (and I'm not sure how you avoid it), I don't see any argument why all such diagrams should commute. (This doesn't mean they don't, of course. I invite Eliezer to provide such an argument.)
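The commuting requirement can be put concretely. Here's a toy illustration (my own construction, not anything from Eliezer's posts) where a "moral state" is just a number, each fact pattern triggers an update function, and applying the updates in the two possible orders lands on different final states, i.e. the diagram fails to commute:

```python
# Toy model of non-commuting belief updates. The state is a number
# standing in for a moral theory; the two functions below are
# hypothetical updates triggered by experiencing facts_1 and facts_2.

def update_facts_1(state):
    # Experiencing facts_1 shifts the state additively.
    return state + 1

def update_facts_2(state):
    # Experiencing facts_2 rescales the state.
    return state * 2

start = 0  # stand-in for right_1

path_a = update_facts_2(update_facts_1(start))  # facts_1 first, then facts_2
path_b = update_facts_1(update_facts_2(start))  # facts_2 first, then facts_1

print(path_a, path_b)  # 2 and 1: the two orders disagree
```

Nothing about update rules in general forces path_a and path_b to agree; commuting is an extra property that would have to be argued for, which is exactly the point at issue.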

I still hold that Eliezer's account of morality is correct, except his claim that all humans would reflectively arrive at the same morality. I think foundations and priors are different enough that, functionally, each person has his own morality.

Comment author: Jadagul 16 August 2008 09:49:04AM 0 points

But Mario, why not? In J-morality it's wrong to hurt people, both because I have empathy towards people and so I like them, and because people tend to create net positive externalities. But that's a value judgment. I can't come up with any argument that would convince a sociopath that he "oughtn't" kill people when he can get away with it. Even in theory.

There was nothing wrong with Raskolnikov's moral theory. He just didn't realize that he wasn't a Napoleon.

Comment author: Jadagul 16 August 2008 04:13:00AM 1 point

Eliezer, I think you come closer to sharing my understanding of morality than anyone else I've ever met. Places where I disagree with you:

First, as a purely communicative matter, I think you'd be clearer if you replaced all instances of "right" and "good" with "E-right" and "E-good."

Second, as I commented a couple threads back, I think you grossly overestimate the psychological unity of humankind. Thus I think that, say, E-right is not at all the same as J-right (although they're much more similar than either is to p-right). The fact that our optimization processes are close enough in many cases that we can share conclusions and even arguments doesn't mean that they're the same optimization process, or that we won't disagree wildly in some cases.

Simple example: I don't care about the well-being of animals. There's no comparison in there, and there's no factual claim. I just don't care. When I read the famous ethics paper about "would it be okay to torture puppies to death to get a rare flavor compound," my response was something along the lines of, "dude, they're puppies. Who cares if they're tortured?" I think anyone who enjoys torturing for the sake of torturing is probably mentally unbalanced and extremely unvirtuous. But I don't care about the pain in the puppy at all. And the only way you could make me care is if you showed that causing puppies pain came back to affect human well-being somehow.

Third, I think you are a moral relativist, at least as that claim is generally understood. Moral absolutists typically claim that there is some morality demonstrably binding upon all conscious agents. You call this an "attempt to persuade an ideal philosopher of perfect emptiness" and claim that it's a hopeless and fundamentally stupid task. Thus you don't believe what moral absolutists believe; instead, you believe different beings embody different optimization processes (which is the name you give to what most people refer to as morality, at least in conscious beings). You're a moral relativist. Which is good, because it means you're right.

Excuse me. It means you're J-right.

Comment author: Jadagul 14 August 2008 03:31:00AM 0 points

Caledonian and Tim Tyler: there are lots of coherent defenses of Christianity. It's just that many of them rest on statements like, "if Occam's Razor comes into conflict with Revealed Truth, we must privilege the latter over the former." This isn't incoherent; it's just wrong. At least from our perspective. Which is the point I've been trying to make. They'd say the same thing about us.

Roko: I sent you an email.
