Comment author: Paul_Gowder 03 July 2008 09:13:40AM 2 points

That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.

Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent: access to every mental state an arbitrary individual human has (even the stuff that now gets lumped under the label "qualia"), or the ability to perfectly simulate that individual's neurobiology?

If so, then two approaches seem defensible to me. First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away, viz., that the questioners know how to tell the superintelligence where to look (or the superintelligence can figure it out itself).

We might not be able to produce a well-formed specification of what is to be computed when we're talking about moral questions (it's easy to think that any attempt to do so would rig the answer in advance -- for example, if you ask it for universal principles, you're going to get something different from what you'd get if you left the universality variable free...). But if the superintelligence could simulate our mental processes well enough to tell what it is that we want (for some appropriate value of "we": the person asking, or the whole of humanity if there were any consensus -- which I doubt), then in principle it could simply answer by declaring the truth of the matter with respect to whatever it has determined that we desire.

That assumes the superintelligence has access to moral truth, but once we grant that, I think the standard arguments against "guardianship" (e.g. the first few chapters of Robert Dahl, Democracy and its Critics) fail. If those arguments are true -- if people are really better off deciding for themselves (the standard argument), and making people better off is what is morally correct -- then we can expect the superintelligence to return "you figure it out." And then the answer to "friendly to whom?" or "so you get to decide what's friendly?" is simply to point to the fact that the superintelligence has access to moral truth.

The more interesting question perhaps is what should happen if the superintelligence doesn't have access to moral truth (either because there is no such thing in the ordinary sense, or because it exists but is unobservable). I assume here that being responsive to reasons is an appropriate way to address moral questions (if not, all bets are off). Then the superintelligence loses one major advantage over ordinary human reasoning (access to the truth on the question), but not the other (while humans are responsive to reasons only in a limited and inconsistent way, the superintelligence is ideally responsive to reasons). For this situation, I think the second defensible outcome would be for the superintelligence to simulate ideal democracy. That is, it should simulate all the minds in the world and put them into an unlimited discussion with one another, as if they were Bayesians with infinite time. The answers it would come up with would be equivalent to the most legitimate conceivable human decisional process, but better...
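(A toy illustration of the mechanism being gestured at -- entirely my own construction, not anything from the literature: agents who start with very different priors but update by Bayes' rule on the same shared evidence converge toward agreement. The coin example and all parameters are hypothetical; real "ideal deliberation" would of course involve exchanging reasons, not just data.)

```python
import random

def update(prior, heads, p_true=0.7, p_alt=0.3):
    """One Bayesian update: posterior probability of the hypothesis
    'the coin's bias is p_true' (vs. 'it is p_alt') after one flip."""
    like_true = p_true if heads else 1 - p_true
    like_alt = p_alt if heads else 1 - p_alt
    numer = prior * like_true
    return numer / (numer + (1 - prior) * like_alt)

random.seed(0)
priors = [0.1, 0.5, 0.9]            # three agents with sharply different starting views
beliefs = list(priors)
for _ in range(200):                 # shared public evidence: flips of a 0.7-biased coin
    heads = random.random() < 0.7
    beliefs = [update(b, heads) for b in beliefs]

spread = max(beliefs) - min(beliefs)
print(beliefs, spread)               # posteriors nearly identical; spread near zero
```

The point of the sketch is only that consensus here comes from shared evidence plus a shared updating rule, not from any agent deferring to another -- which is roughly what "Bayesians with infinite time" would buy you.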

I'm pretty sure this is a situation that hasn't come under sustained discussion in the literature as such (in superintelligence terms -- though it has come up in discussions of benevolent dictators and the value of democracy), so I'm talking out my ass a little here, but drawing on familiar themes. Still, the argument defending these two notions -- especially the second -- isn't a blog comment, it's a series of long articles or more.

Comment author: Paul_Gowder 03 July 2008 08:25:31AM 2 points

Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question of the extent to which ideas of fairness are accessible at what I guess you'd call the object level is constantly treated. Really, it's one of the most major issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved at the meta-level (see Waldron, Law and Disagreement, and, for an argument that this leads into just the infinite recursion you suggest, at least in the case of democratic procedures, see the review of the same by Christiano, which Google Scholar will turn up easily).

I think the important thing is to separate two questions: 1. what is the true object-level statement, and 2. to what extent do we have epistemic access to the answer to 1? There may be an objectively correct answer to 1, but we might not be able to get sufficient grip on it to legitimately coerce others to go along -- at which point Xannon starts to seem exactly right.

Oh, hell, go read Ch. 5 of Hobbes's Leviathan. And both of Rawls's major books.

I mean, Xannon has been around for hundreds of years. Here's Hobbes, from the previous cite:

For no one mans Reason, nor the Reason of any one number of men, makes the certaintie; no more than an account is therefore well cast up, because a great many men have unanimously approved it. And therfore, as when there is a controversy in an account, the parties must by their own accord, set up for right Reason, the Reason of some Arbitrator, or Judge, to whose sentence they will both stand, or their controversie must either come to blowes, or be undecided, for want of a right Reason constituted by Nature...

Comment author: Paul_Gowder 03 July 2008 06:33:53AM 1 point

What's the point?

You realize, incidentally, that there's a huge literature in political philosophy about what procedural fairness means. Right? Right?

Comment author: Paul_Gowder 17 April 2008 10:18:30PM 5 points

gaaahhh. I stop reading for a few days, and on return, find this...

Eliezer, what do these distinctions even mean? I know philosophers who do scary Bayesian things, whose work looks a lot -- a lot -- like math. I know scientists who make vague verbal arguments. I know scientists who work on the "theory" side whose work is barely informed by experiments at all, and I know philosophers who are trying to do experiments. It seems like your real distinction is between a priori and a posteriori, and you've just flung "philosophy" into the former and "science" into the latter, basically at random.

(I defy you to find an experimental test for Bayes' Rule, incidentally -- or to utter some non-question-begging statistical principle by which the results could be evaluated.)
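(For what it's worth, the a priori character of the rule can be made explicit -- a minimal sketch, assuming only the standard ratio definition of conditional probability, from which Bayes' Rule falls out as a two-line theorem rather than an empirical hypothesis:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}
\quad\text{and}\quad
P(B \mid A) = \frac{P(A \cap B)}{P(A)},
\qquad\text{hence}\qquad
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.
```

Any "experimental test" would already have to presuppose the probability axioms that entail it -- which is the question-begging worry above.)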

In response to Zombie Responses
Comment author: Paul_Gowder 05 April 2008 03:39:52PM 0 points

I think part of the problem is that your premise 3 is question-begging: it assumes away epiphenomenalism on the spot. An epiphenomenalist has to bite the bullet that our feeling that we consciously cause things is false. (Also, what could it mean to have an empirical probability over a logical truth?)

In response to Hand vs. Fingers
Comment author: Paul_Gowder 31 March 2008 07:18:00AM 0 points

Unknown: that's not an ontological claim (at least for the dangerous metaethical commitments I mentioned in the caveat above).

In response to Hand vs. Fingers
Comment author: Paul_Gowder 31 March 2008 05:48:00AM 0 points

Richard: the claim I'm trying out depends on us not being able to learn that information, for if we could learn it, the claim would have some observable content, and thereby have scientific implications.

In response to Hand vs. Fingers
Comment author: Paul_Gowder 30 March 2008 09:53:56PM 0 points

Richard: I'm making a slightly stronger claim, which is that ontological claims with no scientific implications aren't even relevant for philosophical issues of practical reason, so, for example, the question of god's existence has no relevance for ethics (contra, e.g., Kant's second critique). (Of course, to make this fly at all, I'm going to have to say that metaethical positions aren't ontological claims, so I'm probably getting all kinds of commitments I don't want here, and I'll probably have to recant this position upon anything but the slightest scrutiny, but it seems like it's worth considering.)

In response to Hand vs. Fingers
Comment author: Paul_Gowder 30 March 2008 07:41:58PM 1 point

Although I prefer an even weaker kind of scientism -- call it scientism'': an ontological claim is boring if it has no scientific implications. By boring, I mean it tells us nothing relevant to practical reason. Which is why I'm happy to take Richard's property dualism: I accept scientism'', ergo, it doesn't matter.

In response to Initiation Ceremony
Comment author: Paul_Gowder 29 March 2008 07:39:50AM 4 points

If you guys are going to rig elections, I want in.