Comment author: dlthomas 23 May 2012 12:00:56AM 22 points [-]

Azathoth should probably link here. I think using our jargon is fine, but links to the source help keep it discoverable for newcomers.

Comment author: Strange7 22 May 2012 11:22:16PM 0 points [-]

Wait, are we talking O2 molecules in the atmosphere, or all oxygen atoms in Earth's gravity well?

Comment author: dlthomas 22 May 2012 11:54:58PM 0 points [-]

I wish I could vote you up and down at the same time.

Comment author: TimS 18 May 2012 07:11:18PM *  4 points [-]

Heinlein's "Starship Troopers" discusses the death penalty imposed on a violent child rapist/murder. The narrator says there are two possibilities:

1) The killer was so deranged he didn't know right from wrong. In that case, killing (or imprisoning) him is the only safe solution for everyone else. Or,
2) The killer knew right from wrong, but couldn't stop himself. Wouldn't killing (or stopping) him be a favor, something he would want?

Why can't that type of reasoning exist behind the veil of ignorance? Doesn't it completely justify certain kinds of oppression? That said, there's also an empirical question whether the argument applies to the particular group being oppressed.

Comment author: dlthomas 18 May 2012 09:57:59PM 1 point [-]

As long as we're using sci-fi to inform our thinking on criminality and corrections, The Demolished Man is an interesting read.

Comment author: steven0461 18 May 2012 08:39:01PM *  6 points [-]

Some quotes from the CEV document:

Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.

(...)

It should be easier to counter coherence than to create coherence.

(...)

In qualitative terms, our unimaginably alien, powerful, and humane future selves should have a strong ability to say "Wait! Stop! You're going to predictably regret that!", but we should require much higher standards of predictability and coherence before we trust the extrapolation that says "Do this specific positive thing, even if you can't comprehend why."

Though it's not clear to me how the document would deal with Wei Dai's point in the sibling comment. In the absence of coherence on the question of whether to protect, persecute, or ignore unpopular minority groups, does CEV default to protecting them or to ignoring them? You might say that as written, it would obviously not protect them, because there was no coherence in favor of doing so; but what if protection of minority groups is a side effect of other measures CEV was taking anyway?

(For what it's worth, I suspect that extrapolation would in fact create enough coherence for this particular scenario not to be a problem.)
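As a toy numerical reading of the "quantitative, not qualitative" passage - the weighting scheme and every number below are invented for illustration, not anything specified in the CEV document:

```python
# Toy model only: score each faction's preference as
# population share * strength-of-preference * direction,
# where "strength" discounts minor, muddled preferences.
def coherence(factions):
    return sum(share * strength * direction
               for (share, strength, direction) in factions)

# A minor, muddled preference of 60% of humanity (strength 0.1, in favor)
# against a strong, unmuddled preference of 10% (strength 0.9, opposed):
factions = [(0.60, 0.1, +1),
            (0.10, 0.9, -1)]

print(coherence(factions))  # -0.03: the strong 10% counters the muddled 60%
```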

Comment author: dlthomas 18 May 2012 08:56:29PM 0 points [-]

Thank you. So, not quite consensus, but similarly biased in favor of inaction.

Comment author: Wei_Dai 18 May 2012 06:40:56PM 4 points [-]

My understanding is that CEV is based on consensus, in which case the majority is meaningless.

If CEV doesn't positively value some minority group not being killed (i.e., if it's just indifferent due to not having a consensus), then the majority would be free to try to kill that group. So we really do need CEV to say something about this, instead of nothing.

Comment author: dlthomas 18 May 2012 06:42:45PM 0 points [-]

Assuming we have no other checks on behavior, yes. I'm not sure, pending more reflection, whether that's a fair assumption or not...

Comment author: TheOtherDave 18 May 2012 06:21:01PM 9 points [-]

Upvoting back to zero because I think this is an important question to address.

If I prefer that people not be tortured, and that's more important to me than anything else, then I ought not prefer a system that puts all the torturers in their own part of the world where I don't have to interact with them over a system that prevents them from torturing.

More generally, this strategy only works if there's nothing I prefer/antiprefer to exist, but merely things that I prefer/antiprefer to be aware of.

Comment author: dlthomas 18 May 2012 06:26:43PM 0 points [-]

It's a potential outcome, I suppose, in that

[T]here's nothing I prefer/antiprefer to exist, but merely things that I prefer/antiprefer to be aware of.

is a conceivable extrapolation from a starting point where you antiprefer something's existence (in the extreme, with MWI you may not have much say in what does/doesn't exist, just how much of it exists in which branches).

It's also possible that you hold both preferences (prefer X not exist, prefer not to be aware of X) and the existence preference gets dropped for being incompatible with other values held by other people while the awareness preference does not.

Comment author: [deleted] 18 May 2012 05:24:06PM 2 points [-]

Just because I wouldn't value that, doesn't mean that the majority of the world wouldn't. Which is my whole point.

Comment author: dlthomas 18 May 2012 05:28:58PM 2 points [-]

My understanding is that CEV is based on consensus, in which case the majority is meaningless.

Comment author: [deleted] 18 May 2012 01:33:14AM 9 points [-]

"A point I may not have made in these posts, but made in comments, is that the majority of humans today think that women should not have full rights, homosexuals should be killed or at least severely persecuted, and nerds should be given wedgies. These are not incompletely-extrapolated values that will change with more information; they are values. Opponents of gay marriage make it clear that they do not object to gay marriage based on a long-range utilitarian calculation; they directly value not allowing gays to marry. Many human values horrify most people on this list, so they shouldn't be trying to preserve them."

This has always been my principal objection to CEV. I strongly suspect that were it implemented, it would want the death of a lot of my friends, and quite possibly me, too.

Comment author: dlthomas 18 May 2012 05:19:07PM 2 points [-]

Um, if you would object to your friends being killed (even if you knew more, thought faster, and grew up further with others), then it wouldn't be coherent to value killing them.

Comment author: kalla724 17 May 2012 05:25:24AM 4 points [-]

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path:

1) There is a general idea.
2) Some progress is made.
3) Progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation.
4) Work is done to overcome this obstacle.
5) Go to 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.
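A minimal sketch of that loop in code - every name and number here is a placeholder of mine, not anything from the argument above:

```python
import random

# Placeholder: each round of experimentation has some chance of uncovering
# an obstacle no prior model or simulation contained (0.7 is an arbitrary
# stand-in, not an estimate).
def uncovers_unknown_obstacle():
    return random.random() < 0.7

def pursue(idea, max_cycles=1000):
    # 1) there is a general idea (`idea`)
    for cycle in range(1, max_cycles + 1):
        # 2) some progress is made
        if not uncovers_unknown_obstacle():
            # goal achieved - which may or may not be close to the original idea
            return cycle
        # 3) an unpredicted obstacle is uncovered by experimentation
        # 4) work is done to overcome it, then 5) go to 2
    return max_cycles

print(pursue("a general idea"))  # how many cycles the loop took this run
```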

I am not the one who is making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) only up to the point to which it has already been built. Pushing it beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of models or simulations; they have to be accounted for independently.

A positive claim is that an AI will have a magic-like power to somehow avoid this - that it will be able to simulate even those steps that haven't been attempted yet so perfectly that all possible problems will be overcome at the simulation stage. I find that unlikely.

Comment author: dlthomas 17 May 2012 09:28:53PM *  0 points [-]

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magic-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies high confidence - saying this when you think it is, say, 55% likely seems strange. Saying it when you think it is less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.
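The seatbelt point is just expected-cost arithmetic; with made-up numbers:

```python
# All numbers invented for illustration.
p_wreck = 0.001            # chance of a wreck on a given trip - well under 10%
harm_if_unbelted = 100000  # badness of an unbelted wreck (arbitrary units)
cost_of_buckling = 1       # a few seconds of effort (same units)

# Buckling is worth it whenever p * harm exceeds the cost of the precaution,
# even though the event itself is unlikely.
print(p_wreck * harm_if_unbelted > cost_of_buckling)  # True: 100 > 1
```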

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.

Comment author: kalla724 17 May 2012 02:56:21AM 2 points [-]

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimate of the likelihood that an AI can somehow derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.
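To put rough numbers on the "second order, third order, and so on" worry - the 0.9 per-step reliability below is invented, and the independence assumption is itself generous:

```python
# If each inference step from incomplete data is right with probability 0.9
# (invented number), and steps are independent (a generous assumption),
# an n-step chain of conclusions is right with probability 0.9**n.
p_step = 0.9
for n in (1, 2, 5, 10, 20):
    print(n, round(p_step ** n, 3))
# 1 0.9 / 2 0.81 / 5 0.59 / 10 0.349 / 20 0.122
```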

In other words, your criticism here seems to boil down to saying "I believe that an AI can take an incomplete dataset and, by using some AI-magic we cannot conceive of, infer how to END THE WORLD."

Color me unimpressed.

Comment author: dlthomas 17 May 2012 04:28:21AM 3 points [-]

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling them up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it a vanishingly small probability without explicit reasoning.
