Comment author: [deleted] 12 April 2013 08:07:33PM *  2 points [-]

and similarly low confidence in the negation of the latter [“Deep in their hearts, all men desire to torture women. Some of them are just too afraid of legal consequences.”]

On the off chance that you're speaking personally rather than hypothetically (I hope not)... What??? FWIW, I am a man and, while I can't see arbitrarily deep into my heart, I have no desire to torture women (or anyone else, actually) so far as I can see, and I seldom think about possible legal consequences of my actions. Now I might be lying about that (so you have to take my word for it), or maybe I do have such a desire but it's so deep in my heart that I can't see it (but how would you make that belief pay rent?), but still... I'd find it appalling that anyone would give a non-negligible probability that “Deep in their hearts, all men [emphasis as in the original] desire to torture women”, for any value of deep that wouldn't make that statement useless-whether-true-or-false. (BTW, Gandhi was also a man, wasn't he?)

In response to comment by [deleted] on LW Women Submissions: On Misogyny
Comment author: randallsquared 12 April 2013 09:54:18PM 5 points [-]
Comment author: OrphanWilde 14 March 2013 09:40:31PM *  1 point [-]

Born's Rule is a -bit- beyond the scope of Schroedinger's Cat. That's a bit like saying the Chinese Room Experiment isn't dissolved because we haven't solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is what the Chinese Room Experiment is pointing its fingers and waving at.]

Comment author: randallsquared 17 March 2013 04:44:02PM 0 points [-]

But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
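
To make that concrete, here is a toy sketch in Python (purely illustrative; the table is absurdly small, whereas a real GLUT would be astronomically large) of a lookup-table "speaker" whose behavior can look fluent even though nothing in the implementation resembles comprehension:

    # Toy "Giant Lookup Table" responder. A real GLUT would pair every
    # possible conversation history with a canned reply; this one has two.
    GLUT = {
        "ni hao": "ni hao!",
        "ni chi le ma?": "chi le, xiexie.",
    }

    def respond(utterance):
        # Pure table lookup: identical input-output behavior is compatible
        # with radically different implementations.
        return GLUT.get(utterance, "...")

    print(respond("ni hao"))  # -> ni hao!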

Either understanding is only inferred from behavior, or it is actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.

Comment author: shminux 09 March 2013 05:19:31PM *  11 points [-]

Do you guys want an outlet for your need for the mysterious when cold, hard rational thought gets to be too much, or something? Anyway, rationalist fiction does not have to be rationalist fantasy or rationalist science fiction. No magic, no vampires, no space aliens, no friendly pink immortal horses, imagine that.

Plenty of fan fiction possibilities, too: Fifty Shades of Rationality; Fighting Pride and Prejudice; Crime and Punishment: Tales of a Buggy Mind; War and Peace: System 1 vs. System 2...

Comment author: randallsquared 09 March 2013 06:07:48PM 5 points [-]

no immortal horses, imagine that.

No ponies or friendship? Hard to imagine, indeed. :|

In response to comment by [deleted] on The value of Now.
Comment author: drethelin 01 February 2013 08:45:23PM 0 points [-]

In a billion years there might be no evidence that your whole life ever happened. Does that mean it's magical gibberish?

Comment author: randallsquared 01 February 2013 09:54:53PM 0 points [-]

Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you'd make today. I have the feeling you meant that the other way 'round, though.

Comment author: jimrandomh 26 January 2013 06:53:53PM 3 points [-]

There seem to be two objections here. The first is that CEV does not uniquely identify a value system; starting with CEV, you don't have actual values until you've identified the set of people/nonpeople you're including, an extrapolation procedure, and a reconciliation procedure. But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost: while parts of CEV are left unspecified, other parts are not, and so the output is not fully arbitrary. The set of CEV-compatible value systems is smaller than the set of possible value systems; and while the set of CEV-compatible value systems is not completely free of systems I find abhorrent, it is nevertheless systematically better than any other class of value systems I know of.
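
A rough way to see both halves of that point is a toy schema in Python (an illustration only, not an actual algorithm from any CEV writeup; all names are made up). The three parameters are exactly the unspecified parts, but whatever you plug in, the output is still some reconciliation of extrapolated human values rather than an arbitrary value system:

    # Toy schema of CEV as described above (not an actual algorithm).
    # The three parameters are the parts left unspecified:
    #   included_minds -- which people/nonpeople count
    #   extrapolate    -- how each mind's values are idealized
    #   reconcile      -- how the extrapolated values are merged
    def cev(included_minds, extrapolate, reconcile):
        extrapolated = [extrapolate(mind) for mind in included_minds]
        return reconcile(extrapolated)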

The second objection seems to be that there are humans out there with abhorrent values, and that extrapolation and reconciliation might not successfully eliminate those values. The best outcome, I think, is if they're initially included but either extrapolated into oblivion or cancelled by other values. (eg: valuing others' not having abortions loses to their valuing choice, but the AI arranges things so that most pregnancies are wanted and it doesn't come up often; valuing the torture of sinful children loses to their desire to not be tortured, and also goes away with a slight increase in intelligence and wisdom).

But just excluding some values outright seems very problematic. On a philosophical level, it requires breaking the symmetry between humans. On a practical level, it would turn launching an AGI first into a competition, potentially replacing careful deliberation with a race to finish. And the risks of mistakes in a race to finish seem to far outweigh the importance of almost any slight differences in value systems.

Comment author: randallsquared 28 January 2013 04:06:02AM 1 point [-]

In the context of

But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost

I think it's clear that with

valuing others' not having abortions loses to their valuing choice

you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said "valuing choice loses to others valuing their life".

But, to be clear, I don't think you will get some sort of unambiguous result even if you limit it to "existing, thinking human minds at the time of the calculation".

Comment author: Benito 27 January 2013 12:06:42AM *  -1 points [-]

The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their desires, and are perfect Bayesians. Otherwise, people can be mistaken about what they really want, or about what the facts prescribe (given their values).

'Objective ethics'? 'Merely points... at where you wish you were'? "Merely"!?

Take your most innate desires. Not 'I like chocolate' or 'I ought to condemn murder', but the most basic levels (go to a neuroscientist to figure those out). Then take the facts of the world. If you had a sufficiently powerful computer, and you could input the values and plug in the facts, then the output would be what you would most want to do.

That doesn't mean whichever urge is strongest, but it takes into account the desires that make up your conscience, and the bit of you saying 'but that's not what's right'. If you could perform this calculation in your head, you'd get the feeling of 'Yes, that's what is right. What else could it possibly be? What else could possibly matter?' This isn't 'merely' where you wish you were. This is the 'right' place to be.

This reply is more about the meta-ethics, but for interpersonal ethics, please see my response to peter_hurford's comment above.

Comment author: randallsquared 28 January 2013 02:20:07AM *  3 points [-]

A very common desire is to be more prosperous than one's peers. It's not clear to me that there is some "real" goal that this serves (for an individual) -- it could be literally a primary goal. If that's the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can't think of any satisfactory solution to this. Now, one might say, "well, if they'd grown up farther together this would be solvable", but I don't see any reason that should be true. People don't necessarily grow more altruistic as they "grow up", so it seems that there might well be no CEV to arrive at. I think, actually, a weaker version of the UFAI problem exists here: sure, humans are more similar to each other than UFAIs need be to each other, but they still seem fundamentally different in goal systems and ethical views, in many respects.

Comment author: Benito 26 January 2013 08:26:16PM *  3 points [-]

I think that there's a misunderstanding about CEV going on.

At some point, we have to admit that human intuitions are genuinely in conflict in an irreconcilable way.

I don't think an AI would just ask us what we want, and then do what suits most of us. It would consider how our brains work, and exactly what shards of value make us up. Intuition isn't a very good guide to what is the best decision for us - the point of CEV is that if we knew more about the world and ethics, we would do different things, and think different thoughts about ethics.

You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth. I don't know what CEV would do with a human and a paperclip maximiser, but with just humans?

We're pretty similar.

Comment author: randallsquared 26 January 2013 09:30:40PM 5 points [-]

The point you quoted is my main objection to CEV as well.

You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.

Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.

Comment author: Kawoomba 26 January 2013 07:30:57AM 0 points [-]

An entity that didn't care about goals would never do anything at all.

I agree with the rest of your comment, and, depending on how you define "goal", with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is only heuristics-driven goal-oriented? (I have in mind simple commands along the lines of "go left when there is a light on the right"; think Braitenberg vehicles minus the evolutionary aspect.)

In response to comment by Kawoomba on Welcome to Heaven
Comment author: randallsquared 26 January 2013 03:06:19PM 1 point [-]

Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of saying exactly which objects qualify).
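
For concreteness, here is a minimal sketch in Python (purely illustrative; the rule and the readings are made up) of the kind of heuristics-only agent Kawoomba describes: it maps sensor readings straight to actions and never represents a goal at all.

    # A Braitenberg-style agent: fixed condition-action rules, no goal.
    def heuristic_policy(light_left, light_right):
        if light_right > light_left:
            return "turn_left"    # "go left when there is a light on the right"
        if light_left > light_right:
            return "turn_right"
        return "go_straight"

    print(heuristic_policy(light_left=0.2, light_right=0.9))  # -> turn_left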

Comment author: Raoul589 26 January 2013 01:08:25AM 0 points [-]

We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that 'what is valuable is what we value' tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.

Note: I took the statement 'what is valuable is what we value' to be equivalent to 'things are valuable because we value them'. The statement has another possible meaning: 'we value things because they are valuable'. I think both are incorrect for the same reason.

In response to comment by Raoul589 on Welcome to Heaven
Comment author: randallsquared 26 January 2013 04:38:57AM 3 points [-]

I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:

what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to anything closer.

So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.

Would making paperclips become valuable if we created a paperclip maximiser?

To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement, of value, please show it to me. :)

By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.

Comment author: Raoul589 20 January 2013 12:04:24PM 0 points [-]

What evidence is there that we should value anything more than what mental states feel like from the inside? That's what the wirehead would ask. He doesn't care about goals. Let's see some evidence that our goals matter.

In response to comment by Raoul589 on Welcome to Heaven
Comment author: randallsquared 22 January 2013 08:00:58PM 0 points [-]

Just to be clear, I don't think you're disagreeing with me.
