Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: jdgalt 28 February 2015 07:26:53PM 1 point [-]

The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.

I've lost the context to understand this question.

How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?

I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but well enough to be arbitrarily hard to disprove -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.

Oh. That's an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.

Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but ... surely it'll have exactly the same impact on society, regardless?

That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.

It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason?

ahem ... I'm ... actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, though I'm not sure I'd go quite so far as to say it's "obvious" and anyone who disagrees must be "senseless ... not open to reason".

Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I think the trouble with these sort of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It's a problem. (That's why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

For the same reason, I never expect judges, journalists, or historians to be "unbiased" because I don't believe true "unbiasedness" is possible even in principle.

Comment author: MugaSofer 02 March 2015 11:50:21AM 0 points [-]

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.

But I see we agree on this.

That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

But is it possible to impersonate intelligence? Isn't anything that can "fake" problem-solving, goal-seeking behaviour sufficiently well itself intelligent (that is, sapient, though potentially not sentient, which could be a problem)?

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.

What makes you think that "individual rights" are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they're the correct moral theory, what evidence would you point to? You might change my mind.

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

Oh, everyone is misguided. (Hence the name of the site.) But they generally aren't actual evil strawmen.

Comment author: advancedatheist 28 February 2015 01:21:09AM *  1 point [-]

I hate to spoil the mood for nerd grieving and geek hermeneutics, but Star Trek made a certain kind of sense in the late 1960's (nearly 50 years ago!) when the U.S. and the Soviet Union had real space programs which tried to do new things, one after another. But because astronautics has regressed since then, despite all accelerationist propaganda you hear from transhumanists, this genre of mythological framework for thinking about "the future" makes less and less sense. Given the failure of the "space age," would people 50 years from now, in a permanently Earth-bound reality, bother to watch these ancient shows and obsess over the characters?

Comment author: MugaSofer 02 March 2015 11:32:05AM 1 point [-]

Actually, they mention every so often that the Cold War turned hot in the Star Trek 'verse and society collapsed. They're descended from the civilization that rebuilt.

Comment author: Mark_Friedenbach 29 January 2015 09:25:01PM 0 points [-]

You have a credible reason for thinking it will take longer?

Comment author: MugaSofer 29 January 2015 10:21:01PM *  0 points [-]

I'm no expert, but even Kurzweil - who, from past performance, is usually correct but over-optimistic by maybe five, ten years - doesn't expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.

2020 is in five years. The kind of progress that would seem to imply - from where we are now to full-on human-level AI in just five years - seems incredible.

Comment author: ike 28 January 2015 04:52:00AM *  3 points [-]

So if you were trying to maximise total points, wouldn't it be best to never let it out because you lose a lot more if it destroys the world than you gain from getting solutions?

What values for points make it rational to let the AI out, and is it also rational in the real-world analogue?

Comment author: MugaSofer 29 January 2015 01:28:41PM -1 points [-]

We rolled randomly the ethics of the AI, rolled random events with dice and the AI offered various solutions to those problems... You lost points if you failed to deal with the problems and lost lots of points if you freed the AI and they happened to have goals you disagreed with like annihilation of everything.

Comment author: MugaSofer 29 January 2015 01:08:14PM 0 points [-]

many now believe that strong AI may be achieved sometime in the 2020s

Yikes, but that's early. That's a lot sooner than I would have said, even as a reasonable lower bound.

Comment author: dxu 16 January 2015 08:17:48PM *  3 points [-]

This may be somewhat off-topic, but I've been noticing for some time that your comments frequently receive seemingly "random" downvotes--that is to say, downvotes that appear to be there for no good reason. As an example, the comment that I am replying to right now has a karma score of -1 at the time of posting, despite not being clearly mistaken or in violation of some LW social norm. Checking your profile reveals that your total karma score is in the negatives, despite the fact that you seem like a fairly well-read person, as well as a long-time user of LW. Does anyone have any idea why this is the case?

Comment author: MugaSofer 27 January 2015 03:46:20AM *  1 point [-]

Yikes, you're right. I had noticed something odd, but forgot to look into it. Dangit.

I'm pretty sure this is somebody going to the trouble of downvoting every comment of mine, which has happened before.

It's against the rules, so I'll ask a mod to look into it; but obviously, if someone cares enough about something I'm doing or am wrong about this much, please, PM me. I can't interpret you through meaningless downvotes, but I'll probably stop whatever is bothering you if I know what it is.

Comment author: TheOtherDave 16 January 2015 10:01:08PM 2 points [-]

A quick scan of the last couple of pages of MugaSofer's comments seems to indicate that the last handful of comments have not received negative votes, and a long sequence of comments before that have consistently received a single negative vote each, which looks like systematic downvoting to me (by someone who hasn't yet caught up) but of course is not remotely definitive.

That said, the net balance of them is generally positive, which means the negative balance isn't accounted for either way.

Comment author: MugaSofer 27 January 2015 03:30:52AM *  1 point [-]

I can give you a little more data - this has happened before, which is why I'm in the negatives. Which I guess makes it more likely to happen again, if I'm that annoying :/

It turned out to be a different person from the famous case; they were reasonable and explained their (accurate) complaint via PM. Probably not the same person this time, but if it happened once ...

Comment author: JoshuaZ 15 January 2015 09:46:37PM 3 points [-]

Ooh, I hadn't thought of that.

This is one of the standard scholarly explanations. May I suggest this shows that you should maybe read more on this subject?

Comment author: MugaSofer 27 January 2015 03:11:43AM -1 points [-]

Yup, definitely. Interested amateur here.

Comment author: gjm 15 January 2015 05:01:49PM 4 points [-]

I think the claim isn't quite "it has a mistake, therefore it can't be meant to be interpreted at face value" but "it has a really glaringly obvious mistake, therefore it can't be meant to be interpreted at face value".

That's a lot more sensible, and using this principle doesn't make you incapable of recognizing mistakes. It does make you incapable of recognizing when the people who put together your sacred text did something incredibly stupid, but maybe that's OK.

Except that I think another reasonable interpretation is: whoever edited the text into a form that contains both stories did notice that they are inconsistent, didn't imagine that somehow they are both simultaneously correct, but did intend them to be taken at face value -- the implicit thinking being something like "obviously at least one of these is wrong somewhere, but both of them are here in our tradition; probably one is right and the other wrong; I'll preserve them both, so that at least the truth is in here somewhere".

If this sort of thing is possible -- and I think it's very plausible -- then the inference from "glaring inconsistency" to "intended metaphorically or something like that" no longer works. On the other hand, in that case you at least have some precedent for it being OK not to assume that everything in the text is literally correct.

Comment author: MugaSofer 15 January 2015 07:45:41PM *  -1 points [-]

There's also the problem of people taking things meant to be metaphorical as literal, simply because, well, it's right there, right?

For example (just ran into this today):

Early in the morning, as Jesus was on his way back to the city, he was hungry. Seeing a fig tree by the road, he went up to it but found nothing on it except leaves. Then he said to it, "May you never bear fruit again!" Immediately the tree withered. (Matthew 21:18-22, NIV)

This is pretty clearly an illustration. "Like this tree, you'd better actually give results, not just give the appearance of being moral". (In fact, I believe Jesus uses this exact illustration in a sermon later.)

And yet, I saw this on a list of "God's Temper Tantrums that Christians Never Mention", presumably interpreted as "Jesus zapped a tree because it annoyed him."

Except that I think another reasonable interpretation is: whoever edited the text into a form that contains both stories did notice that they are inconsistent, didn't imagine that somehow they are both simultaneously correct, but did intend them to be taken at face value -- the implicit thinking being something like "obviously at least one of these is wrong somewhere, but both of them are here in our tradition; probably one is right and the other wrong; I'll preserve them both, so that at least the truth is in here somewhere".

Ooh, I hadn't thought of that.

Comment author: JoshuaZ 09 January 2015 10:08:50PM 0 points [-]

USian fundamentalist-evangelical Christianity, however, is ... exceptionally bad at reading their supposedly all-important sacred text, though. And, indeed, facts in general. We're talking about the movement that came up with and is still pushing "creationism", here.

Historically, it isn't quite accurate to credit that to the US: pre-Darwin insistence on a literal global flood could be found in locations all over Europe. But more relevant to the point, I don't see how this is a good example: if anything, this is one where the fundamentalists are actually reading the text closer to its naive meaning, without any stretched attempts to claim a metaphorical intent that is hard to see in the text. Smart people have been trying to read the Genesis text in a way that is consistent with the evidence for a very long time now, which leaves a lot of very well-done apologetics to choose from, but that doesn't mean it is actually what the text intended. It is true that the more, for lack of a better term, sophisticated creationists do stretch the text massively (claims about mats of vegetation helping preserve life during the flood, and claims of rapid post-deluge speciation, both fall into that category), but A) those claims aren't that common and B) they aren't any more of a stretch than what liberal interpretations of the text are doing.

Comment author: MugaSofer 15 January 2015 01:01:07PM *  -1 points [-]

I don't see how this is a good example: if anything, this is one where the fundamentalists are actually reading the text closer to its naive meaning, without any stretched attempts to claim a metaphorical intent that is hard to see in the text. Smart people have been trying to read the Genesis text in a way that is consistent with the evidence for a very long time now, which leaves a lot of very well-done apologetics to choose from, but that doesn't mean it is actually what the text intended.

Well, I'm a Christian, so I might be biased in favour of interpretations that make that seem reasonable. But even so, I find it hard to believe a text that includes two mutually-contradictory creation stories (right next to each other in the text, at that) intended them to be interpreted literally.
