
Comment author: falenas108 10 March 2015 08:29:48PM 3 points [-]

"In that extremity, I went into the Department of Mysteries and I invoked a password which had never been spoken in the history of the Line of Merlin Unbroken, did a thing forbidden and yet not utterly forbidden."

So, this is the single change that makes this story an AU?

Comment author: MugaSofer 11 March 2015 12:17:01PM -1 points [-]

Well, that and the differences in the setting/magic (there's no Free Transfiguration in canon, for instance, and the Mirror is different - there are fewer Mysterious Ancient Artefacts generally - and Horcruxes run on different mechanics ... stuff like that.)

And Voldemort is just inherently smarter than everyone else, too, for no in-story reason I can discern; he just is; it's part of the conceit. (Although maybe that was Albus' fault too, somehow?)

Comment author: buybuydandavis 11 March 2015 01:58:14AM 11 points [-]

But didn't he note in the confrontation in the Defense Against the Dark Arts class that Harry had chosen Quirrell as his Wise Old Wizard?

"“Harry… you must realize that if you choose this man as your teacher and your friend, your first mentor, then one way or another you will lose him, and the manner in which you lose him may or may not allow you to ever get him back.”"

Dumbledore's comment in his note just doesn't seem congruent with this earlier comment, and it's the earlier comment, not the note, that seems congruent with reality.

Comment author: MugaSofer 11 March 2015 12:06:27PM 4 points [-]

To be fair, we don't know when he wrote the note.

Comment author: jdgalt 28 February 2015 07:26:53PM 1 point [-]

The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.

I've lost the context to understand this question.

How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?

I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but well enough that disproving the simulation becomes arbitrarily difficult -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.

Oh. That's an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.

Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but ... surely it'll have exactly the same impact on society, regardless?

That appears to me to be an insoluble problem. Once intelligence (not a particular person, but the quality itself) can be impersonated in quantity, how can any person or group know they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.

It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason?

ahem ... I'm ... actually from the other tribe. I'm pretty heavily in favor of a Nanny Welfare State, although I'm not sure I'd go quite so far as to say it's "obvious" and anyone who disagrees must be "senseless ... not open to reason".

Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I think the trouble with this sort of battle-cry is that it leads to, well, assuming the other side must be evil strawmen. It's a problem. (That's why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

For the same reason, I never expect judges, journalists, or historians to be "unbiased" because I don't believe true "unbiasedness" is possible even in principle.

Comment author: MugaSofer 02 March 2015 11:50:21AM -1 points [-]

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.

But I see we agree on this.
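For what it's worth, the standard picture behind that claim: once the cosmological constant dominates the expansion, the Hubble rate approaches a constant and the universe acquires an event horizon. A sketch of the calculation, using textbook ΛCDM figures rather than anything stated in this thread:

% Comoving event horizon: light emitted today from beyond d_EH
% never reaches us, no matter how long we wait.
d_{EH} = a(t_0) \int_{t_0}^{\infty} \frac{c \, \mathrm{d}t}{a(t)}
% In the Lambda-dominated limit, a(t) = a(t_0)\, e^{H_\Lambda (t - t_0)},
% and the integral collapses to
d_{EH} \to \frac{c}{H_\Lambda} \approx 16\text{-}17 \text{ billion light-years}
% so a starship carried beyond that comoving distance can never
% signal us again, and we can never signal it.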

That appears to me to be an insoluble problem. Once intelligence (not a particular person, but the quality itself) can be impersonated in quantity, how can any person or group know they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

But is it possible to impersonate intelligence? Isn't anything that can "fake" problem-solving, goal-seeking behaviour sufficiently well already intelligent (that is, sapient, though potentially not sentient, which could be a problem)?

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.

What makes you think that "individual rights" are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they're the correct moral theory, what evidence would you point to? You might change my mind.

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

Oh, everyone is misguided. (Hence the name of the site.) But they generally aren't actual evil strawmen.

Comment author: advancedatheist 28 February 2015 01:21:09AM *  2 points [-]

I hate to spoil the mood for nerd grieving and geek hermeneutics, but Star Trek made a certain kind of sense in the late 1960s (nearly 50 years ago!), when the U.S. and the Soviet Union had real space programs which tried to do new things, one after another. But because astronautics has regressed since then, despite all the accelerationist propaganda you hear from transhumanists, this genre of mythological framework for thinking about "the future" makes less and less sense. Given the failure of the "space age," would people 50 years from now, in a permanently Earth-bound reality, bother to watch these ancient shows and obsess over the characters?

Comment author: MugaSofer 02 March 2015 11:32:05AM 0 points [-]

Actually, they mention every so often that the Cold War turned hot in the Star Trek 'verse and society collapsed. They're descended from the civilization that rebuilt.

Comment author: Mark_Friedenbach 29 January 2015 09:25:01PM 0 points [-]

You have a credible reason for thinking it will take longer?

Comment author: MugaSofer 29 January 2015 10:21:01PM *  -1 points [-]

I'm no expert, but even Kurzweil - who, from past performance, is usually correct but over-optimistic by maybe five, ten years - doesn't expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.

2020 is in five years. The kind of progress that would seem to imply - from where we are now to full-on human-level AI in just five years - seems incredible.

Comment author: ike 28 January 2015 04:52:00AM *  3 points [-]

So if you were trying to maximise total points, wouldn't it be best to never let it out because you lose a lot more if it destroys the world than you gain from getting solutions?

What values for points make it rational to let the AI out, and is it also rational in the real-world analogue?

Comment author: MugaSofer 29 January 2015 01:28:41PM -1 points [-]

We rolled the AI's ethics randomly, rolled random events with dice, and the AI offered various solutions to those problems... You lost points if you failed to deal with the problems, and lost lots of points if you freed the AI and they happened to have goals you disagreed with, like the annihilation of everything.
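To make ike's question concrete: it's an expected-points comparison. A minimal sketch in Python, in which every number (the point values and the hostility probability) is made up for illustration, since the comment above only describes the scoring in outline:

# Hypothetical payoffs for the boxed-AI game described above.
def expected_points(p_hostile, release_bonus, hostile_penalty, keep_cost):
    """Expected points for releasing the AI vs. keeping it boxed."""
    release = (1 - p_hostile) * release_bonus - p_hostile * hostile_penalty
    keep = -keep_cost  # boxed: you eat the unsolved-problem losses
    return release, keep

release, keep = expected_points(p_hostile=0.1, release_bonus=50,
                                hostile_penalty=1000, keep_cost=20)
print(f"release: {release:+.1f}, keep: {keep:+.1f}")
# release: -55.0, keep: -20.0 -- on these numbers, never let it out.

Releasing wins only when (1 - p_hostile) * release_bonus + keep_cost exceeds p_hostile * hostile_penalty; with an annihilation-sized penalty, almost no hostility probability is small enough, which seems to be ike's point.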

Comment author: MugaSofer 29 January 2015 01:08:14PM 0 points [-]

many now believe that strong AI may be achieved sometime in the 2020s

Yikes, but that's early. That's a lot sooner than I would have said, even as a reasonable lower bound.

Comment author: dxu 16 January 2015 08:17:48PM *  3 points [-]

This may be somewhat off-topic, but I've been noticing for some time that your comments frequently receive seemingly "random" downvotes--that is to say, downvotes that appear to be there for no good reason. As an example, the comment that I am replying to right now has a karma score of -1 at the time of posting, despite not being clearly mistaken or in violation of some LW social norm. Checking your profile reveals that your total karma score is in the negatives, despite the fact that you seem like a fairly well-read person, as well as a long-time user of LW. Does anyone have any idea why this is the case?

Comment author: MugaSofer 27 January 2015 03:46:20AM *  1 point [-]

Yikes, you're right. I had noticed something odd, but forgot to look into it. Dangit.

I'm pretty sure this is somebody going to the trouble of downvoting every comment of mine, which has happened before.

It's against the rules, so I'll ask a mod to look into it; but obviously, if someone cares this much about something I'm doing or something I'm wrong about, please, PM me. I can't interpret you through meaningless downvotes, but I'll probably stop whatever is bothering you if I know what it is.

Comment author: TheOtherDave 16 January 2015 10:01:08PM 2 points [-]

A quick scan of the last couple of pages of MugaSofer's comments seems to indicate that the last handful of comments have not received negative votes, and a long sequence of comments before that have consistently received a single negative vote each, which looks like systematic downvoting to me (by someone who hasn't yet caught up) but of course is not remotely definitive.

That said, the net balance of those comments is generally positive, which means the negative overall karma isn't accounted for either way.

Comment author: MugaSofer 27 January 2015 03:30:52AM *  1 point [-]

I can give you a little more data - this has happened before, which is why I'm in the negatives. Which I guess makes it more likely to happen again, if I'm that annoying :/

It turned out to be a different person from the famous case; they were reasonable and explained their (accurate) complaint via PM. It's probably not the same person this time, but if it happened once ...

Comment author: JoshuaZ 15 January 2015 09:46:37PM 3 points [-]

Ooh, I hadn't thought of that.

This is one of the standard scholarly explanations. May I suggest this shows that you should maybe read more on this subject?

Comment author: MugaSofer 27 January 2015 03:11:43AM -1 points [-]

Yup, definitely. Interested amateur here.
