Comment author: FiftyTwo 18 November 2012 05:24:39PM 8 points [-]

We say "politics is the mindkiller" but ti seems an seperate question why certain political topics are more 'mindkillery' than others.

A recent example that brought this to mind is the conflict going on in Gaza. It's unusual in that my friends and acquaintances who are normally fairly moderate and willing to see both sides on political topics are splitting very heavily onto opposite sides and refusing to see the other's point of view.

Comment author: hankx7787 18 November 2012 05:31:30PM 2 points [-]

Well, if there's anything you can do to increase the level of mindkill in politics, it's to mix in religion. Are they less polarized on issues like, say, abortion?

Comment author: Giles 17 November 2012 10:23:35PM 5 points [-]

Another dimension: value discovery.

  • Fantastic: There is a utility function representing human values (or a procedure for determining such a function) that most people (including people with a broad range of expertise) are happy with.
  • Pretty good: Everyone's values are different (and often contradict each other), but there is broad agreement as to how to aggregate preferences. Most people accept that FAI needs to respect the values of humanity as a whole, not just their own.
  • Sufficiently good: Many important human values contradict each other, with no "best" solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached.
Comment author: hankx7787 18 November 2012 12:50:28AM *  -2 points [-]

Agree with your Fantastic but disagree with how you arrange the others... it wouldn't be rational to favor a solution which satisfies others' values in larger measure at the expense of satisfying one's own values in smaller measure. If the solution is less than Fantastic, I'd rather see a solution which favors in larger measure the subset of humanity whose values are more similar to my own, and in smaller measure the subset of humanity whose values are more divergent from my own.

I know, I'm a damn, dirty, no good egoist. But you have to admit that in principle egoism is more rational than altruism.

Comment author: [deleted] 17 November 2012 09:36:27PM *  3 points [-]

Why is his pessimistic (realistic?) take downvoted without counterargument? Hankx isn't in negative karma, so I don't think he is usually disruptive, and I think he is making this argument in good faith.

Comment author: hankx7787 18 November 2012 12:15:53AM *  3 points [-]

Well I didn't really substantively defend my position with reasons, and heaping on all the extra adjectives didn't help :P

I was trying to figure out how to strike through the unsupported adjectives, and now I can't figure out how to un-retract the comment... bleh, what a mess.

While I still agree with all the adjectives, I'll take them out to be less over the top. Here's what the edit should say:

I'd argue this entire exercise is an indictment of Eliezer's approach to Friendly AI. The notion of a formal, rigorous success of "Friendliness theory" coming BEFORE the Singularity is astronomically improbable.

What a Friendly Singularity will actually look like is an AGI researcher or researchers forging ahead at an insane risk to themselves and the rest of humanity, and then somehow managing the improbable task of not annihilating humanity through intensive, inherently faulty safety engineering, before eventually realizing a formal solution to Friendliness theory post-Singularity. And of course it goes without saying that the odds are heavily against any such safety mechanisms succeeding, let alone the odds that they will ever even be attempted.

Suffice to say a world in which we are successfully prepared to implement Friendly AI is unimaginable at this point.

And just to give some indication of where I'm coming from, I would say that this conclusion follows pretty directly if you buy Eliezer's arguments in the sequences and elsewhere about locality and hard-takeoff, combined with his arguments that FAI is much harder than AGI. (see e.g. here)

Of course I have to wonder: is Eliezer holding out to try to "do the impossible" in some pipe-dream FAI scenario like the OP imagines, or does he agree with this argument but still think he's somehow working in the best way possible to support this more realistic scenario when it comes up?

Comment author: Steve_Rayhawk 17 November 2012 01:40:37AM *  17 points [-]

you know what I mean.

Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that "good guys" would imply things like information security, concern for Friendliness so-named, etc., and they might think that the intuition you mean to evoke with a vague affect-laden term like "good guys" is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.

To prevent this impression, if you're going to use the term "good guys", then at or before the place where you first use it, you should probably put an explanation, like

(I.e. people who are familiar with the kind of thinking that can generate arguments like those in "The Detached Lever Fallacy", "Fake Utility Functions" and the posts leading up to it, "Anthropomorphic Optimism" and "Contaminated by Optimism", "Value is Fragile" and the posts leading up to it, and the "Envisioning perfection" and "Beyond the adversarial attitude" discussions in Creating Friendly AI or most of the philosophical discussion in Coherent Extrapolated Volition, and who understand what it means to be dealing with a technology that might be able to bootstrap to the singleton level of power that could truly engineer a "forever" of the "a boot stamping on a human face — forever" kind.)

Comment author: hankx7787 17 November 2012 01:44:02AM -1 points [-]

well said.

Comment author: hankx7787 17 November 2012 01:21:09AM *  -1 points [-]

The scenarios you listed are absurd, incoherent, irrelevant, and wildly, wildly optimistic.

Actually I'd argue this entire exercise is an indictment of Eliezer's approach to Friendly AI. The notion of a formal, rigorous success of "Friendliness theory" coming BEFORE the Singularity is astronomically improbable.

What a Friendly Singularity will actually look like is an AGI researcher or researchers forging ahead at an insane risk to themselves and the rest of humanity, and then somehow managing the improbable task of not annihilating humanity through intensive, inherently faulty safety engineering, before eventually realizing a formal solution to Friendliness theory post-Singularity. And of course it goes without saying that the odds are heavily against any such safety mechanisms even succeeding, let alone the odds that they will ever even be attempted.

Suffice to say a world in which we are successfully prepared to implement Friendly AI is unimaginable at this point.

EDIT: how the hell do I un-retract this? see below

Signalling fallacies: Implication vs. Inference

-7 hankx7787 14 November 2012 09:15PM

The signalling fallacy that seems to get all the attention is what I call a fallacy of signalling "implication", e.g. when someone says:

"Justin Bieber's music is crappy"

The rational implication of this communication is that Justin Bieber's music is, in fact, crappy according to some standard. But if what they were actually implying is that they don't like Justin Bieber or that they are signalling a tribal affiliation with fellow JB haters, then they are committing a fallacy of signalling implication.


But that's not the only type of signalling fallacy. You can also commit a fallacy of signalling "inference", e.g. when someone says:

"Atlas Shrugged is the greatest book ever written".

Again, the rational implication of this communication is that Atlas Shrugged is, in fact, the greatest according to some standard. And if that's what they were actually implying, but you infer that they simply enjoyed the book a lot or are signalling a tribal affiliation with fellow AS lovers, then you are committing a fallacy of signalling inference. (Note that this goes the other way, too: if they *were* implying simply that they enjoyed the book or were affiliating themselves with a tribe, and you inferred they were making some factual claim, that would be just as wrong.)


So it's important to be aware of both sides of this fallacy. If you happen to be overly concerned with the former, you might fall victim to the latter.

Comment author: hankx7787 14 November 2012 08:50:44PM *  1 point [-]

tldr, but:

I don't think shorthand interpretations like these are accurate for most people who claim that JB sucks. Instead, I suspect most people who argue this are communicating some combination of (a) negative affect towards JB and (b) tribal affiliation with fellow JB haters. I've taken to referring to statements like these, that are neither preference claims nor empirical claims, as "attitude claims".

Did it ever occur to you that maybe they simply mean what they said? That JB's music is crappy music according to some standard? I know, far be it from a rationality community to focus on the rational communication presumably being made; instead, focusing on "signalling" is what's in style, for some stupid reason.

Same goes for:

"Atlas Shrugged is the best book ever written."

Which is a direct quote from a comment I made here long ago :)

EDIT: removed "objectively". I keep forgetting this word causes people's brains to explode.

Comment author: moshez 09 November 2012 04:20:36PM 10 points [-]

I took it. No SAT scores or classical IQ scores; didn't take the Myers-Briggs (because it's stupid) or the autism test (because, freakin' hell, amateur psychology diagnosis on the 'net).

Comment author: hankx7787 09 November 2012 11:11:51PM -1 points [-]

Same here, agree 100%

Comment author: hankx7787 09 November 2012 02:28:41AM 9 points [-]

done!

Comment author: DaFranker 25 October 2012 03:44:54PM *  0 points [-]

(...) it wants to live and reproduce because evolution designed it that way.

This is a crucial piece of wording which propagates popular misconceptions about evolution.

Evolution did not design anything. Evolution has no will, no objective, no motives, no utility function, no overarching plot or grand plan for humanity and the world, unlike what an uninformed reader would instantly infer from reading the above quote.

A slightly better formulation would be:

"(...) it wants to live and reproduce because it is a modified copy of other organisms that randomly wanted to live and reproduce, while the organisms that did not want to live and reproduce were eliminated long ago in the history of evolution."

The following sentences that talk about evolution are similarly misleading. I will retract my downvote once these issues are corrected and the signal-to-noise ratio is improved a bit.

Comment author: hankx7787 25 October 2012 04:39:56PM -1 points [-]

I think you missed the entire point of the post ^_^
