shminux comments on What does the world look like, the day before FAI efforts succeed? - Less Wrong Discussion

Post author: michaelcurzi 16 November 2012 08:56PM 23 points

Comment author: shminux 16 November 2012 10:47:38PM * 4 points

What are those "good guys" you speak of?

Comment author: michaelcurzi 16 November 2012 10:58:23PM 1 point

People pursuing a positive Singularity with the right intentions: people who understand the gravity of the problem, take it seriously, and pursue it on behalf of humanity rather than on behalf of some smaller group.

I haven't offered a rigorous definition, and I'm not going to, but I think you know what I mean.

Comment author: Steve_Rayhawk 17 November 2012 01:40:37AM * 17 points

you know what I mean.

Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that "good guys" would imply things like information security, concern for Friendliness so-named, and so on. They might instead think that the intuition you mean to evoke with a vague, affect-laden term like "good guys" is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.

To prevent this impression, if you're going to use the term "good guys", then at or before the place where you first use it, you should probably put an explanation, like

(I.e. people who are familiar with the kind of thinking that can generate arguments like those in "The Detached Lever Fallacy", "Fake Utility Functions" and the posts leading up to it, "Anthropomorphic Optimism" and "Contaminated by Optimism", "Value is Fragile" and the posts leading up to it, and the "Envisioning perfection" and "Beyond the adversarial attitude" discussions in Creating Friendly AI or most of the philosophical discussion in Coherent Extrapolated Volition, and who understand what it means to be dealing with a technology that might be able to bootstrap to the singleton level of power that could truly engineer a "forever" of the "a boot stamping on a human face — forever" kind.)

Comment author: michaelcurzi 17 November 2012 02:39:58AM 5 points

Okay, I'm convinced. I think I will just remove the term altogether, because it's confusing the issue.

Comment author: hankx7787 17 November 2012 01:44:02AM -1 points

Well said.

Comment author: shminux 17 November 2012 01:19:58AM 3 points

I haven't offered a rigorous definition, and I'm not going to, but I think you know what I mean.

I might have some inkling of what you want to mean, but on this forum you ought to be able to define your terms if you want to be taken seriously. I suspect that if you honestly try to define "good guys", you will find that it is harder than it looks and not at all obvious.

Comment author: michaelcurzi 17 November 2012 02:58:21AM * 0 points

I'm not saying that the definition is obvious; I'm saying that it's beside the point. It was clearly detracting from the quality of the conversation, though, so I've removed the term.

Comment author: Decius 17 November 2012 12:43:05AM 1 point

What do the good guys look like? Do they look like a cabal with government sanction that performs research in secret facilities offshore, controls the asteroid deflection system (and therefore the space program), and prohibits anyone else from using the most effective mind-enhancing (and presumably quality-of-life-enhancing) techniques?

Basically, should one of the very first things a Friendly AI does be to wipe out the group of people who succeeded in creating the first FAI?