Konkvistador comments on Politics Discussion Thread February 2013 - Less Wrong Discussion

1 Post author: OrphanWilde 06 February 2013 09:33PM


Comment author: [deleted] 07 February 2013 03:50:20PM 8 points [-]

I still think these threads are a bad idea.

Comment author: Jack 07 February 2013 10:47:33PM *  10 points [-]

This seems like an odd position for someone who spends a relatively larger fraction of his LW time on politics.

Edit: Didn't mean to make it personal. Was just interested in the rationale.

Comment author: Vladimir_Nesov 07 February 2013 11:22:37PM *  1 point [-]

(It's good to have less social pressure against odd-seeming positions, so that they can be freely examined according to their more carefully construed meaning rather than surface appearance.)

Comment author: Jack 07 February 2013 11:59:09PM 2 points [-]

Having less pressure against unorthodox or novel positions is a good thing. But I think it makes sense to have minimal social pressure to give some account of apparent discrepancies between actions and beliefs, since such discrepancies suggest (though don't necessitate) contradictory beliefs somewhere.

Comment author: Vladimir_Nesov 08 February 2013 12:30:29AM *  1 point [-]

This seems to act as an incentive both for resolving the conflict and for obscuring its presence or nature. I feel that the latter effect can be more damaging, so it might be safer to avoid this pressure. For example, one could draw attention to the presence of an apparent conflict (if it's plausible that it has been missed) without accompanying that with (implied) disapproval.

Comment author: Jack 08 February 2013 12:42:37AM 1 point [-]

My original comment was about as devoid of implications of disapproval as I could make it. I'd be interested to hear better formulations.

Comment author: OrphanWilde 07 February 2013 09:29:37PM -1 points [-]

Incidentally, my biggest problem with these threads is that for the positions I'm most interested in hearing good arguments against, I suspect I wouldn't find any opposition here. I'm fairly aware of the first-principles differences that produce most of my disagreements; the baffling ones are things like support of drone warfare coming from people who believe in universal healthcare. (I can see support of one, or the other, but not both at the same time. And yet people exist who do support both at the same time.)

Comment author: Watercressed 07 February 2013 10:38:10PM 10 points [-]

I see no particular reason why someone can't believe that healthcare consequentially saves lives and that drone warfare also consequentially saves lives.

Comment author: ikrase 08 February 2013 11:09:57AM 1 point [-]

Yeah, this claim confuses me. (I mean, I see this kind of thing every day, but Less Wrong seems to be where it would never occur.)

I do support universal healthcare, for pretty much all the normal reasons.

I don't support drone warfare, but I am willing to criticize people who make bad arguments against it, because I don't think I'm smarter than the US military strategists.

Comment author: RichardKennaway 08 February 2013 07:50:47PM 2 points [-]

the baffling ones are things like support of drone warfare coming from people who believe in universal healthcare. (I can see support of one, or the other, but not both at the same time. And yet people exist who do support both at the same time.)

I am baffled by your bafflement. Kill your enemies, save your allies. Where's the contradiction?

Comment author: OrphanWilde 08 February 2013 08:29:58PM 0 points [-]

Sorry about the confusion; I just realized exactly where the disconnect is. I was discussing drone warfare in another forum, specifically the use of drones against a nation's own citizens. Absent that context my statement doesn't make much sense at all, no.

Does it make more sense when I clarify that I'm referring to the use of drone warfare against a nation's own citizens without judicial oversight?

Comment author: RichardKennaway 09 February 2013 08:34:23AM 0 points [-]

Does it make more sense when I clarify that I'm referring to the use of drone warfare against a nation's own citizens without judicial oversight?

Which country is that happening in? But presumably that government, rightly or wrongly, has decided that some of its citizens are enemies.

Comment author: drethelin 08 February 2013 08:19:37PM *  -1 points [-]

Are you against drone warfare versus OTHER types of warfare, or are you just against warfare? I think that might be where the confusion is. If you think we should try to save more people, and therefore support healthcare and oppose warfare, that makes sense. It also makes sense to say you support healthcare because it saves lives and you support drone warfare because it saves lives compared to other kinds of warfare, as opposed to the less realistic option of no warfare at all.

Comment author: OrphanWilde 08 February 2013 08:28:45PM 0 points [-]

I was referring to a very specific use of drone warfare and was insufficiently explicit in my comment. (A peril of switching back and forth between different forums of discussion, dropping context.) It wasn't even until the latest round of comments that I realized why exactly people were baffled by my position.

Specifically I was referring to the use of drone warfare to target a nation's own citizens without judicial review.

Comment author: Eugine_Nier 08 February 2013 10:05:22PM 2 points [-]

I still don't see the contradiction. Both universal healthcare and drone warfare fundamentally come from a belief or alief that life-or-death decisions about citizens should be made by the government.

Comment author: OrphanWilde 08 February 2013 10:18:48PM 0 points [-]

Not really; universal healthcare is based on a belief (or alief) that life is a fundamental right. A simple belief that government should be making these decisions might lead to a belief in government-provided or government-run healthcare, but that's hardly the same thing as universal healthcare, which holds that government doesn't have a right to decide, only a responsibility to provide.

Comment author: Eugine_Nier 08 February 2013 10:43:00PM 3 points [-]

Ok, I think a better way to formulate my point is that both universal healthcare and drone warfare come from an alief that the government has unlimited moral authority, in the sense Arnold Kling discusses here and here.

doesn't have a right to decide, only a responsibility to provide.

I don't see the difference, especially when you remember that resources are finite.

Comment author: OrphanWilde 11 February 2013 04:09:15PM 0 points [-]

You seem to be conflating intention and results in the opposite direction from the one I usually see; you're suggesting that the practical necessities of implementing universal healthcare are a part of the ideology or principles which lead one to seek it.

Comment author: Eugine_Nier 12 February 2013 04:31:29AM 2 points [-]

you're suggesting that the practical necessities of implementing universal healthcare are a part of the ideology or principles which lead one to seek it.

Specifically an ideology/alief that causes one to decide which policies to support without thinking about how they would actually be implemented in practice.

Comment author: OrphanWilde 07 February 2013 04:00:36PM 1 point [-]

Fair enough, but they do seem pretty civil thus far. I've been monitoring them to make sure they don't get out of hand, and that they don't start infecting the rest of the discussions. (There have been a couple of political-leaning topics, but no more than before, and I think maybe less.)

Comment author: Larks 07 February 2013 05:52:30PM 4 points [-]

The objection is mind-killing and agent-reputational effects, not incivility.

Comment author: buybuydandavis 07 February 2013 09:10:56PM *  2 points [-]

I find it strange that the potential for political bias is seen as so much worse than a self-imposed ban on The-Subject-Which-Must-Not-Be-Discussed. Is intellectual evasion really seen as preferable to potential bias?

Comment author: Eugine_Nier 08 February 2013 03:21:06AM 3 points [-]

Is intellectual evasion really seen as preferable to potential bias?

If one doesn't know, it is better to know that one doesn't know.

Comment author: ikrase 08 February 2013 11:14:50AM 0 points [-]

The Subject Which Must Not Be Discussed? Is that still a thing? (Infohazard related to Super AIs?)

I can see two other reasons. The first is that a culture WILL develop, and if outsiders see the political culture, we might not get a chance to teach them enough rationality for them to not be mindkilled instantly.

The second is that it's well established that smart people often believe weird and/or untrue things. This, combined with the site's lack of respect for political correctness (in both the old-timey 'within the realm of policy you can actually talk about' sense and the modern offensive-language sense) and its contrarianism, could result in really bad politics.

Comment author: Rukifellth 07 February 2013 06:48:13PM 1 point [-]

We've got to deal with politics eventually. The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI, and it's not like those cognitive biases will disappear by that time. Besides, I feel like LW could get more done with discussions about political brainstorming, at least in the near future.

Comment author: wedrifid 08 February 2013 04:38:51AM 3 points [-]

The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI

'Just' because they've got an FAI? Once you have an FAI (and nobody else has a not-friendly-to-you-AI) you've more or less won already.

We've got to deal with politics eventually.

Apart from being able to protect against any political threat (and so make persuasion optional, not necessary) an FAI could, for example, upgrade Eliezer to have competent political skills.

The politics that MIRI folks would be concerned about are the politics before they win, not after they win.

Comment author: Rukifellth 09 February 2013 02:29:08AM 0 points [-]

Work done by Lesswrongians could decrease the workload of such an FAI while providing immediate results. If it takes twenty years for such a thing to be developed, that's twenty years in either direction on the good/bad scale civilization could go. This could be the difference of an entire year that it takes an FAI to implement whatever changes to make society better.

Comment author: [deleted] 09 February 2013 02:48:28AM 1 point [-]

This could be the difference of an entire year that it takes an FAI to implement whatever changes to make society better.

You are not taking AI seriously. Is this intentional?

A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn't just rewrite their brains with nano.)

It may not do this, for the sake of our comfort, but if anything was urgent, it would be done.

Comment author: Jack 11 February 2013 06:44:35PM 1 point [-]

A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn't just rewrite their brains with nano.)

While I wouldn't dismiss this possibility at all, you seem a little overconfident. The best current marketing tactics can shift market share a percentage point or two, or maybe make a half-percentage-point difference in a political campaign. Obviously better than the best is better. But assuming ethical limitations on persuasion tactics and general human suspicion of new things, "days" seems pretty optimistic (and twenty years pessimistic). There's no good reason to think the persuasive power of marketing is at all linear with the intelligence of the creator. We ought to have very large error bars on this kind of thing, and while the focus on these fast take-over scenarios makes sense for emphasizing risk, that focus will make them appear more likely to us than they actually are.

Comment author: Larks 07 February 2013 09:56:49PM 4 points [-]

The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI

If an AGI wants you to listen, you won't have any choice. If it doesn't want you to listen, you won't have the option. The set of "problems for us after we get FAI" is the null set.

Comment author: wedrifid 08 February 2013 04:33:38AM 0 points [-]

The set of "problems for us after we get FAI" is the null set.

Kind of, almost. It could be that we (implicitly) choose to have problems for ourselves.

Comment author: [deleted] 08 February 2013 04:45:30AM 1 point [-]

It could be that we (implicitly) choose to have problems for ourselves.

In case it's not clear: this means the FAI causing problems for us on our behalf, not literally making a choice we are aware of.

Comment author: wedrifid 08 February 2013 04:52:20AM 0 points [-]

In case it's not clear. This means the FAI causing problems for us on our behalf, not literally making a choice we are aware of.

(Or 'choosing not to intervene to solve all problems'. The difference matters to some, even if it is somewhat arbitrary.)

Comment author: Rukifellth 07 February 2013 11:49:10PM 0 points [-]

Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?

Comment author: Desrtopa 08 February 2013 02:41:32AM 0 points [-]

That doesn't sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.

As a general rule, I'd say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.

Comment author: ikrase 08 February 2013 11:17:08AM 1 point [-]

I thought he was saying that once you have a Super AI, you don't have to deal with politics.

Comment author: Desrtopa 08 February 2013 02:35:48PM 0 points [-]

That doesn't sound like something I'd infer from his previous comment:

We've got to deal with politics eventually. The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI, and it's not like those cognitive biases will disappear by that time.