TheOtherDave comments on Complexity based moral values. - Less Wrong

-6 Post author: Dmytry 06 April 2012 05:09PM


Comment author: TheOtherDave 07 April 2012 02:00:05PM 6 points

I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of "reasoning", though I'd be more inclined to say "algorithms" to avoid misleading connotations) of which we're unaware. And, yes, recapitulating that covert reasoning overtly frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.

Comment author: Dmytry 07 April 2012 02:40:22PM *  0 points

I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant.

Sorry for that; it was a bit of a leak from how the interactions here about AI issues are rather adversarial in nature, in the sense that the ambiguity, unavoidable in human language, of anything that disagrees with the prevailing opinion here gets resolved in favour of the interpretation that makes the least amount of sense. AI is, definitely, a very scary risk. Scariness doesn't result in the most reasonable processing. I do not claim to be immune to this.

Comment author: TheOtherDave 07 April 2012 04:37:03PM *  6 points

I agree that some level of ambiguity is unavoidable, especially on initial exchange.
Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails.
I agree that some folks here have the habit you describe, of interpreting other people's comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, and various other things.
I don't find it as common here as you describe it as being, either with respect to AI risks or anything else.
Perhaps it's more common here than I think but I attend to the exceptions disproportionately; perhaps it's less common here than you think but you attend to it disproportionately; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of "the interpretation that makes the least amount of sense" is not what I would consider an uncharitable interpretation; perhaps something else is going on.
I agree that fear tends to inhibit reasonable processing.

Comment author: Dmytry 07 April 2012 04:40:33PM 0 points

Well, I think it is the case that fear is a mind-killer to some extent. Fear rapidly assigns a truth value to a proposition using a heuristic. That is necessary for survival. Unfortunately, this value makes a very bad prior.
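Dmytry's claim that a fear-assigned truth value "makes a very bad prior" can be sketched with a one-line Bayes update. This is my illustration, not from the thread, and the specific probabilities are hypothetical:

```python
# Sketch (not from the thread): how an extreme prior, e.g. one set by a
# fear heuristic, resists correction by contrary evidence.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary proposition given one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Evidence that is four times likelier if the proposition is false:
print(posterior(0.5, 0.2, 0.8))    # moderate prior -> 0.2
print(posterior(0.999, 0.2, 0.8))  # fearful prior  -> ~0.996, barely moved
```

The same evidence that drags a 50% prior down to 20% leaves a 99.9% prior essentially untouched, which is one way a snap heuristic judgment can become self-sealing.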

Comment author: TheOtherDave 07 April 2012 04:53:20PM 4 points

Yup, that's one mechanism whereby fear tends to inhibit reasonable processing.

Comment author: wedrifid 07 April 2012 05:40:51PM 6 points

Excellent use of fogging in this conversation Dave.

Comment author: cousin_it 08 April 2012 12:36:51PM *  3 points

Seconding TheOtherDave's thanks. I stumbled on this technique a couple of days ago; it's nice to know that it has a name.

Comment author: TheOtherDave 07 April 2012 08:55:47PM 2 points

Upvoted back to zero for teaching me a new word.