Wei_Dai comments on Complexity based moral values. - Less Wrong

-6 Post author: Dmytry 06 April 2012 05:09PM


Comment author: Dmytry 08 April 2012 06:02:16AM * 0 points

Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak. Intuitions that other people may not share. Equally plausible arguments with contradictory conclusions. Analogous arguments that lead to obviously wrong conclusions. Alternative hypotheses that can explain your observations.

To be honest, with this community I feel I'm dealing with people who have, in general, a deeply flawed approach to thought, one that subtly breaks problem solving, and especially cooperative problem solving.

The topic here is fuzzy, and I do say that it is rather unfinished; that implies I think it may not be true, doesn't it? It is also a discussion post. At the same time, what I do not say is "let's go ahead and implement AI based on this," or anything similar. Yet it is immediately presumed that I posted this with utter and complete certainty, even though that cannot be inferred from anything. The disagreement I get is likewise of utter, crackpot-grade certainty that there is no way it is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance our understanding.

For example, it is plausible that we humans use the size of our internal representation of a concept as a proxy for something, because larger representations generally associate with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.
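The claim above can be sketched concretely: compressed size is a cheap, computable stand-in for the complexity of a description. This is only an illustration of the general idea; the example strings and the choice of zlib as the compressor are assumptions of mine, not anything from the original discussion.

```python
import zlib

def compressed_size(description: str) -> int:
    """Bytes needed to store the description after zlib compression.

    Under the 'any kind of compression' assumption in the text, this
    serves as a rough proxy for the complexity of the description.
    """
    return len(zlib.compress(description.encode("utf-8"), level=9))

# A highly repetitive description compresses far better than one with
# more distinct content, so its "complexity" under this proxy is lower.
simple = "the same thing " * 50
varied = "a friend's habits, history, voice, jokes, moods, shared plans " * 10

print(compressed_size(simple) < compressed_size(varied))
```

Any real compressor would do here; zlib is just a convenient standard-library choice, and the ordering of the two sizes is what matters, not the absolute numbers.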

Forget about fairness

I'll just go to a less pathological place. The issue is not fairness. Here, no amount of domain-specific knowledge (such as knowing that the size of a compressed representation is a form of complexity measure) is enough, or even needed. What is necessary is very extensive knowledge of a large body of half-baked (or entirely un-baked, while verbose) vague material, lest you contradict any of it while attempting any kind of search for any kind of solution.

What you're doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well). You (Less Wrong) are still apes with pretensions, and your "you have not proved it" still leaks into "your belief is wrong" just as much as for anyone else, because that's how brains work: nearby concepts collapse, and merely knowing that they do doesn't magically make it not so. The purpose of knowing the fallibility of the human brain is not the (frankly, very naive) assumption that now that you know, you are magically infallible. This is like those toy decision agents that second-guess themselves into a faulty answer.

Comment author: Wei_Dai 08 April 2012 07:30:46AM 7 points

Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance our understanding.

The thing is, the idea that our values may have something to do with complexity isn't a new one. See this thread for example. It's the kind of idea that occurs to a lot of smart people, but doesn't seem to lead anywhere interesting (e.g., some formal definition of complexity that actually explains our apparent values, or good arguments for why such a definition must exist). What you see as unreasonable certainty may just reflect the fact that you're not offering anything new (or if you are, it's not clearly expressed) and others have already thought it over and decided that "complexity based moral values" is a dead end. If you don't want to take their word for it and find their explanations unsatisfactory, you'll just have to push ahead yourself and come back when you have stronger and/or clearer arguments (or decide that they're right after all).

I'll just go to a less pathological place.

Where?