David_Gerard comments on Complexity based moral values. - Less Wrong

-6 Post author: Dmytry 06 April 2012 05:09PM


Comment author: Dmytry 08 April 2012 06:02:16AM *  0 points [-]

Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak. Intuitions that other people may not share. Equally plausible arguments with contradictory conclusions. Analogous arguments that lead to obviously wrong conclusions. Alternative hypotheses that can explain your observations.

TBH, with this community I feel I'm dealing with people who have, in general, a deeply flawed approach to thought, one that in a subtle way breaks problem solving, and especially cooperative problem solving.

The topic here is fuzzy, and I do say that it is rather unfinished; that implies I think it may not be true, doesn't it? It is also a discussion post. At the same time, what I do not say is 'let's go ahead and implement AI based on this', or anything similar. Yet it is immediately presumed that I posted this with utter and complete certainty, even though this cannot be inferred from anything. The disagreement I get is likewise one of utter, crackpot-grade certainty that there is no way it is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true, and I do not claim to. At the same time, the point is to look and see how it may enhance understanding.

For example, it is plausible that we humans use the size of our internal representation of a concept as a proxy for something, because a larger representation generally associates with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.
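The "compressed size as complexity" idea can be sketched concretely. This is a minimal illustration, not anything from the post itself: it uses zlib's compressed length as a crude, computable stand-in for the complexity of a representation (the function name is my own).

```python
import zlib

def complexity_proxy(text: str) -> int:
    """Crude complexity estimate: the length, in bytes, of the
    zlib-compressed encoding of the text. Highly regular inputs
    compress well and so score low; varied inputs score higher."""
    return len(zlib.compress(text.encode("utf-8")))

# A very regular string has a small compressed representation...
regular = "a" * 200
# ...while ordinary prose of the same length compresses far less.
varied = ("The quick brown fox jumps over the lazy dog, while the lazy "
          "dog ignores it completely and naps in the afternoon sun near "
          "the old oak tree by the river bend, dreaming of long walks.")

print(complexity_proxy(regular) < complexity_proxy(varied))  # True
```

On this measure, whatever admits a short description gets a low score, which is the rough spirit of compression-based complexity measures.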

Forget about fairness

I'll just go to a less pathological place. The issue is not fairness; here it is not enough (nor needed) to have domain-specific knowledge (such as knowing that the size of a compressed representation is a form of complexity measure). What is necessary is very extensive knowledge of a large body of half-baked (or entirely un-baked, while verbose) vague stuff, lest you contradict any of it while trying to do any form of search for any kind of solution. What you're doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well).

You (lesswrong) are still apes with pretensions, and your 'you have not proved it' still leaks into 'your belief is wrong' just as much as for anyone else, because that's how brains work: nearby concepts collapse, and just because you know they do doesn't magically make it not so. The purpose of knowing the fallibility of the human brain is not the (frankly, very naive) assumption that now that you know, you are magically not fallible. This is like those toy decision agents that second-guess themselves into a faulty answer.

Comment author: David_Gerard 08 April 2012 07:22:53AM 0 points [-]