David_Gerard comments on Complexity based moral values. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
TBH, with this community I feel I'm dealing with people who have, in general, a deeply flawed approach to thinking, one that subtly breaks problem solving, and especially cooperative problem solving.
The topic here is fuzzy, and I do say that it is rather unfinished; that implies I think it may not be true, doesn't it? It is also a discussion post. What I do not say is 'let's go ahead and implement AI based on this', or anything similar. Yet it is immediately presumed that I posted this with utter and complete certainty, even though that cannot be inferred from anything. The disagreement I get is also of utter, crackpot-grade certainty that there is no way this is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance the understanding.
For example, it is plausible that we humans use the size of our internal representation of a concept as a proxy for something, because larger representations generally go with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.
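To make the "compressed size as complexity measure" point concrete, here is a minimal sketch using zlib's compressed output length as a computable stand-in for description complexity (a common practical proxy for Kolmogorov complexity; the example strings are my own, not from the post):

```python
import zlib

def complexity(s: str) -> int:
    """Approximate description complexity by compressed size in bytes.
    A highly repetitive string compresses well, so it scores low."""
    return len(zlib.compress(s.encode("utf-8")))

# A maximally repetitive description vs. a more varied one of similar length:
repetitive = "ab" * 100
varied = "the quick brown fox jumps over the lazy dog " * 5

# The repetitive string has a much smaller compressed representation.
print(complexity(repetitive) < complexity(varied))
```

This is only an illustration of the measure itself; it says nothing about whether brains actually use anything like it.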
I'll just go to a less pathological place. The issue is not fairness; here it is not enough (nor needed) to have domain-specific knowledge (such as knowing that the size of a compressed representation is a form of complexity measure). What is necessary is very extensive knowledge of a large body of half-baked (or entirely un-baked, while verbose), vague material, lest you contradict any of it while searching for any kind of solution.

What you are doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well). You (Less Wrong) are still apes with pretensions, and your 'you have not proved it' still leaks into 'your belief is wrong' just as much as for anyone else, because that is how brains work: nearby concepts collapse, and knowing that they do does not magically make it not so. The purpose of knowing the fallibility of the human brain is not the (frankly, very naive) assumption that now, because you know, you are magically infallible. This is like those toy decision agents that second-guess themselves into a faulty answer.
But everyone else is actually stupid.