Dmytry comments on Complexity based moral values. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Where's the notion of anyone conspiring with anyone? Clearly there isn't a conspiracy if what I post averages more likes than dislikes. The best predictor I can come up with so far for whether a post of mine will be voted +10 or -10 is how well it aligns with the views held here (plus some first-vote noise, of course), not how good its quality is. Some of the vaguest, lowest-quality stuff I post gets the highest votes, well into double digits positive. It's not as if I were netting negative.
I hope you don't end up concluding that it's impossible for contrary ideas to be taken seriously around here. Just in case, I've collected some of my highly upvoted posts arguing against or questioning Eliezer's ideas:
Well, one has to be ultra careful to keep the number of contrary ideas very low within a post, and one already has to have a giant body of posts aligning with the prevailing opinions (and it is boring to just generate texts that are in agreement). I may post on this exact topic with more refined wording. edit: Also, you may have far more skill at converting people to contrary ideas than I do. I lose patience.
Anyway, an idea for you: there is a huge range of behaviours that we humans may deem moral enough. Within this huge range, there could well be something that is conceptually simple. It is necessary that moral values be easily calculated by humans, as humans do not like to live in constant anticipation of unpredictable intervention, especially when the intervention may be based on other people's volitions. There could well be a simple (but not too simple) agreeable morality system.
edit: also, look at lawmaking. All successful legal systems are based on a few principles, on which the constitution is based, on which the law is based. The law needs to be predictable by the citizens: easily and quickly, at the level of a knee-jerk reflex.
Yes, this seems likely.
I also find it boring to generate texts that are in agreement, and hence rarely do so. I don't think that's the main issue.
I don't think "skill at converting people" and "patience" are the right way to think about it either. I think what helps are:
TBH, with this community I feel I'm dealing with people who have, in general, a deeply flawed approach to thought that subtly breaks problem solving, and especially cooperative problem solving.
The topic here is fuzzy, and I do say that it is rather unfinished; doesn't that imply that I think it may not be true? It is also a discussion post. At the same time, what I do not say is 'let's go ahead and implement AI based on this', or anything similar. It is immediately presumed that I posted this with utter and complete certainty, even though this cannot be inferred from anything. The disagreement I get is likewise of utter, crackpot-grade certainty that there's no way this is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance our understanding.
For example, it is plausible that we humans use the size of our internal representation of a concept as a proxy for something, because it generally correlates with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.
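The general idea that compressed size can serve as a computable stand-in for complexity can be sketched with an off-the-shelf compressor; this is only an illustration of the measure being discussed, not anything the commenter proposes implementing:

```python
import hashlib
import zlib


def compressed_size(data: bytes) -> int:
    """Bytes needed for the zlib-compressed representation of `data`:
    a crude, computable proxy for its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))


# Two inputs of identical raw length (1024 bytes each):
regular = b"ab" * 512  # highly patterned, compresses very well
noisy = b"".join(hashlib.sha256(bytes([i])).digest()
                 for i in range(32))  # pseudo-random, essentially incompressible

# Under this proxy, the patterned input has far lower "complexity"
# than the noisy one, despite the raw sizes being equal.
assert compressed_size(regular) < compressed_size(noisy)
print(compressed_size(regular), compressed_size(noisy))
```

Any real compressor only upper-bounds Kolmogorov complexity, but the ordering it induces is often good enough for this kind of rough comparison.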
I'll just go to a less pathological place. The issue is not fairness; here it is neither enough nor necessary to have any domain-specific knowledge (such as, e.g., knowing that the size of a compressed representation is a form of complexity measure). What is necessary is very extensive knowledge of a large body of half-baked (or entirely un-baked, while verbose) vague material, lest you contradict any of it while attempting any form of search for any kind of solution. What you're doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well). You (Less Wrong) are still apes with pretensions, and your 'you have not proved it' still leaks into 'your belief is wrong' just as much as for anyone else, because that's how brains work: nearby concepts collapse, and knowing that they do doesn't magically make it not so. The purpose of knowing the fallibility of the human brain is not the (frankly, very naive) assumption that now that you know, you are magically no longer fallible. This is like those toy decision agents that second-guess themselves into a faulty answer.
The thing is, the idea that our values may have something to do with complexity isn't a new one. See this thread for example. It's the kind of idea that occurs to a lot of smart people, but doesn't seem to lead anywhere interesting (e.g., some formal definition of complexity that actually explains our apparent values, or good arguments for why such a definition must exist). What you see as unreasonable certainty may just reflect the fact that you're not offering anything new (or if you are, it's not clearly expressed) and others have already thought it over and decided that "complexity based moral values" is a dead end. If you don't want to take their word for it and find their explanations unsatisfactory, you'll just have to push ahead yourself and come back when you have stronger and/or clearer arguments (or decide that they're right after all).
Where?
And this community gets the impression that they are dealing with what amounts to a straw-man generator. Let's agree to disagree.
Please do. As you have said, you can expect to achieve more social success with your preferred behaviors if you execute them in different social hierarchies. And success here would require drastically changing how you behave in response to social incentives and local standards, something that you are not willing to do. So if you go elsewhere, everybody wins. You can continue to believe you are superior to us and that all disagreement with you is the result of us being brainwashed or inferior or whatever, and we can go about having more enjoyable conversations.
Really, you don't need to write a whole series of comments to 'break up with us'. You can just click the logout button and type a new address into the address bar. Parting declarations of superiority don't really achieve much.
I thought Dmytry sometimes has interesting ideas, and it'd be worth trying to convince him to stick around but be more careful and less adversarial. As orthonormal said, LW needs better contrarians, and Dmytry seems like one of the more promising candidates. Why tell him to go away? Do you think my effort was doomed or counterproductive?
There is some potential there: Dmytry has what seems to be a decent IQ and some technical knowledge in there somewhere. But the indications suggest that he has more potential to be destructive than useful. I would expect him to end up as a XiXiDu, only far more powerful (more intelligent and rhetorically proficient) and far more hostile (XiXiDu's attitude hovers just on the border; Dmytry, given time, would be more consistently hostile).
His idea; I merely agree that it would benefit him and us. For what it is worth, I don't think my agreement is likely to encourage him to leave. If anything, he would be inclined to do the opposite of whatever my preference is.
In terms of my own personal interests - I incur a cost when there are people like Dmytry around. My nature (and considered, self-endorsed nature at that) is such that when I see people try to intellectually bully others with disingenuous non-sequiturs and straw men I am naturally inclined to interfere. Dmytry is far from the worst I've seen in this regard but he's not too far down the list.
If the guy wants to leave and has concluded we are too toxic for him, then I'm not going to argue with that. It seems better for everyone. Arrogant nerds are a dime a dozen - we have plenty around here so we don't need another. And communities where one can show off technical competence and rhetorical flair are a dime a dozen too, so Dmytry doesn't need us. I'd recommend he try MENSA. He would fit in well (based on what I recall of my time there and what I have seen of Dmytry).
Doomed.
Why did you do it then?
Sigh... I should probably just let it go, given that it was a long shot anyway, but it's kind of frustrating to have put in the effort, and not even get a clean negative result back as evidence.
Perhaps you could let this one go but tell us how to catch the next one?
I can't say I noticed anything worthwhile. What has Dmytry said that you regard as promising?
Well, he has written 9 discussion posts with >10 karma in the last 4 months or so. Do you not like any of them? Or think of it this way: if he is the kind of person we want to drive away instead of help better fit into our community, then where are we going to find those "better contrarians"?
Looking through his posts, most are downvoted, and the bulk of his karma seems to be coming from a conjunction fallacy post which says nothing new that wasn't covered in previous posts by say Eliezer (or myself, in prediction-related posts), and another content-less post composed pretty much just of discussion (of a very low level). Brain shrinkage was a good topic, but unlike my essay on similar topics (covering brain shrinkage as a special case), Dmytry completely fails to bring the references. And so on.
So again, what do you regard as promising?
In my experience you don't find 'better contrarians' among people who are naturally contrary and have a chip on their shoulder. A good contrarian mostly agrees with things (unless the community they are in really is defective), but thinks things through and then carefully presents their contrary positions as though they are making a natural contribution.
Don't seek the contrariness. Seek good thinking and willingness to contribute. You get the contrarian positions for free when the generally good thinking gets results. For example you get lukeprog.
But everyone else is actually stupid.
You may also be lacking the skill of telling when your contrary ideas are actually wrong. I don't doubt there are correct ideas that go against what many LessWrongers think, but there are many more wrong ideas that do. It may be that Wei Dai brings the first kind and you bring the second kind. Or it may be that Wei Dai is just a better writer than you. I'd say it's a mix of both.
The disagreement is mostly in the areas where LW does speculate massively.