EniScien

I grew up in Russia, not in Silicon Valley, so I didn't know the other "people in our cluster," and unfortunately I didn't like to read, so I'm not familiar with many of the obvious background facts. Five years ago I read HPMoR, but unfortunately not the Sequences; I read those only a couple of years ago, and then only the part that had been translated into Russian, since I couldn't read English fluently enough. But then I noticed that Google Translate had started producing much better translations than before, in most cases readable Russian text from English, so I could finally read the Sequences to the end and generally begin to read and write on LessWrong.

Now I post in my Shortform any thoughts I haven't seen anyone else express. Since I haven't read many books, many of these concepts have probably been expressed somewhere by someone before me and I simply haven't seen it; in that case, please add a link in the comments. Unfortunately, I have many thoughts that I wrote down even before LessWrong, but rather than rereading and editing them, it's easier for me to write them anew, so many such thoughts sit unpublished. And since I'm far from writing down every thought at its birth, even more of them aren't recorded anywhere except in my head; still, if I stumble upon them again, I'll try to write them down and publish them.

I saw that a lot of people are confused by the question "what does Yudkowsky mean by this difference between deep causes and surface analogies?" I didn't have this problem; I immediately had an interpretation of what he means.

I thought it was the difference between deep and surface with respect to the black-box metaphor: the difference between searching for correlations between similar inputs and outputs, versus building a structure of hidden nodes, checking its predictions, rewarding the correct ones, and penalizing by the complexity of the internal structure.

It's the difference between stepping directly from inputs to outputs and having a model; between looking only at visible things and thinking about invisible ones; between looking only at experimental results and building theories from them.

It's just like the difference between deep neural networks and neural networks with no hidden layers: the former are much more powerful.
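
To make that last analogy concrete (my own toy example, not from the original discussion): a model with no hidden layer cannot even represent XOR, while a single small hidden layer usually learns it.

```python
# A toy illustration (mine, not from the original discussion): a model
# with no hidden layer cannot represent XOR; one hidden layer can.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

linear = LogisticRegression().fit(X, y)        # no hidden layer
deep = MLPClassifier(hidden_layer_sizes=(8,),  # one small hidden layer
                     max_iter=5000, random_state=0).fit(X, y)

print(linear.score(X, y))  # at most 0.75: XOR is not linearly separable
print(deep.score(X, y))    # usually 1.0 once training converges
```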

I'm really unsure that this is right, because if it were so, why didn't he just say that? But I'm writing it here just in case.

I noticed that some names here have really bad connotations (although I am not saying that I know which ones don't, or even that any of them lack them).

"LessWrong" reads like "be wrong more rarely," and one obvious way to achieve that is to avoid difficult things; "be less wrong" is not a way to reach any difficult goal (even if different people have different goals).

"Rationality: From AI to Zombies" is even worse, since it reads like "rationality from A to Z," a complete professional guide to rationality, when it is actually closer to incomplete basic notes about a small piece of rationality, weakly understood by one autodidact.

There are no common words for upvote/downvote in Russian, so I just said like/dislike. And that was a real mistake: these are two genuinely different types of positive/negative marks, agree/disagree is a third type, and there may be any number of other types. But since I named it like/dislike, I thought of it as expressing how much I liked a post, as a reward handed to the author, rather than as adjusting the sorting: "do I want to see more posts like this higher in the feed?"
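
To make the distinction concrete (my own sketch, not LessWrong's actual data model): the axes can be stored separately, with only one of them feeding the sort order.

```python
# A toy sketch (my illustration, not LessWrong's real data model):
# separate vote axes, where only "karma" feeds the sort order.
from dataclasses import dataclass

@dataclass
class Scores:
    karma: int      # "show more/fewer posts like this": the sorting signal
    agreement: int  # "I think this is true/false": an independent axis

posts = {
    "post_a": Scores(karma=40, agreement=-12),  # useful but contested
    "post_b": Scores(karma=5, agreement=20),    # agreed with, little reach
}

# Ranking ignores the agreement axis entirely.
ranked = sorted(posts, key=lambda name: posts[name].karma, reverse=True)
print(ranked)  # ['post_a', 'post_b']
```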

And actually this looks to me like part of a more general tendency in my behavior to avoid noticing subtle differences between things, and especially between terms. Probably I had seen people try to find differences in colloquial terms that aren't strictly defined and then argue over those differences; I was annoyed by that, and the annoyance pushed me to avoid looking for subtle differences in terms. Or maybe it's because we were taught that synonyms are words with the same meaning, rather than with close meanings (or "equal or close meanings"), and nobody showed us that there are differences in connotation. Or maybe the first was because of the second. Or maybe it was because I used programming languages too much instead of natural languages when I was only 8. In any case, I probably now need to develop an always-on, automatic habit of searching for and noticing subtle differences.

Does the LessWrong site use a password strength check like the one Yudkowsky talks about (I don't remember where exactly)? And if not, why not? It doesn't seem particularly difficult to hook one up to a dictionary or something similar. Or is it considered not worth implementing because there's Google sign-in?
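
For what it's worth, the dictionary part really is small. Here is a minimal sketch, assuming some wordlist is available; this is my own illustration, not LessWrong's actual code, and the 40-bit entropy cutoff is an arbitrary choice:

```python
# A minimal sketch of a dictionary-plus-entropy check (my illustration,
# not LessWrong's actual code; the 40-bit cutoff is an arbitrary choice).
import math

def entropy_bits(password: str) -> float:
    """Crude estimate: length * log2(size of the character pool used)."""
    pools = [
        (any(c.islower() for c in password), 26),
        (any(c.isupper() for c in password), 26),
        (any(c.isdigit() for c in password), 10),
        (any(not c.isalnum() for c in password), 32),
    ]
    charset = sum(size for used, size in pools if used)
    return len(password) * math.log2(charset) if charset else 0.0

def check_password(password: str, common: set[str]) -> str:
    if password.lower() in common:
        return "rejected: a known common password"
    if entropy_bits(password) < 40:
        return "weak: consider a longer passphrase"
    return "ok"

# Usage with a tiny inline wordlist; a real check would load a large file.
print(check_password("password123", {"password123", "qwerty"}))
print(check_password("correct horse battery staple", set()))
```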

Hmm. Judging from a brief look, it seems I'm the only one who has enabled reactions on their shortform. I wonder why?

It occurred to me that on LessWrong there doesn't seem to be a division of post ratings into those you want to promote as relevant right now and those you think will be useful over the years. If there were such a rating, or such a reaction, then you could get a list not of top-karma posts, which would include those that were only needed at a particular moment, but of those that people find useful beyond their time.

That is, a short-term post might be well written and genuinely needed for discussion at the time, rather than just reporting news, so there would be no reason to lower its karma, but it would be immediately obvious that it's not something that should be kept forever. In some ways, introducing such a system would make Best Of easier. I also remember that when choosing which of the Sequences to include in the book, there were a number of ratings on scales other than karma. These could also be added as reactions, so that such scores could be given independently.
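
As a sketch of what I mean (a hypothetical scoring scheme, not an existing LessWrong feature; the two-week decay constant is an arbitrary assumption): timely scores could decay with age while timeless scores don't, so a Best Of list falls out of the second axis automatically.

```python
# A hypothetical two-axis rating (not an existing LessWrong feature):
# "timely" scores decay with age, "timeless" scores do not.
import math

def front_page_score(timely: float, age_days: float) -> float:
    # Assumed two-week decay constant, an arbitrary choice for this sketch.
    return timely * math.exp(-age_days / 14)

def best_of_score(timeless: float) -> float:
    return timeless  # age-independent, so Best Of ignores news value

print(front_page_score(85, age_days=60))  # a news post fades to ~1.2
print(best_of_score(70))                  # a timeless post stays at 70
```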

Ah, I saw the post announcing that reactions have been added. I had just been thinking that this would be very helpful and might solve my problem. I've enabled them for my shortform. I hope that with reactions available, people will no longer just downvote without saying why.

On the one hand, I really like that on LessWrong, unlike other platforms, everything unproductive gets downvoted. But on the other hand, when you try to publish something yourself, it feels like a hell of a black box that hands out positive and negative reinforcement for no discernible reason.

This completely chaotic reward system seems to be bad for my inclination to post anything at all on LessWrong. Just in the last few weeks that I've been using Evernote, it has counted 400 notes, and by a quick count I have about 1500 notes sitting in Google Keep. Meanwhile, on LessWrong I have published only about 70 things over the past year, that is, roughly 6 to 20 times fewer, even though by Evernote's count ~97% of these notes belong to the "thoughts" category and not to something like shopping lists.

I tried literally following the one piece of advice given to me here and treating any score under ±5 as noise, but that didn't negate the effect. I don't know; if the ratings of the best posts here don't match my own ranking of my best posts, maybe I should post a couple of genuinely terrible posts to check whether they get rated extremely badly or not.

I must say, I wonder why I haven't seen speed reading and visual thinking mentioned here as among the most important tips for practical rationality. A visual image is 2+1-dimensional, while an auditory image is 0+1-dimensional; in addition, auditory images use sequential thinking, which people are very bad at, while visual thinking is parallel. And according to Wikipedia, the transition from vocalized to visual reading should speed you up five (!) times, and visual thinking should likewise be five times faster than verbal thinking; if over a lifetime you can read and think through five times more thoughts, that's just an incredible difference in productivity.

Well, the same applies to using visual imagination instead of inner speech; there, too, you can use pictures. (I don't know, maybe this was all in Korzybski's books and my problem is that I didn't read them, although I definitely should have.)

Yudkowsky says that public morality should be derived from personal morality, and that personal morality is primary. But I don't think this is the right way to put it. In my view, morality is about the social relationships that game theory describes: how not to play negative-sum games, how to achieve the maximum sum for all participants.

And morality is independent of values. Or rather, each value system has its own morality; or, even more accurately, morality can work even between different value systems. Morality is primarily about questions of justice; sometimes all sorts of extraneous things like god-worship get dragged into this area of human sentiment, so morality and justice may not be exactly equivalent.

And game theory answers the question of how to achieve justice. Also, justice may concern you directly, as one of your values, and then you won't defect even in a one-shot prisoner's dilemma with no penalty. Or it may not concern you, and then you will always defect when you don't expect to be punished for it.
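
To ground that in the standard example (the payoff numbers below are the usual textbook convention, my choice rather than anything from this post): defection dominates for each player individually, yet mutual cooperation maximizes the total sum, which is exactly the negative-sum trap morality is supposed to steer us out of.

```python
# Standard one-shot prisoner's dilemma payoffs; the numbers are the
# usual textbook convention, my choice, not from the post.
PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Defection dominates individually: whatever they do, I gain by defecting.
for theirs in ("C", "D"):
    assert PAYOFFS[("D", theirs)][0] > PAYOFFS[("C", theirs)][0]

# ...but mutual cooperation maximizes the sum over both players.
totals = {moves: sum(p) for moves, p in PAYOFFS.items()}
print(max(totals, key=totals.get))  # ('C', 'C'): the positive-sum outcome
```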

In other words, morality is universal across value systems, but it cannot be independent of them. It makes no sense to forbid hurting someone if he has absolutely nothing against being hurt.

In other words, I mean that adherence to morality simply feels different from the inside than conformity to your values: the former feels like an obligation and the latter feels like a desire; in one case you say "should" and in the other "want."

I've read "Sorting Pebbles Into Correct Heaps" several times and never understood what it was about until it was explained to me. Certainly the sorters aren't arguing about morality, but that's because they're not arguing about game theory; they're arguing about fun theory... Or, more accurately, not quite: they are pure consequentialists after all, they don't care about fun or their own lives, only about heaps in external reality. So it's a theory of value, but not a theory of fun; a theory of primes.

But in any case, I think people might well argue with them about morality. If people can sell primes to the sorters and the sorters can sell hedons to people, would it be moral to defect in a prisoner's dilemma and gain 2 primes at the cost of -3 hedons? Most likely both sides would conclude that no, that would be wrong, even if it is just ("prime").
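
Working out the arithmetic of that trade (the 1:1 prime-to-hedon exchange rate below is my assumption, there only to make the cross-currency sum well-defined):

```python
# Toy arithmetic for the trade above; the exchange rate is my assumption,
# there only to make the cross-currency sum well-defined.
PRIME_TO_HEDON = 1.0

primes_gained = 2
hedons_lost = 3

net = primes_gained * PRIME_TO_HEDON - hedons_lost
print(net)  # -1.0: negative-sum at any exchange rate below 1.5
```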

That is, you shouldn't kill people, even if you can get yourself the primeons you so desire, and they shouldn't destroy correct heaps, even if they take pleasure in watching the pebbles scatter.
