In response to comment by Vaniver on LessWrong 2.0
Comment author: Lumifer 09 December 2015 04:28:07AM 0 points [-]

close-knitness comes from ape-things like familiarity and spending a lot of time around each other and looking into each other's eyes

Well, yes, but that takes time. A lot of time.

There are a couple of shortcuts. One is shared strong emotions, but that might be a bit difficult in this case. Another is purpose which leads to shared activity and forced cooperation.

I'm not seriously proposing trying to reorganize LW into purposeful teams, but you mentioned groups and plans and feedback -- what kind of activity will those groups undertake?

In response to comment by Lumifer on LessWrong 2.0
Comment author: Vamair0 09 December 2015 07:55:43PM 1 point [-]

I believe at least some people here have stuff they want to do that isn't orthogonal to rationality and that may be helped by a group effort: translating some materials, writing articles, research, programming projects, or just discussions of some topics. Then there would be a Group Bragging thread, where people can tell how much they've managed to do in a month or so. If a group hasn't bragged for a few months, it's considered dead. That would also give us some new info about group building and maintenance, which seems like a neglected topic here, as well as some data about which groups survive better than others.

In response to LessWrong 2.0
Comment author: Vamair0 08 December 2015 05:05:41PM 2 points [-]

I think the idea of a greeting party and a closer-tied community sounds good. Maybe something like a number of small teams, so that any newcomer would be taken into one and shown the most valuable stuff, with the bonus ability to cooperate on articles, code projects, research, or wherever the team's advantage lies. Together with some in-group chat where people can get to know each other better. And, of course, the big free-for-all discussions and articles should stay, so the community wouldn't be divided too much. There should also be fewer nitpicks in the comments on main articles if the articles were already discussed and edited within a group.

Comment author: juliawise 16 February 2012 06:00:28PM 2 points [-]

And we have airplanes but we dream about flying on brooms.

Comment author: Vamair0 13 November 2015 09:50:43AM 0 points [-]

A broom is to an airplane as a motorcycle is to a train. Also, I'd guess a lot of people want their own broom exactly because nobody else has one.

Comment author: Vamair0 25 September 2015 07:49:08PM 2 points [-]

It also seems to me that I was always thinking like that. But I don't really know if that's actually the case or if it's just the way memory works. Anyway, I'm going to tell the things I remember that may be relevant. Everything started with reading. I was taught to read when I was three, and I liked it almost immediately. You don't need to ask your parents to read you what happens next in the story. That's great! Also, dinosaurs. I really liked dinosaurs, and I got quite a few books about them. By the time I went to first grade, I was somewhat familiar with geochronology, with when the different dinosaurs lived, and so on. I had also been given a great book about the history of biology, and I was a fan of J. L. Cuvier. The school I got into had a very nice biology classroom; on our first day they let us look into a microscope and showed us a few other interesting things. When I came home, I asked my parents to buy me a biology textbook. My father decided botany was boring, so he bought me a really interesting seventh-grade zoology textbook instead. A few results (apart from the lots of fun I had) were that I had a grasp of evolution at seven, and that I expected the world to make sense and science to be able to discover and understand it. That was also helped by getting a few books about the history of electricity and similar stuff. The middle of the nineties in Russia was the time of a so-called Spiritual Revolution. With government control of ideology null and void, the media space was filled with conspiracy theories, cults and ancient miracles. While I didn't doubt their existence, since a lot of sane people were convinced they were true, I was sure they still somehow had to make sense. This feeling was one of the reasons I really got into one New Age book I'm not going to name. Well, my trust in books that weren't obviously fairy tales was the other one. Actually, the book was surprisingly reasonable.
The main idea, aside from the description of a cool Planescape-like setting that made up half of the book, went something like this: people sometimes have mystical experiences. These experiences are evidence for the existence of their substance. These experiences, together with observable natural phenomena and a lot of traditions, tales and distortions, make up a religion. But remembering the experiences correctly, describing them in ordinary human terms and making sense of them is so difficult that the kernel of truth is actually quite small. There are only two ways: to try to get these experiences yourself (something like what they call a mystic's path), or to collect the experiences of many different cultures and search for the common features that aren't explained by ordinary human biases like anthropomorphism (an occultist's path). This made much more sense than just "our religion is the right one, all the others are wrong". Completely unrelated features of that time were a Spider-Man cartoon and a book about biotechnology I was given. They told me there was a cool thing called genetics (the history-of-biology book was really old; the last stories in it were from the nineteenth century), so I went to the library to learn about it. That was probably when I got my transhumanist leanings. Fifth grade, probably. The New Age book was surprisingly reductionist, which was really nice for a young science fan. There was talk of worlds with different numbers of dimensions (I looked up the geometry later) and of parallel time streams. The author believed evolution was true. Souls were matter, there were laws for them, and even the afterlife was decided by a soul's "density": it ended up in the place with the most similar spiritual "density". With all the caveats that this "density" was a metaphor. And the author explicitly said that if science showed he was wrong anywhere, then he was wrong, and this admission was a huge point in his favor.
I guess in my childhood this book played the role of sci-fi. I wasn't sure I'd be able to achieve any mystical experiences myself. And my grandmother, who was the best conversation partner ever about this kind of stuff, often said that I shouldn't experiment on myself until later, so out of respect I didn't. But I was curious, and I really don't believe most people here wouldn't go and investigate this kind of thing if they believed it to be real. So the second path was the preferred one. I never really thought that magic was something unintelligible in principle, just something that the wizard already understood and I didn't. Yet. So yes, I investigated these things a lot. Well, until I finally found that it's quite easy to make people believe stuff without a shred of evidence, as long as it makes a good story, moves their soul and is comfortable. Or frightening, that also works. And the spiritual experiences have one thing in common: they're human, and they may be, and probably are, common bugs in human algorithms. They can even be induced by chemicals, of all things! That was also the time I reinvented memetics, which explained a lot of the common features of the biggest religions. A little later I learned about the existence of creationists. No, really. I had heard about them before, but I was sure there weren't any left, except maybe in some really wild places. I've joked that I thought creationists were a tale mothers use to scare little biologists. This was a really vivid demonstration of how people can be wrong about simple stuff. Add to this the news about contemporary cults believing their leaders perform miracles like resurrecting people. If you remember my relationship with the theory of evolution, you'll understand why I was afraid I might be wrong about something as simple as the young-earthers or these cultists were. In college I tried to learn about the best reasons to believe in God and found them... well...
not convincing at all. So: epistemic rationality, learning about human biases, and atheism. And there's probably no really cool afterlife for everyone after they pay for their sins. And that seems like a problem that needs a solution.

Comment author: SolveIt 19 September 2015 05:12:45PM 2 points [-]

Well, the obvious objection is that clearly not everybody's going to do what you do, so your hypothetical scenario is often going to be irrelevant. Furthermore, I'd think that

"If everyone here always smoked, they'd install a powerful ventilation system, so I'd be okay" is exactly what you should think. Of course, you should factor in the cost of the ventilation system, but that those costs exist isn't any reason to assume that the marginal change in utility you effect by your actions is going to stay constant when multiplied by seven billion.

I've just noticed that I'm confused, and that's because your comments on the second error seem to be saying that you should shut up and sum utilities, which kind of renders your comments on the first (and my reply) obsolete. Oh well.

I'll just point out that if you could measure utilities well enough to actually shut up and multiply, you wouldn't need this kind of heuristic.

Also, this heuristic fails miserably in the face of any kind of conflict. Of course unilateral disarmament works if everybody does it at the same time. While I understand that your heuristic isn't supposed to be used in such cases, actual situations without underlying conflicts are rather hard to come by.

Finally, your grammar is mostly fine and certainly no significant obstacle to communication.

Comment author: Vamair0 19 September 2015 06:43:07PM 2 points [-]

When you can multiply, you don't need this or any other heuristic. You just multiply. This method is a way of summing utilities using System 1 instead of System 2 thinking, so that you don't round the small disutilities to other people down to zero. Often, if an action looks good in an isolated utility calculation but doesn't generalize, it may still be a bad idea because of the small disutilities it creates. And the technique I'm talking about is mostly useful when it's difficult to put a number on the utilities in question. It works by collecting all the losses and gains the action imposes on other people and applying them all to the person doing the calculation. When that's possible, the heuristic works; when it isn't, the method usually fails.

You can put numbers on utilities when it's about lives or QALYs, and a lot of important questions are like that. The generalization method, on the other hand, may help when dealing with more... trivial matters: hurt feelings, minor inconveniences and so on. Less important, sure, but still quite common, I believe.

It fails in at least some conflicts, good catch. I'll have to think about when it does and when it doesn't, and maybe update the post.

Kant's Multiplication

7 Vamair0 19 September 2015 02:25PM

In this community there is a respected technique of "shutting up and multiplying". However, using it in many realistic ethical dilemmas can be difficult. Imagine a situation: there is a company, and its employees gain utility by pressing buttons. Each employee has a single-use button that, when pressed, gives that employee one hundred units of utility while every other employee loses one unit. They can't communicate about the buttons, and there are no other effects. Is it ethical to press the button?

This is an extremely simple situation. Utilitarianism, no matter which kind, would easily say that pressing the button is ethical if there are fewer than one hundred and one employees and unethical if there are more. I believe (proponents of other ethical theories may correct me if I'm wrong) that both virtue ethics (a person demonstrates a vice by pressing the button) and deontology (it's a kind of stealing, and stealing is wrong), as they're usually used (and not as a utilitarianism substitute), would say it's wrong to be the first one to press the button; so if the company had, say, eleven employees, each of them would forgo the ninety utils they'd net if everyone pressed (one hundred from their own button minus one for each of the other ten).
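The arithmetic above can be sketched in a few lines. This is just a toy model of the button example; the numbers (one hundred utils gained, one util lost per coworker) come from the text, and the function name is my own invention:

```python
def net_utility_of_pressing(n_employees, gain=100, loss_per_other=1):
    """Total utility change, summed over everyone, when one employee presses."""
    return gain - loss_per_other * (n_employees - 1)

print(net_utility_of_pressing(11))    # 90: pressing is net-positive
print(net_utility_of_pressing(101))   # 0: break-even
print(net_utility_of_pressing(200))   # -99: pressing is net-negative
```

The break-even point at one hundred and one employees is where the presser's gain exactly cancels everyone else's losses.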

But the only reason this situation is so simple under utilitarianism is that we have direct access to the employees' utility functions. Usually, though, that's not the case. If we want to make a decision on a common question such as "is it ethical to throw a piece of trash on the road, or is it better to carry it to the trash bin" or "is it okay to smoke in a room with other people inside", we have to weigh the utility we gain against the disutility to all the other people. We can also use quick rules, which would say "no" in both situations. But when there's no rule, or there are two conflicting rules, or we don't trust the one we have, it would be useful to have a method that's more reliable than our Fermi estimates of utility or even of money.

I believe there is such a method, and as you've probably already figured out, it's the question "what would happen if everyone did something like this". It's most often used in the context of deontology, but for a utilitarian it makes the shared costs something you can feel.

What am I talking about? Imagine we have to decide whether to throw a piece of trash on a road. To calculate, we take the number of people N who will travel this road, estimate their average irritation loss R from seeing a piece of trash, and multiply the two. The NR we get has to be compared with our loss X from carrying the trash to the bin. Is it difficult to get the sign of NR - X right? I guess it is. Now imagine instead that every traveller has thrown away one piece of trash. Suppose your loss of utility is the same for each piece of trash you see, and your irritation is about average for the travellers here. How much utility are you going to lose? The same NR. But now imagining this loss and comparing it to the cost of hauling your own trash to the bin is much easier, and I believe even more accurate.
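A minimal sketch of why the two framings agree, with entirely made-up numbers for N, R and X, just to make the symmetry concrete:

```python
N = 1000    # travellers who will pass this spot
R = 0.01    # average irritation per traveller from one piece of trash
X = 5.0     # my cost of carrying my trash to the bin

# Framing 1: I litter once; each of N travellers loses R.
cost_to_others = N * R

# Framing 2 (the universalized version): everyone litters once;
# I personally pass N pieces of trash and lose R for each.
cost_to_me_if_universalized = N * R

# The totals are identical; only the second is easy to feel.
assert cost_to_others == cost_to_me_if_universalized

# Either way, littering pays only if N*R < X.
print("litter" if N * R < X else "carry it to the bin")
```

With these numbers NR = 10 against X = 5, so the heuristic says to carry the trash to the bin.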

To use this method correctly, a utilitarian should be careful to avoid a few errors. I'm going to illustrate them with a "smoking in a crowded room" example.

First of all, we shouldn't do too much worldbuilding. "If everyone here always smoked, they'd have installed a powerful ventilation system, so I'd be okay." That doesn't sum the utilities correctly, because the ventilation system doesn't exist. So we should change only a single aspect of behavior, and not any of the reactions to it.

Second, we have to remember that a sum of effects is not always a good substitute for a sum of utilities. That's why we cannot say something like: "If everyone here smoked, we'd die of suffocation, so smoking here is as bad as killing a person." This comes on top of the principle "don't judge people by the utility of what they do; judge them when judging has high utility".

I believe the second point may also work in the opposite direction in the trash example. That is, the more trash there is, the less irritation a single piece causes. To counter this effect, we have to imagine even more trash than there would be if everyone threw away one piece.
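To see the effect numerically, here is a sketch with a purely hypothetical diminishing-irritation curve (a square root, chosen only for illustration): the universalized picture understates the per-piece harm, which is why you'd need to imagine extra trash.

```python
import math

def total_irritation(pieces):
    # Hypothetical sublinear curve: each extra piece irritates less.
    return math.sqrt(pieces)

n = 1000  # travellers, each throwing away one piece in the universalized world

linear_estimate = n * total_irritation(1)    # 1000.0: what naive summing assumes
actual_universalized = total_irritation(n)   # ~31.6: felt irritation with n pieces

# The imagined universalized loss is far smaller than the linear sum,
# so the raw heuristic undercounts the harm of the marginal piece.
print(linear_estimate, actual_universalized)
```

Under this assumption the heuristic would need the imagined pile inflated until the felt irritation matches the linear estimate.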

And the third point is that the person doing the calculation is not always similar to the average person affected. "If everyone smoked I'd be okay; I've got no problem with rooms full of smoke" fails to capture the total utility of the people there unless they're all smokers, and maybe even then.

This method, used correctly, may be a good addition to the well-known "shut up and multiply", and it's also an example of the good tradition of stealing ideas from other theories.

(I'm not a native speaker and I don't have much experience writing in English, so I'd be especially grateful for any grammar corrections. I don't know whether the tradition here is to send them via PM or to use a special thread.)

Comment author: Vamair0 19 September 2015 09:47:30AM *  8 points [-]

Hello. My name is Andrey, and I'm a C++ programmer from Russia. I've been lurking here for about three years. Like many others, I found this site through a link from HPMOR. The biggest reasons for joining in the first place were that I believe the community is right about a lot of important things, and that the comments here are of a quality that's difficult to find on the wider Net. I've already finished reading the Sequences; right now I'm interested in ethics, and I believe I've got a few ideas to discuss.

As for my origin story as a rationalist: as often happens, it all started with a crisis of faith. Actually, the second one. The first was a turn from Christianity to a complicated New Age paradigm I may explain later. The second was prompted by the question of why I believe the things I believe. While I used to think there was a lot of evidence for the supernatural, I started trying to verify it and also read religious apologetics to evaluate the best arguments they have. Yup, they were bad. The world doesn't look like one containing a powerful interventionist deity. (And even if the miracles they say are happening right now are true miracles, all of them are better explained by slightly magical fairies that are not at all omnipotent or omniscient.) This, coupled with my interest in physics and biology, made me think there are problems that are both huge and don't get the attention they deserve. Like, y'know, death, or catastrophic changes. And all we've got are some resources, some understanding of how things actually are, and a limited ability to cooperate with each other.

I'm looking forward to discussing stuff with people here.
