Thomas comments on Open thread, Oct. 27 - Nov. 2, 2014 - Less Wrong

5 Post author: MrMind 27 October 2014 08:58AM


Comment author: Thomas 27 October 2014 09:59:57AM 4 points [-]

Where are you right, while most others are wrong? Including people on LW!

Comment author: bramflakes 27 October 2014 07:53:59PM 16 points [-]

My thoughts on the following are rather disorganized and I've been meaning to collate them into a post for quite some time but here goes:

Discussions of morality and ethics in the LW-sphere overwhelmingly short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges the other facets of human morality (disgust, purity, fairness, etc.) and would wrap them up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys, because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than Harm are low-status.

Comment author: Larks 28 October 2014 02:04:36AM 2 points [-]

Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.

Comment author: Azathoth123 28 October 2014 04:00:06AM 3 points [-]

Ah, yes. The standard problem with measurement based incentives: you start optimizing for what's easy to measure.

Comment author: Ixiel 27 October 2014 11:22:25PM 4 points [-]

Inequality is a good thing, to a point.

I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world while the rest of us own nothing would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don't know exactly where the optimal level is, but it is closer to the first situation than the second, even if wealth were assigned by lottery.

I'm treating this as basically another contrarian views thread without the voting rules. And full disclosure I'm too biased for anybody to take my word for it, but I'd enjoy reading counterarguments.

Comment author: Viliam_Bur 28 October 2014 12:48:11AM 5 points [-]

My intuition would be that inequality per se is not a problem, it only becomes a problem when it allows abuse. But that's not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).

Comment author: Nate_Gabriel 27 October 2014 11:37:29PM 2 points [-]

Do you think we currently need more inequality, or less?

Comment author: Ixiel 28 October 2014 12:33:02AM *  1 point [-]

In the US I would say more-ish. I support a guaranteed basic income, and any benefit to any person or group (benefiting the bottom without costing the top would decrease inequality but would still be good), but I think there should be a smaller middle class.

I don't know enough about global issues to comment on them.

Comment author: lmm 28 October 2014 07:13:14PM 0 points [-]

If we're stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It's only when dealing with groups larger than our Dunbar number that we start to get confused.

Comment author: Ixiel 29 October 2014 11:35:14AM *  0 points [-]

Assuming that cupcakes are tradable, that seems intuitively false to me. Is it just your intuition, or is there also a reason? I'm not denying the value of intuitions; they are just not as easy to explain to someone who does not share them.

Comment author: lmm 29 October 2014 11:04:37PM 0 points [-]

If cupcakes are tradeable for brownies, then I'd distribute both evenly to start and allow people to trade at prices that seemed fair to them, but I assume that's not what you're talking about. And yeah, it's primarily an intuition, and one that I'm genuinely quite surprised to find isn't universal, but I'd probably try to justify it in terms of diminishing returns: that two people with 3 cupcakes each have a higher overall happiness than one person with 2 and one with 4.
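The diminishing-returns claim above can be made concrete with any concave utility function; the square root below is an arbitrary illustrative choice, not anything specified in the thread:

```python
import math

def utility(cupcakes: float) -> float:
    # Any concave function gives diminishing returns; sqrt is an arbitrary choice.
    return math.sqrt(cupcakes)

# Two people with 3 cupcakes each vs. one person with 2 and one with 4.
equal_split = utility(3) + utility(3)    # 2 * sqrt(3), about 3.46
unequal_split = utility(2) + utility(4)  # sqrt(2) + 2, about 3.41

print(equal_split > unequal_split)  # True: the equal split has higher total utility
```

The same inequality holds for any strictly concave utility function, which is the formal content of the "3 each beats 2 and 4" intuition.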

Comment author: Viliam_Bur 28 October 2014 12:56:24AM *  10 points [-]

It is extremely important to find out how to have a successful community without sociopaths.

(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole". I believe that avoiding these, and maybe many other, failure modes is critical if we ever want to have a Friendly society.)

Comment author: Vaniver 30 October 2014 02:49:51PM 9 points [-]

It is extremely important to find out how to have a successful community without sociopaths.

It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?

(The analogy to Friendly AI is worth considering, though.)

Comment author: Azathoth123 02 November 2014 04:16:08AM 1 point [-]

(The analogy to Friendly AI is worth considering, though.)

Ok, so start by examining the suspected sociopath's source code. Wait, we have a problem.

Comment author: ChristianKl 29 October 2014 08:19:58PM *  5 points [-]

It is extremely important to find out how to have a successful community without sociopaths.

What do you mean with the phrase "sociopath"?

A person who's very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community, even when the same person fits the profile of what gets clinically diagnosed as sociopathy.

I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.

Comment author: Viliam_Bur 30 October 2014 08:57:06AM *  5 points [-]

I'd rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gave them a very precise definition of what exactly "fire" means, and how it differs at the quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment does not mean the topic is completely meaningless.

Specifically I am concerned about the type of people who are very low on empathy and their utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.

(I once had a boss like this for a short time, and... well, it's like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)

I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.

I'd like to agree with the spirit of this. But there is the problem that the sociopath would optimize their "indecent" behavior to make it difficult to prove.

Comment author: ChristianKl 30 October 2014 10:04:57AM 6 points [-]

Just because I cannot give you a bulletproof definition in a LW comment does not mean the topic is completely meaningless.

I'm not saying that the topic is meaningless. I'm saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about.

The base rate for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don't have a properly working empathy module; probably more than the average in society.

When Eliezer says that, based on typical-mind issues, he feels that everyone who says "I feel your pain" has to be lying, that suggests a lack of a working empathy module. If you read back the April 1st article, you find wording about "finding willing victims for BDSM". The desire to cause other people pain is there. Eliezer also checks other boxes, such as a high belief in his own importance for the fate of the world, that are typical for clinical psychopathy. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.

I'm not saying that Eliezer clearly falls under the label of clinical psychopathy; I have never interacted with him face to face and I'm no psychologist. But part of being rational is that you don't ignore patterns that are there. I don't think that this community would overall benefit from kicking out people who tick multiple marks on that checklist.

Yvain is smart enough not to gather data on the number of LW members diagnosed with psychopathy when he asks about mental illnesses. I think it's good that way.

If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.

Comment author: Viliam_Bur 30 October 2014 02:02:27PM *  3 points [-]

I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.

This feels to me like worrying about a vegetarian who eats "soy meat" because it exposes their unconscious meat-eating desire, while there are real carnivores out there.

specify which kind of people you want to remove from the community

I am not even sure if "removing a kind of people" is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern. Which also has a possible problem with false reporting; which maybe also could be solved by noticing patterns.

Speaking about society in general, experience shows that sociopaths are likely to gain power in various kinds of organizations. It would be naive to expect that rationalist communities would somehow be immune to this, especially if we start "winning" in the real world. Sociopaths have an additional natural advantage: they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths.

I think someone should at least try to solve this problem, instead of pretending it doesn't exist or couldn't happen to us. Because it's just a question of time.

Comment author: ChristianKl 30 October 2014 05:38:53PM 4 points [-]

I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.

Human beings frequently like to think of people they don't like and don't understand as evil. There are various very bad mental habits associated with that.

Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths act. They aren't just evil; their emotional processes are skewed in systematic ways.

My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.

Translated into everyday language, that's: "Rationalists should gossip more about each other." Whether we should follow that maxim is a quite complex topic on its own, and if you think it's important, write an article about it and actually address the reasons why people don't like to gossip.

I think someone should at least try to solve this problem, instead of pretending it doesn't exist or couldn't happen to us.

You are not really addressing what I said. It's very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account of a person who trusted another person from a LW meetup, a self-declared egoist, too much, and ended up with a bad interaction because they didn't take at face value that person's openness about caring only about themselves.

Given your moderator position, do you think that you want to do something to garden but lack the power at the moment? Especially in dealing with the obvious case? If so, that's a real concern, probably worth addressing more directly.

Comment author: Viliam_Bur 30 October 2014 07:36:19PM *  7 points [-]

Unfortunately, I don't feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don't think I have a solution. I just noticed a danger, and general unwillingness to debate it.

Probably the best thing I can do right now is to recommend good books on this topic. That would be:

  • The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
  • People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read

I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.

As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes larger and more successful. If something bad happened within the community, I would feel personally responsible for the people I had invited there with visions of rationality and "winning". (And "something bad" offline can be much worse than mere systematic downvoting.) Especially if we were to achieve some kind of power in real life, which is what I hope to do one day.

I want to do something better than just bring a lot of enthusiastic people to one place and let fate decide. I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics), or someone may do evil things under my roof without me even noticing. Having a community of highly intelligent people brings the risk that any sociopaths who come will likely also be highly intelligent.

So I am thinking about what makes a community safe or unsafe. If the community grows large enough, sooner or later problems start happening, and I would rather be prepared in advance. Trying to solve the problem ad hoc would probably look like personal animosity or joining one faction in an internal conflict.

Comment author: Lumifer 30 October 2014 07:49:21PM *  4 points [-]

Can you express what you want to protect against while tabooing words like "bad", "evil", and "abuse"?

Comment author: ChristianKl 30 October 2014 10:31:04PM 1 point [-]

In an ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we had known a person for years, we could trust that person to do good.

That's not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person.

There are a bunch of people I meet in the topic of personal development whom I trust very easily because I know the heuristics that those people use.

If you have someone in your local LW group who tells you that his utility function is to maximize his own utility, and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

But if you use that as a criterion for kicking people out, people won't be open about their own beliefs anymore.

In general, trusting someone a lot who ticks half of the criteria that constitute clinical psychopathy isn't a good idea.

On the other hand, LW is inclusive by default and not structured in a way where it's a good idea to kick out people on such a basis.

Comment author: Nornagest 30 October 2014 10:40:22PM *  3 points [-]

If you have someone in your local LW group who tells you that his utility function is to maximize his own utility, and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they've generally been confused teenagers, Internet Tough Guys, and a few people who're just really bad at recognizing their own emotions -- who also aren't the best people to trust, granted, but for different reasons.

I'd be more worried about people who habitually underestimate the empathy of others and don't have obviously poor self-image or other issues to explain it. Most of the sociopaths I've met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.

Comment author: Azathoth123 02 November 2014 04:29:55AM 1 point [-]

My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.

What do you mean by "harm"? I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of "harm". For example, if you've shattered someone's worldview, have you "harmed" him?

Comment author: Viliam_Bur 02 November 2014 11:15:10AM *  2 points [-]

if you've shattered someone's worldview, have you "harmed" him?

Not per se, although there could be some harm in the execution. For example, if I decided to follow someone home from work every day screaming "Jesus is not real" at them, the problem would be with me following them every day, not with the message. Or if they were at their mother's funeral and the priest said "let's hope we will meet our beloved Jane in heaven with Jesus", that would not be the proper moment to jump up and scream "Jesus is not real".

Comment author: Lumifer 30 October 2014 03:15:57PM *  1 point [-]

I think someone should at least try to solve this problem

(a) What exactly is the problem? I don't really see a sociopath getting enough power in the community to take over LW as a realistic scenario.

(b) What kind of possible solutions do you think exist?

Comment author: Vaniver 30 October 2014 03:11:11PM 3 points [-]

I once had a boss like this for a short time, and... well, it's like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point.

Steve Sailer's description of Michael Milken:

I had a five-minute conversation with him once at a Milken Global Conference. It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.

Is that the sort of description you have in mind?

Comment author: Viliam_Bur 30 October 2014 04:55:08PM *  11 points [-]

I really doubt the possibility of conveying this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people... and yet all of this left me completely unprepared; I was confused and helpless like a small child. My only luck was the ability to run away.

If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn't met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say "yeah, yeah, I know exactly what you mean" while having a model of a 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)

Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say "2 + 2 = 5" so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn't make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.

If you try to confront them in front of another person and say: "You said yesterday that 2 + 2 = 5", they will either look the other person in the eyes and say "but really, 2 + 2 = 5" and make them believe so, or will look at you and say: "You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things"; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)

This person can play the same game with dozens of people at the same time and not get tired, because for them it's as easy as breathing; there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin the lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be "this is an awesome person". If you get closer to them, you will start noticing the pattern, and it will scare you like hell.

And unless you have met such a person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim tried to get your help. The true description of what really happened just seems fucking unlikely. On the other hand, their story would be exactly what you want to hear.

It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.

No, that is completely unlike. That sounds like some super-nerd.

Your first impression of the person I am trying to describe would be "this is the best person ever". You would have no doubt that anyone who said anything negative about such a person must be a horrible liar, probably insane. (But you probably wouldn't hear many negative things, because their victims would easily predict your reaction, and just give up.)

Comment author: Azathoth123 02 November 2014 04:23:55AM 2 points [-]

Not a person, but I've had similar experiences dealing with Cthulhu and certain political factions.

Comment author: Viliam_Bur 02 November 2014 11:12:01AM 2 points [-]

Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.

Comment author: Azathoth123 04 November 2014 04:03:39AM 1 point [-]

On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.

Comment author: NancyLebovitz 03 November 2014 12:50:14AM 0 points [-]

Narcissist Characteristics

I was wondering if this sounds like your abusive boss-- it's mostly a bunch of social habits which could be identified rather quickly.

Comment author: lmm 28 October 2014 07:09:43PM 4 points [-]

I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.

Comment author: Jiro 28 October 2014 07:39:03PM 2 points [-]

The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is find an incorruptible, bias-free dictator. Then you don't need a watertight justification, because those are used to avoid corruption and bias, and you know you don't have any of that anyway.

Comment author: lmm 29 October 2014 11:11:17PM 2 points [-]

I'm not being utopian, I'm giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.

Comment author: Viliam_Bur 30 October 2014 08:14:01AM 4 points [-]

I think I know what you mean. The situations like: "there is 90% probability that something bad happened, but 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?"
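The dilemma in that question is a small expected-value problem; a sketch where every probability and payoff is invented purely for illustration:

```python
# All probabilities and payoffs below are made up for illustration only.
p_bad = 0.90  # estimated probability that something bad really happened

# Hypothetical payoffs (arbitrary units of community well-being):
act_when_bad = 5     # acting now stops real ongoing harm
act_when_fine = -10  # acting now wrongly punishes an innocent member
wait_when_bad = -3   # waiting lets real harm continue while data accumulates
wait_when_fine = 0   # waiting costs nothing if nothing bad happened

ev_act = p_bad * act_when_bad + (1 - p_bad) * act_when_fine    # 0.9*5 + 0.1*(-10) = 3.5
ev_wait = p_bad * wait_when_bad + (1 - p_bad) * wait_when_fine  # 0.9*(-3) + 0 = -2.7

print(ev_act > ev_wait)  # True: with these made-up numbers, acting now wins
```

The real difficulty, of course, is that the payoffs here are exactly the quantities a moderator cannot estimate well; the sketch only shows the shape of the trade-off, not its answer.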

Comment author: Azathoth123 02 November 2014 04:39:21AM 3 points [-]

The thing is, from what I've heard, the problem isn't so much sociopaths as ideological entryists.

Comment author: Lumifer 28 October 2014 07:54:22PM 4 points [-]

The best ruler for a community is an incorruptible, bias-free dictator.

There is also that kinda-important bit about shared values...

Comment author: Risto_Saarelma 01 November 2014 05:30:50PM 3 points [-]

But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a confict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole".

How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.

And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?

The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church's writings, but there's also like a million pages of that stuff so I'm not going to try and pick examples.

Comment author: Viliam_Bur 01 November 2014 08:58:06PM 1 point [-]

All cheap detection methods can be fooled easily. It's like that old meme "if someone is lying to you, they will subconsciously avoid looking into your eyes", which everyone has already heard, so of course today every liar looks into your eyes.

I see two possible angles of attack:

a) Make a correct model of sociopathy. Don't imagine sociopaths to be "like everyone else, only much smarter". They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.

b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)

Comment author: drethelin 02 November 2014 07:45:54PM 1 point [-]

I think it's extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.

Comment author: drethelin 02 November 2014 07:30:29PM 0 points [-]

Why is this important?

Comment author: Viliam_Bur 02 November 2014 09:46:34PM 5 points [-]

My goal is to create a rationalist community. A place to meet other people with similar values and "win" together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.

Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are an even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature, but in the social world we must also deal with people, and this is our archetypal weakness.

The typical nerd strategy for resolving conflict is to run away and hide, creating a community of social outcasts where everything is tolerated and the whole group is safe more or less because it has such low status that typical bullies avoid it. But the moment we start "winning", this protective shield is gone, and we do not have any other coping strategy. Just as being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit others and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it's just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any "barbarians" consider you a serious danger.

(I don't want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It's not just the "affective death spirals", although they also play a large role. But there are people in important positions who don't think about "how to make the world a better place for humans", but rather "how could I most benefit from this conflict". And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn't immediately fall apart, or doesn't become merely a debate club, this seems to me like the next obvious risk.)

Comment author: ChristianKl 02 November 2014 10:22:37PM 1 point [-]

I don't want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders.

How do you come to that conclusion? Simply because you don't agree with their actions? Otherwise, are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?

Comment author: Viliam_Bur 03 November 2014 07:58:49AM 1 point [-]

How do you come to that conclusion? Simply because you don't agree with their actions?

Uhm, no. Allow me to quote from my other comment:

If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0.

I hope it illustrates that my mental model has separate buckets for "people I suspect to be sociopaths" and "people I disagree with".

Comment author: ChristianKl 03 November 2014 03:58:54PM 1 point [-]

Diagnosing mental illness based on the kind of second-hand information you have about politicians isn't trivial, especially if you lack a background in psychology.

Comment author: pianoforte611 29 October 2014 07:43:46PM 0 points [-]

Are you directing this at LW? I.e., is there a sociopath that you think is bad for our community?

Comment author: Viliam_Bur 30 October 2014 09:03:12AM *  0 points [-]

Well, I suspect Eugine Nier may have been one, to show the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)

Now, that was online behavior. Imagine the same kind of person in real life. I believe it's just a question of time. Extrapolating from my limited experience, such a person would be rather popular, at least at the beginning, because they would keep using the right words, the ones tested to evoke a positive response from many lesswrongers.

Comment author: IlyaShpitser 30 October 2014 09:59:43AM *  3 points [-]

A "sociopath" is not an alternative label for [someone I don't like.] I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as "player characters", for whatever reason. A monster, basically. I think it's a bad habit to go around calling people monsters.

Comment author: Viliam_Bur 30 October 2014 01:46:19PM *  6 points [-]

I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.

The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don't have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of "unpleasant" imagine "hurting people, but only as much as is safe in the given situation".) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that all the other people are now going to invent alternative explanations for why that probably didn't happen, because they don't want to risk being wrong in a way that would feel morally wrong to them.

I don't have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a "league against monsters", it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)

So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn't exist. I wish we had a better solution.

Comment author: NancyLebovitz 30 October 2014 08:37:13PM 2 points [-]

I can believe that 1% - 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don't have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.

Comment author: Viliam_Bur 30 October 2014 10:36:49PM *  3 points [-]

That's right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.

(However, their ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So if high intelligence were the only additional factor -- which I don't know to be true or not -- it could again be 1-4% among the wannabe rationalists.)

Comment author: Lumifer 31 October 2014 01:05:53AM 2 points [-]

The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else.

I would describe that person as a charismatic manipulator. I don't think it requires being a sociopath, though being one helps.

Comment author: NancyLebovitz 30 October 2014 10:59:15PM 1 point [-]

The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we're relatively safe. :-)

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field, but I'm really hoping we don't get tested on that one.

Comment author: Viliam_Bur 31 October 2014 09:39:38AM 2 points [-]

I think we're relatively safe

Returning to the original question ("Where are you right, while most others are wrong? Including people on LW!"), this is exactly the point where my opinion differs from the LW consensus.

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field

For a sufficiently high value of "eventually", I agree. I am worried about what would happen until then.

I'm really hoping we don't get tested on that one.

I'm hoping that this is not the best answer we have. :-(

Comment author: arromdee 30 October 2014 06:47:49PM 1 point [-]

Whenever there is some kind of a "league against monsters", it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)

https://allthetropes.orain.org/wiki/Hired_to_Hunt_Yourself

Comment author: Lumifer 30 October 2014 03:31:59PM 3 points [-]

Well, I suspect Eugine Nier may have been one, to show the most obvious example.

Why do you suspect so? Gaming ill-defined social rules of an internet forum doesn't look like a symptom of sociopathy to me.

You seem to be stretching the definition too far.

Comment author: Viliam_Bur 30 October 2014 05:07:03PM 3 points [-]

Abusing rules to hurt people is at least weak evidence. Doing it persistently for years, even more so.

Comment author: RowanE 27 October 2014 11:41:18AM 9 points [-]

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.

Comment author: Thomas 27 October 2014 11:59:31AM *  3 points [-]

Very well. But do you have such a belief, one that others will see as wrong?

(Last time this was asked, the majority of contrarian views were presented by me.)

Comment author: RowanE 27 October 2014 02:55:02PM *  8 points [-]

The most contra-LW belief I have, if you can call it that, is that I'm not convinced of the pattern theory of identity - EY's arguments that there are no "same" or "different" atoms don't affect me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven't got much support besides my intuitions for believing that that would end my experience while going to sleep tonight won't, and by now I've become almost agnostic on the issue.

Comment author: ZankerH 27 October 2014 01:39:03PM 2 points [-]
  • Technological progress and social/political progress are loosely correlated at best

  • Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

  • There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms

Comment author: Metus 27 October 2014 02:24:31PM 3 points [-]

I think I found the neoreactionary.

Comment author: gjm 27 October 2014 03:50:13PM 2 points [-]

The neoreactionary? There are quite a number of neoreactionaries on LW; ZankerH isn't by any means the only one.

Comment author: Metus 27 October 2014 04:05:24PM 2 points [-]

Apparently LW is a bad place to make jokes.

Comment author: gjm 27 October 2014 05:09:53PM 10 points [-]

The LW crowd is really tough: jokes actually have to be funny here.

Comment author: Lumifer 27 October 2014 04:12:47PM 3 points [-]

That's not LW, that's the internet. The implied context in your head is not the implied context in other heads.

Comment author: Nate_Gabriel 27 October 2014 01:52:51PM 1 point [-]

Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband's right to punish his wife however he wanted.

Comment author: RowanE 27 October 2014 02:32:18PM 1 point [-]

I think that progress is specifically what he's on about in his third point. It's standard neoreactionary stuff, there's a reason they're commonly regarded as horribly misogynist.

Comment author: Capla 27 October 2014 06:06:02PM 1 point [-]

I want to discuss it, and be shown wrong if I'm being unfair, but saying "It's standard [blank] stuff" seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says "Oh, that's just standard Less Wrong stuff." It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.

Comment author: RowanE 27 October 2014 07:20:19PM 2 points [-]

I was trying to say "you should not expect someone who thinks no social, political or moral progress has been made since the 18th century to consider women's rights a big step forward" in a way that wasn't insulting to Nate_Gabriel - being casually dismissive of an idea makes "you seem to be ignorant about [idea]" less harsh.

Comment author: Lumifer 27 October 2014 06:15:19PM 1 point [-]

but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright.

This comment could be (but not necessarily is) valid with the meaning of "Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument.".

Comment author: fubarobfusco 28 October 2014 04:06:02AM *  1 point [-]

How do you square your beliefs with (for instance) the decline in murder in the Western world — see, e.g. Eisner, Long-Term Historical Trends in Violent Crime?

Comment author: RichardKennaway 27 October 2014 03:00:28PM 1 point [-]

What do you mean by social progress, given that you distinguish it from technological progress ("loosely correlated at best") and moral progress ("no such thing")?

Comment author: ZankerH 27 October 2014 03:15:28PM *  1 point [-]

Re: social progress: see http://www.moreright.net/social-technology-and-anarcho-tyranny/

We use the term “technology” when we discover a process that lets you get more output for less investment, whether you’re trying to produce gallons of oil or terabytes of storage. We need a term for this kind of institutional metis – a way to get more social good for every social sacrifice you have to make – and “social technology” fits the bill. Along with the more conventional sort of technology, it has led to most of the good things that we enjoy today.

The flip side, of course, is that when you lose social technology, both sides of the bargain get worse. You keep raising taxes yet the lot of the poor still deteriorates. You spend tons of money on prisons and have a militarized police force, yet they seem unable to stop muggings and murder. And this is the double bind that “anarcho-tyranny” addresses. Once you start losing social technology, you’re forced into really unpleasant tradeoffs, where you have sacrifice along two axes of things you really value.

As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, "how is this still a thing in 2014?"). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.

Comment author: Toggle 27 October 2014 05:09:04PM 1 point [-]

More Right

That use of 'technology' seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; 'techne' implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a 'regression' in social progress implies that there was a kind of knowledge that we no longer have- but it is the social institutions themselves that are breaking down, not our ability to craft them.

Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of 'progress' is deeply suspect.

Comment author: Lumifer 27 October 2014 05:28:40PM *  1 point [-]

Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

The fact that I can't make a new computer from scratch doesn't mean I'm using one as "a magical artifact". What contemporary pieces of technology can you make?

It does not seem to be the case that we have ever known how to make new societies that do the things we want.

You might be more familiar with this set of knowledge if we call it by its usual name -- "politics".

Comment author: Toggle 27 October 2014 05:43:44PM 1 point [-]

I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

You can do this for governments, of course- but notably, we haven't lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The 'regression' that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.

Comment author: Lumifer 27 October 2014 05:57:33PM *  2 points [-]

I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

That's not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don't see why you couldn't produce a particular kind of a society.

Of course you can't build any kind of society you wish just like you can't build any kind of a computer you wish -- you're limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc.

Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn't have the precision and replicability of mass-producing machine screws, but I don't see why you can't describe it as a "technology".

Comment author: ChristianKl 27 October 2014 04:41:09PM 3 points [-]

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this.

I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self esteem and you give him an intellectual argument that's sound and that he wants to believe that's frequently not enough to change the fundamental belief behind low self esteem.

Scott Alexander wrote a blog post about how asking a schizophrenic for weird beliefs makes the schizophrenic tell the doctor about the faulty beliefs.

If you ask a question differently, you get people reacting differently. If you want a broad spectrum of answers, it makes sense to ask the question in a bunch of different ways.

I'm intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off even if those beliefs feel very real to me.

If you ask me "Do you think X is really true and everyone who disagrees is wrong?", you trigger slightly different heuristics in me than if you ask "Do you believe X?".

It's probably pretty straightforward to demonstrate this and some cognitive psychologist might even already have done the work.

Comment author: gattsuru 27 October 2014 05:56:20PM *  5 points [-]

General :

  • There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.

  • /Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we're desperately trying to Section Eight.

  • Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Political :

  • Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.

  • Privacy policies focused on preventing collection of identifiable data are ultimately doomed.

LessWrong-specific:

  • "Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing on a top-level post that breaks new philosophical ground as on a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data analysis it's disappointing.

  • The risks and costs of "Raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective... several hundred pages of reading later.

  • "Rationality" is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you're competing with RationalWiki, the universe is trying to give you a Hint.

  • The type of Atheism that is certain it will win, won't. There's a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness ... and then it goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, religion remains on a pedestal for Ethics, no matter how much it's poked and prodded by the blasphemy of actual practice. Lest you find the answer.

  • ((I'm /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))

MIRI-specific:

  • MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous... and it's several orders of complexity more difficult to describe that danger, where even minimally self-improving AI still have potential to be an existential risk and requires many fewer leaps to discuss and leads to similar concerns anyway.

  • MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

Comment author: Nornagest 27 October 2014 06:55:21PM 5 points [-]

Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Isn't this basically Goodhart's law?

Comment author: gattsuru 28 October 2014 12:11:33AM 2 points [-]

It's related. Goodhart's Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn't predict how that decoupling will occur. The common story of Goodhart's law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.

Sometimes this is a good thing: it's why, for one example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.

That said, while I'm convinced that's the pattern, it's not the only one or even the most obvious one, and most people seem to have different formalizations, and I can't find the evidence to demonstrate it.
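The proxy-versus-goal divergence described in these comments can be illustrated with a toy hill-climber (a minimal sketch; the scoring functions, weights, and numbers here are all hypothetical, not from the thread): a search rewarded on the measured statistic (pounds of machinery) drifts toward heavy, useless designs, while the same search rewarded on the true objective does not.

```python
import random

random.seed(0)  # deterministic for reproducibility

def true_value(design):
    # The actual goal: efficiency matters, excess weight hurts.
    return design["efficiency"] - 0.5 * design["weight"]

def proxy_value(design):
    # The measured statistic, Soviet-style: pounds of machinery.
    return design["weight"]

def hill_climb(score, steps=1000):
    """Greedy local search: accept a random perturbation iff it raises `score`."""
    design = {"efficiency": 1.0, "weight": 1.0}
    for _ in range(steps):
        candidate = {k: max(0.0, v + random.uniform(-0.1, 0.1))
                     for k, v in design.items()}
        if score(candidate) > score(design):
            design = candidate
    return design

optimized_for_proxy = hill_climb(proxy_value)
optimized_for_goal = hill_climb(true_value)

# Optimizing the proxy inflates weight and tanks true value;
# optimizing the goal directly sheds weight and gains efficiency.
print(true_value(optimized_for_proxy), true_value(optimized_for_goal))
```

The decoupling happens without any falsified reports: every accepted step honestly improves the measured number, yet the measure and the goal part ways as soon as the measure becomes the optimization target.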

Comment author: Evan_Gaensbauer 28 October 2014 07:24:37AM 1 point [-]

MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

I agree, and it's something I could, maybe should, help with instead of just complaining about. What's stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn't work, then, what would be stopping us?

Comment author: Viliam_Bur 28 October 2014 08:29:07AM 4 points [-]

over six-to-nine months to get the Sequences eBook proofread

This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

Is it because people don't volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?

Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Comment author: lmm 28 October 2014 07:15:06PM 3 points [-]

I'm just reading LW for fun and unwilling to do any real work to help, FWIW.

Comment author: gattsuru 28 October 2014 02:56:40PM *  2 points [-]

How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

It's the 'norm-palatable' part more than the proofreading aspect, unfortunately, and I'm not sure that can readily be made volunteer work.

As far as I can tell, the proofreading part began in late 2013, and involved over two thousand pages of content to proofread through Youtopia. As far as I can tell, the only Sequence-related volunteer work on the Youtopia site involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess, probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I've been able to tell, they're looking for a release at the end of the year that strongly suggests that they've finished the proofreading aspect.

That said, I may have outdated information: the Sequences eBook has been renamed several times in progress for a variety of good reasons, and I'm not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don't seem much stronger from a reading-order perspective.

In fairness, doing /good/ layout and ePublishing does take more specialized skills and significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format -- where links are less powerful tools, where a large portion of reader devices support only grayscale, and where certain media presentation formats aren't possible. At least from what I've seen in technical writing and pen-and-paper RPGs, this is not a task that parallelizes well: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.

Less charitably, while trying to find this information I've found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that's the same project or if it's a different one that failed, or if it's a different one that succeeded and I just can't find the actual eBook result.

Comment author: kalium 30 October 2014 05:42:50AM *  3 points [-]

I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers' comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself.

There's also the fact that many errors are only such because they're inconsistent with the overall style. It's presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.

Comment author: Evan_Gaensbauer 28 October 2014 09:42:57AM 1 point [-]

Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Thanks for the suggestion. I'll plan some meetups around this. Not the whole thing, mind you. I'll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and providing feedback on it or whatever.

Comment author: gattsuru 29 October 2014 05:15:31PM *  3 points [-]

What's stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn't work, then, what would be stopping us?

In organized form, I've joined the Youtopia page, and the current efforts appear to be either busywork or tasks best completed by a native speaker of a different language; there's no obvious organization around generalized goals, and no news updates at all. I'm not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn't promote the Youtopia group that seriously, because MIRI doesn't have any current long-term projects that can be easily presented to volunteers, or for some other reason.

For individual-oriented work, I'm not sure what to do, and I'm not confident I'm the best person to do it. There are also three separate issues, with no obvious interrelation. Improving the Sequences and their accessibility is the most immediate and obvious thing, and I can think of a couple of different ways to go about this:

  • The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn't much more comprehensible than just linking to the Sequences page on the Wiki, may in some cases be less useful, and most of the other projects seem better designed than anything I could offer.

  • Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don't think it's a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren't really well-formatted for a long-term version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be "shake up the local textbook!") I have started working on a dependency web, but this effort doesn't seem to produce marginal benefits until large sections are completed.

  • The Sequences themselves are written as short bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, components that were good enough to start with now have clearer explanations... that have circular redundancies. Writing bridge pieces to cover these attributes, or writing alternative descriptions for the more insider-centric Sequences, works within existing structures and provides benefit at fairly small intervals. This requires a fairly deep understanding of the Sequences, and does not appear to be low-hanging fruit. (And again, not necessarily Wise for my first Discussion or Main post to be "shake up the local textbook!")

But this is separate from MIRI's ability to work with insiders and only marginally associated with its ability to work with outsiders. There are folk with very significant comparative advantages on these matters (i.e., anyone inside MIRI, anyone in California, most people who accept their axioms), and while outsiders have managed to have major impact despite that, the notable case was LukeProg, who had the low-hanging fruit of basic nonprofit organization to pick, which is a pretty high bar to match.

There are some possibilities -- translating prominent posts to remove excessive jargon or wordiness (or even Upgoer Fiving them), working on some reputation problems -- but none of these seem to have obvious solutions, and wrong efforts could even have negative impact. See, for example, a lot of coverage in more mainstream web media. I've also got a significant anti-academic streak, so it's a little hard for me to understand the specific concern that Scott Alexander/su3su2u1 were raising, which may complicate matters further.

Comment author: polymathwannabe 27 October 2014 06:35:10PM 0 points [-]

There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.

Desirability issues aside, "believing X" and "knowing X is not true" cannot happen in the same head.

Comment author: Lumifer 27 October 2014 06:39:11PM 4 points [-]

"believing X" and "knowing X is not true" cannot happen in the same head

This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that "The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function" -- a bon mot I find insightful.

Comment author: polymathwannabe 27 October 2014 08:35:11PM 0 points [-]

Example of that being useful?

Comment author: gattsuru 27 October 2014 10:10:44PM *  6 points [-]

(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)

Having an internalized locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There's some evidence that this link is causative for at least some characteristics. It's not a completely unblemished good characteristic -- it correlates with lower compliance with medical orders, and probably isn't good for some anxiety disorders in extreme cases -- but it seems more helpful than not.

It's also almost certainly a lie. Indeed, it's obvious that such a thing can't exist under any useful model of reality. There are mountains of evidence for either the nature or nurture side of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there's a whole lot more universe that isn't you than there is you to start with. On the upside, if your locus of control is external, at least it's not worth worrying about. You couldn't do much to change it, after all.

Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that's perhaps too easy an example. It's not the only one, though: useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner's dilemma.

It's possible (even plausible) that this represents a valley of rationality -- like the earlier example of Pascal's Wagers that hold decent Utilitarian tradeoffs underneath -- but I'm not sure that's falsifiable, and it's certainly not obvious right now.

Comment author: Evan_Gaensbauer 28 October 2014 07:26:39AM 4 points [-]

Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.

As an afflicted individual, I appreciate the content warning. I'm responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.

Comment author: Vulture 31 October 2014 11:02:58PM 1 point [-]

I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say "content warning"; "Basilisk" sounds culty, might point confused people towards dangerous or distressing ideas, and is a word which we should probably not be using more than necessary around here, for the simple PR reason of not looking like idiots.

Comment author: gattsuru 01 November 2014 01:33:58AM 1 point [-]

Yeah, other terminology is probably a better idea. I'd avoided 'trigger' because it isn't likely to actually trigger anything, but there's no reason to use new terms when perfectly good existing ones are available. "Content warning" isn't quite right, but it's close enough, and enough people are unaware of the original meaning, that it's probably preferable to use.

Comment author: Lumifer 27 October 2014 08:47:01PM *  1 point [-]

Mostly in the analysis of complex phenomena with multiple incompatible (or barely compatible) frameworks for looking at them.

A photon is a wave.
A photon is a particle.

Love is temporary insanity.
Love is the most beautiful feeling you can have.

Etc., etc.

Comment author: RowanE 27 October 2014 10:18:06PM 1 point [-]

It's possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true - a photon is actually neither.

Truth is not beauty, so there's no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.

Comment author: lmm 27 October 2014 07:49:41PM *  4 points [-]
  • Arguing on the internet is much like a drug, and bad for you
  • Progress is real
  • Some people are worth more than others
    • You can correlate this with membership in most groups you care to name
  • Solipsism is true
Comment author: NancyLebovitz 27 October 2014 08:35:51PM 4 points [-]

Some people are worth more than others
Solipsism is true

Are these consistent with each other? Should it at least be "Some "people" are worth more than others"?

Comment author: lmm 27 October 2014 10:37:03PM 0 points [-]

Words are just labels for empirical clusters. I'm not going to scare-quote people when it has the usual referent used in normal conversation.

Comment author: Mitchell_Porter 30 October 2014 06:59:54AM 0 points [-]

What do you mean by solipsism?

Comment author: lmm 30 October 2014 01:04:30PM 0 points [-]

My own existence is more real than this universe. Humans and our objective reality are map, not territory.

Comment author: Mitchell_Porter 31 October 2014 07:00:02AM 0 points [-]

What does it mean for one thing to be more real than another thing?

Also, when you say something is "map not territory", what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.

Comment author: lmm 31 October 2014 07:36:25PM 0 points [-]

Maybe "more fundamental" is clearer. In the same way that friction is less real than electromagnetism.

Comment author: Mitchell_Porter 01 November 2014 01:31:04AM 1 point [-]

More fundamental, in what sense? e.g. do you consider yourself to be the cause of other people?

Comment author: lmm 01 November 2014 04:56:07PM 0 points [-]

e.g. do you consider yourself to be the cause of other people?

To the extent that there is a cause, yes. Other people are a surface phenomenon.

Comment author: Evan_Gaensbauer 28 October 2014 07:17:56AM 1 point [-]

Progress is real

What do you mean by 'progress'? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc.

What's interesting is that there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn't been much political or social progress, and that moral progress doesn't exist. So, if that's the sort of progress you mean, and you also believe that you're right about this when most others aren't, then this thread contains some claims that contradict each other.

Alas, I agree with you that arguing on the Internet is bad, so I'm not encouraging you to debate ZankerH. I'm just noting something I find interesting.

Comment author: James_Miller 27 October 2014 09:56:53PM 3 points [-]

I've signed up for cryonics, invest in stocks through index funds, and recognize that the Fermi paradox means mankind is probably doomed.

Comment author: summerstay 27 October 2014 01:52:51PM *  3 points [-]

It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

Comment author: polymathwannabe 27 October 2014 02:18:00PM 3 points [-]

EY has declared that P-zombies are nonsense, but I've had trouble understanding his explanation. Is there any consensus on this?

Comment author: RowanE 27 October 2014 02:43:51PM *  5 points [-]

Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn't doing so for reasons causally connected to the fact that they are conscious. To effectively say "I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn't conscious" is absurd.

Comment author: bbleeker 27 October 2014 06:42:29PM 1 point [-]

How would you tell the difference? I act like I'm conscious too, how do you know I am?

Comment author: hyporational 27 October 2014 02:57:54PM *  1 point [-]

It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

I haven't gotten that impression. The p-zombie problem those other guys talk about is a bit different since human beings aren't made with a purpose in mind and you'd have to explain why evolution would lead to brains that only mimic conscious behavior. However if human beings make robots for some purpose it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about.

I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case looking for human-like traits in machines becomes a moot point.

Comment author: Capla 27 October 2014 06:18:57PM 1 point [-]

I often wonder if my subconscious is actually conscious, and it's just a different consciousness than me.

Comment author: hyporational 28 October 2014 09:40:49AM *  1 point [-]

I actually arrived at this supposedly old idea on my own when I was reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But then thinking about it further it didn't seem very consistent that only certain bigger neural networks that are confined by arbitrary anatomical boundaries would be conscious, so I proceeded a bit further from there.

Comment author: Daniel_Burfoot 27 October 2014 01:59:11PM 1 point [-]

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.

Comment author: Lumifer 27 October 2014 03:06:15PM 5 points [-]

Interesting. This is in contrast to which societies? To where should altruists emigrate?

Comment author: Evan_Gaensbauer 28 October 2014 07:44:33AM 4 points [-]

If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they're doing it mostly virtually, what they need the most is Internet access. If a lot of the people they'd be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn't seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries.

Note that if you perceive this as a bad idea, please share your thoughts, as I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized, so it's an idea worthy of detractors if criticism is indeed to be had.

Comment author: drethelin 02 November 2014 07:39:21PM 3 points [-]

Fundamentally, the biggest reason to have a hub, and the biggest barrier to creating a new one, is coordination. Existing hubs are valuable because a lot of the coordination work is done FOR you. People who are effective, smart, and wealthy are already sorted into living in places like NYC and SF for lots of other reasons. You don't have to directly convince or incentivize these people to live there for EA. This is very similar to why MIRI theoretically benefits from being in the Bay Area: they don't have to pay the insanely high cost of attracting people to their area at all, only the cost of attracting them to hang out and work with MIRI as opposed to Google or whoever. I think it's highly unlikely that, even for the kind of people who are into EA, a new place could be made sufficiently attractive to potential EAs to climb over the mountains of non-coordinated reasons people have to live in existing hubs.

Comment author: DanielLC 27 October 2014 07:19:57PM 2 points [-]

If I scale back my career ambitions, I won't make as much money, which means that I can't donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?

Comment author: Daniel_Burfoot 27 October 2014 03:20:55PM *  1 point [-]

I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score.

(This is supposed to be a response to Lumifer's question below).

Comment author: Lumifer 27 October 2014 03:34:32PM 4 points [-]

would suggest ANZAC, Germany, Japan, or Singapore. ... Scandinavia is also good.

That's a very curious list, notable for absences as well as for inclusions. I am a bit stumped, for I cannot figure out by which criteria it was constructed. Would you care to elaborate on why these countries look to you like the most ethical on the planet?

Comment author: Daniel_Burfoot 27 October 2014 10:05:33PM 1 point [-]

I don't claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they're ethically better than the US.

Comment author: Lumifer 28 October 2014 03:02:36PM 0 points [-]

Hmm... Is any Western European country ethically worse than the USA from your point of view? Would Canada make the list? Does any poor country qualify?

Comment author: Daniel_Burfoot 28 October 2014 03:15:29PM *  -1 points [-]

In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay; I wouldn't want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list; I thought ANZAC stood for Australia, New Zealand And Canada.

Comment author: Metus 27 October 2014 09:46:02PM 1 point [-]

Modern countries with developed economies lacking a military force involved in, and/or capable of, military intervention outside their territory. Maybe his gripe is with the US military, so I just went with that.

Comment author: Azathoth123 28 October 2014 03:54:16AM 4 points [-]

Which is to say they engage in a lot of free riding on the US military.

Comment author: DanielFilan 27 October 2014 10:50:13PM 2 points [-]

For reference, ANZAC stands for the "Australia and New Zealand Army Corps" that fought in WWI. If you mean "Australia and New Zealand", then I don't think there's a shorter way of saying that than just listing the two countries.

Comment author: Douglas_Knight 28 October 2014 10:03:55PM 2 points [-]

"the Antipodes"

Comment author: Capla 27 October 2014 06:11:37PM *  0 points [-]

I'm not sure what you mean. Can you elaborate, with the other available options perhaps? What should I do instead?

To be more specific, what's morally problematic about wanting to be a more successful writer or researcher or therapist?

Comment author: Lumifer 27 October 2014 06:23:05PM *  3 points [-]

what's morally problematic about wanting to be a more successful writer or researcher or therapist?

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

"The simple step of a courageous individual is not to take part in the lie." -- Alexander Solzhenitsyn

Comment author: faul_sname 27 October 2014 07:28:57PM *  2 points [-]

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

...yes? I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, "the lie" of Nazi Germany was not the mere existence of the society; it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by any reasonable definition.

ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn't optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.

Comment author: Lumifer 27 October 2014 07:43:08PM *  2 points [-]

I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else

The key word is "successful".

To become a successful romance writer in Nazi Germany would probably require you pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan?

You can't become successful in a dirty society while staying spotlessly clean.

Comment author: faul_sname 27 October 2014 07:48:47PM 3 points [-]

So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.

Comment author: Lumifer 27 October 2014 08:13:59PM *  2 points [-]

Who said my goal was to stay spotlessly clean?

The question was whether "being a writer in Nazi Germany would be any worse than being a writer anywhere else".

If you would be happy to wallow in mud, be my guest.

The question of how much morality could one maintain while being successful in an oppressive society is an old and very complex one. Ask Russian intelligentsia for details :-/

Comment author: NancyLebovitz 27 October 2014 08:32:20PM 2 points [-]

Lack of representation isn't the worst thing in the world.

If you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans, with no nasty stereotypes of non-Aryans, I don't think it's especially awful.

Comment author: Douglas_Knight 28 October 2014 10:32:20PM 1 point [-]

did [Nazi Germany] have romance novels?

What a great question! I went to wikipedia which paraphrased a great quote from NYT

Germans love erotic romance...The company publishes German writers under American pseudonyms "because you can't sell romance here with an author with a German name"

which suggests that they are a recent development. Maybe there was a huge market for Georgette Heyer, but little production in Germany.

One thing that is great about wikipedia is the link to corresponding articles in other languages. "Romance Novel" in English links to an article entitled "Love- and Family-Novels." That suggests that the genres were different, at least at some point in time. That article mentions Hedwig Courths-Mahler as a prolific author who was a supporter of the SS and I think registered for censorship. But she rejected the specific censorship, so she published nothing after 1935 and her old books gradually fell out of print. But I'm not sure she really was a romance author, because of the discrepancy of genres.

Comment author: Azathoth123 30 October 2014 04:58:20AM 0 points [-]

What do your lovers find attractive about each other? It better be their Aryan traits.

Comment author: Nornagest 27 October 2014 08:44:22PM 0 points [-]

I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else.

Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we're calling Nazi Germany.

Comment author: RowanE 27 October 2014 10:21:00PM 0 points [-]

Considering the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking allied bombing raids into account amounts to fighting the hypothetical.

Comment author: Nornagest 27 October 2014 10:26:49PM *  1 point [-]

Is it? I was mainly joking -- but there's an underlying point, and that's that economic and political instability tends to correlate with ethical failures. This isn't always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you're dealing with issues serious enough to justify being a little unethical, or that someone's getting correspondingly hacked off at you for them, or both. Either way there are consequences.

Comment author: NancyLebovitz 28 October 2014 07:16:58PM 0 points [-]

It's a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you're hurting people who can fight back.

Comment author: Daniel_Burfoot 27 October 2014 07:00:52PM *  1 point [-]

I'm not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.

Comment author: Lumifer 27 October 2014 07:14:03PM 1 point [-]

I'm not sure I want to make blanket moral condemnations.

It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies do not extend to each individual, obviously, but the difference between "condemning the system" and "condemning the society" doesn't look all that big.

Comment author: ChristianKl 27 October 2014 03:09:11PM 0 points [-]

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view.

Do you follow some kind of utilitarian framework where you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?

Comment author: Daniel_Burfoot 27 October 2014 04:29:03PM -1 points [-]

Thanks for asking; here's an attempt at an answer. I'm going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that the extra 22% of income that the US taxes its citizens is simply squandered.

Let I be income, D be charitable donations, R be tax rate (0.4 vs 0.18), U be money usage in support of lifestyle, and T be taxes paid. Roughly U=I-T-D, and T=R(I-D). A bit of algebra produces the equation D=I-U/(1-R).

Consider a good programmer-altruist making I=150K. In the first model, the programmer decides she needs U=70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D=33K, and pay T=47K in taxes. In SG, she will donate D=64K and pay T=16K in taxes to achieve the same U.

In the second model, the altruist targets a donation level of D=60K, and adjusts U so she can meet the target. In the US, she pays T=36K in taxes and has a lifestyle of U=54K. In SG, she pays T=16K in taxes and lives on U=74K.

So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG.

Most other developed countries have tax rates comparable to or higher than the US's, but it's more plausible that in those countries the money goes to things that actually help people.
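The algebra above can be checked with a short sketch. (The 150K income, the 70K/60K targets, and the stylized 40%/18% effective tax rates are all the comment's own assumptions, not real tax schedules.)

```python
def donation(income, lifestyle, tax_rate):
    """Model 1: fix lifestyle spending U and donate the rest.

    From U = I - T - D and T = R * (I - D),
    solving for D gives D = I - U / (1 - R).
    """
    return income - lifestyle / (1 - tax_rate)

def taxes(income, donation_target, tax_rate):
    """Model 2: fix the donation D; taxes are T = R * (I - D)."""
    return tax_rate * (income - donation_target)

I = 150_000

# Model 1: keep lifestyle at U = 70K in both countries.
print(round(donation(I, 70_000, 0.40)))  # US: ~33K donated
print(round(donation(I, 70_000, 0.18)))  # SG: ~65K donated

# Model 2: target D = 60K of donations; lifestyle is what's left.
print(round(I - taxes(I, 60_000, 0.40) - 60_000))  # US: ~54K lifestyle
print(round(I - taxes(I, 60_000, 0.18) - 60_000))  # SG: ~74K lifestyle
```

The arithmetic itself checks out; as the replies below note, the contested step is the premise that the 22-point rate gap measures pure waste, not the algebra.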

Comment author: bramflakes 27 October 2014 07:39:49PM *  6 points [-]

I'm going to compare the US to Singapore

this is the point where alarm bells should start ringing

Comment author: Daniel_Burfoot 27 October 2014 09:55:09PM 1 point [-]

The comparison is valid for the argument I'm trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.

Comment author: SolveIt 27 October 2014 09:04:03PM 3 points [-]

Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.

This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances. Just to name one, Singapore is tiny. Things are a lot cheaper when you're small. Small countries are sustainable because international trade means you don't have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic.

Now, I'm not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I'm just saying that your analysis is far too simple to be at all useful except perhaps by accident.

Comment author: fubarobfusco 28 October 2014 01:25:52AM 0 points [-]

Things are a lot cheaper when you're small.

Things are a lot cheaper when you're large. It's called "economy of scale".

Comment author: SolveIt 28 October 2014 01:03:47PM 1 point [-]

Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn't do that.

Comment author: ChristianKl 27 October 2014 05:17:18PM 3 points [-]

Consider a good programmer-altruist making I=150K

I think given the same skill level the programmer-altruist making 150K while living in Silicon Valley might very well make 20K less living in Germany, Japan or Singapore.

Comment author: Nornagest 27 October 2014 09:32:04PM 5 points [-]

I don't know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you're a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.

Comment author: satt 29 October 2014 02:50:38AM 0 points [-]

Where are you right, while most others are wrong? Including people on LW!

A friend I was chatting to dropped a potential example in my lap yesterday. Intuitively, they don't find the idea of humanity being eliminated and replaced by AI necessarily horrifying or even bad. As far as they're concerned, it'd be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating?

(I don't agree with that position normatively but it seems impregnable intellectually.)

Comment author: Viliam_Bur 29 October 2014 05:09:01PM *  3 points [-]

it'd be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating

Just to make sure, could this be because you assume that "intelligent life" will automatically be similar to humans in some other aspects?

Imagine a galaxy full of intelligent spiders, who only use their intelligence for travelling the space and destroying potentially competing species, but nothing else. A galaxy full of smart torturers who mostly spend their days keeping their prey alive while the acid dissolves the prey's body, so they can enjoy the delicious juice. Only some specialists among them also spend some time doing science and building space rockets. Only this, multiplied by infinity, forever (or as long as the laws of physics permit).

Comment author: satt 29 October 2014 11:44:05PM *  0 points [-]

Just to make sure, could this be because you [sic] assume that "intelligent life" will automatically be similar to humans in some other aspects?

It could be because they assume that. More likely, I'd guess, they think that some forms of human-displacing intelligence (like your spacefaring smart torturers) would indeed be ghastly and/or utterly unrecognizable to humans — but others need not be.

Comment author: pianoforte611 28 October 2014 12:36:53AM 0 points [-]

Diet and exercise generally do not cause substantial long term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won't lose another 7% after another 5 years.

It might be instrumentally useful though for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.

Comment author: Lumifer 29 October 2014 07:52:51PM 5 points [-]

Diet and exercise generally do not cause substantial long term weight loss

There is a pretty serious selection bias in that study.

I know some people who lost a noticeable amount of weight and kept it off. These people did NOT go to any structured programs. They just did it themselves.

I suspect that those who are capable of losing weight (and keeping it off) by themselves just do it and do not show up in the statistics of the programs analyzed in the meta-study linked to. These structured programs select for people who have difficulty in maintaining their weight and so are not representative of the general population.

Comment author: ChristianKl 29 October 2014 07:21:37PM *  1 point [-]

Diet and exercise generally do not cause substantial long term weight loss.

"Healthy diet" and dieting are often two different things.

Healthy diet might mean increasing the amount of vegetables in your diet. That's simply good.

Reducing your calorie consumption for a few months and then increasing it again, in what's commonly called the yo-yo effect, on the other hand is not healthy.

Comment author: RomeoStevens 29 October 2014 07:06:48PM *  0 points [-]

Why is this surprising? You give someone a major context switch, put them in a structured environment where experts are telling them what to do and doing the hard parts for them (calculating caloric needs, setting up diet and exercise plans), and they lose weight. You send them back to their normal lives and they regain the weight. These claims are always based upon acute weight loss programs. Actual habit changes are rare and harder to study. I would expect CBT, rather than acute diet and exercise, to be an actually effective acute intervention.

Comment author: pianoforte611 29 October 2014 07:39:18PM -1 points [-]

I hadn't thought of CBT; it does work in a very loose sense of the term, although I wouldn't call weight loss of 4 kg that plateaus after a few months much of a success. I maintain that no non-surgical intervention (that I know of) results in significant long term weight loss. I would be very excited to hear about one that does.

Comment author: RomeoStevens 29 October 2014 09:02:29PM 0 points [-]

I would bet that there are no one-time interventions that don't see a regression to pre-treatment levels (except surgery).