Comment author: [deleted] 03 March 2015 11:02:42AM 4 points [-]

I have not yet read the sequences in full, so let me ask: is there perhaps an answer to what is bothering me about ethics, namely why basically all ethics of the last 300 years or so is universalistic, i.e. prescribes treating everybody, without exception, according to the same principles? I don't understand it, because I think altruism is based on reciprocity. If my cousin is starving and a complete stranger halfway across the world is starving even more, and I have money for food, most ethical systems would conclude I should help the stranger. But from my angle, I am obviously getting less reciprocity, less personal utility, out of that than out of helping my cousin. I am not even considering the chance of a direct payback; simply having people I like and associate with not suffer is obviously a utility to me.

Basically, you can see altruism as an investment: you get a lot back from investing in people close to you, and with distance the return on investment shrinks, though it never reaches zero, because making humankind as such better off is always better for you too. This explains things like economic nationalism: if free trade makes Chinese workers better off by 100 units and American or European workers worse off by 50, a lot of people still don't want it, and this is actually rational. 100 units to people far away makes you better off by, say, 1 unit, while 50 units lost to what are basically your neighbors makes you worse off by 5.
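The weighting argument can be put as a toy calculation. The closeness weights (0.10 for neighbors, 0.01 for distant strangers) are the made-up illustrative numbers from the paragraph above, not real estimates:

```python
# Toy model: my utility from someone else's welfare change is the change
# scaled by a "closeness" weight. Weights are illustrative, not empirical.

def my_utility(welfare_change, closeness_weight):
    """Utility I derive from another person's welfare change."""
    return welfare_change * closeness_weight

# Free trade scenario: distant strangers gain 100, neighbors lose 50.
far_gain = my_utility(100, 0.01)   # +1.0 to me
near_loss = my_utility(-50, 0.10)  # -5.0 to me

net = far_gain + near_loss  # -4.0: a net loss from the reciprocal standpoint
```

On this (purely descriptive) model, opposing the trade is individually rational even though it destroys 50 units of total welfare, which is exactly the tension with universalist ethics the comment is pointing at.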

And this is why I don't understand why most ethics are universalistic.

Of course, one could argue this is not ethics when you are talking about the best investment for yourself. After all, by that sort of logic you would get the most return if you never gave anything to anyone else, so why even help your cousin?

Anyway, was this sort of reciprocal and thus non-universalistic ethics ever discussed here?

In response to comment by [deleted] on Open thread, Mar. 2 - Mar. 8, 2015
Comment author: gedymin 04 March 2015 03:11:04PM 4 points [-]

I think universalism is an obvious Schelling point. It is not only moral philosophers who find it appealing; ordinary people do too (at least when thinking about it in the abstract). Consider Rawls' "veil of ignorance".

Comment author: [deleted] 04 March 2015 11:31:55AM *  2 points [-]

It seems people make friends in two ways:

1) chatting with people and finding each other interesting

2) going through difficult shit together and thus bonding, building camaraderie (see: battlefield or sports team friendships)

If your social life lags and 1) is not working, try 2).

My two best friends come from a) surviving together a "deathmarch" project that was downright heroic (the worst week was over 100 hours logged), and b) going to a university preparation course where we both got picked on by a teacher who did not like us, and then both failed the entrance exam in spectacular fashion.

Questions:

a) Is this correct?

b) How do you intentionally put yourself into difficult shit with other people so that you can bond and build camaraderie?

In response to comment by [deleted] on Open thread, Mar. 2 - Mar. 8, 2015
Comment author: gedymin 04 March 2015 03:04:36PM 2 points [-]

Mountaineering or similar extreme activities are one option.

Comment author: gedymin 04 March 2015 02:58:36PM 4 points [-]

Are there any moral implications of accepting the Many Worlds interpretation, and if so what could they be?

For example, if the divergent copies of people (including myself) in other branches of the multiverse should be given non-negligible moral status, then that is one more argument against the Epicurean principle that "as long as we exist, death is not here". My many-worlds self can die partially - that is, in just some of the worlds. So I should try to reduce the number of worlds in which I'm dead. On the other hand, does that really change anything compared to "I should reduce the probability that I'm dead in this world"?

Comment author: gedymin 04 March 2015 02:43:39PM 4 points [-]

Is there some reason to think that physiognomy really works? Reverse causation probably explains most of the apparent effect, e.g. tall people are more likely to be seen as leaders by others, so they are more likely to become leaders. Nevertheless, is there something beyond that?

Comment author: [deleted] 03 March 2015 02:49:40PM 1 point [-]

I need to write more clearly. That is not my main thesis. My main thesis is the OMFG-level, shocking revelation that surprised me out of my mind: that e.g. obsessing over D&D is not merely a hobby or interest, but a desire to escape from a life and a self you hate. This filled me with compassion and reminded me of my former self, who was not far from that. Hobbies and interests, in this case nerdy ones, predict problems; you can diagnose certain issues by looking at people's hobbies and interests. That is my main thesis.

The rest is digging deeper, trying to figure out the reasons, and is less important.

I think you misunderstood the group-hate thing. The kids we are talking about were not yet groups at the ages of 8 or 10 when this happened to them, and I actually think it is a dangerous bias today to see every social dynamic as a group relation, ignoring individual relations.

It seems that after it was discovered that racism is a thing, and a bad one, everybody who was individually oppressed wants to invent their own "race". So, for example, gays went from being individuals who like gay sex and get hated by other individuals for it to inventing their own group and identity, essentially inventing a "race", and thus re-casting the hatred they receive from individual hatred into group hatred. I am quite puzzled by this. Is there a rational reason for it? Are humans hardwired to hate groups more than individuals? At any rate, I think your point treats adult nerds as another "race", which is a problem in itself, but my real problem is that these kids were not yet nerds.

Seeing this as a group-level oppression dynamic is very wrong at this 8-10-12 year old age. It was individuals who were perceived as weak and thus got oppressed for it. There was no identity of a weak-boy group; it was not invented as a "race", although later on they became adult nerds and then, yes, to some extent invented themselves as a "race".

So it is not that nerds were hated as kids. Weak kids were tortured as individuals for being weak, and this made them hate themselves as they grew up; self-hate turned them into fantasy escapists, and fantasy escapists are commonly called nerds, neckbeards or omega males.

I do not see nerds being jerks; where did you get this idea from? Your own experience? Lack of empathy is not jerkishness; lack of empathy combined with courage is jerkishness. Nerds don't dare to be jerks, in my experience - they are too afraid of a beating. On the contrary, they are generally submissive and meek, but very, very closed up, not initiating contact with others. I really don't understand your point. I am constantly talking about lack of courage as a cause of self-hatred, and you are saying cowardly people can be jerks? I think they are too afraid to be. We are talking about people who are even afraid to look others in the eye! Sorry, this does not make sense to me.

So yes, that is a real problem: fearful people are not actually jerks, and lack of empathy and interest is not jerkishness in itself. It is a "please leave me alone, you scare me" behavior, which is not jerky.

As for the third, there are already solutions, as I outlined above. I think confidence needs to be earned, not just hacked, or else it mispredicts your abilities. It is deadly to give people more courage than ability.

Comment author: gedymin 03 March 2015 07:53:03PM 1 point [-]

Funny, I thought escaping into one's own private world was not something exclusive to nerds. In fact, most people do it. Schoolgirls escape into fantasies about romance. Boys into fantasies about porn. Gamers into virtual reality. Athletes into fantasies about becoming famous in sport. Mathletes - about being famous and successful scientists. Goths - musicians or artists. And so on.

True, not everyone likes to escape into sci-fi or fantasy, but that's because different minds are attracted to different kinds of things. D&D is a relatively harmless fantasy. I'm not that familiar with it, so I'm not even sure whether it can be used to diagnose "nerds", but that's not the point. Correlation is not causation.

Regarding "jerks", we apparently disagree on definitions, so this issue is not worth pursuing. My point is that your self-styled definition of a "nerd" is a bit ridiculous, as in fact you're talking about three different groups of people that just happen to overlap.

Comment author: gedymin 03 March 2015 01:28:44PM *  1 point [-]

the solution will involve fixing things that made one a "tempting" bullying target

So a nerd, according to the OP, is someone who:

  • lacks empathy and interest in other people
  • lacks self confidence
  • has unconventional interests, ideas, and appearance

But even if we take for granted that this is a correct description of a nerd, these are very different issues and they require very different solutions.

The last problem is simple to fix at the level of society and ought to be fixed there. Hate against specific social groups should not be acceptable, no matter how intuitive it feels or how deep its biological basis is. The current situation, in which hate towards different races, nationalities, genders etc. is not politically acceptable but hate towards nerds is, does not make sense. Overall, people have learned and mostly agree that tolerance is a good idea - but it is still applied very selectively. For example, if parents and schools are capable of suppressing race-hate in the classroom (more or less), then they should be capable of censoring nerd-hate to an equal extent.

The problem of nerds being jerks and suffering as a consequence is not a real social problem at all. For society as a whole, it is negative feedback given to someone for being a jerk - a regulatory mechanism. Here I agree with the OP that self-improvement is the action to take.

The problem of self-confidence is a real problem, as in this case it easily leads to a self-reinforcing vicious circle. If the usual ways of increasing self-confidence do not help, I see no moral arguments against e.g. using personality-altering confidence-boosting drugs, if such were easily available and had no side effects.

So we have (1) a social solution, (2) a "not a real problem" solution, and (3) a "wait for scientific and technological progress to fix broken biology/psychology" solution.

Comment author: JoshuaZ 20 January 2015 10:48:04PM 5 points [-]

UGC is a conjecture, and unlike many computational complexity conjectures (such as P != NP and P = BPP), there's substantially more doubt about whether it is true. There are serious attempts, for example, to show that solving unique games lies in BQP, and if that's the case, then UGC cannot be true unless NP is contained in BQP, which is widely believed to be false.

Note also that UGC isn't so narrow: it essentially says that a whole host of different approximation problems are hard. If UGC is true, then one should doubt that recursive self-improvement will happen in general, which makes most of the focus on human morality less relevant.

(I personally lean towards UGC being false but that's very weakly and it is strictly speaking outside my area of expertise.)

Comment author: gedymin 21 January 2015 10:41:34AM 3 points [-]

If UGC is true, then one should doubt recursive self-improvement will happen in general

This is interesting - can you expand on it? I feel there clearly are some arguments in complexity theory against AI as an existential risk, and these arguments deserve more attention.

To sidetrack a bit: as I've argued in a comment, if it turns out that many important problems are practically unsolvable on realistic timescales, any superintelligence would be unlikely to gain a strategic advantage. The support for this idea is much more concrete than the speculations in the OP of this thread; for example, there are many problems in economics that we don't know how to solve efficiently despite having invested a lot of effort.

Comment author: gedymin 10 January 2015 11:11:40AM *  0 points [-]

Why do you think that the fundamental attribution error is a good starting point for someone's introduction to rational thinking? There seems to be a clear case of the Valley of Bad Rationality here. The fundamental attribution error is a powerful psychological tool: it allows us to take personal responsibility for our successes while blaming the environment for our failures. Now assume this tool is taken away from a person, leaving all their other beliefs intact. How exactly would that improve their life?

I also don't get why thinking "the rude driver probably has his reasons too, so I should excuse him" is a psychologically good strategy, even assuming it is morally right.

About map vs. reality: I'm not sure why it has to be put together with the FAE, as it is a much more general topic. And your explanation is not the first one I've seen on this topic that leaves a big "so what?" question hanging in the air. At face value, it seems to say that "people often confuse the idea of an apple with an apple in their hand". Now clearly that is not the case for anyone, except perhaps the most die-hard Platonists. And even if it were, why should the aspiring rationalist care?

I think negative examples would be really strong here. Teaching about the perils of magical thinking and wishful thinking would be a good start. Only after giving a few compelling concrete examples does it make sense to generalize and speak at a more abstract level. It also seems that many aspiring rationalists / high-IQ people are especially vulnerable to the trap of building elaborate mental models of something and then failing to test them empirically against raw reality.

Comment author: gedymin 07 January 2015 10:26:10AM 1 point [-]

I don't think that overfitting is a good metaphor for your problem. Overfitting involves building a model that is more complicated than the optimal model would be. What exactly is the model here, and why do you think that learning just a subset of the course's material leads to a more complicated model?

Instead, your example looks like a case of sampling bias. Think of the material of the whole course as the whole distribution, and of the exam topics as a subset of that distribution. "Training" your brain with samples from just that subset is going to produce a learning outcome that is unlikely to work well for the whole distribution.
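A minimal sketch of the sampling-bias point, with invented topic names and mastery numbers (this is a toy illustration, not a claim about any real curriculum):

```python
# Toy model: cramming only the exam topics (a biased sample of the course)
# produces a high score on that subset but a poor one on the whole course.

course = {
    "integrals": 0.9, "limits": 0.8, "series": 0.7,
    "matrices": 0.6, "proofs": 0.5,
}  # topic -> mastery you'd reach with balanced study (made-up numbers)

exam_topics = {"integrals", "limits"}  # the biased subset you cram

def mastery(topic, crammed):
    # Cramming boosts the crammed topics; uncrammed topics barely stick.
    if topic in crammed:
        return min(1.0, course[topic] + 0.2)
    return 0.2 * course[topic]

exam_score = sum(mastery(t, exam_topics) for t in exam_topics) / len(exam_topics)
course_score = sum(mastery(t, exam_topics) for t in course) / len(course)
# exam_score is 1.0, while course_score is only about 0.47
```

The "training distribution" (exam topics) and the "test distribution" (whole course) don't match, which is exactly the sampling-bias framing rather than overfitting.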

Memorizing disconnected bits of knowledge without understanding the material - that would be a case of overfitting.

Comment author: gedymin 05 January 2015 10:36:02AM 2 points [-]

There is a semi-official EA position on immigration

Could you describe what this position is? (or give a link) Thanks!
