diegocaleiro


Copied from the Heterodox Effective Altruism facebook group (https://www.facebook.com/groups/1449282541750667/):

Diego Caleiro: I've read the comments and now speak as myself, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out my reasons, based on extremely partial information:

1) If Roland brings up topics that are not EA, such as 9/11 and Thai prostitutes, the burden is on him both to be clear and to justify why those topics belong there.

2) The politeness of EAs is in great part the reason that some SJWs managed to infiltrate EA. Having regulations and rules that determine who can be kicked out is bad, because such rules are a weapon that SJWs have been known to wield with great care and precision. That is, I much prefer a group where people are kicked out without justification to one in which reasons are given (I say this as someone who was kicked out of at least 2 physical spaces related to EA, so it does not come lightly). Competition drives out SJWs, so I would recommend that Roland create a new meeting that is more valuable than its predecessor, and attract people to it. (This community was created by me, with me as an admin, precisely for those reasons. I believed that I could legitimately help generate more valuable debate than previous EA groups, including the one that I myself created but feared would be taken over by more SJWish types. This one is protected.)

3) Another reason to be pro-kicking-out: Tartre and I run a facebook chat group where I make a point of never explaining why anyone is kicked out. As far as I can tell, it has the best density of interesting topics of any facebook chat related to rationalists and EAs. It is necessary to be selective.

4) That said: Being excluded from social groups is horrible; it feels like dying to a lot of people, and it makes others fear it happening to them like the plague. So it allows for the kind of pernicious coordination described in (DeScioli 2013) and full-blown Girardian scapegoating. There's a balance that needs to be struck to prevent SJWs from taking over little bureaucracies and then mobbing people out, thus tyrannizing others into compliance with whatever is their current-day flavour of acceptable speech.

5) Because being excluded from social groups is horrible, HEAs need to create a welcoming network of warmth and kindness towards those who are excluded or accused. We don't want people to feel like they are dying; we don't want their hippocampi compromised and their serotonin levels lowered. Why? Because this happens to a LOT of people when they transition from being politically left-leaning to being politically right-leaning (or when they take the sexual-strategy Red Pill). If we, HEAs, side with the accusers, the scapegoaters, the mob, we will be one more member of the ochlocracy. This is both anti-utilitarian, as the harm to the excluded party is nearly unbearable, and anti-heterodox, as in all likelihood the person was excluded at least in part for not sharing a belief or behavioral pattern with those who are doing the excluding. So I highly recommend that, on priors, HEAs come forth in favor of the accused person.

During my own little scapegoating event, Divia Caroline Eden was nice enough to give me a call and inquire about my psychological health, make sure I wasn't going to kill myself, that sort of thing (people literally do that; if you have never been scapegoated, you cannot fathom what it is like, it cannot be expressed in words), and maybe 4 other people messaged me online showing equal niceness and appreciation.

Show that kindness to Roland now, and maybe he'll return the favor when and if it happens to you. As an HEA, you are already in the at-risk group.

Eric Weinstein argues strongly against returns being at 20th-century levels, and says they are now vector fields, not scalars. I concur (not that I matter).

The Girardian conclusion and general approach of this text make sense.
But it's worth emphasizing that the best-performing strategy is a forgiving one, something like tit for two tats (a minimal sketch appears after this comment).
Also, it seems you are putting some moral value on long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and is seen in most societies where there are enough resources to go around and enough intersexual geographical proximity. Recently, more and stronger arguments have been emerging against female short-term strategies. But it would be a far cry to claim that we already know decisively that the expected value of a short-term strategy for a female is necessarily negative. It may depend on fetal androgens, and it may be that the measurements made so far took biased samples to calculate the cost of female promiscuity. In the case of males, as far as I know, there is literally no data associating short-term mating with long-term QALY loss, none. But I'd be happy to be corrected.
Notice also that the moral question is always about the sex you are not. If you are female, and the data says it doesn't affect males, then you are free to do whatever. If you are male, and the data says short-terming females become unhappy in the long term, then the moral responsibility for that falls on you, especially if there's information asymmetry.
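For those unfamiliar with "tit for two tats": it is tit for tat that forgives a single defection, defecting only after two consecutive defections by the opponent. Here is a minimal Python sketch of the idea; the move encoding, the `play` loop, and the toy opponent are all illustrative assumptions of mine, not anything from the text I'm replying to.

```python
# Minimal sketch of "tit for two tats" in an iterated prisoner's dilemma.
# The move encoding and the toy opponent below are illustrative assumptions.

C, D = "cooperate", "defect"

def tit_for_two_tats(opponent_history):
    """Defect only after the opponent defects twice in a row; otherwise cooperate."""
    if opponent_history[-2:] == [D, D]:
        return D
    return C

def play(strategy_a, strategy_b, rounds=10):
    """Run both strategies against each other; each sees only the other's past moves."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

def single_defector(opponent_history):
    """Toy opponent: defects once on the first round, then always cooperates."""
    return D if not opponent_history else C

# Tit for two tats forgives the single defection, and cooperation is restored.
print(play(tit_for_two_tats, single_defector, rounds=5))
```

Against the one-time defector, plain tit for tat would retaliate on the next round; the forgiving version never defects here, which is the point of the strategy.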

This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.

It's not that relevant, so this is just in case people are curious.

I am now a person who has moved during adulthood, and I can report that past me was right, except that he did not account for rent.

It seems to me the far self is more orthogonal to your happiness. You can try to optimize it for maximal long-term happiness.

Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus our efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low-probability scenario.
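To make that allocation argument concrete, here is a toy calculation; every number in it is a made-up placeholder of mine, not a real estimate from anyone.

```python
# Toy version of the allocation argument: work done today is a large fraction
# of all the work a "fast takeoff soon" scenario will ever receive, but a tiny
# fraction of what slower/later scenarios will receive. All numbers are placeholders.

p_fast_soon = 0.1        # hypothetical probability of a fast takeoff soon
p_slow_late = 0.9        # hypothetical probability of slower / later scenarios

resources_now = 1.0      # units of effort we can allocate today
future_resources = 100.0 # expected future effort, which only helps late scenarios

# Value today's effort as the scenario probability times the share of that
# scenario's total effort that today's effort represents.
value_if_spent_on_fast_soon = p_fast_soon * (resources_now / resources_now)
value_if_spent_on_slow_late = p_slow_late * (resources_now / (resources_now + future_resources))

print(f"fast-soon: {value_if_spent_on_fast_soon:.3f}")   # 0.100
print(f"slow-late: {value_if_spent_on_slow_late:.3f}")   # 0.009
```

Under these placeholder numbers, the low-probability-but-neglected scenario dominates, which is the shape of the argument as I understand it.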

I personally work on inserting concepts, and moral concepts in particular, into AGI, because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%), but should instead reason from first principles, such as thinking about what materials a rocket is made from and then building up the costs from the ground up.
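A toy contrast between the two reasoning styles, with every figure below a made-up placeholder rather than real rocket data:

```python
# Toy contrast: base-rate reasoning vs first-principles cost reasoning.
# All masses, prices, and counts below are made-up placeholders.

# Base-rate view: judge the venture by the historical success rate of its class.
historical_successes = 0
historical_attempts = 10
base_rate = historical_successes / historical_attempts  # 0.0 -> "never try"

# First-principles view: price the rocket from its raw materials upward.
materials = {                 # (hypothetical mass in kg, commodity price in $/kg)
    "aluminum":     (20_000, 2.0),
    "titanium":     (1_000, 10.0),
    "carbon_fiber": (2_000, 20.0),
    "fuel":         (400_000, 0.5),
}
raw_material_cost = sum(mass * price for mass, price in materials.values())
market_price = 60_000_000     # hypothetical going rate for a comparable rocket

print(f"base rate of success: {base_rate:.0%}")
print(f"raw materials: ${raw_material_cost:,.0f} vs market price: ${market_price:,}")
# A large gap between material cost and market price is the first-principles
# argument that a much cheaper rocket should be buildable.
```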

You have correctly identified that I wrote this post while very unhappy. The comments, as you can see from their lighthearted tone, I wrote while pretty happy.

Yes, I stand by those words even now (that I am happy).

I am more confident that we can produce software that can classify images, music, and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals, and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so; it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone; here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit
