Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC
Digging out this old account to point out that I have not, in fact, detransitioned, though I find it understandable that those kinds of rumours would circulate given my behaviour during and around my experience of psychosis. I'll try to explain some context for the record.
In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than suspicion of malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which, in retrospect, I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, the two of us reinforcing each other's views. I believe (less certainly) that I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.
I don't think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet, in retrospect, the above exchanges did cause me emotional pain and stress, and contributed to internalized sexism and transphobia. I definitely wouldn't describe this as a main causal factor in my psychosis (that was very casual drug use that even Michael chided me for). I can't think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket errors more in this context, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.
I wouldn't say I completely moved past this until years after the events. I think the following things were helpful for that (in no particular order): the intersex-brains model and the associated brain-imaging studies; everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings; getting to experience some parts of female socialization and mother-daughter bonding; full support from friends and family in cases where my gender has come into question; and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don't think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).
Thinking back to these events, engaging with the LessWrong community, and even publicly engaging under my real name all bring back fear and feelings of trauma. I'm not saying this to increase a sense of having been wronged, but as an apology for this not being as long or as well-written as it should be, and for the lateness or absence of any replies/follow-ups.
However, this likely understates the magnitude of the differences in underlying traits across cities, owing to people anchoring on the people they know when answering the questions rather than on the national population.
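To make the anchoring point concrete, here is a tiny simulation of the reference-group effect I have in mind. The city labels, the noise model, and the local_anchor_weight knob are all made up purely for illustration; none of it is taken from the study.

```python
# Toy simulation: if people rate themselves relative to the people they know
# (mostly locals) instead of the national population, between-city differences
# in self-reports shrink relative to the true between-city differences.
import random

random.seed(0)
true_city_means = {"A": 0.0, "B": 1.0}   # city B is genuinely 1 SD higher
n = 10_000

def simulate(local_anchor_weight):
    """Return observed city means when self-reports are partly anchored on locals."""
    observed = {}
    for city, mu in true_city_means.items():
        reports = []
        for _ in range(n):
            trait = random.gauss(mu, 1.0)
            # Subtract (part of) the local mean: comparing yourself to neighbours.
            reports.append(trait - local_anchor_weight * mu)
        observed[city] = sum(reports) / n
    return observed

print(simulate(0.0))  # anchored on the national population: gap ~= 1.0
print(simulate(0.7))  # mostly anchored on locals: observed gap shrinks to ~= 0.3
```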
I think this is a major problem. This is mainly based on taking a brief look at this study a while back and being very suspicious of how explicitly it contradicted so many of my models (e.g. South America having lower Extraversion than North America, and East Asia being the least Conscientious region).
The causal chain feels like a post-justification and not what actually goes on in the child's brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least well enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognizing and remembering when the technique applies).
Great! Thanks!
I just remembered that I still haven't finished this. I saved my survey response partway through, but I don't think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?
I realize this is my fault, and understand if you don't want to do anything extra to fix it.
I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected communities, even if those were in the middle of nowhere.
Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.
Yes, robust cooperation isn't worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize the values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.
After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?
It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted as one, just a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or via a more safety-focused mission statement, would impede your efforts at gathering such a strong coalition.
It's easy to moan over civilizational inadequacy and moodily conclude that the above shows how, as a species, we're so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think about the actual effects for a few minutes:
If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV, and stable reasoning under self-modification and about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm: everyone just doing their own thing and (acausally?) trading with each other.
If we can't quite solve everything we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to secure some minimum level of the cosmic endowment while possibly failing to capture a substantial constant fraction of our potential, because the minimum was achieved at the cost of important values or freedoms.
Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost-perfect superintelligence getting too much relative power, and (2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks due to consciousness: either mistakenly endowing algorithms with it and creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well onto the ideological split between left and right.
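For concreteness, here is a minimal toy sketch of what I mean by robust cooperation, loosely in the spirit of FairBot from the "Robust Cooperation in the Prisoner's Dilemma" paper. It uses a bounded-simulation stand-in rather than the proof-based construction, and the agent names and depth budget are my own illustrative assumptions.

```python
# Toy "robust cooperation": agents that simulate their opponent and cooperate
# only with cooperators. Bounded-simulation variant, not the Löbian/proof-based
# version from the paper.

C, D = "C", "D"

def fair_bot(opponent, me, depth):
    """Cooperate iff a bounded simulation shows the opponent cooperating with us."""
    if depth == 0:
        return C  # simulation budget exhausted: default to cooperation
    return C if opponent(me, opponent, depth - 1) == C else D

def defect_bot(opponent, me, depth):
    """Always defect, regardless of who the opponent is."""
    return D

def play(a, b, depth=10):
    """Run one round of the open-source prisoner's dilemma."""
    return a(b, a, depth), b(a, b, depth)

print(play(fair_bot, fair_bot))    # ('C', 'C') -- mutual cooperation
print(play(fair_bot, defect_bot))  # ('D', 'D') -- FairBot isn't exploited
```

The point of the toy version is just that conditional cooperation of this kind doesn't require the agents to share values, which is why it seems to bear on (2) but not on (1).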
Please see my comment on the grandparent.
I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.