I think this is an important perspective, especially for understanding Eliezer, who places a high value on truth/honesty, often directly over consequentialist concerns.
While this explains true but unpleasant statements like "[Individual] has substantially decreased humanity's odds of survival", it doesn't seem to explain statements like the potted plant one or other obviously-not-literally-true statements, unless one takes the position that full honesty also requires saying all the false and irrational things that pass through one's head. (And even then, I'd expect to see an immediate follow-up of "that's not true of course".)
I agree with this decision. You reference the comment in one of your answers. If it starts taking over, it should be removed, but can otherwise provide interesting meta-commentary.
I think this makes sense as a model of where he is coming from. As a strategy, my understanding of social dynamics is that "I told you so" makes it harder, not easier, for people to change their minds and agree with you going forward.
Not an answer to the question, but I think it's worth noting that people asking for your opinion on EA may not be precise with what question they ask. For example, it's plausible to me that someone could ask "has EA been helpful" when their use case for the info is something like "would a donation to EA now be +EV", and not be conscious of the potential difference between the two questions.
I agree that we'll make new puzzles that will be more rewarding. I don't think that suffering need be involuntary to make its elimination meaningful. If I am voluntarily parched and struggling to climb a mountain with a heavy pack (something I would plausibly reject ASI help with), I would nevertheless feel appreciation if some passerby offered me a drink or offered to lighten my load. Given a guarantee of safety from permanent harm, I think I'd plausibly volunteer to play a role in some game that involved some degree of suffering that could be alleviated.
there are also donation opportunities for influencing AI policy to advance AI safety which we think are substantially more effective than even the best 501c3 donation opportunities
Would you be willing to list these (or to DM me if there's a reason to not list publicly)?
I began to write a long comment about how to possibly identify poverty-restoring forces, but I think we actually should take a step back and ask:
Why do we care about poverty in the first place?
"The utility function is not up for grabs"
Sure, but poverty seems like a rather complex idea to be in our utility function directly rather than instrumentally.
"Well we care about poverty because it causes suffering"
Ok. But why not just talk about reducing suffering then?
"Suffering can have multiple causes. It is helpful to focus on a single cause at a time to produce solutions"
Sure - but we just said we don't know what causes it, so that's not why. Why don't we just talk about eliminating suffering?
"Because that would feel too...utilitarian. Too sterile. Cancer is unfortunate, but poverty is just wrong."
And that's exactly it, I think - we care about 'poverty' in particular because we care about justice. There is something worse about someone dying of a preventable disease than of an unpreventable one. So poverty is not simply a state of resources or of hedonic experiences. It's not even about the poor. Someone suffering from an unpreventable cause is unfortunate; they only become poor once others have the ability to help them and don't. We also care about suffering for its own sake, but poverty is actually a moral defect we see in the other humans who don't help.
Once we frame the discussion this way, it becomes easy to see why universal basic income might not fix human moral defects.*
*And even if we object that poverty is not just about the moral defect, but also about it indirectly causing suffering, it is still much easier to see why UBI might not prevent human moral defects from indirectly causing suffering.
Quote voice seems to "win" this exchange, but I think there are 3 things it is missing:
1. I can't know someone else's joy level with certainty, but despite quote voice accusing unquote voice of having problems taking joy in the real, I don't hear the joy in quote voice either (save for the last reply). Maybe QV is just using "joy in the real" as an applause light instead of actually practicing it.
2. "And you claim to be surprised by this?" - Lack of surprise may be a symptom of having a perfect model of the world, but more often it is a symptom of not actually predicting with your model. For mortals, surprise at the real state of things should be a common occurrence - it is akin to admitting fallibility. Perhaps more importantly, in this conversation, it seems to be shutting down curiosity.
3. Even after the call-out on "explain any possible human behavior", QV continues to use "well it has to [work] somehow" to imply "my specific model of the world is correct". If UQV were arguing for magic or theism, these responses would make sense, but as it is, they seem like a way to avoid admitting "I don't know".
Very well said. I also think more is possible - not nearly as much more as I originally thought, but there is always room for improvement, and I do think there's a real possibility that community effects can be huge. I mean, individual humans are smarter than individual animals, but the real advantages have accrued through society, specialization, teamwork, passing on knowledge, and sharing technology - all communal activities.
And yeah, probably the main barrier boils down to the things you mentioned. People who are interested in self-improvement and truth are a small subset of the population[1]. Across the country/world there are lots of them, but humans have some psychological thing about meeting face to face, and the local density in most places is below critical mass. And having people move to be closer together would be a big ask even if they were already great friends - and the physical distance makes becoming great friends difficult in the first place. As far as I can see, the possible options are:
1. Move into proximity (very costly)
2. Start a community with the very few nearby rationalists (difficult to keep any momentum)
3. Start a community with nearby non-rationalists (could be socially rewarding, but likely to dampen any rationality advantage)
4. Teach people nearby to be rational (ideal, but very difficult)
5. Build an online community (LW is doing this. Could try video meetings, but I predict it would still feel less connected than in person and make momentum difficult)
5b. Try to change your psychology so that online feels like in person. (Also difficult)
6. Do it without a community (The default, but limited effectiveness)
So, I don't know - maybe when AR gets really good we could all hang out in the "metaverse" and it will feel like hanging out in person. Maybe even then it won't - maybe it's just literally having so many other options that makes the internet feel impersonal. If so, weird idea - have LW assign splinter groups and that's the only group you get (maybe you can move groups, but there's some waiting period so you can't 'hop'). And of course, maybe there just isn't a better option than what we're already doing.
Personally - I'm trying to start regularly calling my 2 brothers. They don't formally study rationality but they care about it and are pretty smart. The family connection kinda makes up for the long distance and small group size, but it's still not easy to get it going. I'd like to try to get a close-knit group of friends where I live, though they probably won't be rationalists. But I'll probably need to stop doing prediction markets to have the time to invest for that.
Oh, and what you said about the 5 stages makes a lot of sense - my timing is probably just not lined up with others, and maybe in a few years someone else will ask this and I'll feel like "well I'm not surprised by what rationalists are accomplishing - I updated my model years ago".
I read Scott Alexander say that peddling 'woo' might just be a side effect of a group taking self-improvement seriously while lacking the ability to fund actual studies, and I think that hypothesis makes sense.
I suspect that some of my dissonance does result from an illusion of consistency and a failure to appreciate how multi-faceted people can really be. I naturally think of people as agents and not as a collection of different cognitive circuits. I'm not ready to assume that this explains all of the gap between my expectations and reality, but it's probably part of it.