All of xpym's Comments + Replies

xpym87

Insofar as it’s required for you to pretend that people are nicer than they are to be kind to them, I think you should do that. But your impact will be better if you at least note it if that’s what you’re doing

Unlikely to work for Mr. Portman. Living a life of systematic lies and pretense is difficult and cognitively demanding. Being pro-socially self-deceived to some degree is a much simpler strategy, which is probably why evolution converged on it (paired with some amount of psychopathy to balance/exploit excesses).

policy analysis

Most people obviously aren't cut out for that, and are happier for it, if they live in a reasonably high-trust society.

xpym10

I bet the person says “no”.

I agree, but I think it's important to mention issues like social desirability bias and strategic self-deception here, coupled with the fact that most people just aren't particularly good at introspection.

it’s conflicting desires, not conflicting values

It's both: our minds employ desires in service of pursuing our (often conflicting) values.

Insofar as different values are conflicting, that conflict has already long ago been resolved, and the resolution is: the action which best accords with the person’s values, in this instance, is to get up.

... (read more)
xpym32

the feeling of my head remaining on the pillow is motivating, but the self-reflective idea of myself being in bed is demotivating

This seems to be an example of conflicting values, and its preferred resolution, not a difference between a value and a non-value. Suppose you'd find your pillow replaced by a wooden log - I'd imagine that the self-reflective idea of yourself remedying this state of affairs would be pretty motivating!

4Steven Byrnes
I claim that if you find someone who’s struggling to get out of bed, making groaning noises, and ask them the following question: I bet the person says “no”. Yet, they’re still in fact doing that thing, which implies (tautologically) that they have some desire to do it—I mean, they’re not doing it “by accident”! So it’s conflicting desires, not conflicting values. I don’t think your wooden log example is relevant. Insofar as different values are conflicting, that conflict has already long ago been resolved, and the resolution is: the action which best accords with the person’s values, in this instance, is to get up. And yet, they’re still horizontal. Another example: if someone says “I want to act in accordance with my values” or “I don’t always act in accordance with my values”, we recognize these as two substantive claims. The first is not a tautology, and the second is not a self-contradiction.
xpym83

For at least six months now, we’ve had software assistants that can roughly double the productivity of software development.

Is this the consensus view? I've seen people saying that those assistants give a 10% productivity improvement, at best.

In the last few months, there’s been a perceptible increase in the speed of releases of better models.

On the other hand, the schedules for headline releases (GPT-5, Claude 3.5 Opus) continue to slip, and there are anonymous reports of diminishing returns from scaling. The current moment is interesting in that there are two essentially opposite prevalent narratives barely interacting with each other.

6Carl Feynman
Is this the consensus view? I think it’s generally agreed that software development has been sped up. A factor of two is ambitious! But that’s how it seems to me, and I’ve measured three examples of computer vision programming, each taking an hour or two, by doing them by hand and then with machine assistance. The machines are dumb and produce results that require rewriting. But my code is also inaccurate on a first try. I don’t have any references where people agree with me. And this may not apply to AI programming in general. You ask about “anonymous reports of diminishing returns to scaling.” I have also heard these reports, direct from a friend who is a researcher inside a major lab. But note that this does not imply a diminished rate of progress, since there are other ways to advance besides making LLMs bigger. o1 and o3 indicate the payoffs to be had by doing things other than pure scaling. If there are forms of progress available to cleverness, then the speed of advance need not require scaling.
xpym10

the principles of EA imply that

The principles of Christianity not only imply that, they clearly spell it out: "If you want to be perfect, then go and sell your possessions and give the money to the poor", and yet Christianity was uncontroversial in the West for centuries, and the current secular "common sense" morality hasn't diverged particularly far. EAs just take ostensibly common sense principles far too seriously compared to the unspoken social consensus, in a way that's cringe for normal people. Critics don't really have a principled response to the core EA ideas either, but they don't want to appear morally delinquent, so they generally try to dismiss EAs without seriously engaging.

xpym*20

When circling was first discussed here, there was a comment that led to a lengthy discussion about boundaries, but nobody seemed to dispute its other main claim, that "it is highly unlikely that [somebody] would have 3-11 people they reasonably trusted enough to have [group] sex with". Do you agree with that statement, and if so, do you think that the circling/sex analogy is invalid?

3Kaj_Sotala
The truth of that literal statement depends on exactly how much trust someone would need in somebody else before having sex with them - e.g. to my knowledge, studies tend to find that most single men but very few if any women would be willing to have sex with a total stranger. Though I've certainly also known women who have had a relatively low bar of getting into bed with someone, even if they wouldn't quite do it with a total stranger. But more relevantly, even if that statement was correct, I don't think it'd be a particularly good analogy to Circling. It seems to involve the "obligatory openness" fallacy that I mentioned before. I'm not sure why some people with Circling experience seemed to endorse it, but I'm guessing it has to do with some Circling groups being more into intimacy than others. (At the time of that discussion, I had only Circled once or twice, so probably didn't feel like I had enough experience to dispute claims by more experienced people.) My own experience with Circling is that it's more like meeting a stranger for coffee. If both (all) of you feel like you want to take it all the way to having sex, you certainly can. But if you want to keep it to relatively shallow and guarded conversation because you don't feel like you trust the other person enough for anything else, you can do that too. Or you can go back and forth in the level of intimacy, depending on how the conversation feels to you and what topics it touches on. In my experience of Circling, I definitely wouldn't say that it feeling anywhere near as intimate as sex would be the norm. You can also build up that trust over time. I think Circling is best when done with people who you already have some pre-existing reason to trust, or in a long-term group where you can get to know the people involved. That way, even if you start at a relatively shallow level, you can go deeper over time if (and only if) that feels right.
xpym10

And it may not be our permanent condition. The future may hold something more like a “foundation” or a “framework” or a “system of the world” that people actually trust and consider legitimate.

Our current condition is a product of our material circumstances, and those definitely aren't permanent in their essential character, as many people have variously noted. Things are still very much in flux, and any eventual medium-to-long term frameworks would significantly depend on (possibly wildly divergent) trajectories that major trends will take in the foreseeable... (read more)

xpym10

Though generally it doesn’t seem to me like social stigma would be a very effective way of reducing unhealthy behaviors

I agree, as far as it goes, but surely we shouldn't be quick to dismiss stigma, as uncouth as it might seem, if our social technology isn't developed enough yet to actually provide any very effective approaches instead? Humans are wired to care about status a great deal, so it's no surprise that traditional enforcement mechanisms tend to lean heavily into that.

I think generally people can maintain healthy habits much more consistently

... (read more)
xpym32

But I don’t think this is always true

Neither do I, of course, but my impression was that you thought this was never true.

But this still doesn’t justify the assertion that “expressing” the preference is “wrong.”

I do agree that the word "wrong" doesn't feel appropriate here; something like "ill-advised" might work better instead. If you're a sadist, or a pedophile, making this widely known is unlikely to be a wise course of action.

2DaystarEld
Right, those words definitely seem more accurate to me!
xpym10

I do not believe preferences themselves, or expressing them, should ever be considered wrong

Suppose that you have a preference for inflicting suffering on others. You also have a preference for being a nice person whose company other people enjoy. Clearly those preferences would be in constant conflict, which would likely cause you discomfort. This doesn't mean that either of those preferences is "bad", in a perfectly objective cosmic sense, but such a definition of "bad" doesn't seem particularly useful.

1DaystarEld
The implication that the preference itself is bad only works with assumptions that the preference will cause harm, to yourself or others, even if you don't act on it. But I don't think this is always true; it's often a matter of degree or context, and how the person's inner life works. We could certainly say it is inconvenient or dysfunctional to have a preference that causes suffering for the self or others, and maybe that's what you mean by "bad." But this still doesn't justify the assertion that "expressing" the preference is "wrong." That's the thing that feels particularly presumptuous, to me, about how preferences should be distinguished from actions.
xpym30

Now it would certainly be tempting to define rationality as something like “only taking actions that you endorse in the long term”, but I’d be cautious of that.

Indeed, and there's another big reason for that - trying to always override your short-term "monkey brain" impulses just doesn't work that well for most people. That's the root of akrasia, which certainly isn't a problem that self-identified rationalists are immune to. What seems to be a better approach is to find compromises, where you develop workable long-term strategies which involve neither ... (read more)

3Kaj_Sotala
+1. Less smoking does seem better than more smoking. Though generally it doesn't seem to me like social stigma would be a very effective way of reducing unhealthy behaviors - lots of those behaviors are ubiquitous despite being somewhat low-status. I think the problem is at least threefold:

* As already mentioned, social stigma tends to cause optimization to avoid having the appearance of doing the low-status thing, instead of optimization to avoid doing the low-status thing. (To be clear, it does cause the latter too, but it doesn't cause the latter anywhere near exclusively.)
* Social stigma easily causes counter-reactions where people turn the stigmatized thing into an outright virtue, or at least start aggressively holding that it's not actually that bad.
* Shame makes things wonky in various ways. E.g. someone who feels they're out of shape may feel so much shame about the thought of doing badly if they try to exercise, they don't even try. For compulsive habits like smoking, there's often a loop where someone feels bad, turns to smoking to feel momentarily better, then feels even worse for having smoked, then because they feel even worse they are drawn even more strongly into smoking to feel momentarily better, etc.

I think generally people can maintain healthy habits much more consistently if their motivation comes from genuinely believing in the health benefits and wanting to feel better. But of course that's harder to spread on a mass scale, especially since not everyone actually feels better from healthy habits (e.g. some people feel better from exercise but some don't). Then again, for the specific example of smoking in particular, stigma does seem to have reduced the amount of it (in part due to mechanisms like indoor smoking bans), so sometimes it does work anyway.
xpym72

My biggest problem with the trans discourse is that it's a giant tower of motte-and-baileys, and there's no point where it's socially acceptable to get off the crazy train.

Sure, at this point it seems likely that gender dysphoria isn't an entirely empty notion. Implying that this condition might be in any way undesirable is already a red line though, with discussions of how much of it is due to social contagion being very taboo, naturally. And that only people experiencing bad enough dysphoria to require hormones and/or surgery could claim to be legitimate... (read more)

xpym1-1

I'd say that atheism had already set the "conservatives not welcome" baseline way back when, and this resulted in the community norms evolving accordingly. Granted, these days the trans stuff is more salient, but the reason it flourished here even more than in other tech-adjacent spaces has much to do with that early baseline.

Ben Shapiro and Jordan Peterson have both said that the intellectual case for atheism is strong, and both remain very popular on the right.

Sure, but somebody admitting that certainly isn't the modal conservative.

5Sting
I wouldn't call the tone back then "conservatives not welcome". Conservatism is correlated with religiosity, but it's not the same thing. And I wouldn't even call the tone "religious people are unwelcome" -- people were perfectly civil with religious community members.  The community back then were willing to call irrational beliefs irrational, but they didn't go beyond that. Filtering out people who are militantly opposed to rational conclusions seems fine. 
xpym*2116

A more likely explanation, it seems to me, is that a large part of early LW/sequences was militant atheism, with religion being the primary example of the low "sanity waterline", and this hasn't been explicitly disclaimed since, at best de-emphasized. So this space had done its best to repel conservatives much earlier than pronouns and other trans issues entered the picture.

Viliam2316

I approve of the militant atheism, because there are just too many religious people out there, so without making a strong line we would have an Eternal September of people joining Less Wrong just to say "but have you considered that an AI can never have a soul?" or something similar.

And if being religious is strongly correlated with some political tribe, I guess it can't be avoided.

But I think that going further than that is unnecessary and harmful.

Actually, we should probably show some resistance to the stupid ideas of other political tribes, just to make... (read more)

5Sting
Maybe, but Martin Randall and Matt Gilliland have both said that the trans explanation matches their personal experience, and Eliezer Yudkowsky agrees with the explanation as well. I have no insider knowledge and am just going off what community members say.

1. Do you have any particular reasons for thinking atheism is a bigger filter than pronouns and other trans issues?
2. It's not clear what your position is. Do you think the contribution of pronouns and other trans issues is negligible? Slightly smaller than atheism? An order of magnitude smaller?

I suspect atheism is a non-negligible filter, but both smaller than trans issues, and less likely to filter out intelligent truth-seeking conservatives. Atheism is a factual question with a great deal of evidence in favor, and is therefore less politically charged. Ben Shapiro and Jordan Peterson have both said that the intellectual case for atheism is strong, and both remain very popular on the right.
xpym30

Is this related to the bounty, or a separate project?

7Arjun Panickssery
Yeah it's for the bounty. Hanson suggested that a list of links might be preferred to a printed book, at least for now, since he might want to edit the posts.
xpym30

Furthermore, most of these problems can be addressed just fine in a Bayesian framework. In Jaynes-style Bayesianism, every proposition has to be evaluated in the scope of a probabilistic model; the symbols in propositions are scoped to the model, and we can’t evaluate probabilities without the model. That model is intended to represent an agent’s world-model, which for realistic agents is a big complicated thing.

It still misses the key issue of ontological remodeling. If the world-model is inadequate for expressing a proposition, no meaningful probability can be assigned to it.
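As a minimal illustration of the scoping point under discussion (a toy model with hypothetical variable names, not anything from the post itself): in a Jaynes-style setup, a proposition's probability is a sum over the worlds that a particular model distinguishes, so a proposition whose vocabulary lies outside the model's ontology doesn't get a low probability - it gets no probability at all.

```python
# Toy Jaynes-style model: a joint distribution over named binary variables.
# Propositions are predicates over assignments, and are only evaluable
# within the scope of a model whose ontology contains their vocabulary.

class ToyModel:
    def __init__(self, variables, joint):
        self.variables = list(variables)  # the model's ontology
        self.joint = joint                # assignment tuple -> probability

    def prob(self, proposition, mentions):
        # A proposition mentioning variables outside the ontology has no
        # probability at all - the "ontological remodeling" failure mode.
        missing = set(mentions) - set(self.variables)
        if missing:
            raise ValueError(f"{missing} not in this model's ontology")
        return sum(p for world, p in self.joint.items()
                   if proposition(dict(zip(self.variables, world))))

# Hypothetical example: a model that only knows about rain and sprinklers.
model = ToyModel(
    ["rain", "sprinkler"],
    {(True, True): 0.05, (True, False): 0.25,
     (False, True): 0.20, (False, False): 0.50},
)

print(model.prob(lambda w: w["rain"], mentions=["rain"]))  # 0.3
# model.prob(lambda w: w["quark_flavor"], mentions=["quark_flavor"])
# -> ValueError: the model cannot even express the question.
```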

xpym54

Killing oneself with high certainty of effectiveness is more difficult than most assume.

Dying naturally also isn't as smooth as plenty of people assume. I'm pretty sure that "taking things into your own hands" leads to a greater expected reduction in suffering in most cases, and it's not informed rational analysis that prevents people from taking that option.

If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it?

Yes? I mean, unless we entertain some extreme abstractions like it simulating all possible minds of a certain complexity or whatever.

xpym10

This isn’t really a problem with alignment

I'd rather put it that resolving that problem is a prerequisite for the notion of "alignment problem" to be meaningful in the first place. It's not technically a contradiction to have an "aligned" superintelligence that does nothing, but clearly nobody would in practice be satisfied with that.

2Roko
you can have an alignment problem without humans. E.g. two strawberries problem.
xpym10

Because humans have incoherent preferences, and it's unclear whether a universal resolution procedure is achievable. I like how Richard Ngo put it, "there’s no canonical way to scale me up".

2Roko
This isn't really a problem with alignment so there's no need to address it here. Alignment means the transmission of a preference ordering to an action sequence. Lacking a coherent preference ordering for states of the universe (or histories, for that matter) is not an alignment problem.
xpym10

Hmm, right. You only need assume that there are coherent reachable desirable outcomes. I'm doubtful that such an assumption holds, but most people probably aren't.

2Roko
Why?
xpym30

We’ll say that a state is in fact reachable if a group of humans could in principle take actions with actuators - hands, vocal cords, etc - that could realize that state.

The main issue here is that groups of humans may in principle be capable of great many things, but there's a vast chasm between "in principle" and "in practice". A superintelligence worthy of the name would likely be able to come up with plans that we wouldn't in practice be able to even check exhaustively, which is the sort of issue that we want alignment for.

2Roko
This is not a problem for my argument. I am merely showing that any state reachable by humans, must also be reachable by AIs. It is fine if AIs can reach more states.
xpym*21

I think that saying that "executable philosophy" has failed is missing Yudkowsky's main point. Quoting from the Arbital page:

To build and align Artificial Intelligence, we need to answer some complex questions about how to compute goodness

He claims that unless we learn how to translate philosophy into "ideas that we can compile and run", aligned AGI is out of the question. This is not a worldview, but an empirical proposition, the truth of which remains to be determined.

There's also an adjacent worldview, which suffuses the Sequences, that it's possible... (read more)

xpym132

the philosophy department thinks you should defect in a one-shot prisoners’ dilemma

Without further qualifications, shouldn't you? There are plenty of crazy mainstream philosophical ideas, but this seems like a strange example.

2Ben Pace
Oops, I think I should've written that they think you should always defect in a one-shot prisoners' dilemma. My understanding is that the majority of philosophers endorse Causal Decision Theory, in which you should always defect in a one-shot prisoners' dilemma, even if you're playing with a copy of yourself, whereas I think Logical Decision Theory is superior, which cooperates in that situation. [I've edited the debate to include the word 'always'.]
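For concreteness, here is the standard payoff bookkeeping behind that exchange (illustrative numbers; any payoffs ordered the same way behave identically): holding the opponent's move fixed, defection dominates, which is the CDT recommendation; against an exact copy, where the only reachable outcomes are the two symmetric ones, cooperation wins, which is the LDT-style answer.

```python
# One-shot prisoner's dilemma: (my_move, their_move) -> my payoff.
# Illustrative numbers; higher is better.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# CDT-style reasoning: treat the opponent's move as causally fixed and
# compare. Defection wins for every fixed opponent move, i.e. it dominates.
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# Against an exact copy, choices are perfectly correlated: the only
# reachable outcomes are (C, C) and (D, D). Conditioning on that
# correlation (as LDT does) makes cooperation the better move.
best_vs_copy = max("CD", key=lambda my_move: PAYOFF[(my_move, my_move)])
print(best_vs_copy)  # "C": mutual cooperation (3) beats mutual defection (1)
```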
xpym30

Yes, I buy the general theory that he was bamboozled by misleading maps. My claim is that it's precisely the situation where a compass should've been enough to point out that something had gone wrong early enough for the situation to have been salvageable, in a way that sun clues plausibly wouldn't have.

7Adam Marsland
I think confirmation bias plays a role here.  At the point where I think Bill probably went wrong (of course we will never know for sure), there's a junction of two basically identical jeep trails, neither of which are marked on the park map or most of the then-current trail maps (they are on the topo map).  There's 3 or 4 different ways he might have gone down the wrong road - others have mentioned the two I put out there, there's a couple of other ways that are possible but less plausible so I didn't bother with them - but he should have noticed he was going south and not east, by the setting sun.  However, because of the angle of the road and the mountain cover, plus having an obvious road to follow, I can see why he wouldn't have.  The sun would still more or less be setting behind him, and to his right, on either route.  If he was focused on making time, it's unlikely he'd note the exact angle of the sun.   My feeling is that because Bill was in a hurry, he did not get out things like a compass or (maybe, depending on how he got lost) more detailed maps until he knew he was lost and by that time he was screwed by the darkness and the topography of the area which wouldn't allow him to dead reckon back unless he could find the trail again, and at that point it was a wash, of which there are a half dozen in the area.  I basically cover this in the video, there's a lot of information there so it can be hard to follow, but there are reasons why the compass didn't get him out of the situation.
xpym40

Well, the thing I'm most interested in is the basic compass. From what I can see on the maps, he was going in the opposite direction from the main road for a long time after it should have become obvious that he had been lost. This is a truly essential thing that I've never gone into unfamiliar wilderness without.

2eukaryote
Ah! I forget about a compass, honestly. He definitely came in with maps (and once he was out there for, like, over eight hours, he would have had cues from the sun.) A lot of the mystery / thing to explain is indeed "why despite being a reasonably competent hiker and map user, Ewasko would have traveled so far in the opposite direction from his car"; defs recommend Adam's videos because he lays out what seems like a very plausible story there. (EDIT: was rewatching Adam's video, yes Bill absolutely had a compass and had probably used it not long before passing, they found one with his backpack near the top. Forgot that.)
xpym40

If you go out into the wilderness, bring plenty of water. Maybe bring a friend. Carry a GPS unit or even a PLB if you might go into risky territory. Carry the 10 essentials.

Most people who die in the wilderness have done something stupid to wind up there. Fewer people die who have NOT done anything glaringly stupid, but it still happens, the same way. Ewasko’s case appears to have been one of these.

Hmm, so is there evidence that he did in fact follow those common-sense guidelines and died in spite of that? Google doesn't tell me what was found alongside his remains besides a wallet.

8Zian
I wouldn't call them "common-sense". When a modern-day tragedy (death of a child) is required before "hug a tree and survive" becomes a slogan, it seems safe to say that they are counter-intuitive. If humans did the right thing by default (e.g. "If you are lost, 'Hug-A-Tree' and stay put."), there would be fewer sad stories.
4eukaryote
Check out Marsland's post-coroner's-report video for all the details, but tentatively it looks like Ewasko:

* Hiked alone
* Didn't tell someone the exact trailhead/route he'd be hiking (later costing time, while he was still alive, while rescuers searched other parts of the park)
* Didn't have a GPS unit / PLB, just a regular (non-smart) cellphone (I don't actually know to what degree a regular smartphone works as a dedicated GPS unit - like, when you're at the edges of regular coverage, is it doing location stuff from phone + data coverage, or does it have a GPS chip? - but either way, he didn't have a smartphone)
* Had an unclear number of the ten essentials - it seems like a fair number? But (as someone in the youtube comments pointed out) if he had lit a fire, rescuers could have found him from the smoke, so either he didn't think of that or he just didn't have a firestarter.

Though I want to point out that doing all of these things - well, it's not an insane amount of preparation, but it's above bare minimum common sense / "anyone going out into the woods who thinks at all about safety is already doing this." I've had training in wilderness/outdoor safety type stuff and I've definitely done day hikes while less prepared than Ewasko was.
xpym98

They don't, of course, but if you're lucky enough not to be located among the more zealous of them and be subjected to mandatory struggle sessions, their wrath will generally be pointed at more conspicuous targets. For now, at least.

xpym42

We have a significant comparative advantage to pretty much all of Western philosophy.

I do agree that there are some valuable Eastern insights that haven't yet penetrated the Western mainstream, so work in this direction is worth a try.

We believe we’re in a specific moment in history where there’s more leverage than usual, and so there’s opportunity. We understand that chances are slim and dim.

Also reasonable.

We have been losing the thread to ‘what is good’ over the millennia. We don’t need to reinvent the wheel on this; the answers have been around

... (read more)
4Unreal
Hm, you know I do buy that also. The task is much harder now, due to changing material circumstances as you say. The modern culture has in some sense vaccinated itself against certain forms of wisdom and insight. We acknowledge this problem and are still making an effort to address it, using modern technology. I cannot claim we're 'anywhere close' to resolving this? We're just firmly GOING to try, and we believe we in particular have a comparative advantage, due to a very solid community of spiritual practitioners. We have AT LEAST managed to get a group of modern millennials + Gen-Zers (with all the foibles of this group, with their mental hang-ups and all -- I am one of them)... and successfully put them through a training system that 'unschools' their basic assumptions and provides them the tools to personally investigate and answer questions like 'what is good' or 'how do i live' or 'what is going on here'. There's more to say, but I appreciate your engagement. This is helpful to hear.
xpym54

I don't think that the intelligence community and the military are likely to be much bigger reckless idiots than Altman and co.; what seems more probable is that their interests and attitudes genuinely align.

-2O O
Do you think Sam Altman is seen as a reckless idiot by anyone aside from the pro-pause people in the Lesswrong circle? 
2Anders Lindström
Of course they are not idiots, but I am talking about the pressure to produce results fast without having doomers and skeptics holding them back. A 1, 2, or 3 year delay for one party could mean that they lose. If it had been publicly known that the Los Alamos team were building a new bomb capable of destroying cities, and that they were not sure whether the first detonation could lead to an uncontrollable chain reaction destroying earth, don't you think there would have been quite a lot of debate and a long delay in the Manhattan project? If the creation of AGI is one of the biggest events on earth since the advent of life, and those who get it first can (will) be the all-powerful masters, why would that not entice people to take bigger risks than they otherwise would have?
xpym*-30

most modern humans are terribly confused about morality

The other option is being slightly less terribly confused, I presume.

This is why MAPLE exists, to help answer the question of what is good

Do you consider yourselves to have a significant comparative advantage in this area relative to all other moral philosophers throughout the millennia whose efforts weren't enough to lift humanity from the aforementioned dismal state?

-2Unreal
We have a significant comparative advantage to pretty much all of Western philosophy. I know this is a 'bold claim'. If you're further curious you can come visit the Monastic Academy in Vermont, since it seems best 'shown' rather than 'told'. But we also plan on releasing online content in the near future to communicate our worldview. We do see that all the previous efforts have perhaps never quite consistently and reliably succeeded, in both hemispheres. (Because, hell, we're here now.) But it is not fair to say they have never succeeded to any degree. There have been a number of significant successes in both hemispheres. We believe we're in a specific moment in history where there's more leverage than usual, and so there's opportunity. We understand that chances are slim and dim. We have been losing the thread to 'what is good' over the millennia. We don't need to reinvent the wheel on this; the answers have been around. The question now is whether the answers can be taught to technology, or whether technology can somehow be yoked to the good / ethical, in a way that scales sufficiently.
xpym119

Oh, sure, I agree that an ASI would understand all of that well enough, but even if it wanted to, it wouldn't be able to give us either all of what we think we want, or what we would endorse in some hypothetical enlightened way, because neither of those things comprises a coherent framework that robustly generalizes far out-of-distribution for human circumstances, even for one person, never mind the whole of humanity.

The best we could hope for is that some-true-core-of-us-or-whatever would generalize in such a way, the AI recognizes this and propagates that w... (read more)

xpym119

I expect this because humans seem agent-like enough that modeling them as trying to optimize for some set of goals is a computationally efficient heuristic in the toolbox for predicting humans.

Sure, but the sort of thing that people actually optimize for (revealed preferences) tends to be very different from what they proclaim to be their values. This is a point not often raised in polite conversation, but to me it's a key reason for the thing people call "value alignment" being incoherent in the first place.

8Lucius Bushnaq
I kind of expect that things-people-call-their-values-that-are-not-their-revealed-preferences would be a concept that a smart AI that predicts systems coupled to humans would think in as well. It doesn't matter whether these stated values are 'incoherent' in the sense of not being in tune with actual human behavior, they're useful for modelling humans because humans use them to model themselves, and these self-models couple to their behavior. Even if they don't couple in the sense of being the revealed-preferences in an agentic model of the humans' actions. Every time a human tries and mostly fails to explain what things they'd like to value if only they were more internally coherent and thought harder about things, a predictor trying to forecast their words and future downstream actions has a much easier time of it if they have a crisp operationalization of the endpoint the human is failing to operationalize.  An analogy: If you're trying to predict what sorts of errors a diverse range of students might make while trying to solve a math problem, it helps to know what the correct answer is. Or if there isn't a single correct answer, what the space of valid answers looks like.
xpym41

But meditation is non-addictive.

Why not? An ability to get blissed-out on demand sure seems like it could be dangerous. And, relatedly, I have seen stuff mentioning jhana addicts a few times.

7lsusr
I think that's a completely reasonable question to ask. The answer is non-obvious. To fully answer your question is beyond the scope of this post, but I think there are two systems operating in the brain. One of them is a reinforcing operant conditioning system that can get addicted. Jhanic bliss states require that the operant conditioning system not be active, so it's not getting reinforced.

Ingram has actively hunted for jhana addicts for twenty years and hasn't found any. The reason why becomes obvious once one gains a bit of insight into why/how jhana works. Though it's trickier to describe.

xpym60

Indeed, from what I see there is consensus that academic standards on elite campuses are dramatically down, likely this has a lot to do with the need to sustain holistic admissions.

As in, the academic requirements, the ‘being smarter’ requirement, has actually weakened substantially. You need to be less smart, because the process does not care so much if you are smart, past a minimum. The process cares about… other things.

So, the signalling value of their degrees should be decreasing accordingly, unless one mainly intends to take advantage of the process... (read more)

xpym10

I think Scott’s name is not newsworthy either.

Metz/NYT disagree. He doesn't completely spell out why (it's not his style), but, luckily, Scott himself did:

If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.

Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name, is the long and short of it.

xpym1-2

Right, the modern civilization point is more about the "green" archetype. The "yin" thing is of course much more ancient and subtle, but even so I doubt that it (and philosophy in general) was a major consideration before the advent of agriculture leading to greater stability, especially for the higher classes.

xpym10

and another to actually experience the insights from the inside in a way that shifts your unconscious predictions.

Right, so my experience around this is that I'm probably one of the lucky ones in that I've never really had those sorts of internal conflicts that make people claim that they suffer from akrasia, or excessive shame/guilt/regret. I've always been at peace with myself in this sense, and so reading people trying to explain their therapy/spirituality insights usually makes me go "Huh, so apparently this stuff doesn't come naturally to most people... (read more)

xpym*30

Thanks for such a thorough response! I have enjoyed reading your stuff over the years; of all the spirituality-positive people, I find your approach especially lucid and reasonable, up there with David Chapman's.

I also agree with many of the object-level claims that you say spiritual practices helped you reach, like the multi-agent model of mind, cognitive fusion, etc. But, since I seem to be able to make sense of them without having to meditate myself, it has always left me bemused as to whether meditation really is the "royal road" to these kinds of insights... (read more)

2romeostevensit
I think cognitive understanding is overrated and physical changes to the CNS are underrated, as explanations for positive change from practices.
2Kaj_Sotala
Thank you! That's high praise. :) Heh, I remember that at one point, a major point of criticism about people talking about meditation on LW was that they were saying something like "you can't understand the benefits of meditation without actually meditating so I'm not going to try, it's too ineffable". Now that I've tried explaining things, people wonder what the point of meditating might be if they can understand the explanation without meditating themselves. :) (I'm not annoyed or anything, just amused. And I realize that you're not one of the people who was making this criticism before.) Anyway, I'd say it's one thing to understand an explanation of the general mechanism of how insights are gotten, and another to actually experience the insights from the inside in a way that shifts your unconscious predictions. That being said, is it worth the effort for you? I don't know, we kinda concluded in our dialogue that it might not be for everyone. And there are risks too. Maybe give some of it a try if you haven't already, see if you feel motivated to continue doing it for the immediate benefits, and then just stick to reading about it out of curiosity if not? Good question. One thing that I'm particularly confused about is why me and Scott Alexander seem to have such differing views on the effectiveness of the "weird therapies". My current position is just something like... "people seem to inhabit genuinely different worlds for reasons that are somewhat mysterious, so they will just have different experiences and priors leading to different beliefs, and often you just have to go with your own beliefs even if other smart people disagree because just substituting the beliefs of others for your own doesn't seem like a good idea either". And then hopefully if we continue discussing our reasons for our beliefs for long enough, at some point someone will figure out something. My current epistemic position is also like... it would be interesting to understand the reason, and
xpym10

I think western psychotherapies are predicated on incorrect models of human psychology.

Yet they all seem to have positive effects of similar magnitude. This suggests that we don't understand the mechanism through which they actually work, and it seems straightforward to expect that this extends to less orthodox practices.

RCTs mostly can’t capture the effects of serious practice over a long period of time

But my understanding is that the benefits of (good) spiritual practices are supposed to be continuous, if not entirely linear. The amount of effort you invest correlates with the amount of benefit you get, until enlightenment and becoming as gods.

2romeostevensit
Was not linear for me afaict
xpym10

Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)

What is your take on the Dodo bird verdict, in relation to both therapy and Buddhism-adjacent things? All this stuff seems to be very heavy on personal anecdotes and just-so stories, and light on RCT-type things. Maybe there's a there there, but it doesn't seem like serious systematic... (read more)

7Kaj_Sotala
The first thing to note is that that very page says that the state of evidence on the verdict is mixed, with different studies pointing in different directions, results depending on how you conduct your meta-analyses, and generally significant disagreement about whether this is really a thing. I also think that this comment from @DaystarEld, our resident rationalist therapist, makes a lot of sense: This touches upon a related issue, which is that there are some serious challenges with trying to apply an RCT-type methodology on something like this. For an RCT, you'll want to try to standardize things as much as possible, so that - for example - if you are measuring the effectiveness of Cognitive Behavioral Therapy, then every therapist you've classified as "doing CBT" actually does do CBT and nothing else. But a good therapist won't just blindly apply one method, they'll consider what they think might work best for this particular client and then use that. Suppose you have a study where therapists A and B are both familiar with both CBT and Internal Family Systems. Therapist A is assigned to the CBT condition and therapist B is assigned to the IFS condition. As a result, A spends some time doing CBT on clients CBT is a poor match for, and B spends some time doing IFS on clients IFS is a poor match for. The study finds that CBT and IFS have roughly similar, moderate efficacy. What the study fails to pick up on is that if both A and B had been allowed to pick the method that works best on each client, doing IFS for some and CBT for some, then the effect of the method might have been significantly greater. But you can't really compare the efficiency of methods by doing an RCT where everyone is allowed to just do whatever method they like, or worse, some hybrid method that pulls in from many different therapy techniques. Or maybe you could do it and just ask the therapists to write down what method they used with each client afterward... but that would probably require...
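A toy simulation of the confound Kaj describes (entirely made-up effect sizes; the method names are just taken from the comment above): if outcomes depend on client-method match, an RCT that locks each therapist into one method measures both methods at the same washed-out average, hiding the gain from letting therapists choose per client.

```python
import random

random.seed(0)

# Each client has a (hypothetical) best-fit method; matching it helps more.
def outcome(client_type, method):
    return 1.0 if method == client_type else 0.4  # made-up effect sizes

clients = [random.choice(["CBT", "IFS"]) for _ in range(10_000)]

# RCT conditions: every client in an arm gets that arm's method.
rct_cbt = sum(outcome(c, "CBT") for c in clients) / len(clients)
rct_ifs = sum(outcome(c, "IFS") for c in clients) / len(clients)

# Naturalistic practice: the therapist picks the matching method per client.
matched = sum(outcome(c, c) for c in clients) / len(clients)

print(f"RCT arm, everyone gets CBT:   {rct_cbt:.2f}")  # ~0.7
print(f"RCT arm, everyone gets IFS:   {rct_ifs:.2f}")  # ~0.7
print(f"Method matched to the client: {matched:.2f}")  # 1.0
```

Both fixed-method arms come out identical, exactly the Dodo-bird pattern, even though method choice matters a great deal in the matched condition.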
3romeostevensit
I think western psychotherapies are predicated on incorrect models of human psychology. RCTs mostly can't capture the effects of serious practice over a long period of time, but of the ones that have tried, the most robust effect is lowered neuroticism, afaik. This was also my experience. It corresponded to a big positive shift subjectively, as well as expressions of shock from friends and family about the change.
xpym20

That is, for all its associations with blue (and to a lesser extent, black), rationality (according to Yudkowsky) is actually, ultimately, a project of red. The explanatory structure is really: red (that is, your desires), therefore black (that is, realizing your desires), therefore blue (knowledge being useful for this purpose; knowledge as a form of power).

Almost. The explanatory structure is: green (thou art godshatter), therefore red, therefore black, therefore blue. Yudkowsky may not have a green vibe, as you describe it in this series, but he certainly doesn't shy away from acknowledging that there's no ultimate escaping from the substrate.

xpym*6-1

Green is the idea that you don’t have to strive towards anything.

Can only be said by somebody not currently starving, freezing/parched or chased by a tiger. Modern civilization has insulated us from those "green" delights so thoroughly that we have an idealized conception far removed from how things routinely are in the natural world. Self-preservation is the first thing that any living being strives towards, the greenest thing there is, any "yin" can be entertained only when that's sorted out.

5Daphne_W
Fighting with tigers is red-green, or Gruul by MTG terminology. The passionate, anarchic struggle of nature red in tooth and claw. Using natural systems to stay alive even as it destroys is black-green, or Golgari. Rot, swarms, reckless consumption that overwhelms. Pure green is a group of prehistoric humans sitting around a campfire sharing ghost stories and gazing at the stars. It's a cave filled with handprints of hundreds of generations that came before. It's cats lounging in a sunbeam or birds preening their feathers. It's rabbits huddling up in their dens until the weather is better, it's capybaras and monkeys in hot springs, and bears lazily going to hibernate. These have intelligible justifications, sure, but what do these animals experience while engaging in these activities? Most vertebrates seem to have a sense of green, of relaxation and watching the world flow by. Physiologically, when humans and other animals relax, the sympathetic nervous system is suppressed and the parasympathetic system stays/becomes active. This causes the muscles to relax and causes the blood stream to prioritize digestion. For humans at least, stress and the pressure to find solutions right now decrease and the mind wanders. Attention loses its focus but remains high-bandwidth. This green state is where people most often come up with 'creative' solutions that draw on a holistic understanding of the situation. Green is the notion that you don't have to strive towards anything, and the moment an animal does need to strive for something they mix in red, blue, black, or white, depending on what the situation calls for and the animal's evolved color toolset. The colors exist because no color on its own is viable. Green can't keep you alive, and that's okay, it isn't meant to.
2SeñorDingDong
A sensible point, though dating yin to the advent of 'modern civilization' is too extreme. The 'spiritual' or 'yin-like' aspects of green have a long history pre-dating modern civilization. The level of material security required before one can 'indulge in yin' is probably extremely low (though of course strongly dependent on local environmental conditions). 
2Noosphere89
Yeah, the basic failure mode of green is that it is reliant on cartoonish descriptions of nature that are much closer to Pocahontas or really any Disney movie than real-life nature, and in general is extremely non-self-reliant in the sense that it relies heavily on both Blue's and Red's efforts to preserve the idealized Green. Otherwise, it collapses into large-scale black and arguably red personalities of nature.
xpym1-2

But some of them don’t immediately discount the Spokesperson’s false-empiricism argument publicly

Most likely as a part of the usual arguments-as-soldiers political dynamic.

I do think that there's an actual argument to be made that we have much less empirical evidence regarding AIs compared to Ponzis, and plenty of people on both sides of this debate are far too overconfident in their grand theories, EY very much included.

xpym95

Sure, there is common sense, available to plenty of people, of which reference classes apply to Ponzi schemes (but, somehow, not to everybody, far from it). Yudkowsky's point, however, is that the issue of future AIs is entirely analogous, so people who disagree with him on this are as dumb as those taken in by Bernies and Bankmans. Which just seems empirically false - I'm sure that the proportion of AI doom skeptics among ML experts is much higher than that of Ponzi believers among professional economists. So, if there is progress to be made here, it probably lies in grappling with whatever asymmetries exist between these situations. Telling skeptics for the hundredth time that they're just dumb doesn't look promising.

1Ben Livengood
I mean, the Spokesperson is being dumb, the Scientist is being confused. Most AI researchers aren't even being Scientists, they have different theoretical models than EY. But some of them don't immediately discount the Spokesperson's false-empiricism argument publicly, much like the Scientist tries not to. I think the latter pattern is what has annoyed EY and what he writes against here. However, a large number of current AI experts do recently seem to be boldly claiming that LLMs will never be sufficient for even AGI, not to mention ASI. So maybe it's also aimed at them a bit.
xpym51

And due to obvious selection effects, such people are most likely to end up in need of one. Must be a delightful job...

xpym30

The standard excuse is that the possibility to ruin everything was a necessary cost of our freedom, which doesn’t make much sense

There's one further objection to this, to which I've never seen a theist respond.

Suppose it's true that freedom is important enough to justify the existence of evil. What's up with heaven then? Either there's no evil there and therefore no freedom (which is still somehow fine, but if so, why the non-heaven rigmarole then?), or both are there and the whole concept is incoherent.

xpym30

That's probably Kevin's touch. Robin has this almost inhuman detachment, which on the one hand allows him to see things most others don't, but on the other makes communicating them hard, whereas Kevin managed to translate those insights into engaging humanese.

Any prospective "rationality" training has to comprehensively grapple with the issues raised there, and as far as I can tell, they don't usually take center stage in the publicized agendas.

xpym32

What do people here think about Robin Hanson's view, for example as elaborated by him and Kevin Simler in the book Elephant in the Brain? I've seen surprisingly few mentions/discussions of this over the years in the LW-adjacent sphere, despite Hanson being an important forerunner of the modern rationalist movement.

One of his main theses, that humans are strategic self-deceivers, seems particularly important (in the "big if true" way), yet downplayed/obscure.

2AnnaSalamon
I love that book!  I like Robin's essays, too, but the book was much easier for me to understand.  I wish more people would read it, would review it on here, etc.
xpym10

To me, the main deficiency is that it doesn't make the possibility, indeed the eventual inevitability, of ontological remodeling explicit. The map is a definite concept: everybody knows what maps look like, that you can always compare them, etc. But you can't readily compare Newtonian and quantum mechanics; they mostly aren't even speaking about the same things.

2Said Achmiz
Switching from a flat map drawn on paper (parchment?), to a globe, would be an example of ontological remodeling.
xpym32

Well, I blame Yudkowsky for the terminology issue: he took a term with hundreds of years of history and used it mostly in place of another established term which was traditionally sort of in opposition to the former one, no less (rationalism vs empiricism).

As I understand it, Chapman's main target audience wasn't LW, but normal STEM-educated people unsophisticated in the philosophy of science-related issues. Pretty much what Yudkowsky called "traditional rationality".

The map/territory essay: https://metarationality.com/maps-and-territory

2Said Achmiz
Thanks for the link! I have to agree with @Richard_Kennaway’s evaluation of the essay. Also, Chapman here exhibits his very common tendency to, as far as I can tell, invent strawman “mistakes” that his targets supposedly make, in order to then knock them down. For example: Maybe someone somewhere has made this sort of mistake at some point, but I can’t recall ever encountering such a person. And to claim that such a mistake arises, specifically, from the map-territory metaphor, seems to me to be entirely groundless. But of course that’s fine; if I haven’t encountered a thing, it does not follow that the thing doesn’t exist. And surely Chapman has examples to point to, of people making this sort of error…? I mean, I haven’t found any examples, at least not in this essay, but he has them somewhere… right?
2Richard_Kennaway
Every example Chapman gives there to illustrate the supposed deficiencies of "the map is not the territory" is of actual maps of actual territories, showing many different ways in which an actual map can fail to correspond to the actual territory, and corresponding situations of metaphorical maps of metaphorical territories. The metaphor passes with flying colours even as Chapman claims to have knocked it down.