Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.
If you're wondering why some of my very old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.
I am rather confused.
What am I missing or misunderstanding here?
Noted. But it seems to me that if the trajectory was excessively altruistic -> obnoxious Objectivist -> something reasonable, it's pretty plausible that without reading Rand you might just have gone straight from "excessively altruistic" to "something reasonable".
(But of course you may well have a better sense of that having been through it from the inside.)
Does Lewis really advocate for extreme altruism, as such? Of course he advocates for Christianity, and some versions of Christianity advocate extreme altruism, but Lewis's sort was mostly pretty moderate.
This has very little to do with the actual high-level topic at issue, but it's something I've seen elsewhere in rationalist discourse and I recently realised that I think it's probably nonsense.
I still think a lot of you all need to sit down with Atlas Shrugged to get nudged in a usefully more selfish direction.
I am pretty sure it scarcely ever happens that someone who is too altruistic reads Atlas Shrugged and comes away with their altruism moderated a bit, or that someone who is too selfish reads, er, the Communist Manifesto or the Sermon on the Mount or something[1], and comes away with their selfishness moderated a bit.
[1] I don't know whether it's Highly Significant somehow that I can't come up with a good symmetrical example of something advocating for extreme altruism as AS advocates for extreme selfishness.
I think what actually happens is that (usually) they say to themselves "wow, that was a load of pernicious nonsense, I resent having wasted my time reading it, and will now be even more zealous in opposing that sort of thing" and if anything have their original position reinforced, or (occasionally) they feel like the scales have fallen from their eyes and become a full-blown Objectivist or Marxist or Christian or whatever.
If I thought altruism was bullshit and everyone ought to be a Randian egoist then I might be all for giving copies of Atlas Shrugged to very altruistic people. But if what I wanted was more-moderately-altruistic people, I don't think that would be a good strategy.
I should in fairness say that I don't have any actual evidence for what happens when extreme altruists read Atlas Shrugged. Maybe (either in general, or specifically when they are rationalist extreme altruists) they do tend to emerge with their views moderated. But I don't think it's the way I'd bet.
I think the information actually conveyed by this "unreasonably effective writing advice" is the fact that such-and-such a section of what you wrote prompts that question, and I suspect that saying "this bit isn't clear" would be almost as effective as asking "what did you mean here?" and then saying "well, write that then".
(It's like the old joke about the consultant whose invoice charges $1 for hitting the machine with a wrench and $9,999 for knowing where to hit it.)
Yes, that all seems fair. I was just struck by the parallels.
(It is not entirely clear to me exactly what if anything Kelly is claiming about the state of mind, and motives, of Festinger and his colleagues. He does say near the start "that the book’s central claims are false, and that the authors knew they were false", but I don't see much evidence in his article that the authors knew their central claims were false. He does offer evidence that the authors interfered more than they admitted, but that isn't really the same thing.)
If Kelly's account of things is correct, then one could describe the events as follows.
Leon Festinger and his colleagues made a dramatic and surprising prediction. When they had the opportunity to test that prediction out, things didn't in fact go the way they had predicted. In response to this, they falsified the evidence, interpreted things tendentiously, and went ahead with a vigorous campaign to spread their theory and get lots of other people to believe it; their theory prospered and remains widely believed to this day.
So it seems like Kelly's critique is kinda self-defeating. If Dorothy Martin's little UFO cult isn't really an example of the mechanism Festinger popularized, in the way Kelly describes, then Festinger and his colleagues themselves are an even better example of it.
Kelly's paper kinda acknowledges this: "If Festinger’s theory of cognitive dissonance is right, reappraisal of the value of When Prophecy Fails may be slow." But if he appreciates just how thoroughly he's portrayed Festinger's own behaviour as a perfect exemplification of the very theory he's skewering, he doesn't show it.
(My impression, as very much not any sort of expert, is that Kelly seems to be somewhat overselling the discrepancy between Festinger's account and reality. But I haven't read When Prophecy Fails, I haven't read the recently-unsealed documents Kelly is citing, and I could well be all wrong about that.)
If I am reading things correctly, section 2 of the Voting Rights Act says:
(a) No voting qualification or prerequisite to voting or standard, practice, or procedure shall be imposed or applied by any State or political subdivision in a manner which results in a denial or abridgement of the right of any citizen of the United States to vote on account of race or color, or in contravention of the guarantees set forth in section 10303(f)(2) of this title, as provided in subsection (b).
(and subsection (b) clarifies this in what seem like straightforward ways).
It seems to me that if this "asymmetrically binds Republicans" then the conclusion is "so much the worse for the Republicans" not "so much the worse for the Voting Rights Act".
As for "the unfair advantage Democrats have had nationally for decades":
Why different years (2022, 2020, 2020)? Because each of those was the first thing I found when searching for articles from at-least-somewhat-credible outlets about structural advantages for one or another party in presidential, Senate, and House races. I make no claim that those figures are representative of, say, the last 20 years, but I don't think it's credible to talk about "the unfair advantage Democrats have had nationally for decades" when all three of the major national institutions people in the US get to vote for have recently substantially favoured Republicans in the sense that to get equal results Democrats would need substantially more than equal numbers of votes.
I've no idea, but I think you should collaborate with someone named Dunning to find out.
(I don't think anything I said assumed you were referring to thermodynamic order/disorder.)
It sounds as if some of your definitions may want adjusting.
Doesn't all of this explicitly say that moving in the sub->super direction means becoming more disordered, which means becoming more chaotic?
Perhaps what you actually mean to say is of the following form?
(Is there actually a proper term for the thing that increases as you move from subcritical to supercritical? I keep finding that I need ugly circumlocutions for want of one.)
And then the situation described in the article (where a certain change, in this case from mindfulness to jhana, moves in the sub-to-super direction -- which would normally mean more sensitivity, hence more tendency to chaos in the mathematical sense, hence typically more disorder -- but somehow also involves a reduction in chaoticity) could be explained by this system not having the usual relationship between the sub-to-super parameter and chaoticity.
But I think I'm still confused, because (as I mentioned before) the article very much doesn't present that combination as somehow an unusual one. It says that jhana is characterized by a smaller max Lyapunov exponent, hence less chaoticity ... but isn't Lyapunov exponent much the same thing as you're calling "gain"? Wouldn't we normally expect reducing the Lyapunov exponent to move in the direction of subcriticality? Or am I, indeed, just still confused? The article says "Jhana decreases brain chaoticity relative to mindfulness, indicating brain dynamics closer to criticality" (italics mine), which to me seems like they're saying that in general we should expect closer-to-criticality dynamics to come along with less chaos, which is the exact opposite of what it feels like we should expect.
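To make the expectation I have in mind concrete, here is a toy sketch of my own (not anything from the article, and the logistic map is just a stand-in for "a system with a gain-like parameter"): turning the parameter up generally pushes the largest Lyapunov exponent up, i.e. more sensitivity and more chaos.

```python
# Toy illustration (mine, not the article's): largest Lyapunov exponent of the
# logistic map x_{n+1} = r * x_n * (1 - x_n) as the control parameter r is
# increased. Treating r as a stand-in for a "gain"-like sub-to-super parameter,
# the point is just that pushing it up generally pushes the exponent up.

import math

def logistic_lyapunov(r, n_iter=20000, n_transient=1000, x0=0.2):
    """Estimate the Lyapunov exponent of the logistic map at parameter r."""
    x = x0
    # Discard transients so we measure behaviour on the attractor.
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)) + 1e-12)  # log of the map's derivative
    return total / n_iter

for r in [2.8, 3.2, 3.5, 3.7, 3.9, 4.0]:
    lam = logistic_lyapunov(r)
    print(f"r = {r:.1f}  Lyapunov exponent ~ {lam:+.3f}  "
          f"({'chaotic' if lam > 0 else 'non-chaotic'})")
```

The numbers are only illustrative (and there are periodic windows that buck the trend), but they show the direction of the relationship I'd naively expect: a smaller Lyapunov exponent goes with a smaller gain-like parameter, i.e. further toward the subcritical side, which is why the article's pairing of "less chaotic" with "closer to criticality" confuses me.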
I've had a bit of a look for a nice clear explanation of the actual mathematics here, but it seems that there are (1) things about dynamical systems generally, written by mathematicians, which talk about e.g., subcritical or supercritical bifurcations and have nice clean definitions for those, and (2) things about Complex Systems, often specifically about brains, which talk about whole systems being "subcritical" or "critical" or "supercritical" but never seem to give actual explicit definitions of the things they are talking about. Probably I have just not found the right things to read.
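For what it's worth, the closest thing to an explicit definition I know of in that second body of literature is the branching-ratio one; this is my gloss, not something taken from the articles I've been complaining about. In a cascade where each active unit activates on average sigma others, the process is called subcritical if sigma < 1, critical if sigma = 1, and supercritical if sigma > 1. A toy sketch along those lines:

```python
# Toy branching-process sketch (my gloss, not a definition from the papers
# discussed above): each active unit independently tries to activate two
# potential successors, each with probability sigma/2, so the mean number of
# "offspring" per unit is sigma. sigma < 1: avalanches die out quickly
# (subcritical); sigma = 1: critical; sigma > 1: a nonzero fraction of
# avalanches run away (supercritical).

import random
import statistics

def avalanche_size(sigma, max_size=10000):
    """Total number of activations in one avalanche started by a single unit."""
    active, total = 1, 1
    while active and total < max_size:
        # Each active unit activates each of two successors with prob sigma/2.
        active = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
        total += active
    return total

for sigma in [0.8, 1.0, 1.2]:
    sizes = [avalanche_size(sigma) for _ in range(1000)]
    print(f"sigma = {sigma}: mean size ~ {statistics.mean(sizes):7.1f}, "
          f"largest = {max(sizes)}")
```

Whether "the thing that increases as you move from subcritical to supercritical" in the brain papers is really this sigma, or something more like the max Lyapunov exponent, or something else again, is exactly the ambiguity I'm complaining about.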