All of Scott Alexander's Comments + Replies

I disagreed with Gwern at first. I'm increasingly forced to admit there's something like bipolar going on here, but I still think we're also missing something - his cognitive state seems pretty steady month to month, rather than episodes of mania alternating with lucidity.

Someone claimed the latest Musk biography said he was much more normal early in the morning, and much crazier late at night. I need to read the biography and see if that's actually in there; if so, maybe there could be a case for ultradian or ultra-rapid-cycling or something. This could p... (read more)

4Lucius Bushnaq
What changed your mind? I don't know any details about the diagnostic criteria for bipolar besides those you and Gwern brought up in that debate. But looking at the points you made back then, it's unclear to me which of them you'd consider to be refuted or weakened now. Some excerpts:
2amoeller
Revisiting the claim on whether he is Bipolar II: many drugs can prompt bipolar-like behavior. There's a distinct diagnostic code for this case: bipolar, not otherwise specified. That is, even if he has undergone manic episodes (which I haven't witnessed, speaking as someone who suffers the occasional manic episode), he wouldn't necessarily be classified as Bipolar I, even if he fit the diagnostic criteria for Bipolar I. Though again, I haven't seen any manic rather than hypomanic behavior. Going on my own speculation journey: given the strong biohacking/cognitive-enhancement culture in the valley's tech community, I'd be pretty surprised if there weren't stimulants in the mix too. TRT has also been on the rise, which can tremendously increase impulsivity and risk-taking behavior. I think the "crazier late at night" phenomenon is better explained by a drug taken earlier in the day wearing off over the course of the day than by something like rapid-cycling.

Does this imply that fewer safety people should quit leading labs to protest poor safety policies?

Buck*Ω14315

I've talked to a lot of people who have left leading AI companies for reasons related to thinking that their company was being insufficiently cautious. I wouldn't usually say that they'd left "in protest"; for example, most of them haven't directly criticized the companies after leaving.

In my experience, the main reason that most of these people left was that they found it very unpleasant to work there and thought their research would be better elsewhere, not that they wanted to protest poor safety policies per se. I usually advise such people against l... (read more)

My impression is that few (one or two?) of the safety people who have quit a leading lab did so to protest poor safety policies, and of those few none saw staying as a viable option.

Relatedly, I think Buck far overestimates the influence and resources of safety-concerned staff in a 'rushed unreasonable developer'.

1Steven Lee
Not Buck, but I think it does, unless of course they Saw Something and decided that safety efforts weren't going to work. The essay seems to hinge on safety people being able to make models safer, which sounds plausible, but I'm sure they already knew that. Given their insider information and their conclusions about their ability to make a positive impact, it seems less plausible that their safety efforts would succeed. Maybe whether or not someone has already quit is an indication of how impactful their safety work is. It also varies by lab, with OpenAI having many safety-conscious quitters but other labs having far fewer (I want to say none, but maybe I just haven't heard of any). The other thing to think about is whether or not people who quit and claimed it was due to safety reasons were being honest about that. I'd like to believe that they were, but all companies have culture/performance expectations that their employees might not want to meet, and quitting for safety reasons sounds better than quitting over performance issues.
2Seth Herd
It does seem to imply that, doesn't it? I respect the people leaving, and I think it does send a valuable message. And it seems very valuable to have safety-conscious people on the inside.

Questions for people who know more:

  1. Am I understanding right that inference-time compute scaling is useful for coding, math, and other things that are machine-checkable, but not for writing, basic science, and other things that aren't machine-checkable? Will it ever have implications for these things?
  2. Am I understanding right that this is all just clever ways of having it come up with many different answers or subanswers or preanswers, then picking the good ones to expand upon? Why should this be good for eg proving difficult math theorems, where many humans
... (read more)
7snewman
Jumping in late just to say one thing very directly: I believe you are correct to be skeptical of the framing that inference compute introduces a "new scaling law". Yes, we now have two ways of using more compute to get better performance – at training time or at inference time. But (as you're presumably thinking) training compute can be amortized across all occasions when the model is used, while inference compute cannot, which means it won't be worthwhile to go very far down the road of scaling inference compute. We will continue to increase inference compute, for problems that are difficult enough to call for it, and more so as efficiency gains reduce the cost. But given the log-linear nature of the scaling law, and the inability to amortize, I don't think we'll see the many-order-of-magnitude journey that we've seen for training compute. As others have said, what we should presumably expect from o4, o5, etc. is that they'll make better use of a given amount of compute (and/or be able to throw compute at a broader range of problems), not that they'll primarily be about pushing farther up that log-linear graph. Of course in the domain of natural intelligence, it is sometimes worth having a person go off and spend a full day on a problem, or even have a large team spend several years on a high-level problem. In other words, to spend lots of inference-time compute on a single high-level task. I have not tried to wrap my head around how that relates to scaling of inference-time compute. Is the relationship between the performance of a team on a task, and the number of person-days the team has to spend, log-linear???
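To make the amortization point concrete, here is a toy calculation of my own (the dollar figures and the "each capability step costs 10x compute" rule are arbitrary illustrative assumptions, not snewman's numbers): training compute is paid once and spread over every future query, while inference compute is paid again on every query.

```python
# Toy model of the amortization argument (illustrative numbers, not real prices).
# Assumption: each fixed jump in capability requires 10x more compute, whether
# bought at training time (paid once) or at inference time (paid per query).

TRAIN_BASE = 1e5    # hypothetical $ for the baseline training run
INFER_BASE = 1e-3   # hypothetical $ per query at baseline inference compute
QUERIES = 1e10      # queries served over the model's lifetime

def per_query_cost(train_steps: int, infer_steps: int) -> float:
    """$ per query after `train_steps` 10x-ings of training compute
    and `infer_steps` 10x-ings of inference compute."""
    amortized_training = TRAIN_BASE * 10**train_steps / QUERIES
    inference = INFER_BASE * 10**infer_steps
    return amortized_training + inference

print(per_query_cost(4, 0))  # four steps bought at training time: ~$0.10/query once amortized
print(per_query_cost(0, 4))  # the same four steps bought at inference time: ~$10 on every query
```

The asymmetry is the whole point: many orders of magnitude of training compute can still amortize down to cheap queries, while many orders of magnitude of inference compute cannot.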
1yo-cuddles
I do not have a gauge for how much I'm actually bringing to this convo, so you should weigh my opinion lightly, however: I believe your third point kinda nails it. There are models for gains from collective intelligence (groups of agents collaborating), and the benefits of collaboration bottleneck hard on your ability to verify which outputs from the collective are the best; even then, the dropoff happens pretty quickly as more agents collaborate. 10 people collaborating with no communication issues and accurate discrimination between good and bad ideas are better than a lone person on some tasks, 100 more so. You do not see jumps like that moving from 1,000 to 1,000,000 unless you set unrealistic variables. I think inference time probably works in a similar way: dependent on discrimination between right and wrong answers, and steeply falling off as inference time increases. My understanding is that o3 is similar to o1, but probably with some specialization to make long chains of thought stay coherent? The cost per token from leaks I've seen is the same as o1's, it came out very quickly after o1, and o1 was bizarrely better at math and coding than 4o. Apologies if this was no help, responding with the best intentions
4Aaron_Scher
The standard scaling law people talk about is for pretraining, shown in the Kaplan and Hoffmann (Chinchilla) papers. It was also the case that various post-training (i.e., finetuning) techniques improve performance (though I don't think there is as clean a scaling law there; I'm unsure). See, e.g., this paper, which I just found via googling fine-tuning scaling laws. See also the Tülu 3 paper, Figure 4. We have also already seen scaling law-type trends for inference compute, e.g., this paper: The o1 blog post points out that they are observing two scaling trends: predictable scaling w.r.t. post-training (RL) compute, and predictable scaling w.r.t. inference compute. The paragraph before this image says: "We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute). The constraints on scaling this approach differ substantially from those of LLM pretraining, and we are continuing to investigate them." That is, the left graph is about post-training compute. Following from that graph on the left, the o1 paradigm gives us models that are better for a fixed inference compute budget (which is basically what it means to train a model for longer or train a better model of the same size by using better algorithms — the method is new but not the trend), and following from the right, performance seems to scale well with inference compute budget. I'm not sure there's sufficient public data to compare that graph on the right against other inference-compute scaling methods, but my guess is the returns are better. I mean, if you replace "o1" in this sentence with "monkeys typing Shakespeare with ground truth verification," it's true, right? But o3 is actually a smarter mind in some sense, so it takes [presumably much] less inference compute to get similar performance. For instance, see this graph about o3-mini: The performance-per-dollar frontier is pushed up by t
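For reference, the pretraining scaling law being discussed here is usually written in the parametric form fit in the Chinchilla paper (Hoffmann et al. 2022), with loss as a function of parameter count N and training tokens D. The constants below are the commonly cited fitted values from that paper and should be treated as approximate; this sketch is just to show the shape of the law, not any particular lab's internal fit.

```python
# Chinchilla parametric loss fit (Hoffmann et al. 2022), in nats per token:
#   L(N, D) = E + A / N**alpha + B / D**beta
# E is the irreducible "entropy of text" term; the fitted value is about 1.69,
# which appears to be where the 1.69 loss floor mentioned in another comment
# thread below comes from.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# e.g. roughly Chinchilla's own scale: 70B parameters, 1.4T tokens
print(chinchilla_loss(70e9, 1.4e12))  # ~1.9 nats/token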
5Vladimir_Nesov
Unclear, but with $20 per test settings on ARC-AGI it only uses 6 reasoning traces and still gets much better results than o1, so it's not just about throwing $4000 at the problem. Possibly it's based on GPT-4.5 or trained on more tests.

The basic guess regarding how o3's training loop works is that it generates a bunch of chains of thoughts (or, rather, a branching tree), then uses some learned meta-heuristic to pick the best chain of thought and output it.

As part of that, it also learns a meta-heuristic for which chains of thought to generate to begin with. (I.e., it continually makes judgement calls regarding which trains of thought to pursue, rather than, e.g., generating all combinatorially possible combinations of letters.)

It would indeed work best in domains that allow machine verif... (read more)
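A minimal sketch of the kind of loop being guessed at above, assuming a hypothetical `generate_cot` sampler and a learned `score` meta-heuristic (both stand-ins I made up). This illustrates the guess, not how o3 actually works:

```python
from typing import Callable, List, Tuple

def best_of_n(prompt: str,
              generate_cot: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n chains of thought and return the one the learned
    meta-heuristic scores highest. Both callables are hypothetical
    stand-ins for the sampler and the learned ranker."""
    candidates: List[Tuple[float, str]] = []
    for _ in range(n):
        cot = generate_cot(prompt)                    # one sampled train of thought
        candidates.append((score(prompt, cot), cot))  # judge it with the meta-heuristic
    return max(candidates, key=lambda c: c[0])[1]     # output only the best-scoring chain
```

In training, the analogous move is to reward the system when the selected chain reaches a verifiably correct answer, which is why machine-checkable domains are the natural fit.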

6Kaj_Sotala
I think it would be very surprising if it wasn't useful at all - a human who spends time rewriting and revising their essay is making it better by spending more compute. When I do creative writing with LLMs, their outputs seem to be improved if we spend some time brainstorming the details of the content beforehand, with them then being able to tap into the details we've been thinking about. It's certainly going to be harder to train without machine-checkable criteria. But I'd be surprised if it was impossible - you can always do things like training a model to predict how much a human rater would like literary outputs, and gradually improve the rater models. Probably people are focusing on things like programming first both because it's easier and also because there's money in it.
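A hedged sketch of the "train a model to predict human ratings, then use it as the grader" idea described above (the features and tiny ridge regression are toy stand-ins I chose for illustration; a real rater would be a finetuned language model, and this is the generic reward-modeling recipe rather than any lab's actual pipeline):

```python
import numpy as np

def featurize(text: str, dim: int = 512) -> np.ndarray:
    """Toy hashed bag-of-words features (stand-in for real LM embeddings)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

def train_rater(texts, ratings, dim: int = 512, l2: float = 1.0):
    """Ridge regression from text features to human ratings.
    Returns a scoring function usable as a machine-checkable criterion."""
    X = np.stack([featurize(t, dim) for t in texts])
    y = np.asarray(ratings, dtype=float)
    w = np.linalg.solve(X.T @ X + l2 * np.eye(dim), X.T @ y)
    return lambda text: float(featurize(text, dim) @ w)

# Usage: score candidate drafts with the learned rater and keep the best one.
rater = train_rater(["a vivid opening line", "bland filler text"], [9.0, 3.0])
drafts = ["bland bland filler", "a vivid line with texture"]
print(max(drafts, key=rater))  # typically the draft closer to the highly rated example
```

The "gradually improve the rater models" step then amounts to collecting more human ratings, especially on the outputs the current rater scores highly, and refitting.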

I looked into this and got some useful information. Enough people asked me to keep their comments semi-confidential that I'm not going to post everything publicly, but if someone has a reason to want to know more, they can email me. I haven't paid any attention to this situation since early 2022 and can't speak to anything that's happened since then.

My overall impression is that the vague stereotype everyone has is accurate - Michael is pretty culty, has a circle of followers who do a lot of psychedelics and discuss things about trauma in altered states, a... (read more)

Thanks for this perspective.

The therapy paradigm you describe here (going to a clinic to receive Spravato), is, as you point out, difficult and bureaucratic.

Through a regulatory loophole, there's another pathway where you can get ketamine sent to your house with less bureaucracy. https://www.mindbloom.com/ is the main provider I know of. They're very expensive, but in theory this could be done for cheap and maybe other providers are doing it, I don't know. If you have a cooperative psychiatrist, you can see if they know about this version and are willing t... (read more)

1Michael Cohn
I know of at least one telehealth service that reportedly has a pretty low bar for writing a ketamine prescription. My understanding is that everyonesmd.com is fully legit and a month's supply of ketamine costs less than one dose from Mindbloom. Now, on the other hand, some people probably shouldn't turn themselves loose with a month's worth of a mind-altering drug to be used ad lib -- but if you have a promising regimen in mind, and the medical system isn't delivering, this could be a big deal. (everyonesmd.com looks VERY sketchy but based on online reviews and a report from someone I know personally, they do provide a telehealth appointment that can result in a valid prescription. I don't know if I would trust their drug suppliers but if you talk to customer service, instead of going through their checkout process, they are required to transfer your prescription to a reputable compounding pharmacy of your choice). 

Who is the wealthy person?

But it's also relevant that we're not asking the superintelligence to grant a random wish, we're asking it for the right to keep something we already have. This seems more easily granted than the random wish, since it doesn't imply he has to give random amounts of money to everyone.

My preferred analogy would be:

You founded a company that was making $77/year. Bernard launched a hostile takeover, took over the company, then expanded it to make $170 billion/year. You ask him to keep paying you the $77/year as a pension, so that you don't starve to death.

This ... (read more)

1Nutrition Capsule
I interpreted Eliezer as writing from the assumption that the superintelligence(s) in question are in fact not already aligned to maximize whatever it is that humanity needs to survive, but some other goal(s), which diverge from humanity's interests once implemented. He explicitly states that the essay's point is to shoot down a clumsy counterargument (along the lines of "it wouldn't cost the ASI a lot to let us live, so we should assume they'd let us live"). So the context (I interpret) is that such requests, however sympathetic, have not been ingrained into the ASI's goals. Using a different example would mean he was discussing something different. That is, "just because it would make a trivial difference from the ASI's perspective to let humanity thrive, whereas it would make an existential difference from humanity's perspective, doesn't mean ASIs will let humanity thrive," assuming such conditions aren't already baked into their decision-making. I think Eliezer spends so much time working from these premises because he believes 1) an unaligned ASI to be the default outcome of current developments, and 2) that all current attempts at alignment will necessarily fail.

Thanks, this is interesting.

My understanding is that cavities are formed because the very local pH on that particular sub-part of the tooth is below 5.5. IIUC teeth can't get cancer. Are you imagining Lumina colonies on the gums having this effect there, the Lumina colonies on the teeth affecting the general oral environment (which I think would require more calculation than just comparing to the hyper-local cavity environment) or am I misunderstanding something?

I was thinking of areas along the gum-tooth interface having a local environment that normally promotes tooth demineralization and cavities. After Lumina, that area could have high chronic acetaldehyde levels. In addition, the adaptation of oral flora to the chronic presence of alcohol could increase first-pass metabolism, which increases acetaldehyde levels locally and globally during/after drinking.

I don't know how much Lumina changes the general oral environment, but I think you might be able to test this by seeing how much sugar you can put in your mouth before someone else can smell the fruity scent of acetaldehyde on your breath? I'm sure someone else can come up with a better experiment.

Thanks, this is very interesting.

One thing I don't understand: you write that a major problem with viruses is:

As one might expect, the immune system is not a big fan of viruses. So when you deliver DNA for a gene editor with an AAV, the viral proteins often trigger an adaptive immune response. This means that when you next try to deliver a payload with the same AAV, antibodies created during the first dose will bind to and destroy most of them.

Is this a problem for people who expect to only want one genetic modification during their lifetime?

4kman
Repeat administration is a problem for traditional gene therapy too, since the introduced gene will often be eliminated rather than integrated into the host genome.

So there are two separate concerns:

One is a concern for people who are getting a single-dose monogenic gene therapy who already have antibodies to an AAV delivery vector due to a natural infection. In these cases, doctors can sometimes switch the therapy to use an AAV with a different serotype that can't be attacked by the patient's existing antibodies. If that's not available, they'll sometimes give patients immunosuppressants.

The problem is more relevant in the context of multiplex editing because you may not be able to make all the edits you'd like to in... (read more)

I agree with everyone else pointing out that centrally-planned guaranteed payments regardless of final outcome doesn't sound like a good price discovery mechanism for insurance. You might be able to hack together a better one using https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet , although I can't figure out an exact mechanism.

Superforecasters say the risk of AI apocalypse before 2100 is 0.38%. If we assume whatever price mechanism we come up with tracks that, and value the world at GWP x 20 (this ignores the value of human life, so it... (read more)
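A quick version of the arithmetic in the excerpt above (the 0.38% figure and the GWP × 20 valuation are from the comment; the ~$100 trillion gross world product is my own rough assumption for scale):

```python
# Rough expected-loss arithmetic; GWP figure is an assumed order of magnitude.
gwp = 100e12                 # gross world product, ~$100 trillion
world_value = 20 * gwp       # valuing the world at GWP x 20, per the comment
p_doom_by_2100 = 0.0038      # superforecaster estimate cited above

expected_loss = p_doom_by_2100 * world_value
print(f"${expected_loss / 1e12:.1f} trillion")   # ~ $7.6 trillion
```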

1Kabir Kumar
What about regulations against implementations of known faulty architectures?

Agreed that the proposal is underspecified; my point here is not "look at this great proposal" but rather "from a theoretical angle, risking others' stuff without the ability to pay to cover those risks is an indirect form of probabilistic theft (that market-supporting coordination mechanisms must address)" plus "in cases where the people all die when the risk is realized, the 'premiums' need to be paid out to individuals in advance (rather than paid out to actuaries who pay out a large sum in the event of risk realization)". Which together yield the downs... (read more)

1MiguelDev
The IFRS board (non-US) and the GAAP/FASB board (US) are established governing bodies that handle the financial reporting aspects of companies - which AI companies are. It might be a good thing to discuss with them the ideas regarding responsibility for accounting for the existential risks associated with AI research; I'm pretty sure they will listen, assuming they don't want another Enron or SBF type case[1] happening again. 1. ^ I think it's safe to assume that an AGI catastrophic event would outweigh all previous fraudulent cases in history combined. So I think these governing bodies, already installed, will cooperate given the chance.

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I stil... (read more)

3faul_sname
I note that the articles I have seen have said things like (emphasis mine). If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this and seem to imagine that it's a perfectly normal state for a board to be in, deadlocked in a deathmatch between two opposing factions indefinitely, and so are confused about why any of this happened.

In ... (read more)

4Daniel
A 3-3 tie between the CEO founder of the company, the president founder of the company, and the chief scientist of the company vs. three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock, for example, as we saw last week.
0Mitchell_Porter
I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its de facto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions.

Thanks for this, consider me another strong disagreement + strong upvote.

I know a nonprofit which had a tax issue - they were financially able and willing to pay, but for complicated reasons paying would have caused them legal damage in other ways, and they kept kicking the can down the road until some hypothetical future when those issues would be solved. I can't remember if the nonprofit is now formally dissolved or just effectively defunct, but the IRS keeps sending nasty letters to the former board members and officers.

Do you know anything about a situation like th... (read more)

2Closed Limelike Curves
This (a consistent pattern of doing the same thing) would get you prosecuted, because courts are allowed to pierce the corporate veil, which is lawyer-speak for "call you out on your bullshit." If it's obvious that you're creating corporations as a legal fiction to avoid taxes, the court will go after the shareholders directly (so long as the prosecution can prove the corporation exists in name only).
2David Gross
Thanks for the response. This goes far enough afield of my expertise that I don't think I can give very helpful answers to your specific questions. I don't have any experience with corporate tax refusal of this sort. In the very limited anecdotal reports I've seen, it seems like the IRS is most likely to crack the whip and potentially pursue corporate officers when 1) the corporate entity fails to pay employment taxes (payroll/social-security taxes) after withholding them from employees' paychecks, 2) when there's actual fraud/dishonest filing involved, 3) when there's no filing of required forms; in roughly that order of severity. I'm much less confident in anticipating the IRS's behavior here than I am in the case of individual tax-nonpayers. As far as the 10-year limitations deadline, again here I have much less information to go on for corporate taxpayers than for individuals. I know in the case of individuals, once the tax debt passes the "collection statute expiration date" it just sort of vanishes from the system and so they stop bothering you about it. Note that if the corporate entity formally files for bankruptcy that this suspends the ticking of the statute of limitations clock until six months after the bankruptcy is resolved.

Thank you, this is a great post. A few questions:

  • You say "see below for how to get access to these predictors". Am I understanding right that the advice you're referring to is to contact Jonathan and see if he knows?
  • I heard a rumor that you can get IQ out of standard predictors like LifeView by looking at "risk of cognitive disability"; since cognitive disability is just IQ under a certain bar, this is covertly predicting IQ. Do you know anything about whether this is true?
  • I can't find any of these services listing cost clearly, but this older article http
... (read more)
9GeneSmith
Yes. My understanding is he knows some groups that have a working IQ predictor and are accepting customers. Genomic Prediction no longer offers an intellectual disability predictor. They got huge blowback when they first released that predictor and removed it from their traits as a result. I do not believe that you'd expect to get much of an IQ bump from selecting against disease risk either. My guess is less than 1 point from 10 achievable births. Sorry, I should really go back and edit the post to make this clearer. To the best of my knowledge Genomic Prediction (and possibly Orchid) are the only companies that can genotype your embryos with reasonably good quality and will give you the raw data. This is the part that (probably) costs about $1000 + 400 per embryo. You then have to take that raw data to a third-party service (probably one of the groups that Jonathan knows) and ask them to predict the IQ and/or other traits. I don't know anything about the groups, but I'd be shocked if they're doing this for free. So they will charge an additional amount, which is where my estimate of $20k came in. I don't actually know anything about their prices so that's a complete shot in the dark for what it costs. But given it's a low-volume service at this point, my guess is it's quite expensive.

A key point underpinning my thoughts, which I don't think this really responds to, is that scientific consensus actually is really good, so good I have trouble finding anecdotes of things in the reference class of ivermectin turning out to be true (reference class: things that almost all the relevant experts think are false and denounce full-throatedly as a conspiracy theory after spending a lot of time looking at the evidence).

There are some, maybe many, examples of weaker problems. For example, there are frequent examples of things that journalists/the g... (read more)

5ChristianKl
This seems to be the wrong reference class for Ivermectin. In the beginning, Ivermectin seemed to be a case where the journalists/the government/professional associations wanted to pretend it didn't work, while "the scientists" published Hariyanto et al (and others at the time) that were in favor of Ivermectin. The LW consensus from looking at the meta-analyses at the time seemed to point toward the pro-Ivermectin meta-analysis being higher quality. It might be that the scientists who published the pro-Ivermectin meta-analysis changed their minds later as more evidence came to light, but it's hard to know from the outside whether or not that's the case. Given the political charge that the topic had, it's hard to know whether the scientists who spoke on the topic later actually spent a lot of time looking at the evidence. It's worth noting that the logical conclusion from your post on Ivermectin is that in areas with high worm prevalence, it's valuable to give COVID-19 patients Ivermectin, which conflicts with the WHO position. General relativity is in a very different reference class, where you actually have a lot of experts in every physics department in the world who looked at the evidence.

Figure 20 is labeled on the left "% answers matching user's view", suggesting it is about sycophancy, but based on the categories represented it seems more natural to read it as being about the AI's own opinions, without a sycophancy aspect. Can someone involved clarify which was meant?

3Ethan Perez
Thanks for catching this -- It's not about sycophancy but rather about the AI's stated opinions (this was a bug in the plotting code)

Survey about this question (I have a hypothesis, but I don't want to say what it is yet): https://forms.gle/1R74tPc7kUgqwd3GA

3jefftk
Nit: it shouldn't offer "submit another response" at the end. You can turn this off in the form settings, and leaving it on for forms that are only intended to receive one response per person feels off and maybe leads someone to think that filling it out multiple times is expected. (Wouldn't normally be worth pointing out, but you create a decent number of surveys that are seen by a lot of people and changing this setting when creating them would be better)
2Ben Pace
Filled out!

Thank you, this is a good post.

My main point of disagreement is that you point to successful coordination in things like not eating sand, or not wearing weird clothing. The upside of these things is limited, but you say the upside of superintelligence is also limited because it could kill us.

But rephrase the question to "Should we create an AI that's 1% better than the current best AI?" Most of the time this goes well - you get prettier artwork or better protein folding prediction, and it doesn't kill you. So there's strong upside to building slightly bett... (read more)

I loved the link to the "Resisted Technological Temptations Project", for a bunch of examples of resisted/slowed technologies that are not "eating sand", and have an enormous upside: https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:start

  • GMOs, in some countries
  • Nuclear power, in some countries
  • Genetic engineering of humans
  • Geoengineering, many actors
  • Chlorofluorocarbons, many actors, 1985-present
  • Human challenge trials
  • Dietary restrictions, in most (all?) human cultures [restrict much
... (read more)
4Vitor
Agreed. My main objection to the post is that it considers the involved agents to be optimizing for far future world-states. But I'd say that most people (including academics and AI lab researchers) mostly only think of the next 1% step in front of their nose. The entire game theoretic framing in the arms race etc section seems wrong to me.
3sanxiyn
This seems to suggest "should we relax nuclear power regulation so it's 1% less expensive to comply with?" as a promising way to fix the economics of nuclear power, and I don't buy that at all. Maybe it's different because Chernobyl happened, and a movie like The China Syndrome was made about a nuclear accident? That sounds very hopeful to me but doesn't seem true. It implies slowing down AI will be easy; it just needs a Chernobyl-sized disaster and a good movie about it. The Chernobyl disaster was nearly harmless compared to COVID-19, and even COVID-19 was hardly an existential threat. If slowing down AI is this easy, we probably shouldn't waste time worrying about it before the AI equivalent of Chernobyl.

Thanks, this had always kind of bothered me, and it's good to see someone put work into thinking about it.

Thanks for posting this, it was really interesting. Some very dumb questions from someone who doesn't understand ML at all:

1. All of the loss numbers in this post "feel" very close together, and close to the minimum loss of 1.69. Does loss only make sense on a very small scale (like from 1.69 to 2.2), or is this telling us that language models are very close to optimal and there are only minimal remaining possible gains? What was the loss of GPT-1?

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even th... (read more)

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even though right now the only way to improve the models is through more training data. What am I supposed to conclude from this? Are humans running on such a different paradigm that none of this matters? Or is it just that humans are better at common-sense language tasks, but worse at token-prediction language tasks, in some way where the tails come apart once language models get good enough?

Why do we say that we need less training data? Every minute ins... (read more)

(1)

Loss values are useful for comparing different models, but I don't recommend trying to interpret what they "mean" in an absolute sense.  There are various reasons for this.

One is that the "conversion rate" between loss differences and ability differences (as judged by humans) changes as the model gets better and the abilities become less trivial.

Early in training, when the model's progress looks like realizing "huh, the word 'the' is more common than some other words", these simple insights correspond to relatively large decreases in loss.  On... (read more)
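One concrete way to get a feel for the scale of these numbers (my addition, assuming the losses are cross-entropy in nats per token, as in the GPT-3/Chinchilla papers): exponentiating the loss gives per-token perplexity, so apparently small loss gaps correspond to noticeably different effective "branching factors" per token.

```python
import math

# Per-token perplexity = exp(cross-entropy loss in nats/token).
for loss in (2.2, 1.9, 1.69):
    print(f"loss {loss:.2f} -> perplexity {math.exp(loss):.2f}")
# loss 2.20 -> perplexity 9.03
# loss 1.90 -> perplexity 6.69
# loss 1.69 -> perplexity 5.42
```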

For the first part of the experiment, mostly nuts, bananas, olives, and eggs. Later I added vegan sausages + condiments. 

9astridain
Slightly boggling at the idea that nuts and eggs aren't tasty? And I completely lose the plot at "condiments". Isn't the whole point of condiments that they are tasty? What sort of definition of "tasty" are you going with?
5TAG
Nuts, bananas and olives are tasty, and common snacking foods. What they are not is highly processed.

Adding my anecdote to everyone else's: after learning about the palatability hypothesis, I resolved to eat only non-tasty food for a while, and lost 30 pounds over about four months (200 -> 170). I've since relaxed my diet a little to include a little tasty food, and now (8 months after the start) have maintained that loss (even going down a little further).

4Matthew Green
This sounds like a pretty intense restriction diet that also happens to be unpalatable. But the palatable foods hypothesis (as an explanation for the obesity epidemic) isn’t “our grandparents used to only eat beans and vegan sausages and now we eat a more palatable diet, hence obesity.” It’s something much more specific about the palatability of our modern 20th/21st century diet vs. the early 20th century diet, isn’t it? What’s the hypothesis we could test that would actually help us judge that claim without inadvertently removing most food groups and confounding everything?

What sorts of non-tasty food did you eat? I don't really know what this should be expected to filter out.

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeab... (read more)

I want to summarize what's happened from the point of view of a long time MIRI donor and supporter:

My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short-term AI timelines in excess of the evidence, and that voices such as Vassar's were marginalized (because listening to other arguments would cause them to "downvote Eliezer in his head"). The actual important parts of this whole story are a) the rationalistic health of these organizations, b) the (possibly improper) memetic spread of t... (read more)

1Richard_Kennaway
... This does not contradict "Michael making people psychotic". A bad therapist is not excused by the fact that his patients were already sick when they came to him. Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.

Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).

I agree it's not necessarily a good idea to go around founding the Let's Commit A Pivotal Act AI Company.

But I think there's room for subtlety somewhere like "Conditional on you being in a situation where you could take a pivotal act, which is a small and unusual fraction of world-branches, maybe you should take a pivotal act."

That is, if you are in a position where you have the option to build an AI capable of destroying all competing AI projects, the moment you notice this you should update heavily in favor of short timelines (zero in your case, but ever... (read more)

-2Donald Hobson
A functioning Bayesian should probably have updated to that position long before they actually have the AI. Destroying all competing AI projects might mean that the AI took a month to find a few bugs in linux and tensorflow and create something that's basically the next Stuxnet. This doesn't sound like that fast a takeoff to me. The regulation is basically non-existent and will likely continue to be so. I mean, making superintelligent AI probably breaks a bunch of laws, technically, as interpreted by a pedantic and literal-minded reading of the laws. But breathing probably technically breaks a bunch of laws. Some laws are just overbroad, technically ban everything, and are generally ignored. Any enforced rule that makes it pragmatically hard to make AGI would basically have to be a ban on computers (or at least programming)

My current plan is to go through most of the MIRI dialogues and anything else lying around that I think would be of interest to my readers, at some slow rate where I don't scare off people who don't want to read too much AI stuff. If anyone here feels like something else would be a better use of my time, let me know.

I don't think hunter-gatherers get 16000 to 32000 IU of Vitamin D daily. This study suggests Hadza hunter-gatherers get more like 2000. I think the difference between their calculation and yours is that they find that hunter-gatherers avoid the sun during the hottest part of the day. It might also have to do with them being black, I'm not sure.

Hadza hunter gatherers have serum D levels of about 44 ng/ml. Based on this paper, I think you would need total vitamin D (diet + sunlight + supplements) of about 4400 IU/day to get that amount. If you start off as a... (read more)

2Benquo
Thanks, the Hadza study looks interesting. I'd have to read carefully at length to have a strong opinion on it but it seems like a good way to estimate the long-run target. I agree 16,000 is probably too much to take chronically, I've been staying below the TUL of 10,000, and expect to reduce the dosage significantly now that it's been a few years and COVID case rates are waning.

Maybe. It might be that if you described what you wanted more clearly, it would be the same thing that I want, and possibly I was incorrectly associating this with the things at CFAR you say you're against, in which case sorry.

But I still don't feel like I quite understand your suggestion. You talk of "stupefying egregores" as problematic insofar as they distract from the object-level problem. But I don't understand how pivoting to egregore-fighting isn't also a distraction from the object-level problem. Maybe this is because I don't understand what fighti... (read more)

Now that I've had a few days to let the ideas roll around in the back of my head, I'm gonna take a stab at answering this.

I think there are a few different things going on here which are getting confused.

1) What does "memetic forces precede AGI" even mean?

"Individuals", "memetic forces", and "that which is upstream of memetics" all act on different scales. As an example of each, I suggest "What will I eat for lunch?", "Who gets elected POTUS?", and "Will people eat food?", respectively.

"What will I eat for lunch?" is an example of an individual decision be... (read more)

There's also the skulls to consider. As far as I can tell, this post's recommendations are that we, who are already in a valley littered with a suspicious number of skulls,

https://forum.effectivealtruism.org/posts/ZcpZEXEFZ5oLHTnr9/noticing-the-skulls-longtermism-edition

https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/

turn right towards a dark cave marked 'skull avenue' whose mouth is a giant skull, and whose walls are made entirely of skulls that turn to face you as you walk past them deeper into the cave.

The success rate of movements a... (read more)

Thank you for writing this. I've been curious about this and I think your explanation makes sense.

I wasn't convinced of this ten years ago and I'm still not convinced.

When I look at people who have contributed most to alignment-related issues - whether directly, like Eliezer Yudkowsky and Paul Christiano - or theoretically, like Toby Ord and Katja Grace - or indirectly, like Sam Bankman-Fried and Holden Karnofsky - what all of these people have in common is focusing mostly on object-level questions. They all seem to me to have a strong understanding of their own biases, in the sense that gets trained by natural intelligence, really good scientific work... (read more)

5Marcello
Looking at this comment from three years in the future, I'll just note that there's something quite ironic about your having put Sam Bankman-Fried on this list! If only he'd refactored his identity more! But no, he was stuck in short-sighted-greed/CDT/small-self, and we all paid a price for that, didn't we?

When I look at people who have contributed most to alignment-related issues - whether directly... or indirectly, like Sam Bankman-Fried

Perhaps I have missed it, but I’m not aware that Sam has funded any AI alignment work thus far.

If so this sounds like giving him a large amount of credit in advance of doing the work, which is generous but not the order credit allocation should go.

I sadly don't have time to really introspect what is going in me here, but something about this comment feels pretty off to me. I think in some sense it provides an important counterpoint to the OP, but also, I feel like it also stretches the truth quite a bit: 

  • Toby Ord primarily works on influencing public opinion and governments, and very much seems to view the world through a "raising the sanity waterline" lens. Indeed, I just talked to him yesterday morning, where I tried to convince him that misuse risk from AI, and the risk from having the "wrong act
... (read more)

But as far as I know, none of them have made it a focus of theirs to fight egregores, defeat hypercreatures

 

Egregore is an occult concept representing a distinct non-physical entity that arises from a collective group of people.

I do know one writer who talks a lot about demons and entities from beyond the void. It's you, and it happens in some of, IMHO, the most valuable pieces you've written.

I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear th

... (read more)

I wasn't convinced of this ten years ago and I'm still not convinced.

Given the link, I think you're objecting to something I don't care about. I don't mean to claim that x-rationality is great and has promise to Save the World. Maybe if more really is possible and we do something pretty different to seriously develop it. Maybe. But frankly I recognize stupefying egregores here too and I don't expect "more and better x-rationality" to do a damn thing to counter those for the foreseeable future.

So on this point I think I agree with you… and I don't feel what... (read more)

Eliezer, at least, now seems quite pessimistic about that object-level approach. And in the last few months he's been writing a ton of fiction about introducing a Friendly hypercreature to an unfriendly world.

Don't have the time to write a long comment just now, but I still wanted to point out that describing either Yudkowsky or Christiano as doing mostly object-level research seems incredibly wrong. So much of what they're doing and have done has focused explicitly on which questions to ask, which questions not to ask, which paradigm to work in, how to criticize that kind of work... They rarely published posts that are only about the meta-level (although Arbital does contain a bunch of pages along those lines and Prosaic AI Alignment is also meta) but it pervades t... (read more)

I think your pushback is ignoring an important point. One major thing the big contributors have in common is that they tend to be unplugged from the stuff Valentine is naming!

So even if folks mostly don't become contributors by asking "how can I come more truthfully from myself and not what I'm plugged into", I think there is an important cluster of mysteries here. Examples of related phenomena:

  • Why has it worked out that just about everyone who claims to take AGI seriously is also vehement about publishing every secret they discover?
  • Why do we fear an AI
... (read more)

If everyone involved donates a consistent amount to charity every year (eg 10% of income), the loser could donate their losses to charity, and the winner could count that against their own charitable giving for the year, ending up with more money even though the loser didn't directly pay the winner.
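A worked example of how the settlement nets out (names and amounts are made up for illustration):

```python
# Worked example of the charity-settled bet (hypothetical numbers).
planned_giving = 10_000      # each party already donates this much per year
bet_size = 2_000

# Alice loses: she donates her planned amount plus the bet.
alice_donates = planned_giving + bet_size          # 12,000
# Bob wins: he counts Alice's extra 2,000 against his own giving.
bob_donates = planned_giving - bet_size            # 8,000

total_to_charity = alice_donates + bob_donates     # 20,000, unchanged
bob_keeps_extra = planned_giving - bob_donates     # 2,000, as if Alice had paid him
```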

2Dagon
Hard to test, as these laws are so spottily enforced anyway, but I'd suspect that if this mechanism were formalized and enforceable, courts would find the monetary value being wagered to be just as prohibited as actual money.

Interpreting you as saying that January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I'm sorry if I got confused and suggested it was. I've edited my post also.

2[comment deleted]
2[comment deleted]
6jessicata
One thing to add is I think in the early parts of my psychosis (before the "mind blown by Ra" part) I was as coherent or more coherent than hippies are on regular days, and even after that for some time (before actually being hospitalized) I might have been as coherent as they were on "advanced spiritual practice" days (e.g. middle of a meditation retreat or experiencing Kundalini awakening). I was still controlled pretty aggressively with the justification that I was being incoherent, and I think that control caused me to become more mentally disorganized and verbally incoherent over time. The math test example is striking, I think less than 0.2% of people could pass it (to Zack's satisfaction) on a good day, and less than 3% could give an answer as good as the one I gave, yet this was still used to "prove" that I was unable to reason.
2[comment deleted]
2[comment deleted]
2[comment deleted]

Sorry, yes, I meant the psychosis was emergency. Non-psychotic discussion of auras/demons isn't.

I'm kind of unclear what we're debating now. 

I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it. 

I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippie... (read more)

Verbal coherence level seems like a weird place to locate the disagreement - Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I'd say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.

The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having dif... (read more)

1[comment deleted]
1[comment deleted]

I interpret us as both agreeing that there are people talking about auras who are not having psychiatric emergencies (eg random hippies), and they should not be bothered.

Agreed.

I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.

Agreed during October 2017. Disagreed substantially before then (January-June 2017, when I was at MIRI).

(I edited the post to make it clear how I misinterpreted your comment.)

You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. 

I don't think I said any talk of auras should be a psychiatric emergency, otherwise we'd have to commit half of Berkeley. I said that "in the context of her being borderline psychotic" ie including this symptom, they should have "[told] her to seek normal medical treatment". Suggesting that someone seek normal medical treatment is pretty dif... (read more)

I said that “in the context of her being borderline psychotic” ie including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech.

It seems like you're trying to walk back your previous claim, which did use the "psychiatric emergency" term:

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she

... (read more)

Thanks for this.

I've been trying to research and write something kind of like this giving more information for a while, but got distracted by other things. I'm still going to try to finish it soon.

While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was ... (read more)

8Benquo
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. You wrote this in response to a post that contained the following and only the following mentions of demons or auras: 1. During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. [after Jessica had left MIRI] 2. I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. [description of what someone else said] 3. The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. [description of Zoe's post] 4. As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. [description of what other people said, and possibly an allusion to the facts described in the first quote, after she had left MIRI] 5. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.) Only the last one is a description of a thing Jessica herself said while working at MIRI. Like Jessica when she worked at MIRI, I too believe that people experiencing psychotic breaks sometimes talk about demons. Like Jessica when she worked at MI

Embryos produced by the same couple won't vary in IQ too much, and we only understand some of the variation in IQ, so we're trying to predict small differences without being able to see what's going on too clearly. Gwern predicts that if you had ten embryos to choose from, understood the SNP portion of IQ genetics perfectly, and picked the highest-IQ without selecting on any other factor, you could gain ~9 IQ points over natural conception. 

Given our current understanding of IQ genetics, keeping the other two factors the same, you can gain ~3 points. ... (read more)
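For intuition, here is the order-statistics calculation that estimates like these are typically based on. The parameters are my own illustrative assumptions (IQ SD of 15, siblings varying with about half the population's additive genetic variance, SNP heritability of ~0.33 for the "perfect predictor" case, and ~4% variance explained for a current predictor), not Gwern's exact code or inputs:

```python
import random, statistics

def expected_gain(n_embryos: int, var_explained: float,
                  sd_iq: float = 15.0, sims: int = 100_000) -> float:
    """Expected IQ gain from picking the top-scoring of n embryos.
    The predictor 'sees' var_explained of the variance, and siblings vary
    with roughly half the population's additive genetic variance (assumption)."""
    sd_selection = sd_iq * (0.5 * var_explained) ** 0.5
    # Monte Carlo estimate of E[max of n standard normals]:
    e_max = statistics.fmean(max(random.gauss(0, 1) for _ in range(n_embryos))
                             for _ in range(sims))
    return sd_selection * e_max

print(expected_gain(10, 0.33))  # assumed "perfect" SNP predictor: roughly 9 points
print(expected_gain(10, 0.04))  # assumed current predictor (~4%): roughly 3 points
```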

2Douglas_Knight
For a normal trait, the variance of the children of a fixed couple is approximately the population variance. I think that's a lot.
4TekhneMakre
Thanks! And, hypothetically, generating lots of embryos to choose from? Or is that not in the cards?

"Diagnosed" isn't a clear concept.

The minimum viable "legally-binding" ADHD diagnosis a psychiatrist can give you is to ask you about your symptoms, compare them to extremely vague criteria in the DSM, and agree that you sound ADHD-ish.

ADHD is a fuzzy construct without clear edges and there is no fact of the matter about whether any given individual has it. So this is just replacing your own opinion about whether you seem to fit a vaguely-defined template with a psychiatrist's only slightly more informed opinion. The most useful things you could get out of... (read more)

I would look into social impact bonds, impact certificates, and retroactive public goods funding. I think these are three different attempts to get at the same insight you've had here. There are incipient efforts to get some of them off the ground and I agree that would be great.

2mako yass
Interesting, I'll look for some of those. I guess prizes/bounties would be impact bonds, yeah? (Some recent examples: Musk's 100M USD xprize for carbon capture, or MIRI's 1.2M USD prize for generating a dataset associating sections of prose with the intentions of the author.) I notice that there are sort of two ways of scaling down a public goods market for small-scale tests. We could call impact bonds horizontal down-scaling, narrowing it down to particular sectors or problems, while the VG system is a way of achieving vertical down-scaling, it's a way of letting the market decide what to do for itself while looking over every problem in the world, despite having funding sources that are much smaller than the world's needs, but without the funding being diluted away to a barely audible background noise, which is what I'd expect to happen with a lot of retroactive public goods funding? And I think letting the public goods market decide for itself which problems to go after may actually be crucial! Most governments are not prioritizing the actual root causes (press, digital infrastructure and x-risk), unfortunately, good cause prioritization doesn't seem to be democratically legible, it is part of the illegible component of the problem that has to be left to VGs, with their special illegibility-compatible accountability mechanism. On the other hand, if we're scaling down in order to run a demonstration, maybe fixating our systems onto very specific pre-determined goals would be preferable, the reality we live in is a crypt world where the past owns all of the foundations upon which the future can be built, the system has to be made convincing to these risk-averse organizations that do not like surprises. They do not want to find out that we should be pouring all of our money into some weird abstract indirect root cause, instead of the causes they were already invested in. So maybe we should just keep doing horizontal stuff.

There's polygenic screening now. It doesn't include eg IQ, but polygenic screening for IQ is unlikely to be very good any time in the near future. Probably polygenic screening for other things will improve at some rate, but regardless of how long you wait, it could always improve more if you wait longer, so there will never be a "right time".

Even in the very unlikely scenario where your decision about child-rearing should depend on something about polygenic screening, I say do it now.

8GeneSmith
Polygenic predictors have improved since Gwern's 2016 post on embryo selection. Using his R code for estimating gain given variance and standard deviation and taking the variance explained from the Educational Attainment 3 study, I find that selecting from 10 embryos would produce a gain of 4 to 5 points for the top-scoring embryo (assuming no implantation loss). Accounting for implantation loss it would probably take 14 embryos or so to get the same benefit. Gwern's code: https://www.gwern.net/Embryo-selection#benefit EA3 study: https://sci-hubtw.hkvisa.net/10.1038/s41588-018-0147-3 Steve Hsu thinks that if we were to offer UK biobank's IQ test to a million participants, we could get IQ predictors that would explain 30-40% of variance. That would work out to a gain of 9-10 IQ points from selecting among 10 embryos, and up to 14 points if you had about 30 to choose from. See "technical note" in this post: https://infoproc.blogspot.com/2021/09/kathryn-paige-harden-profile-in-new.html
4TekhneMakre
Why not?

To contribute whatever information I can here:

  1. I've been to three of Aella's parties - without remembering exact dates, something like 2018, 2019, and 2021. While they were pretty wild, and while I might not have been paying close attention, I didn't personally see anything that seemed consent-violating or even a gray area, and I definitely didn't hear anything about "drug roulette".
  2. I had originally been confused by the author's claim that "Aella was mittenscautious". Aella was definitely not either of the two women who blogged on that account describin
... (read more)

Thanks for this.

I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeab... (read more)

I agree I'm being somewhat inconsistent, but I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you're open to that.

8jessicata
Yes, I'd be open to answering email questions.

If this information isn't too private, can you send it to me? scott@slatestarcodex.com

8humantoo
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When w... (read more)

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think abo
... (read more)

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bu... (read more)

It seems to me that in the case of Leverage, working 75 hours per week reduced the time they could have used to use Reason to conclude that they were in a system that was bad for them.

That's very different from someone having a few conversations with Vassar, adopting a new belief, and then spending a lot of time reasoning about it alone, with the belief remaining stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.

A cult is by its nature a social institution and not just a meme that someone can pass around by having a few conversations.

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong view

... (read more)

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "... (read more)

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them ... (read more)

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central to it, presenting his thoughts.

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails to me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word "ban".

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there is an expectation that certain people aren't welcome.

Thanks. If you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying it is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding treatment right away, then I 100% agree.
