AlphaAndOmega

>Benzodiazepines are anti-anxiety drugs that calm fear but don’t prevent panic attacks, while tricyclic antidepressants like imipramine prevent panic attacks but don’t do anything about fear.3 


As far as I'm aware, the claim that benzos don't prevent panic attacks is incorrect! 

We don't like prescribing them for that purpose, or for most cases of Generalized Anxiety Disorder, as they're strongly habit-forming and sedating, but they are very effective at preventing panic attacks.


https://acnp.org/g4/GN401000130/CH.html


> The most extensively studied benzodiazepine for the treatment of panic has been the high potency triazolobenzodiazepine alprazolam. The Cross National Collaborative Panic Study (CNCPS) (44), a multicentre study conducted in two phases, is generally regarded as the most ambitious attempt to demonstrate the antipanic efficacy of alprazolam. Phase One of the CNCPS (45) randomly assigned 481 panic disorder patients (80% of whom had agoraphobia) to alprazolam or placebo, utilizing a double blind design and flexible dose schedule. All groups received their respective treatments for 8 weeks. Treatment was then discontinued over 4 weeks, and subjects were followed for 2 weeks after discontinuance. The mean dose of alprazolam employed was 5.7mg/day. Alprazolam was shown to have a rapid onset of effect, with most improvement occurring in the first week of treatment. Alprazolam was far superior to placebo on measures of panic attacks, anticipatory anxiety and phobic avoidance; at the 8 week endpoint, 55% of alprazolam treated patients were panic free, compared to 32% of those given placebo.
>
> Phase two of the Cross National Collaborative Panic Study (46) attempted to not only replicate phase one's results in a larger sample, but also to compare alprazolam's efficacy to that of a typical antidepressant treatment for panic. 1168 panic patients were randomly assigned to alprazolam, imipramine, or placebo for 8 weeks. This follow up study confirmed the earlier findings demonstrating superior antipanic activity of alprazolam (mean=5.7mg/day) and imipramine (mean=155mg/day) compared with placebo, with 70% of both imipramine and alprazolam groups experiencing amelioration of panic compared to 50% for placebo. Significant drug effects were demonstrated for anticipatory anxiety and phobia. As in the phase 1 study, most of alprazolam's beneficial effects were witnessed in the first and second weeks; imipramine, however, took four weeks or more to exert antipanic action. The main criticism of the Cross-National Study, forwarded by Marks et al (47), was that the high level (approximately 30%) of placebo dropouts due to inefficient treatment may have confounded the analysis of the endpoint data.
>
> In addition to the CNCPS, several trials have conclusively established alprazolam's efficacy in the acute and long term treatment of panic (48-52,21). Almost all studies found alprazolam to be superior to placebo in treating phobic avoidance, reducing anticipatory anxiety, and lessening overall disability. Further, comparator studies of alprazolam and imipramine found the two medications comparable in efficacy for panic attacks, phobias, Hamilton anxiety, CGI and disability. These studies have additionally revealed alprazolam to be uniformly better tolerated than imipramine, with a quicker onset of therapeutic effect.


https://pmc.ncbi.nlm.nih.gov/articles/PMC1076453/


> Clonazepam was found to be superior to placebo in 2 placebo-controlled studies.35,36 In a 9-week study,35 74% of patients treated with 1 mg/day of clonazepam (administered b.i.d. after up-titration during 3 days) and 56% of placebo-treated patients were completely free of panic attacks at the study endpoint.

I'm not sure whether it's you or the author making the claim that they don't prevent panic attacks, but the above is a small sample of the evidence base showing they're strongly effective in that regard, which only deepens our chagrin that prescribing them can lead to significant harm in the long run.

I have ADHD, and also happen to be a psychiatry resident. 

As far as I can tell, it has been nothing but negative in my personal experience. It is a handicap, one I can overcome with coping mechanisms and medication, but I struggle to think of any positive impact on my life. 

For a while, there were evopsych theories postulating that ADHD had an adaptive benefit, but evopsych is a shaky field at the best of times, and no clear benefit was ever demonstrated.

https://pubmed.ncbi.nlm.nih.gov/32451437/

>All analyses performed support the presence of long-standing selective pressures acting against ADHD-associated alleles until recent times. Overall, our results are compatible with the mismatch theory for ADHD but suggest a much older time frame for the evolution of ADHD-associated alleles compared to previous hypotheses. 

The ancient ancestral environment probably didn't reward strong executive function and consistent planning as heavily as agricultural societies did. Even so, the study found that the prevalence of ADHD-associated alleles was declining even in Palaeolithic times, so it wasn't something selected for in hunter-gatherers either!

I hate having ADHD, and sincerely hope my kids don't. I'm glad I've had a reasonably successful life despite having it. 

>Safety is limited to refusals, notably including refusals for medical or legal advice. Have they deliberately restricted those abilities to avoid lawsuits or to limit public perceptions of expertise being overtaken rapidly by AI? 

I think it's been well over a year since I've had an issue getting an LLM to give me medical advice, including GPT-4o and other SOTA models like Claude 3.5/3.7, Grok 3 and Gemini 2.0 Pro. I seem to recall that the original GPT-4 would occasionally refuse, but could be coaxed into it.

I am a doctor, and I tend to include that information either in model memory or in a prompt (mostly to encourage the LLM to assume background knowledge and ability to interpret facts). Even without it, my impression is that most models simply append a "consult a human doctor" boilerplate disclaimer instead of refusing. 

I would be rather annoyed if GPT-4.5 were a reversion in that regard, as I find LLMs immensely useful for quick checks on topics I'm personally unfamiliar with (and while hallucinations happen, they're quite rare now, especially with search, reasoning and grounding). I don't think OAI or other AI companies have faced any significant litigation from either people who received bad advice or doctors afraid of losing their jobs.

I'm curious about whether anyone has had any issues in that regard, though I'd expect not. 

I'd wear a suit more often if dry-cleaning weren't such a hassle. Hmm... I should check if machine-washable suits are a thing.


At least in the UK, suits have become a rarity among medical professionals. You do see some consultants wear them, but they're treated as strictly optional, and nobody will complain if you show up in just a shirt and chinos. I'm keeping my suits neatly folded for the next conference I need to attend; I've got no other excuse to wear them that warrants the hassle, IMO.

I did suspect that if helpfulness and harmlessness could generalize out of distribution, then maliciousness could too. That being said, I didn't expect Nazi leanings to be a side-effect of finetuning on malicious code!

>Pregnant woman goes into labor at 22 weeks, hospital tells her she has no hope, she drives 7 miles to another hospital she finds on facebook and now she has a healthy four year old. Comments have a lot of other ‘the doctors told us our child would never survive, but then we got a second opinion and they did anyway’ stories. 

At 22 weeks, premature delivery without intensive support has a survival rate of about 0%.

A study analyzing data from 2020 to 2022 across 636 U.S. hospitals reported that among infants born at 22 weeks who received postnatal life support, 35.4% survived to hospital discharge. However, survival without severe complications was notably lower, at 6.3%. 

https://pubmed.ncbi.nlm.nih.gov/39323403/

>Conclusions: Survival ranged from 24.9% at 22 weeks to 82.1% at 25 weeks, with low proportions of infants surviving without complications, prolonged lengths of hospital stay, and frequent technology dependence at all gestational ages. 

When it comes to complications, "severe" is no exaggeration. Long-term cognitive impairment occurs in the vast majority of cases, and it is crippling more often than not.

I think it's ill-advised to pick this particular case as an example of doctors giving poor or inadequate advice. It's entirely possible that the hospital didn't have the facilities for the level of intensive care that a pre-term delivery at 22 weeks demands.

The woman and her daughter were enormously lucky. I'm not an OB-GYN, but if I were in their shoes I would strongly counsel against attempting delivery and resuscitation. Of course, I respect patient autonomy enough that I would have gone ahead if the patient truly understood the risks involved, but without the benefit of hindsight I wouldn't think it was in the best interest of the child.

Who knows how long regulatory inertia might last? I agree it'll probably add at least a few years to my employability, past the date when an AI can diagnose, plan and prescribe better than I can. It might not be something to rely on, if you end up with a regime where a single doctor rubber-stamps hundreds of decisions in place of what a dozen doctors did before. There's not that much difference between 90% and 100% unemployment!

Evidence that adult cognition can be improved is heartening. I'd always had a small amount of fear about being "locked in" to my current level of intelligence, with no meaningful scope for improvement. Long ago, in a more naive age, the worry was the prospect of children being enhanced to leave their parents in the dust. Now, it looks like AI is improving faster than our biotechnology is.

It's always a pleasure to read deep dives into genetic engineering, and this one was uniquely informative, though that's to be expected from GeneSmith. 

Thank you for your insight. Out of idle curiosity, I tried putting your last query into Gemini 2 Flash Thinking Experimental and it told me yes first-shot.

Here's the final output; it's absolutely beyond my ability to evaluate, so I'm curious whether you think it went about it correctly. I can also share the full CoT if you'd like, but it's lengthy:

https://ibb.co/album/rx5Dy1

(Image since even copying the markdown renders it ugly here)

I happen to be a doctor with an interest in LW and associated concerns, one who discovered a love for ML far too late to reskill and embrace it.

My younger cousin is a mathematician currently doing an integrated Masters and PhD. About a year back, I'd been trying to demonstrate to him the ever-increasing capability of SOTA LLMs at maths, and asked him to pose questions they couldn't trivially answer.

He chose "is the one-point compactification of a Hausdorff space itself Hausdorff?".

At the time, all the models invariably insisted that the answer was no. I ran the prompt multiple times on the best models available then. My cousin said this was incorrect, and proceeded to sketch out a proof (which was quite simple once I understood that much of the jargon represented rather simple ideas at their core).
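
(For the curious: what follows isn't my cousin's actual sketch, just my own hedged reconstruction of the standard fact, assuming "one-point compactification" means the Alexandroff construction. Whether the answer then reads as "yes" or "no" turns on whether one builds Hausdorffness into the definition of a compactification, since the construction itself only yields a Hausdorff result for locally compact spaces.)

```latex
% Alexandroff one-point compactification X^* = X \cup \{\infty\}:
% the open sets of X^* are the open sets of X, together with sets of
% the form \{\infty\} \cup (X \setminus K) for K \subseteq X compact
% and closed.
\[
  X^{*} \text{ is Hausdorff} \iff X \text{ is Hausdorff and locally compact.}
\]
% Sketch (=>): if disjoint open sets separate a point x \in X from \infty,
% the one containing \infty has the form \{\infty\} \cup (X \setminus K)
% for some compact K, so the one containing x lies inside K, giving x a
% neighbourhood with compact closure.
% Sketch (<=): given a compact neighbourhood K of x, the pair int(K) and
% \{\infty\} \cup (X \setminus K) separates x from \infty.
```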

I ran into him again when we were both visiting home, and I decided to run the same question through the latest models to gauge their improvement.

I tried Gemini 1206, Gemini Flash Thinking Experimental, Claude 3.5 Sonnet (New) and GPT-4o.

Other than reinforcing the fact that AI companies have abysmal naming schemes, the results surprised me: almost all of them gave the correct answer. The exception was Claude, which was hampered by Anthropic being cheapskates and turning on concise responses mode.

I showed him how the extended reasoning worked for Gemini Flash (unlike o1, it doesn't hide its thinking tokens), and I could tell that he was shocked and impressed; he couldn't fault the reasoning process it and the other models went through.

To further shake him up, I had him find some recent homework problems he'd been assigned on his course (he's in a top-3 maths program in India) and used Gemini's inherent multimodality to just take a picture of an extended question and ask it to solve it.* It did so, again, flawlessly.

*So I wouldn't have to go through the headache of reproducing it in LaTeX or markdown.

He then demanded we try another, and this time he expressed doubts that the model could handle a problem that was compact, yet vague in the absence of context that wasn't presented to it. No surprises again.

He admitted that this was the first time he took my concerns seriously, though he got a rib in by saying doctors would be off the job market before mathematicians. I conjectured that was unlikely, given that maths and CS performance are more immediately beneficial to AI companies, as they're easier to drop in and automate, while also having direct benefits for ML, with the goal of replacing human programmers and having the models recursively self-improve. Not to mention that performance in those domains is easier to make superhuman with the use of RL and automated theorem provers for ground truth. Oh well, I reassured him, we're probably all screwed, and in short order, to the point where there's not much benefit in quibbling over whose layoffs come a few months later.
