Scott Alexander

Comments

I disagreed with Gwern at first. I'm increasingly forced to admit there's something like bipolar going on here, but I still think we're also missing something - his cognitive state seems pretty steady month to month, rather than episodes of mania alternating with lucidity.

Someone claimed the latest Musk biography said he was much more normal early in the morning, and much crazier late at night. I need to read the biography and see if that's actually in there; if so, maybe there could be a case for ultradian or ultra-rapid-cycling or something. This could potentially just look like a random mix of good and bad decisions depending on when in the cycle he's making them, with the cycle itself too fast to notice on the scale of news reports. As you say, presumably something about that changed the past few years (I've never heard anyone discuss what happens to ultradian bipolar if you simply never sleep, but I bet it's nothing good).

Anatoly Karlin, who apparently also read the biography, says that Musk's father Errol also went crazy after fifty - see https://x.com/powerfultakes/status/1892003738929238408 . One excerpt:

"I don't know how he went from being great at engineering to believing in witchcraft", [Elon told the biographer about his father]. Errol can be very forceful and occasionally convincing. "He changes reality around him," [Elon's brother] Kimbal  says. "He will literally make up things, but he actually believes his own false reality."

I can't think of a form of bipolar which consistently gets much worse at age 50, but I hope to look into this further.

Does this imply that fewer safety people should quit leading labs to protest poor safety policies?

Questions for people who know more:

  1. Am I understanding right that inference-time compute scaling is useful for coding, math, and other things that are machine-checkable, but not for writing, basic science, and other things that aren't machine-checkable? Will it ever have implications for those things?
  2. Am I understanding right that this is all just clever ways of having it come up with many different answers or subanswers or preanswers, then picking the good ones to expand upon (see the sketch after this list)? Why should this be good for eg proving difficult math theorems, where many humans using many different approaches have failed? It doesn't seem like it can be as simple as trying a hundred times, or even trying a hundred different strategies.
  3. What do people mean when they say that o1 and o3 have "opened up new scaling laws" and that inference-time compute will be really exciting? Doesn't "scaling inference compute" just mean "spending more money and waiting longer on each prompt"? Why do we expect this to scale? Does inference compute scaling mean that o3 will use ten supercomputers for one hour per prompt, o4 will use a hundred supercomputers for ten hours per prompt, and o5 will use a thousand supercomputers for a hundred hours per prompt? Since they already have all the supercomputers (for training scaling), why does it take time and progress to get to the higher inference-compute levels? What is o3 doing that you couldn't do by running o1 on more computers for longer?
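
For concreteness, here's a minimal sketch of the "generate many candidates, keep the best one" loop I'm gesturing at in question 2. The model and verifier functions here are hypothetical placeholders, not anything OpenAI has published:

```python
import random

def generate_candidate(prompt):
    # Hypothetical stand-in for sampling one chain of thought from a model.
    return f"candidate answer {random.random():.3f}"

def verifier_score(candidate):
    # Hypothetical stand-in for a machine-checkable verifier
    # (unit tests for code, a proof checker for math, etc.).
    return random.random()

def best_of_n(prompt, n=100):
    # "Scaling inference compute" in its simplest form: spend n times
    # the compute on one prompt and keep whichever sample scores best.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=verifier_score)

print(best_of_n("Prove this theorem...", n=100))
```

If that's roughly all that's going on, it sharpens question 3: what does o3 add that you couldn't get by running this loop with a bigger n?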

I looked into this and got some useful information. Enough people asked me to keep their comments semi-confidential that I'm not going to post everything publicly, but if someone has a reason to want to know more, they can email me. I haven't paid any attention to this situation since early 2022 and can't speak to anything that's happened since then.

My overall impression is that the vague stereotype everyone has is accurate - Michael is pretty culty, has a circle of followers who do a lot of psychedelics and discuss things about trauma in altered states, and many of those people have had pretty bad psychotic breaks. 

But I wasn't able to find any direct causal link between Michael and the psychotic breaks - people in this group sometimes had breaks before encountering him, or after knowing him for long enough that the break didn't seem triggered by meeting him, or in response to obvious life events. I think there's more reverse causation (mentally fragile people who are interested in psychedelics join, or get targeted for recruitment into, his group) than direct causation (he convinces people to take psychedelics and drives them insane), though I do think there's some minor direct causation in a few cases.

I retraced the same argument about Olivia that people are having here. Yes, she likes manipulating people and claiming that she's driven them insane (it's unclear how effective she actually is or whether she just takes credit, but I would still avoid her), and she briefly hung out with Michael in 2017 and often says that Michael inspired her to do this. But Michael denies continued affiliation with her, and she hasn't been part of his inner circle of followers since the late 2010s (if she ever was). The few conversation logs I got failed to really back up any continuing connection between them, and I think she's more likely doing it on her own and sort of piggybacking on his reputation.

I continue to recommend that everybody just stay away from this entire scene and group of people.

Thanks for this perspective.

The therapy paradigm you describe here (going to a clinic to receive Spravato) is, as you point out, difficult and bureaucratic.

Through a regulatory loophole, there's another pathway where you can get ketamine sent to your house with less bureaucracy. https://www.mindbloom.com/ is the main provider I know of. They're very expensive, but in theory this could be done cheaply, and maybe other providers are doing it - I don't know. If you have a cooperative psychiatrist, you can ask whether they know about this version and are willing to prescribe it.

As you point out, ketamine lasts a few weeks and then some people will crash back to their previous level of depression. If I am able to successfully treat a patient with ketamine, I usually recommend they continue it for six months, just like any other antidepressant. A cooperative doctor can do this by sending the prescription to a cooperative compounding pharmacy. I don't know if Mindbloom or other companies provide this service by default. Obviously this is easier when you're doing the version in your house than when you have to go to a clinic each time.

I've written more of my thoughts about ketamine at https://lorienpsych.com/2021/11/02/ketamine/

Who is the wealthy person?

But it's also relevant that we're not asking the superintelligence to grant a random wish; we're asking it for the right to keep something we already have. This seems more easily granted than the random wish, since it doesn't imply he has to give random amounts of money to everyone.

My preferred analogy would be:

You founded a company that was making $77/year. Bernard launched a hostile takeover, took over the company, then expanded it to make $170 billion/year. You ask him to keep paying you the $77/year as a pension, so that you don't starve to death.

This seems like a very sympathetic request, such that I expect the real, human Bernard would grant it. I agree this doesn't necessarily generalize to superintelligences, but that's Zack's point - Eliezer should choose a different example.

Thanks, this is interesting.

My understanding is that cavities form because the very local pH on that particular sub-part of the tooth is below 5.5. IIUC teeth can't get cancer. Are you imagining the Lumina colonies on the gums having this effect there, or the Lumina colonies on the teeth affecting the general oral environment (which I think would require more calculation than just comparing to the hyper-local cavity environment), or am I misunderstanding something?

Thanks, this is very interesting.

One thing I don't understand: you write that a major problem with viruses is:

As one might expect, the immune system is not a big fan of viruses. So when you deliver DNA for a gene editor with an AAV, the viral proteins often trigger an adaptive immune response. This means that when you next try to deliver a payload with the same AAV, antibodies created during the first dose will bind to and destroy most of them.

Is this a problem for people who expect to only want one genetic modification during their lifetime?

I agree with everyone else pointing out that centrally-planned guaranteed payments regardless of final outcome don't sound like a good price discovery mechanism for insurance. You might be able to hack together a better one using https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet , although I can't figure out an exact mechanism.

Superforecasters say the risk of AI apocalypse before 2100 is 0.38%. If we assume whatever price mechanism we come up with tracks that, and value the world at GWP x 20 (this ignores the value of human life, so it's a vast underestimate), and that AI companies pay it in 77 equal yearly installments from now until 2100, that's about $100 billion/year. But this seems so Pascalian as to be almost cheating. Anybody whose actions have a >1/25 million chance of destroying the world would owe $1 million a year in insurance (maybe this is fair and I just have bad intuitions about how high 1/25 million really is).
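
For anyone checking the arithmetic, here's the rough calculation. The GWP figure of roughly $100 trillion is my assumption, not part of the original numbers, but it's what makes them come out to about $100 billion/year:

```python
gwp = 100e12            # assumed gross world product, roughly $100 trillion
world_value = 20 * gwp  # valuing the world at GWP x 20
p_doom = 0.0038         # superforecasters' 0.38% risk of AI apocalypse before 2100
years = 77              # equal yearly installments from now until 2100

yearly_premium = p_doom * world_value / years
print(f"${yearly_premium / 1e9:.0f} billion/year")   # ~$99 billion/year

p_tiny = 1 / 25e6       # a 1-in-25-million chance of destroying the world
print(f"${p_tiny * world_value / years / 1e6:.1f} million/year")  # ~$1.0 million/year
```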

An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture. Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).

This seems like such a big loophole as to make the plan almost worthless. Suppose OpenAI said "If we create superintelligence, we're going to keep 10% of the universe for ourselves and give humanity the other 90%" (this doesn't seem too unfair to me, and the exact numbers don't matter for the argument). It seems like instead of paying insurance, they can say "Okay, fine, we get 9% and you get 91%", and this would be in some sense a fair trade (one percent of the cosmic endowment is worth much more than $100 billion!). But this also feels like OpenAI moving some numbers around on an extremely hypothetical ledger, not changing anything in real life, and continuing to threaten the world just as much as before.

But if you don't allow a maneuver like this, it seems like you might ban (through impossible-to-afford insurance) some action that has a 0.38% chance of destroying the world and a 99% chance of creating a perfect utopia forever.

There are probably economic mechanisms that solve all these problems, but this insurance proposal seems underspecified.
