I have a flexible schedule so I wake up naturally almost every day, but my sleep length still has massive variance. Even though I have over a thousand nights of data, I still have no clue how long my sleep cycles last.
Here is a histogram of my time spent in bed:[1]
My average is 535 minutes, but only 40% of my nights fall within the 60-minute window from 505 to 565 minutes (i.e., within 30 minutes of the average).
Given the theory about how sleep cycles work, I'd expect to see a multimodal histogram with peaks every ~90 minutes. But instead the histogram is unimodal (a rough way to check this is sketched below), so I don't know what to do with this information.
[1] I'm reporting time in bed rather than time asleep because, first, my phone isn't very good at knowing when I fall asleep, and second, I think time in bed is more relevant: I can't control when I fall asleep, but I can control when I go to bed.
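For anyone who wants to run the same check on their own data, here's a minimal sketch (not my actual analysis script). It assumes a plain text file of nightly time-in-bed values in minutes; the file name, the 505–565 window, and the KDE-based peak count are all just illustrative choices.

```python
# Rough sketch: given nightly time-in-bed values (minutes), report the share
# of nights inside a 60-minute window around the mean, and count the peaks of
# a kernel density estimate as a crude multimodality check.
# "time_in_bed_minutes.txt" is a hypothetical file with one number per line.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

minutes = np.loadtxt("time_in_bed_minutes.txt")

print(f"mean: {minutes.mean():.0f} min over {len(minutes)} nights")
in_window = np.mean((minutes >= 505) & (minutes <= 565))
print(f"share of nights in [505, 565]: {in_window:.0%}")

# Smooth the histogram with a KDE and count local maxima. A single peak is
# consistent with a unimodal distribution; several peaks spaced ~90 minutes
# apart would be what the sleep-cycle story predicts.
grid = np.linspace(minutes.min(), minutes.max(), 500)
density = gaussian_kde(minutes)(grid)
peaks = grid[argrelextrema(density, np.greater)[0]]
print("KDE peak locations (minutes):", np.round(peaks))
```

The peak count is sensitive to the KDE bandwidth, so treat it as a sanity check rather than a formal test; a dip test or mixture-model comparison would be the more rigorous version.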
In my experience, if I look at the Twitter account of someone I respect, there's a 70–90% chance that Twitter turns them into a sort of Mr. Hyde self who's angrier, less thoughtful, and generally much worse epistemically. I've noticed this tendency in myself as well: historically I tried pretty hard to avoid writing bad tweets and to avoid reading low-quality Twitter accounts, but I don't think I succeeded, and recently I gave up and just blocked Twitter using LeechBlock.
I'm sad about this because I think Twitter could be really good, and there's a lot of good stuff on it, but there's too much bad stuff.
So like, 1 in 1000? 1 in 10,000? Smaller?
What is your subjective probability that shrimps can experience suffering?
My probability is pretty low, but I still like SWP (the Shrimp Welfare Project), so either we disagree on just how low the probability is, or we disagree on something else.
All of them, or just these two?
There were just four reasons, right? Your three numbered items, plus "effectful wise action is more difficult than effectful unwise action, and requires more ideas / thought / reflection, relatively speaking; and because generally humans want to do good things". I think that quotation was the strongest argument. As for numbered item #1, I don't know why you believe it, but it doesn't seem clearly false to me, either.
For example, the leaders of AGI capabilities research would be smarter--which is bad in that they make progress faster, but good in that they can consider arguments about X-risk better.
This mechanism seems weak to me. For example, I think the leaders of all AI companies are considerably smarter than me, but I am still doing a better job than they are of reasoning about x-risk. It seems unlikely that making them even smarter would help.
(All else equal, you're more likely to arrive at correct positions if you're smarter, but I think the effect is weak.)
Another example: it's harder to give even a plausible justification for plunging into AGI if you already have a new wave of super smart people making much faster scientific progress in general, e.g. solving diseases.
If enhanced humans could make scientific progress at the same rate as ASI, then ASI would also pose much less of an x-risk, because it couldn't reliably outsmart humans. (Although it would still have the advantage that it can replicate and self-modify.) Realistically, I don't think there is any level of genetic modification at which humans could match the pace of ASI.
That all isn't necessarily to say that human intelligence enhancement is a bad idea; I just didn't find the given reasons convincing.
That shouldn't matter too much for the stock price, right? If Google is currently rolling out its own TPUs, then the long-term expectation should be that Nvidia won't be making significant revenue off of Google's AI.
This caught me off guard:
all of which one notes typically run on three distinct sets of chips (Nvidia for GPT-5, Amazon Trainium for Anthropic and Google TPUs for Gemini)
I was previously under the impression that ~all AI models ran on Nvidia, and that this was (probably) a big part of why Nvidia now has the largest market cap in the world. If only one of the three biggest models runs on Nvidia, that's a massive bear signal relative to what I believed two minutes ago. And it looks like the market isn't pricing this in at all, unless I'm missing something.
(I assume I'm missing something because markets are usually pretty good at pricing things in.)
I think this community often fails to appropriately engage with and combat this argument.
What do you think that looks like? To me, it looks like "give object-level arguments for AI x-risk that don't depend on what AI company CEOs say." And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).
I try to minimize external variables.
The biggest variables I can think of are