MichaelDickens

Comments (sorted by newest)
Master your sleep cycles
MichaelDickens · 8h

I try to minimize external variables.

  • I sleep with the window open (to keep my room cool), but I also use earplugs + a fan, and it's rare that outside noise wakes me up.
  • I usually don't eat any food within 3–4 hours of going to sleep, to avoid digestion potentially disrupting sleep.
  • I rarely drink alcohol (twice a year maybe).
  • I live alone, no kids/partner.
  • My stress levels vary but it's rare for stress to literally keep me up at night.
  • I take 300mcg melatonin an hour before bed and I use a blue light filter on my computer. (It would be even better to stop using my computer an hour or two before bed, but I don't do that.)

The biggest variables I can think of are:

  • My bedtime varies from 9pm to 11pm or so.
  • The sun rising usually wakes me up (I have blackout curtains, but the sun wakes me up anyway), although this isn't a complete explanation, because my wakeup time varies by more than sunrise time does.
  • I have caffeine 4 days a week (M/W/F/Sa), always in the early morning. This may affect sleep quality.
Master your sleep cycles
MichaelDickens · 2d

I have a flexible schedule so I wake up naturally almost every day, but my sleep length still has massive variance. Even though I have over a thousand nights of data, I still have no clue how long my sleep cycles last.

Here is a histogram of my time spent in bed:[1]

My average is 535 minutes, but only 40% of my nights fall within the 60-minute window from 505 to 565 minutes.

Given the theory about how sleep cycles work, I'd expect to see a multimodal histogram with peaks every ~90 minutes. But instead the histogram is unimodal. So I don't know what to do with this information.

[1] I'm reporting time in bed rather than time asleep because, first, my phone isn't very good at detecting when I fall asleep, and second, I think time in bed is more relevant: I can't control when I fall asleep, but I can control when I go to bed.
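As a concrete illustration, here is a minimal sketch (in Python, with randomly generated placeholder data rather than my real log) of the kind of check described above: compute the average, the share of nights in the 505–565 minute window, and a histogram that should show peaks roughly 90 minutes apart if sleep length really came in discrete cycles.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: ~1000 nights of time in bed, in minutes.
# (Randomly generated stand-in, not the real log from my phone.)
rng = np.random.default_rng(0)
time_in_bed = rng.normal(loc=535, scale=55, size=1000)

mean_minutes = time_in_bed.mean()
share_in_window = np.mean((time_in_bed >= 505) & (time_in_bed <= 565))
print(f"average: {mean_minutes:.0f} min; share in 505-565 min window: {share_in_window:.0%}")

# If sleep came in discrete ~90-minute cycles, this histogram should show
# peaks roughly 90 minutes apart; a single smooth bump suggests otherwise.
plt.hist(time_in_bed, bins=40)
plt.xlabel("time in bed (minutes)")
plt.ylabel("number of nights")
plt.show()
```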

Shortform
MichaelDickens · 4d

In my experience, if I look at the Twitter account of someone I respect, there's a 70–90% chance that Twitter turns them into a sort of Mr. Hyde self who's angrier, less thoughtful, and generally much worse epistemically. I've noticed this tendency in myself as well; historically I tried pretty hard to avoid writing bad tweets and to avoid reading low-quality Twitter accounts, but I don't think I succeeded, and recently I gave up and just blocked Twitter using LeechBlock.

I'm sad about this because I think Twitter could be really good, and there's a lot of good stuff on it, but there's too much bad stuff.

Contra Shrimp Welfare.
MichaelDickens · 5d

So like, 1 in 1000? 1 in 10,000? Smaller?

Contra Shrimp Welfare.
MichaelDickens · 5d

What is your subjective probability that shrimps can experience suffering?

My probability is pretty low, but I still like SWP (the Shrimp Welfare Project), so either we disagree on just how low the probability is, or we disagree on something else.
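As a toy illustration of where the disagreement might sit, here is a small expected-value sketch; every number in it is a hypothetical placeholder, not anyone's actual estimate. The point is just that a low probability of sentience gets multiplied by a very large number of shrimp affected per dollar, so the expected value per dollar can still come out high.

```python
# Toy expected-value sketch. All numbers are hypothetical placeholders.
p_sentience = 0.05               # a "pretty low" subjective probability that shrimp can suffer
shrimp_helped_per_dollar = 1500  # hypothetical scale of a cheap welfare intervention
welfare_gain_if_sentient = 1.0   # hypothetical units of suffering averted per shrimp, if sentient

expected_welfare_per_dollar = p_sentience * shrimp_helped_per_dollar * welfare_gain_if_sentient
print(expected_welfare_per_dollar)  # 75.0 hypothetical welfare units per dollar
```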

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
MichaelDickens · 7d

> All of them, or just these two?

There were just four reasons, right? Your three numbered items, plus "effectful wise action is more difficult than effectful unwise action, and requires more ideas / thought / reflection, relatively speaking; and because generally humans want to do good things". I think that quotation was the strongest argument. As for numbered item #1, I don't know why you believe it, but it doesn't seem clearly false to me, either.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
MichaelDickens · 8d

> For example, the leaders of AGI capabilities research would be smarter--which is bad in that they make progress faster, but good in that they can consider arguments about X-risk better.

This mechanism seems weak to me. For example, I think the leaders of all AI companies are considerably smarter than me, but I am still doing a better job than they are of reasoning about x-risk. It seems unlikely that making them even smarter would help.

(All else equal, you're more likely to arrive at correct positions if you're smarter, but I think the effect is weak.)

> Another example: it's harder to give even a plausible justification for plunging into AGI if you already have a new wave of super smart people making much faster scientific progress in general, e.g. solving diseases.

If enhanced humans could make scientific progress at the same rate as ASI, then ASI would also pose much less of an x-risk because it can't reliably outsmart humans. (Although it still has the advantage that it can replicate and self-modify.) Realistically I do not think there is any level of genetic modification at which humans can match the pace of ASI.

That all isn't necessarily to say that human intelligence enhancement is a bad idea; I just didn't find the given reasons convincing.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
MichaelDickens · 8d

That shouldn't matter too much for the stock price, right? If Google is currently rolling out its own TPUs, then the long-term expectation should be that Nvidia won't be making significant revenue off of Google's AI.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
MichaelDickens · 8d

This caught me off guard:

> all of which one notes typically run on three distinct sets of chips (Nvidia for GPT-5, Amazon Trainium for Anthropic and Google TPUs for Gemini)

I was previously under the impression that ~all AI models ran on Nvidia, and that was (probably) a big part of why Nvidia now has the largest market cap in the world. If only one out of three of the biggest models uses Nvidia, that's a massive bear signal relative to what I believed two minutes ago. And it looks like the market isn't pricing this in at all, unless I'm missing something.

(I assume I'm missing something because markets are usually pretty good at pricing things in.)

chanamessinger's Shortform
MichaelDickens · 8d

> I think this community often fails to appropriately engage with and combat this argument.

What do you think that looks like? To me, that looks like "give object-level arguments for AI x-risk that don't depend on what AI company CEOs say." And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).

Posts

  • Outlive: A Critical Review (2mo, 64 karma, 4 comments)
  • How concerned are you about a fast takeoff due to a leap in hardware usage? [Question] (3mo, 9 karma, 7 comments)
  • Why would AI companies use human-level AI to do alignment research? (5mo, 24 karma, 8 comments)
  • What AI safety plans are there? (5mo, 16 karma, 3 comments)
  • Retroactive If-Then Commitments (8mo, 7 karma, 0 comments)
  • A "slow takeoff" might still look fast (3y, 5 karma, 3 comments)
  • How much should I update on the fact that my dentist is named Dennis? [Question] (3y, 2 karma, 3 comments)
  • Why does gradient descent always work on neural networks? [Question] (3y, 15 karma, 11 comments)
  • MichaelDickens's Shortform (4y, 2 karma, 133 comments)
  • How can we increase the frequency of rare insights? (4y, 19 karma, 10 comments)