All of Zachary's Comments + Replies

Zachary55

Collins not Gladwell lmao

Zachary30

If you're trying to make the most of your abundant free time, then learning Quenya is a mistake. Learning a language nobody seriously speaks is at best a way to signal status to a very specific group of people, and at worst a party trick.

Some ideas of ways to spend that time that would pay higher dividends over the course of the rest of your life:

  • learn a musical instrument
  • become skilled at a competitive game (mtg, poker, rocket league, chess, etc)
  • practice yoga
  • personally I've found practicing yoga to be the best thing I can do for making me feel good in my body
... (read more)
Zachary41

Hey Zvi. Love and appreciate your writing. I've been an avid reader since the covid posts. I know it's difficult since your posts are so long, but this one and others could use a proofreading pass for typos. I regret that I didn't write down any particular instances, but there were a number in this post.

That sort of thing doesn't usually bother me, but your writing is precise and high-entropy to the point where a single mistaken word can make the thought much harder to digest. For example, in the sentence:

If this changes the rule from ‘you can build a house bu

... (read more)
3NoSignalNoNoise
I think it's intentional. He's saying that reducing the fee from $23k to $230 would be an improvement.
Zachary40

It's very possible that Murati's talk at Dartmouth was my source's source, i.e. the embedded video around 13:30. She doesn't say GPT-5 specifically but does sort of imply that by mentioning the jump from GPT-3 to GPT-4, then says "And then in the next couple of years we're looking at PhD-level intelligence for specific tasks...Yeah, a year and a half let's say"

 

Zachary20

I have moderately strong evidence that OpenAI has pushed back GPT-5 to late 2025 (not naming source for confidentiality reasons). Conditional on this being true:

  1. What do you think the most likely explanation is as to why it's being delayed?
  2. How would this affect your AI timelines?
  3. What impact would you expect this news to have on AI relevant stocks?
1Decaeneus
1. Things slow down when Ilya isn't there to YOLO in the right direction in an otherwise very high-dimensional space.
6Vladimir_Nesov
Since they said they are training the next frontier model now (which was on May 28), probably tests on an intermediate checkpoint indicate it's not worthy of the moniker "GPT-5" (which was hyped as a significant advance), and late 2025 is the plan for deployment of an even bigger model that's scaled one step further than that. So this is evidence that the late-2024/early-2025 deployment model finishing training now will be called something else, like GPT-4.5. This also agrees with the recent iterative-deployment buzz; a sudden GPT-5 worthy of the name would be discordant with it. (If the currently training model was turning out too strong instead, other labs would also be approaching similarly powerful models, in which case having plans to delay to a specific distant date but not further seems strange. If training was getting unstable late into the training run, it might be too early to call a specific delay of at least half a year relative to prior plans.)
2p.b.
Mira Murati said publicly that "next gen models" will come out in 18 months, so your confidential source seems likely to be correct.

Strong upvoted. This post (especially as it relates to ask/guess culture) puts into words what I've previously referred to vaguely as "spiritual differences". I'm hopeful that I can train myself to recognize mismatched stances and pivot instead of concluding that someone else and I have incompatible personalities.

Zachary109

The speed with which GPT-4 was hooked up to the internet via plugins has basically convinced me that boxing isn't a realistic strategy. The economic incentive to unbox an AI is massive. Combine that with the fact that an ASI would do everything it could to appear safe enough to be granted internet access, and I just don't see a world in which everyone cooperates to keep it boxed.

Zachary170

I wanted to say thank you for this post - I'm 26 years old, and up until last year I'd been afflicted with back pain that would set in after standing still for an extended period of time. I think it started after doing hang-cleans with bad form when I was in high school. Over the years, the amount of time it took for the pain to appear got shorter and shorter, and the pain grew more and more intense, to the point where I would be uncomfortable and unable to enjoy myself after standing for more than ~45 minutes.

Knowing that I would eventually be in pain if I... (read more)

6Steven Byrnes
Thank you for sharing, I'm so happy to have helped!!!!
Zachary10-2

Personally I don't like the feeling of having a wet asshole

Honestly, smoking weed has always helped me with this because it has a way of forcing those issues I'm ignoring to the surface

We don’t know exactly how a self-aware AI would act, but we know this: it will strive to prevent its own shutdown. No matter what the AI’s goals are, it wouldn’t be able to achieve them if it gets turned off. The only surefire way for it to prevent its shutdown would be to eliminate the ones with the power to shut it down: humans. There is currently no known method to teach an AI to care about humans. Solving this problem may take decades, and we are running out of time.

1trevor
Shutdown points are really important. It could probably fit well into all of my entries, since they target executives and policymakers who will mentally beeline to "off-switch". But it's also really hard to do it right concisely, because that brings an anthropomorphic god-like entity to mind, which rapidly triggers the absurdity heuristic. And the whole thing with "wanting to turn itself off but turning off the wrong way or doing damage in the process" is really hard to keep concise.