Julian Bradshaw

Comments

After an inter-party power-struggle, the CCP commits to the perpetual existence of at least one billion Han Chinese people with biological reproductive freedom

You know, this isn't such a bad idea - that is, an explicit government commitment against discarding its existing, economically unproductive populace. It's easier to ask for today rather than later.

Hypothetically this is more valuable in autocracies than in democracies, where the 1 person = 1 vote rule keeps political power in the hands of the people, but I think I'd support adding a constitutional amendment in the United States that offered some further guarantee. 

Obviously those in power could ignore the guarantees later, but in this scenario we're dealing with basically aligned AIs, which may enforce laws and constitutions better than your average dictator or president would.

It's unclear exactly what kind of product GPT-5 will be, but according to OpenAI's Chief Product Officer today, it's not merely a router between GPT-4.5 and o3.

swyx
appreciate the update!!

in gpt5, are gpt* and o* still separate models under the hood and you are making a model router? or are they going to be unified in some more substantive way?


Kevin Weil
Unified 👍

Here's a fun related hypothetical. Let's say you're a mid-career software engineer making $250k TC right now. In a world with no AI progress you plausibly have $5m+ career earnings still coming. In a world with AGI, maybe <$1m. Would you take a deal where you sell all your future earnings for, say, $2.5m right now?

(me: no, but I might consider selling a portion of future earnings in such a deal as a hedge)

Is there any way to make this kind of trade? Arguably a mortgage is kind of like this, but you have to pay it back unless the government steps in when everyone loses their jobs...
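For what it's worth, the breakeven works out like this (ignoring discounting, taxes, and risk aversion, and treating the $5m/$1m figures above as point estimates):

```python
# Back-of-envelope on the buyout: when does $2.5m up front beat expected
# future earnings? Uses the two scenarios above ($5m+ without AGI, <$1m with)
# as point estimates and a single probability p of the AGI scenario.
no_agi = 5_000_000
agi = 1_000_000
buyout = 2_500_000

for p in (0.25, 0.5, 0.625, 0.75):
    expected = p * agi + (1 - p) * no_agi
    verdict = "buyout wins" if buyout > expected else "keep the earnings"
    print(f"P(AGI) = {p:.3f}: expected earnings ${expected:,.0f} -> {verdict}")

# Breakeven: buyout = p*agi + (1-p)*no_agi  =>  p = (no_agi - buyout)/(no_agi - agi)
print(f"Breakeven at P(AGI) = {(no_agi - buyout) / (no_agi - agi):.3f}")  # 0.625
```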

You're right that there's nuance here. The scaling laws involved mean exponential investment -> roughly linear improvement in capability, so yes, progress naturally slows down unless you go crazy on investment... and we are, in fact, going crazy on investment. GPT-3 is pre-ChatGPT, pre-current paradigm, and GPT-4 is nearly so. So ultimately I'm not sure it makes much sense to compare the GPT-1 through GPT-4 timelines to now. I just wanted to note that we're not off-trend there.
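To illustrate the shape of that relationship (the numbers here are made up; the point is only that each fixed step in capability costs a constant multiple of compute):

```python
import math

# Toy illustration: if capability tracks log(compute), then each fixed
# capability step costs a constant *multiplier* on compute/investment.
# The FLOP figures and 10x ladder are made up for illustration.
def capability(compute_flops):
    return math.log10(compute_flops)

for flops in [1e21, 1e22, 1e23, 1e24]:
    print(f"{flops:.0e} FLOPs -> capability score {capability(flops):.1f}")
# Output steps by +1.0 each line: 10x more compute per equal-sized gain.
```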

soon when we were racing through GPT-2, GPT-3, to GPT-4. We just aren't in that situation anymore

I don't think this is right.

GPT-1: 11 June 2018
GPT-2: 14 February 2019 (248 days later)
GPT-3: 28 May 2020 (469 days later)
GPT-4: 14 March 2023 (1,020 days later)

Basically, the wait until the next model doubled every time. By that pattern, GPT-5 ought to arrive around October 2028, but Altman said today it'll be out within months. (And frankly, I think o1 qualifies as a sufficiently improved successor model, and that was released December 5, 2024, or really September 12, 2024 if you count o1-preview; either way, a shorter gap than GPT-3 to GPT-4.)
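As a sketch, taking "the pattern" to be a straight doubling of the most recent gap (a naive extrapolation, obviously):

```python
from datetime import date, timedelta

releases = {
    "GPT-1": date(2018, 6, 11),
    "GPT-2": date(2019, 2, 14),
    "GPT-3": date(2020, 5, 28),
    "GPT-4": date(2023, 3, 14),
}

names = list(releases)
gaps = [(releases[b] - releases[a]).days for a, b in zip(names, names[1:])]
print(gaps)  # [248, 469, 1020] -- each gap roughly double the previous one

# Naive extrapolation: double the most recent gap once more
print(releases["GPT-4"] + timedelta(days=2 * gaps[-1]))  # 2028-10-13
```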

Still-possible good future: there's a fast takeoff to ASI in one lab, contemporary alignment techniques somehow work, that ASI prevents any later unaligned AI from ruining the world, and the ASI provides life and a path for continued growth to humanity (and to shrimp, if you're an EA).


Copium perhaps, and certainly less likely in our race-to-AGI world, but possible. This is something like the “original”, naive plan for AI pre-rationalism, but it might be worth remembering as a possibility?

The only sane version of this I can imagine is one where there's either a single aligned ASI or a coalition of aligned ASIs, and everyone has equal access. Because the AI(s) are aligned, they won't design bioweapons for misanthropes and the like, and hopefully they also won't make all human effort meaningless by just doing everything for us and seizing the lightcone, etc.

It's strange that he doesn't mention DeepSeek-R1-Zero anywhere in that blogpost, which is arguably the most important development DeepSeek announced (reasoning capability trained through pure RL, with no supervised fine-tuning). R1-Zero is what stuck out to me in DeepSeek's papers, and, for example, the ARC Prize team behind the ARC-AGI benchmark says

R1-Zero is significantly more important than R1.

Was R1-Zero already obvious to the big labs, or is Amodei deliberately underemphasizing that part?

I like it. Thanks for sharing.

(spoilers below)

While I recognize that in the story it's assumed alignment succeeds, I'm curious on a couple worldbuilding points.

First, about this stage of AI development:

His work becomes less stressful too — after AIs surpass his coding abilities, he spends most of his time talking to users, trying to understand what problems they’re trying to solve.

The AIs in the story are really good at understanding humans. How does he retain this job when it seems like AIs would do it better? Are AIs just prevented from taking over society from humans through a combination of alignment and some legal enforcement?

Second, by the end of the story, it seems like AIs are out of the picture entirely, except perhaps as human-like members of the hivemind. What happened to them?

 

In other words: I'd like to know what kind of alignment or legal framework you think could get us to this kind of utopia.

 

EDIT: I found this tweet from someone who says they just interviewed Richard Ngo. Full interview isn't out yet, but the tweet says that when asked about ways in which his stories seem unrealistic, Richard Ngo:

wasn't attached to them, nor did he say "these ideas are going to happen" or "these ideas should make you feel like AGI risk isn't a big deal." He juggles with ideas with a light touch, which was cool.

So it would seem that my questions don't have answers. Fair enough, I suppose.

The problem with Dark Forest theory is that, in the absence of FTL detection/communication, it requires a very high density and an absurdly high proportion of hiding civilizations. Without that, expansionary civilizations dominate. The only known civilization, us, is expansionary for reasons that don't seem path-dependent, so it seems unlikely that the preconditions for Dark Forest theory exist.

To explain:

Hiders have limited space and mass-energy to work with. An expansionary civilization, once in its technological phase, can spread to thousands of star systems in mere thousands of years and become unstoppable by hiders. So, hiders need to kill expansionists before that happens. But if they're going to hide in their home system, they can't detect anything faster than light allows! So you need murderous hiding civs within a thousand light years or so of every single habitable planet in the galaxy, all of which need to have evolved before any expansionary civs in the area. This is improbable unless basically every civ is a murderous hider. The fact that the only known civ is not a murderous hider, for generalizable reasons, is thus evidence against the Dark Forest theory.
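To put rough numbers on this (the 0.1c expansion speed and stellar density below are illustrative assumptions, not established figures):

```python
import math

# Rough numbers for why hiders must be close by: a hider d light years away
# needs ~d years to see an expansionist go loud and at least ~d more years
# for any lightspeed-limited strike to arrive, while the expansionist keeps
# spreading the whole time. All figures are illustrative assumptions.
hider_distance_ly = 1000     # distance to the nearest murderous hider
expansion_speed_c = 0.1      # expansion wavefront, as a fraction of c
stars_per_cubic_ly = 0.004   # rough stellar density near the Sun

response_time_yr = 2 * hider_distance_ly              # detect + strike travel
radius_ly = expansion_speed_c * response_time_yr      # expansion in the meantime
systems = (4 / 3) * math.pi * radius_ly**3 * stars_per_cubic_ly

print(f"Strike arrives after ~{response_time_yr:,} years;")
print(f"by then the expansionist spans ~{radius_ly:.0f} ly and ~{systems:,.0f} systems.")
```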

 

Potential objections:

  • Hider civs would send out stealth probes everywhere.

Their reports are still limited by lightspeed; the expansionary civ would become overwhelmingly strong before the probes reported back and any response arrived.

  • Hider civs would send out killer probes everywhere.

If the probes succeed in killing every other civilization before it reaches the stars, you didn't need to hide in the first place. (Also, note that hiding is a failed strategy for everyone else in this scenario: you can't do anything about a killer probe when you're the equivalent of the Han dynasty, or of a dinosaur.) If the probes fail, the civ they failed against will have no reason to hide, having already been discovered, and so will expand and dominate.

  • Hider civs would become so advanced that they could hide indefinitely from expansionary civs, possibly by retreating to another dimension.

Conceivable, but I'd rather be the expansionary civ here?

  • Hider civs would become so advanced that they could kill any later expansionary civ that controlled thousands of star systems.

I think this is the strongest objection. If, for example, a hider civ could send out a few ships that travel at a higher fraction of lightspeed than anything the expansionary civ can field, those ships could detonate stars or something similar, and catching up on that tech would take the expansionary civ millions of years, then just a few ships could track down and obliterate the expansionary civ within thousands or tens of thousands of years and win.

The problem is that the "hider civ evolved substantially earlier" part has to be true everywhere in the galaxy, or else somewhere an expansionary civilization wins and then snowballs on its resource advantages - this comes back to the "very high density and absurdly high proportion of hiding civilizations" requirement. The hiding civs always have to be the older party whenever they meet an expansionary civ, and older to a degree that the expansionary civ's likely advantage of several orders of magnitude in resources and population doesn't counteract the age difference.
