magic9mushroom

Comments

> You're encouraged to write a self-review, exploring how you think about the post today. Do you still endorse it? Have you learned anything new that adds more depth? How might you improve the post? What further work do you think should be done exploring the ideas here?

Still endorse. Learning about SIA/SSA from the comments was interesting. Timeless but not directly useful, testable or actionable.

> There is no war in the run-up to AGI that would derail the project, e.g. by necessitating that most resources be used for capabilities instead of safety research.

> Assuming short timelines, I think it’s likely impossible to reach my desired levels of safety culture.

I feel obliged to note that a nuclear war, by dint of EMPs wiping out the power grid, would likely remove private AI companies as a thing for a while, thus deleting their current culture. It would also lengthen timelines.

Certainly not ideal in its own right, though.

There are a couple of things that are making me really nervous about the idea of donating:

  1. "AI safety" is TTBOMK a broad term and encompasses prosaic alignment as well as governance. I am of the strong opinion that prosaic alignment is a blind alley that's mostly either wasted effort or actively harmful due to producing fake alignment that makes people not abandon neural nets. ~97% of my P(not doom) routes through Butlerian Jihad against neural nets (with or without a nuclear war buying us more time) that lasts long enough to build GOFAI. And frankly, I don't spend that much time on LW, so I've little idea which of these efforts (or others!) gets most of the benefit you claim from the site.
  2. As noted above, I think a substantial chunk of useful futures (though not a vast majority) routes through nuclear war destroying the neural-net sector for a substantial amount of time (via blast wiping out factories, EMPs destroying much of the existing chip stock, destruction of power and communication infrastructure reducing the profitability of AI, economic collapse more broadly, and possibly soft errors). As such, I've been rather concerned for years that the Ratsphere's main IRL presence is in the Bay Area and thus nuke-bait; we want to disproportionately survive that, not die in it. Insofar as Lighthaven is in the Bay Area, I am thus questioning whether its retention is +EV.

> Second, I imagine that such a near-miss would make Demis Hassabis etc. less likely to build and use AGIs in an aggressive pivotal-act-type way. Instead, I think there would be very strong internal and external pressures (employees, government scrutiny, public scrutiny) preventing him and others from doing much of anything with AGIs at all.

I feel I should note that while this does indeed form part of a debunk of the "good guy with an AGI" idea, it is in and of itself a possible reason for hope. After all, if nobody anywhere dares to make AGI, well, then, AGI X-risk isn't going to happen. The trouble is getting the Overton Window to the point where sufficient bloodthirst to actually produce that outcome (i.e. nuclear-armed countries saying "if anyone attempts to build AGI, everyone who cooperated in doing it hangs or gets life without parole, and if any country does not enforce this vigorously we will invade, and if they have nukes or have a bigger army than us then we pre-emptively nuke them because their retaliation is still higher-EV than letting them finish") is seen as something other than insanity, which a warning shot could well pull off.

This is not a permanent solution - questions of eventual societal relaxation aside, humanity cannot expand past K2 without the Jihad breaking down unless FTL is a thing - but it buys a lot of breathing time, which is the key missing ingredient you note in a lot of these plans.

I've got to admit, I look at most of these and say "you're treating the social discomfort as something immutable to be routed around, rather than something to be fixed by establishing different norms". Forgive me, but it strikes me (especially in this kind of community with high aspie proportion) that it's probably easier to tutor the... insufficiently-assertive... in how to stand up for themselves in Ask Culture than it is to tutor the aspies in how to not set everything on fire in Guess Culture.

Amusingly, "rare earths" are actually concentrated in the crust compared to universal abundance and thus would make awful candidates for asteroid mining, while "tellurium", literally named after the Earth, is an atmophile/siderophile element with extreme depletion in the crust and one of the best candidates.

It strikes me that I'm not sure whether I'd prefer to lose $20,000 or have my jaw broken. I'm pretty sure I'd rather have my jaw broken than lose $200,000, though. So, especially in the case that the money cannot actually be recovered from the thief, I would tend to think the $200,000 theft should be punished more harshly than the jaw-breaking. And, sure, you've said that the $20,000 theft would be punished more harshly than the jaw-breaking, but that's plausibly just because 2 days is too long for a $100 theft to begin with.

I mean, most moral theories give one of three answers here: "zero", "as large as can be fed", or "a bit less than as large as can be fed". Given the potential to scale feeding in the future, the latter two round off to "infinity".

I think the basic assumed argument here (though I'm not sure where or even if I've seen it explicitly laid out) goes essentially like this:

  • Using neural nets is more like the immune system's "generate everything and filter out what doesn't work" than it is like normal coding or construction. And there are limits on how much you can tamper with this, because the whole point of neural nets is that humans don't know how to directly write code that performs as well as a trained net - if we knew how to write such code deliberately, we wouldn't need neural nets in the first place.
  • You hopefully have part of that filter designed to filter out misalignment. Presumably we agree that if you don't have this, you are going to have a bad time.
  • This means that two things will get through your filter: golden-BB false negatives (misaligned models in exactly the configurations that fool all your checks), and truly aligned AIs, which are what you want.
  • But both corrigibility and perfect sovereign alignment are highly rare (corrigibility because it's instrumentally anti-convergent, and perfect sovereign alignment because value is fragile), which means that your filter for misalignment is competing against that rarity to determine what comes out.
  • If P(golden-BB false negative) << P(alignment), all is well.
  • But if P(golden-BB false negative) >> P(alignment) despite your best efforts, then you just get golden-BB false negatives. Sure, they're highly weird, but they're still less weird than what you're looking for, and so you wind up creating them reliably when you try hard enough to get something that passes your filter. (See the toy calculation after this list.)
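
To make that last comparison concrete, here's a toy Bayes calculation. The base rates are illustrative assumptions of mine, not numbers from the original argument:

```python
# Toy Bayes calculation: what fraction of filter-passing models is actually
# aligned, under made-up base rates?
p_aligned = 1e-9                 # assumed prior that training yields an aligned model
p_pass_given_aligned = 0.99      # aligned models almost always pass the filter
p_pass_given_misaligned = 1e-6   # misaligned "golden-BB" models that fool every check

def p_aligned_given_pass():
    """Bayes' rule: P(aligned | passes the filter)."""
    aligned_and_pass = p_aligned * p_pass_given_aligned
    misaligned_and_pass = (1 - p_aligned) * p_pass_given_misaligned
    return aligned_and_pass / (aligned_and_pass + misaligned_and_pass)

# Because P(golden-BB false negative) >> P(aligned) here, almost everything
# that passes is a false negative, even though the filter works "as designed".
print(f"P(aligned | passes filter) = {p_aligned_given_pass():.2%}")  # ~0.10%
```

Flip which of the two small probabilities is larger and almost everything that passes is aligned; the whole disagreement is over which regime we're actually in.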

The earliness of life's appearance on Earth isn't amazingly consistent with that appearance being a filter-break. It suggests either that abiogenesis is relatively easy or that panspermia is easy (as I noted, in the latter case abiogenesis could be as hard as you like, but that doesn't explain the Great Silence).

Frankly, it's premature to be certain it's "abiogenesis rare, no panspermia" before we've even got a close look at Earthlike exoplanets.
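
As a rough illustration of why earliness matters, here's a toy exponential-waiting-time model. The window length, onset date, and rates are all made-up assumptions for illustration, not a serious estimate:

```python
import math

# Toy model: how much does life appearing within ~0.6 Gyr of Earth becoming
# habitable favour "abiogenesis is easy" over "abiogenesis is hard"?
habitable_window_gyr = 5.0   # assumed length of Earth's habitable window
observed_onset_gyr = 0.6     # assumed latest date for life's first appearance

def p_life_by(t_gyr, rate_per_gyr):
    """P(abiogenesis has occurred by time t) under a constant Poisson rate."""
    return 1.0 - math.exp(-rate_per_gyr * t_gyr)

easy_rate = 10.0                         # "easy": many expected events per window
hard_rate = 0.01 / habitable_window_gyr  # "hard": ~1% chance over the whole window

def p_early_given_life(rate_per_gyr):
    """Conditioning on life appearing at all (we exist to observe it),
    how likely is an onset this early?"""
    return (p_life_by(observed_onset_gyr, rate_per_gyr)
            / p_life_by(habitable_window_gyr, rate_per_gyr))

likelihood_ratio = p_early_given_life(easy_rate) / p_early_given_life(hard_rate)
print(f"Likelihood ratio favouring 'easy' from the early onset: {likelihood_ratio:.1f}")  # ~8
```

A single-digit likelihood ratio is suggestive rather than decisive, and the same calculation can't distinguish easy abiogenesis from easy panspermia - either one makes an early onset unsurprising.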
