davekasten


I think there's at least one missing one, "You wake up one morning and find out that a private equity firm has bought up a company everyone knows the name of, fired 90% of the workers, and says they can replace them with AI."

This essay earns a read for the line, "It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met" alone.

I would amplify to suggest that while many things are outside the Overton Window, policymakers are also aware of the concept of slowly moving the Overton Window, and if you explicitly admit you're doing that, they're usually on board (see, e.g., the conservative legal movement, the renewable energy movement, etc.). It's mostly only when you don't realize that's what you're proposing that you trigger a dismissive response.

Ok, so it seems clear that we are, for better or worse, likely going to try to get AGI to do our alignment homework. 

Who has thought through all the other homework we might give AGI that is as good an idea, assuming a model that isn't an instant game-over for us? E.g., I remember @Buck rattling off a list of other ideas that he had in his The Curve talk, but I feel like I haven't seen the list of, e.g., "here are all the ways I would like to run an automated counterintelligence sweep of my organization" ideas.

(Yes, obviously, if the AI is sneakily misaligned, you're just dead because it will trick you into firing all your researchers, etc.; this is written in a "playing to your outs" mentality, not an "I endorse this as a good plan" mentality.)

Huh? "Fighting election misinformation" is not a sentence on this page as far as I can tell. And if you click through to the election page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.

Without commenting on any strategic astronomy and neurology, it is worth noting that "bias", at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).  

I am not a fan, but it is worth noting that these are the issues that many politicians bring up already, if they're unfamiliar with the more catastrophic risks. The only one missing there is job loss. So while this choice by OpenAI sucks, it sort of usefully represents a social fact about the policy waters they swim in.

I am (sincerely!) glad that this is obvious to other people too and that they are talking about it already!

I mean, the literal best way to incentivize @Ricki Heicklen and me to do this again for LessOnline and Manifest 2025 is to create a prediction market on it, so I encourage you to do that.

One point that maybe someone's made, but I haven't run across recently: if you want to turn AI development into a Manhattan Project, you will by default face some real delays from the reorganization of private efforts into one big national effort. In a close race, you might actually see pressures not to do so, because you don't want to give up six months to a year on reorg drama -- so in some possible worlds, the Project is actually a deceleration move in the short term, even if it accelerates things in the long term!
