Nicholas / Heather Kross

Theoretical AI alignment (and relevant upskilling) in my free time. My current view of the field is here (part 1) and here (part 2).

Genderfluid (differs on an hour/day-ish timescale). It's not a multiple-personality thing.

/nickai/

Comments

Something else I just realized: Georgism is a leftish idea that incorporates some (but not all) of the leftish ideas I've discussed or referenced above, and its modern form is currently rationalist-adjacent. Progress!

Ah, sorry, yeah, I think it was a mistake on my part to make the post mostly a verbatim Discord reply. Lots of high-context stuff that I didn't explain well.

This specific part (in my usage/interpretation; if you click the link, the initial context was an Emmett Shear tweet) is basically shorthand for one or more "basic" leftist views, along the lines of these similar-but-somewhat-distinct claims:

  • Capitalism more-reliably rewards power-maximizers than social-utility-maximizers.
  • Under capitalism and similar incentive-structures, we'd expect conflict theory to predict entities' wealth better than mistake theory.
  • General outcomes, under capitalism and similar incentive-structures, are downstream of "brute power" (from guns to monopolies) far more than of the things we'd "want" to reward (innovation, good service, helping people, etc.).

I said "one of the best movies about", not "one of the best movies showing you how to".

The punchline is "alignment could productively use more funding". Many of us already know that, but I felt like putting a mildly-opinionated spin on what kinds of things, at the margin, might help top researchers. (Also, I spent several minutes editing/hedging the joke.)

Virgin 2030s [sic] MIRI fellow:
- is cared for so they can focus on research
- has staff to do their laundry
- soyboy who doesn't know *real* struggle
- 3 LDT-level alignment breakthroughs per week

CHAD 2010s Yudkowsky:
- founded a whole movement to support himself
- "IN A CAVE, WITH A BOX OF SCRAPS"
- walked uphill both ways to the Lightcone offices
- alpha who knows *real* struggle
- 1 LDT-level alignment breakthrough per decade

Kinda; my current mainline doom case is "some AI gets controlled → powerful people use it to prop themselves up → the world gets worse until an AI goes uncontrollably bad → doom". A different yet also-important doom case would be "perpetual low-grade-AI dictatorship, where the AI is controlled by humans in a surveillance state".

EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".

For more details on (the business side of) a potential AI crash, see recent articles from the blog Where's Your Ed At, which published the sorta-well-known post "The Man Who Killed Google Search".

For his AI-crash posts, start here and here, then follow the links to his other posts. Sadly, the author falls into the trap of "LLMs will never get to reasoning because they don't, like, know stuff, man", but luckily his core competencies (the business side, analyzing reporting) show why an AI crash could still very much happen.

Further context on the Scott Adams thing lol: He claims to have taken hypnosis lessons decades ago and has referred to using it multiple times. His, uh, personality also seems to me like it'd be more susceptible to hypnosis than average (and even he'd probably admit this in a roundabout way).
