Late to the party, I know, but I just got my copies and I'm really looking forward to reading them! I do miss the tiny form factor of "Maps that Reflect the Territory", though; those books felt elegant in a way the more recent ones don't.
I hope we get to see grades for these comments from at least EY and PC.
~1 hour's thoughts, by a total amateur. It doesn't feel complete, but it's what I could come up with before I reached the point where nothing new occurred to me without >5 minutes' thought. Calibrate accordingly: if your list isn't significantly better than this, take some serious pause before working on anything AI-related.
Most of the discussion I've seen around AGI alignment is on adequately, competently solving the alignment problem before we get AGI. The consensus in the air seems to be that those odds are extremely low.
What concrete work is being done on dumb, probably-inadequate stop-gaps and time-buying strategies? Is there a gap here that could usefully be filled by 50-90th percentile folks?
Examples of the kind of strategies I mean:
I'm sure there are many others, but I hope this gets across the idea: stuff with obvious, disastrous failure modes that might nonetheless shift us towards survival in some possible universes, if by no other mechanism than buying time for 99th-percentile alignment folk to figure out better solutions. Actually succeeding with this level of solution seems like piling up sandbags to hold back a rising tide, which doesn't work at all (except sometimes it does).
Is this stuff low-hanging fruit, or are people plucking it already? Are any of these counterproductive?
Hey, sorry for taking so long to reply - last I checked, it was a few hundred dollars to sequence exome-only (that is, only the DNA that actually codes for protein) and about $1-1.5k for a whole genome - but that was a couple of years ago, and I'm not sure how much cheaper it is now.
To clear up a possible confusion around microarrays, SNP sequencing, and GWAS - microarrays are also used to directly measure gene expression (as opposed to trait expression) by extracting mRNA from a tissue sample and hybridizing it against a library of known RNA sequences for different genes. This uses the same technology as microarray-based GWAS, but for a different purpose (gene expression vs. genomic variation), with different material (mRNA vs. amplified genomic DNA), and with different analysis math.
Also, there's less and less reason to use microarrays for anything. It's cheap enough to just sequence a whole genome now that I'm pretty sure newer studies just use whole-genome sequencing. For scale, the lab I worked in during undergrad (a mid-sized lab at a medium-sized liberal arts college, running on a few hundred thousand dollars a year) was already transitioning from microarray gene expression data to whole-transcriptome sequencing back in 2014. There's a lot of historical microarray data out there that I'm sure researchers will still be reanalyzing for years, but high-throughput sequencing is the present and future of genomics.
~2 hours' of analysis here: https://github.com/sclamons/LW_Quest_Analysis, notebook directly viewable at https://nbviewer.jupyter.org/github/sclamons/LW_Quest_Analysis/blob/main/lw_dnd.ipynb.
Quick takeaways:
1) From simple visualizations, it doesn't look like there are correlations between stats, either in the aggregated population or within the hero and failed-hero populations separately.
2) I decided to base my stat increases on which improvements would add the most probability of success, looking at each stat in isolation; success probabilities were estimated by simply tabulating the fraction of students with each particular stat value who ended up heroes (see the sketch after this list).
3) Based on that measure, I decided to go with +4 Cha, +1 Str, +2 Wis, +3 Con, and I wish I could reduce my Dex.
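For anyone curious what 2) looks like concretely, here's a minimal sketch of the tabulation, assuming a dataframe with one column per stat plus a boolean "hero" outcome column; the filename, column names, and example starting stats are hypothetical placeholders, not necessarily what the notebook actually uses.

```python
# Minimal sketch: estimate, for each stat, how much a +1 would change the
# empirically observed hero rate. All names below are placeholders.
import pandas as pd

df = pd.read_csv("students.csv")  # hypothetical file of past students
stats = ["Str", "Dex", "Con", "Int", "Wis", "Cha"]

# Takeaway 1: eyeball correlations between stats.
print(df[stats].corr())

def hero_rate_by_value(df, stat):
    """Fraction of students at each value of `stat` who ended up heroes."""
    return df.groupby(stat)["hero"].mean()

# My (made-up) starting stats; estimate the marginal gain from +1 in each.
my_stats = {"Str": 8, "Dex": 14, "Con": 12, "Int": 10, "Wis": 11, "Cha": 13}
for stat in stats:
    rates = hero_rate_by_value(df, stat)
    cur, nxt = my_stats[stat], my_stats[stat] + 1
    if cur in rates.index and nxt in rates.index:
        print(f"{stat}: +1 changes estimated hero rate by {rates[nxt] - rates[cur]:+.3f}")
```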
Small nitpick, half a decade late: bottlecaps are arguably proportional controllers—the pressure they exert on the inside is proportional to the pressure applied by the inside, until the bottlecap hits a performance limit and breaks.
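To make that analogy concrete, a toy sketch (arbitrary numbers, purely illustrative, not a model of any real cap): the cap pushes back proportionally to the internal pressure until a failure threshold, after which it breaks and exerts nothing.

```python
# Toy model of the bottlecap-as-proportional-controller analogy.
def bottlecap_response(internal_pressure, gain=1.0, break_limit=100.0):
    """Restoring pressure from the cap: proportional up to a failure limit."""
    if internal_pressure > break_limit:
        return 0.0  # cap has failed and no longer pushes back
    return gain * internal_pressure  # proportional regime

for p in (10, 50, 99, 150):
    print(p, bottlecap_response(p))
```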