I wonder why the line doesn’t instead go from the bottom of the ellipse to the top.
I think that would give you a line that predicts x given y, rather than y given x.
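A minimal sketch of this point (data and parameter values are hypothetical, chosen just for illustration): for an elliptical cloud of correlated points, the regression of y on x and the regression of x on y produce different lines, which only coincide when the correlation is perfect.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(scale=0.8, size=10_000)  # imperfectly correlated

# Slope of the y-given-x line (the usual left-to-right fit through the ellipse)
slope_y_on_x = np.polyfit(x, y, 1)[0]

# Slope of the x-given-y line (bottom-to-top), re-expressed in y-vs-x coordinates
slope_x_on_y = 1 / np.polyfit(y, x, 1)[0]

# y-on-x slope = r * (sd_y / sd_x); x-on-y slope = (1 / r) * (sd_y / sd_x).
# With |r| < 1 the bottom-to-top line is always the steeper of the two.
print(slope_y_on_x, slope_x_on_y)
```

Here the y-on-x slope comes out near 0.5 while the x-on-y slope is much steeper, so which line you draw depends on which variable you are predicting from which.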
Hm, if I look in your table (https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long?curius=1279#The_Table), are you saying that LLMs (GPT-3, Chinchilla) are more general in their capabilities than a cat brain or a lizard brain?
At the brain-level I'd agree, but at the organism level I'm less sure. Today's LLMs may indeed be more general than a cat brain. But I'm not sure they're more general than the cat as a whole. The cat (or lizard) has an entire repertoire of adaptive features built into th...
AIs are a totally normal part of lawmaking, e.g. laws are drafted and proofread by lawbots who find mistakes and suggest updates.
I love that AI would be a part of lawmaking!
Hopefully it would be done to humanity's advantage (i.e. helping us find Pareto improvements that make life better for everyone, helping us solve tragedy-of-the-commons problems, etc.). But there are negative possibilities too, of course.
Any opinions on how to explicitly enable more of the good side of this possibility?
This section says:
However, I thought the description above said:
How can we reconcile that level 4 is "trusting their instincts and does that which creates or captures power", while ...