This section says:

L-4: The Lizard trusts their instincts and does that which creates or captures power.

However, I thought the description above said:

The key issue with the lion and pandemic definitions is treating Level 4 as if it has motivations and does things for logical reasons. One can think strategically about level 4 implications but those operating on Level 4 mostly not only don’t do this, they have lost the ability to do so.

How can we unify the idea that Level 4 "trusts their instincts and does that which creates or captures power" with Level 4 not having motivations and not doing things for logical reasons? It seems the essence might be closer to: "can't think strategically or logically about how an action serves its motivations."

I wonder why the line doesn't instead go from the bottom of the ellipse to the top.

I think that would give you a line that predicts x given y, rather than y given x.
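
For concreteness, here's a minimal simulation sketch (Python, assuming bivariate normal data with unit variances and correlation r; the setup is illustrative, not from the original post) showing that the two regressions give different lines:

```python
# For bivariate normal data with unit variances and correlation r:
# - regressing y on x gives slope ~ r (the usual least-squares line);
# - regressing x on y, redrawn in the same y-vs-x plot, gives slope ~ 1/r.
# The second, steeper line is the one that passes through the topmost and
# bottommost points of the scatter's ellipse.
import numpy as np

rng = np.random.default_rng(0)
r = 0.6
x, y = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=100_000).T

slope_y_on_x = np.polyfit(x, y, 1)[0]      # predicts y given x
slope_x_on_y = 1 / np.polyfit(y, x, 1)[0]  # predicts x given y, as dy/dx

print(f"y-on-x slope: {slope_y_on_x:.2f} (theory: r = {r})")
print(f"x-on-y slope: {slope_x_on_y:.2f} (theory: 1/r = {1 / r:.2f})")
```

The y-on-x line is shallower than the ellipse's long axis (that's regression to the mean); the bottom-to-top line is the steeper x-on-y one, which minimizes horizontal rather than vertical errors.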

Hm, looking at your table (https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long?curius=1279#The_Table), are you saying that LLMs (GPT-3, Chinchilla) are more general in their capabilities than a cat brain or a lizard brain?

At the brain level I'd agree, but at the organism level I'm less sure. Today's LLMs may indeed be more general than a cat brain, but I'm not sure they're more general than the cat as a whole. The cat (or lizard) has an entire repertoire of adaptive features built into the rest of the organism's physiology, not just the brain. Prof. Michael Levin has a great talk on this topic; the first 2-3 minutes give a good overview.

I'm not sure whether we should evaluate generality by what the brain itself can do (where, I agree, the LLM is more general) or by what the whole organism can do (where, I think, the cat and the lizard can potentially do more). The biological and cellular machinery is a whole lot more adaptive under the hood.

And I suppose even at the whole-organism level it's sort of a tough call which one's more general!

AIs are a totally normal part of lawmaking, e.g. laws are drafted and proofread by lawbots who find mistakes and suggest updates.

I love the idea of AI being a part of lawmaking!

Hopefully it would be done to humanity's advantage (i.e., helping us find Pareto improvements that make life better for everyone, helping us solve tragedy-of-the-commons problems, etc.). But there are negative possibilities too, of course.

Any opinions on how to explicitly enable more of the good side of this possibility?