owngrove

I think "alignment/capabilities > 1" is a closer heuristic than "alignment/capabilities > average", in the sense of '[fraction of remaining alignment this solves] / [fraction of remaining capabilities this solves]'. That's a sufficient condition if all research does it, though not IRL e.g. given pure capabilities research also exists; but I think it's still a necessary condition for something to be net helpful.
Seconding all of this.
Another way to state your second point: the only way to exploit that free energy may be through something that looks a lot like a 'pivotal act'. And on your third point: there may be no acceptable way to exploit that free energy, in which case the only option is to prevent any equally capable unaligned AI from existing: not necessarily through a pivotal act, but Eliezer argues that's the only practical way to do so.
I think the existence/accessibility of these kinds of free energy (offense-favored domains whose exploitation is outside the Overton window or catastrophic) is a key crux for 'pivotal act' vs. gradual risk... (read more)
One reason you might do something like "writing up a list but not publishing it" is if you perceive yourself to be in a mostly-learning mode rather than a mostly-contributing one. You don't want to dilute the discussion with thoughts that don't have a particularly good chance of adding anything, and you don't want to be written off, in a sticky way, as someone not worth listening to; but you do want to write something down to develop your understanding, check it against future developments, and record anything that might turn out to have value later after all, once you understand better.
Of course, this isn't necessarily an optimal or good strategy, and people... (read more)
I think the debate really does need to center on specific pivotal outcomes, rather than on how those outcomes come about. The sets of pivotal outcomes attainable by pivotal acts vs. by pivotal processes seem rather different.
I suspect your key crux with pivotal-act advocates is whether there actually exist any pivotal outcomes that are plausibly attainable by pivotal processes. Any advantages that more distributed pivotal transitions have in the abstract are moot if there are no good concrete instantiations.
For example, in the stereotypical pivotal act, the pivotal outcome is that no (other) actors possess the hardware to build an AGI. It's clear how this world state is safe from AGI, and how a... (read more)
Happened upon this song on Tiny Desk: Paperclip Maximizer (by Rosie Tucker, from an album titled "Utopia Now!").