
Thank you, @mesaoptimizer, for the summary! 🙏

  • Optimization power is the source of the danger, not agency. Agents merely wield optimization power to achieve their goals.
  • Agency is orthogonal to optimization power.

@All: It seems we agree that optimality, when pursued blindly, amounts to extreme optimization that can lead to dangerous outcomes.

Could it be that we are overlooking the potential for a (superintelligent) system to prioritize what matters more (the effectiveness of a decision) rather than simply optimizing for a single goal? 🤔

For example, optimizing too hard for a single goal (getting the most paperclips) might overlook ethical or long-term considerations that contribute to the greater good of all beings.

Final question:
Under what circumstances might you prefer a (superintelligent) system to reject the paperclip request and suggest alternative solutions, or seek to understand the requester’s underlying needs and motivations?

I would love to hear additional comments or feedback on when to prioritize effectiveness, as I am still trying to understand decision-making better. 🤗