eg

It's way too late for the kind of top-down capabilities regulation Yudkowsky and Bostrom fantasized about; Earth just doesn't have the global infrastructure. I see no benefit to public alarm; EA already has plenty of funding.

We achieve marginal impact by figuring out concrete prosaic plans for friendly AI and doing outreach to leading AI labs and researchers about them. Make the plans obviously good ideas and they will probably be persuasive. Push for common-knowledge windfall agreements so that the upside is shared and race dynamics are minimized.
