Max D'Ambrosio

Comments

I’ve been wondering: regarding the kinds of policy changes by governments and legitimate businesses intended to stall AI development… won’t such policies simply ensure that illegitimate actors eventually develop AI outside the law, without states and better-intentioned actors being able to counter them? Or is there no such thing as a sufficiently large AI data centre that can be kept secret? I would imagine, at the very least, that some states have the resources to keep such secrets effectively, though maybe not for long.

By turning to secrecy, doesn’t the AI race shift from being one of “who’s the fastest at achieving foom” to “who’s the best at keeping secrets for longer”? If so, does that shift actually advantage the people who are better at achieving friendly AI? It seems obvious that the endgame is to understand friendliness in general before actualizing strong AI, so that someone can achieve the former before the latter.

Perhaps better minds than mine have concluded that secrecy will actually lend more momentum to friendliness while effectively delaying progress towards AI foom, but given that progress still seems possible in secrecy, I don’t understand how. Hopefully I’m not jeopardizing such a use of secrecy just by saying this, but if that were possible, then lots of other people are capable of jeopardizing it anyway, and it’s better to draw attention to it.