Maximizing paperclips is the de facto machine ethics / AI alignment meme. I showcase some practical problems with Nick Bostrom's paperclip-maximizer thought experiment and posit that if we really tried to maximize paperclips in the universe, we would have to sacrifice utility measurements to do so.
Let's Start Making Paperclips
How do we actually maximize paperclips? Ought we make tiny nano-scale paperclips or large planet-sized paperclips? Do we need to increase the number of paperclips in the universe or can we simply increase the paperclip-ness of the universe instead? Assuming the same amount of mass gets converted to paperclips either way, which way of producing paperclips is best?
To be very clear, I don't want to...