One unresolved problem in ethics is that aggregate consequentialist ethical theories tend to break down if the universe is infinite. An infinite universe could contain both an infinite amount of good and an infinite amount of bad. If so, you are unable to change the total amount of good or...
I would like to propose an idea for aligning AI. First, I will provide some motivation for it. Suppose you are a programmer who's having a really hard time implementing a function in a computer program you're developing. Most of the code is fine, but there's this one function that...
Here, I provide a new definition of "optimizer" and better explain a previous one. The definition I posted earlier was somewhat wrong; it also didn't help that I accidentally wrote it down incorrectly, and only days later...
Here I propose that creating AIs that perform human mimicry would yield capabilities and results similar to those of an AI built with iterated amplification. However, it may provide a greater degree of flexibility than any hard-coded iterated amplification scheme, which might make it preferable. I don't know if...
I've been thinking about how to define "optimizer". My attempted definition of "optimizer" is: "something such that there is a method of describing a change to it that concretely describes a system that scores unusually highly on another function, for a wide range of functions, in a way that's significantly...