It's important to note that the only person who can determine how much a situation satisfies their values is that person. I think a lot of alignment proposals carry a subtle underlying assumption that someone (or some set of someones) is going to be making decisions for everyone else. I think a world where that is the case has already been lost. I'm not sure to what extent these suggestions are examples of that, but they feel similar to me.
Btw, your vision of utopia on your website sounds roughly identical to mine, which I've been thinking about for roughly ten years now. I am bad at writing in an organized way (and I constantly improve my understanding of things), so I've never written it up, but it might be nice for you and me to talk. In particular, I've lately been thinking deeply about the ethics of consent and the need for counterfactual people (particularly the unborn and the dead) to have enforced rights, and I consider the structures you call "nonperson forces" (I'd call them egregores) to possibly have moral rights as well. I would really love to be able to think about all this stuff with you.
This is definitely interesting. I have some specific issues I'll ask about if you haven't addressed them yourself in a few months.
(this post was written for the first Refine blog post day, at the end of a week of readings, discussions, and exercises about epistemology for doing good conceptual research)
this is the follow-up to the Insulated Goal-Program idea, in which i suggest doing alignment by giving an AI a program to run as its ultimate goal, the running of which would hopefully realize our values. in this post, i talk about what pieces of software could be used to put together an appropriate goal-program, as well as some examples of plans built out of them.
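to make the shape of the idea concrete, here is a minimal sketch in Python; every name in it (`WorldState`, `simulate_world`, `evaluate_outcome`, `goal_program`) is a hypothetical placeholder, not something specified by the original idea. the point is only that the goal-program is an ordinary, inspectable program assembled from smaller pieces:

```python
# a toy illustration of an "insulated goal-program": an ordinary,
# deterministic program which the AI's single goal is to run to completion.
# every name below is a hypothetical placeholder, not part of the proposal.

from dataclasses import dataclass


@dataclass(frozen=True)
class WorldState:
    # stand-in for whatever representation the goal-program computes over
    data: bytes


def simulate_world(state: WorldState, steps: int) -> WorldState:
    # placeholder piece: advance some fixed, fully specified computation
    for _ in range(steps):
        state = WorldState(data=state.data)  # no-op stand-in step
    return state


def evaluate_outcome(state: WorldState) -> float:
    # placeholder piece: score the final state against an encoded criterion
    return float(len(state.data))


def goal_program(initial: WorldState) -> float:
    # the whole goal-program: a pure function assembled from smaller,
    # individually testable pieces
    final = simulate_world(initial, steps=1000)
    return evaluate_outcome(final)
```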
here are some naive examples of outlines for goal-programs which seem like they could be okay:
these feel like we could be getting somewhere in terms of figuring out actual goal-programs that could lead to valuable outcomes; at the very least, it seems like a valuable avenue of investigation. in addition, unlike an AGI, many pieces of the goal-program can be individually tested, iterated on, etc. in the usual engineering fashion.
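for instance, continuing the hypothetical sketch above, a piece like `evaluate_outcome` can get ordinary unit tests, entirely separately from any AI:

```python
# ordinary unit tests for the hypothetical pieces above; unlike an AGI,
# each piece of the goal-program can be checked in isolation.

def test_evaluate_outcome_is_deterministic():
    state = WorldState(data=b"example")
    assert evaluate_outcome(state) == evaluate_outcome(state)


def test_goal_program_runs_to_completion():
    result = goal_program(WorldState(data=b"seed"))
    assert isinstance(result, float)
```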