
Dolores1984 comments on Brief Question about FAI approaches - Less Wrong Discussion

3 Post author: Dolores1984 19 September 2012 06:05AM




Comment author: Dolores1984 20 September 2012 05:58:38AM, -1 points

By bounded, I simply meant that all reported utilities are normalized to a universal range before being summed. Put another way, every person has a finite, equal fraction of the machine's utility to distribute among possible future universes. This is entirely to avoid utility monsters. It's basically a vote, and they can split it up however they like.
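The normalization described above might be sketched as follows. This is a minimal illustration, not the author's specification: the function name, the shift-to-zero step, and the equal budget of 1.0 per person are all assumptions made for the example.

```python
def aggregate(reported, futures):
    """Sum per-person utilities after normalizing each person's
    reports to a fixed, equal budget (1.0) across possible futures."""
    totals = {f: 0.0 for f in futures}
    for person, utils in reported.items():
        # Shift so each person's least-favored future scores zero,
        # then scale so their reports sum to 1 -- one equal "vote"
        # per person, however they choose to split it.
        lo = min(utils[f] for f in futures)
        shifted = {f: utils[f] - lo for f in futures}
        mass = sum(shifted.values())
        for f in futures:
            # If someone is indifferent, spread their vote evenly.
            share = shifted[f] / mass if mass else 1.0 / len(futures)
            totals[f] += share
    return totals
```

Under this scheme a would-be utility monster reporting astronomically large numbers still contributes exactly one unit of vote mass, the same as everyone else.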

Also, the reflexive consistency criterion should probably be applied even to people who don't exist yet. We don't want plans that rely on creating new people and then turning them into happy monsters, even if doing so doesn't impact the utility of people who already exist. So, modify the reflexive utility criterion to say that in order for positive utility to be reported from a model, all past versions of that model (to some grain) must agree that the final version is a valid continuation of themselves.
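The modified criterion could be sketched roughly like this. The function names, the representation of a model as a history of versions, and the `approves` predicate are all hypothetical stand-ins for whatever approval-judgement machinery actually gets implemented:

```python
def reported_utility(model_history, raw_utility, approves):
    """Report raw_utility only if every past version of the model
    approves the final version as a valid continuation of itself;
    otherwise clamp positive utility to zero."""
    final = model_history[-1]
    if raw_utility > 0:
        # Positive utility requires unanimous sign-off from all
        # earlier versions (to whatever grain the models support).
        if not all(approves(past, final) for past in model_history[:-1]):
            return 0.0
    # Negative utility passes through unchanged: vetoing approval
    # should block happy-monster plans, not hide harms.
    return raw_utility
```

Note that only positive utility is gated: a plan that mangles someone into a "happy monster" gets no credit for the monster's reported happiness, but any disutility still counts against the plan.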

I'll need to think harder about how to actually implement the approval judgements. It really depends on how detailed the models we're working with are (e.g., whether they're capable of realizing that they are a model). I'll give it more thought and get back to you.

Comment author: Mitchell_Porter 20 September 2012 07:24:10AM, 1 point

how to actually implement the approval judgements

This matters more for initial conditions. A mature "FAI" might be like a cross between an operating system, a decision theory, and a meme, present wherever sufficiently advanced cognition occurs; more like a pervasive culture than a centralized agent. Everyone would have a bit of BAUM in their own thought process.