
selylindi comments on Futarchy and Unfriendly AI - Less Wrong Discussion

9 Post author: jkaufman 03 April 2015 09:45PM




Comment author: selylindi 16 April 2015 09:38:44PM 0 points [-]

> There's no room for human feedback between setting the values and implementing the optimal strategy.

Here and elsewhere I've advocated* that, rather than relying on Hanson's idea of objectively verifiable target values such as GDP, futarchy would do better to add human feedback at the stage where it is decided whether the goals were met. Whoever proposed the goal would make that determination after the prediction deadline expired, and could thus respond to any improper optimizing by refusing to declare the goal "met" even if it technically was.

[ * You can definitely do better than the ideas on that blog post, of course.]
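The difference between the two resolution rules can be sketched in a few lines of code. This is purely an illustrative model (the `Goal` class, its fields, and the scenario are all assumptions for the sketch, not any existing futarchy implementation): under Hanson-style resolution the market pays out based on the objective metric alone, whereas under the proposal above the proposer adjudicates after the deadline and can override a technically-met metric.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    """Illustrative model of a futarchy goal and its resolution."""
    description: str
    proposer: str
    deadline: int                  # prediction deadline (e.g., a round number)
    objective_metric_met: bool     # what an objective check (e.g., a GDP target) says
    resolved: Optional[bool] = None  # None until adjudicated

    def resolve(self, current_time: int, proposer_says_met: bool) -> bool:
        # Hanson-style futarchy would simply return `objective_metric_met`.
        # Under the human-feedback proposal, the proposer has the final say,
        # so they can refuse to declare the goal met even if the metric was
        # technically hit (e.g., because it was hit by improper optimizing).
        if current_time < self.deadline:
            raise ValueError("cannot resolve before the prediction deadline")
        self.resolved = proposer_says_met
        return self.resolved

# Example: the metric was technically met, but by gaming the measure,
# so the proposer declines to declare the goal met.
goal = Goal("Raise measured GDP 3%", proposer="example_proposer",
            deadline=100, objective_metric_met=True)
outcome = goal.resolve(current_time=101, proposer_says_met=False)
print(outcome)  # False: the market pays out as "goal not met"
```

The design trade-off the sketch makes visible: the human adjudicator blocks metric-gaming, at the cost of making market payouts depend on a single party's post-hoc judgment.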