
Epictetus comments on Futarchy and Unfriendly AI - Less Wrong Discussion

9 points | Post author: jkaufman | 03 April 2015 09:45PM




Comment author: Epictetus 06 April 2015 12:14:26AM 4 points

Feedback controls. Futarchy is transparent, carried out in real time, and gives plenty of room to adjust values and change strategies if the present ones prove defective. A superintelligent AI, on the other hand, would basically run as a black box: the operators would set the values, the AI would optimize by some method, and it would then spit out the optimal strategy (and presumably implement it). There's no room for human feedback between setting the values and implementing the optimal strategy.
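
To make the structural difference concrete, here is a minimal sketch (hypothetical Python; the stub functions and numbers are made up to stand in for the real processes) contrasting a transparent loop with a human feedback point each round against a black box that goes straight from values to implemented strategy:

    def futarchy_style(values, propose, evaluate, revise, steps=5):
        """Transparent process: outcomes are visible each round, and the
        operators may revise values or swap strategies before continuing."""
        strategy = propose(values)
        for _ in range(steps):
            outcome = evaluate(strategy)
            values = revise(values, outcome)   # human feedback point
            strategy = propose(values)         # strategy can change mid-course
        return strategy

    def black_box(values, optimize):
        """Opaque process: values in, optimal strategy out, then implemented.
        There is no checkpoint between setting values and acting on them."""
        return optimize(values)

    # Minimal stubs so the sketch runs:
    propose  = lambda v: ("maximize", v)
    evaluate = lambda s: s[1] * 0.9            # pretend outcomes fall short
    revise   = lambda v, o: v if o > 0.5 else v + 0.1
    optimize = lambda v: ("maximize", v)

    print(futarchy_style(0.4, propose, evaluate, revise))  # values got adjusted en route
    print(black_box(0.4, optimize))                        # values were fixed at the start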

Comment author: Anders_H 07 April 2015 05:35:32PM 1 point

This relates to my previous post on confounding in Prediction Markets. In my analysis, if you allow human feedback between setting the values and implementing the strategy, you break the causal interpretation of the prediction market and therefore lose the ability to use it for optimization. This may be an acceptable trade-off when other considerations matter more, but you will run into big problems if market participants expect a significant probability that humans will override the market.
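
A toy numerical example (all probabilities invented for illustration) shows the problem: if humans block a policy mostly in bad states, then a conditional contract that settles only when the policy is adopted implicitly conditions on "the state was good", so its price overstates the causal value of the policy.

    import random
    random.seed(0)

    def world():
        bad = random.random() < 0.5           # latent state, correlated with welfare
        w = 0.3 if bad else 0.9               # welfare metric W under policy A
        # Humans override (block A) mostly in bad states:
        adopted = not (bad and random.random() < 0.8)
        return w, adopted

    samples = [world() for _ in range(100_000)]

    causal_value = sum(w for w, _ in samples) / len(samples)   # E[W | do(A)]
    paid = [w for w, adopted in samples if adopted]
    market_price = sum(paid) / len(paid)                       # E[W | A adopted]

    print(f"causal value of A: {causal_value:.3f}")   # ~0.600
    print(f"market price:      {market_price:.3f}")   # ~0.800, biased upward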

Comment author: selylindi 16 April 2015 09:38:44PM 0 points

> There's no room for human feedback between setting the values and implementing the optimal strategy.

Here and elsewhere I've advocated* that, rather than using Hanson's idea of objectively verifiable target values like GDP, futarchy would do better to add human feedback at the stage of the process where it is decided whether the goals were met. Whoever proposed the goal would make that decision after the prediction deadline expired, and could thus respond to any improper optimizing by refusing to declare the goal "met" even if it technically was (see the sketch below the footnote).

[* You can definitely do better than the ideas in that blog post, of course.]
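
As a rough sketch of the proposed settlement rule (hypothetical names and data, not taken from the linked post), compare settling purely on an objective metric with settling on the proposer's post-deadline judgment:

    from dataclasses import dataclass

    @dataclass
    class Goal:
        description: str
        metric_met: bool           # objective check, e.g. "GDP grew 3%"
        proposer_approves: bool    # human judgment after the deadline

    def metric_settlement(goal: Goal) -> bool:
        """Hanson-style: settle purely on the objectively verifiable metric."""
        return goal.metric_met

    def judged_settlement(goal: Goal) -> bool:
        """Proposed rule: the proposer must also certify the goal as met,
        so improper optimizing can be vetoed after the fact."""
        return goal.metric_met and goal.proposer_approves

    gamed = Goal("raise GDP 3%", metric_met=True, proposer_approves=False)
    print(metric_settlement(gamed))   # True: pays out despite the gaming
    print(judged_settlement(gamed))   # False: proposer refuses to certify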