A lot of rationalist thinking about ethics and economics assumes we have well-defined utility functions: we know our preferences between states and events exactly, and we can not only compare them (I prefer X to Y) but assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Since everyone wants more money, you should in theory even be able to assign exact numerical values to positive outcomes in your life.
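To make that concrete, here is a toy worked example (the numbers are purely my own illustration, and I take the utility of getting neither outcome as zero): if you assign X a utility of 10 and Y a utility of 25, you are committed to being exactly indifferent between a sure X and a 40% chance of Y, since

$$
U(X) = p \cdot U(Y) \quad\Rightarrow\quad p = \frac{U(X)}{U(Y)} = \frac{10}{25} = 0.4.
$$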
I did a small experiment: I made a list of things I wanted and assigned each a point value. I must say this experiment ended in failure. Asking myself "If I had X, would I take Y instead?" and "If I had Y, would I take X instead?" very often resulted in a pair of "No"s. Even weighing multiple Xs/Ys against one Y/X usually led me to decide they were genuinely incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.
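For anyone who wants to repeat the exercise, here is a minimal sketch of the kind of consistency check I mean, in Python (the items, scores, and gut answers are invented for illustration, not my actual list):

```python
# Minimal sketch: do assigned point values agree with gut pairwise answers?
# Items, scores, and answers below are invented for illustration.

scores = {
    "finish side project": 30,
    "two-week vacation": 45,
    "learn to cook well": 20,
}

# Gut answers to "If I had A, would I take B instead?" -- True means "yes, I'd swap".
# A pair of False answers (neither swap accepted) is the "pair of No's" case.
gut_swap = {
    ("finish side project", "two-week vacation"): False,
    ("two-week vacation", "finish side project"): False,
}

for (a, b), would_swap in gut_swap.items():
    implied = scores[b] > scores[a]  # what the point values say you should answer
    if implied != would_swap:
        print(f"Inconsistent on {a!r} vs {b!r}: "
              f"scores say {'swap' if implied else 'keep'}, gut says {'swap' if would_swap else 'keep'}")
```

Note that whenever two scores differ, exactly one direction comes out as "swap", so a pair of "No"s can only be reproduced by an exact tie; the numbers force a comparability my gut answers didn't have.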
I finally settled on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.
This leads me to two possible conclusions:
- I don't know how to draw utility functions, but they are a good model of my preferences, and I could learn how to do it.
- Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.
Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?
I understand that the details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.
So an alarm system has preferences? That is not most people's understanding of the word "preference", which requires a degree of agency that most rationalists wouldn't attribute to an alarm system.
Nonetheless, let us say an alarm system has preferences. You didn't answer any of my follow-on questions for that case.
As for explaining away the rainbow, you seem to have me confused with an anti-reductionist. See Explaining vs. Explaining Away, in particular:
At this point, I am attempting to show that the very concept of a "preference" existing in the first place is something projected onto the world by an inbuilt bias in human perception. Reality does not have preferences; it has behaviors.
This is not erasing the rainbow from the world; it's attempting to erase the projection of a mind-modeling variable ("preference") from the world, in much the same way as Eliezer broke down the idea of "possible" actions in one of his series.
So, if you are claiming that preference actually exists, please give your definition of a preference such that both alarm systems and humans have one.
A good reply, if only you approached the discussion this constructively more often.
Note that probability is also in the mind, and yet you see all the facts through it and can't ever revoke it; each mind is locked in its subjectively objective character. What do you think of that?