Do the people behind the veil of ignorance vote for "specks"?

1 D227 11 November 2011 01:26AM

As Rawls put it, behind the veil of ignorance "...no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like."

 

The device allows certain issues, like slavery and income distribution, to be settled beforehand.  Would one vote for a society in which there is a chance of severe misfortune but greater total utility?  E.g., a world where 1% earn $1 a day and 99% earn $1,000,000 a day, vs. a world where everyone earns $900,000 a day.  Assume that dollars are utilons and that they are linear ($2 indeed gives twice as much utility as $1).  What is the obvious answer?  Bob chooses $900,000 a day for everyone.
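Under the stated assumption that dollars are linear utilons, the choice behind the veil can be sketched as a simple expected-value calculation (a minimal illustration; the function name is mine):

```python
# Behind the veil, you don't know which position you'll occupy, so each
# position is weighted by its probability.

def expected_utilons(outcomes):
    """Expected daily utilons, given (probability, utilons) pairs."""
    return sum(p * u for p, u in outcomes)

world_a = [(0.01, 1), (0.99, 1_000_000)]  # 1% earn $1/day, 99% earn $1,000,000/day
world_b = [(1.00, 900_000)]               # everyone earns $900,000/day

# World A has the higher expected value (~$990,000/day vs. $900,000/day),
# yet Bob, averse to the 1% chance of destitution, still picks world B.
print(expected_utilons(world_a) > expected_utilons(world_b))  # True
```

The point of the sketch is that Bob's choice is not an arithmetic error: he knowingly picks the world with lower expected utilons.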

 

But Bob is clever, and he does not trust that his choice is the rational one, so he enters into a self-dialogue to investigate:

Q: "What is my preference, value, or goal (PVG), such that instrumental rationality may achieve it?"

A: "My preference/value/goal is for there to be a world in which total utility is lower, but severe misfortune is eliminated for everyone."

Q: "As an agent, are you maximizing your own utility by choosing the $900,000-a-day world?"

A: "Yes, my actions are consistent with my preferences; I will maximize my utility by achieving my preference of limiting everyone's utility.  This preference takes precedence."

Q: "I will now attack your position with the transitivity argument.  At which point does your consistency change?  What if the choices were 1% earn $999,999 and 99% earn $1,000,000?"

A: "My preferences, values, and goals have already determined a threshold; in fact, my threshold is my PVG.  Regardless of the fact that my threshold may differ from everyone else's, my threshold is my PVG.  And achieving my PVG is rational."

Q: "I will now attack your position one last time, with the 'piling' argument.  What if every time you save one person from destitution, you must pile punishment onto the others, such that eventually everyone is suffering?"

A: "If piling is allowed, then to me it is a completely different question, one that alters what my PVG is.  I have one set of values for the non-piling scenario and another for the piling scenario.  I am consistent because piling and not piling are two different problems."

 

In the insurance industry, purchasing insurance comes with a price: perhaps a 1.5% premium on the cost of reimbursing you for a house that may burn down.  The actuaries have run the probabilities and determined that you have a 1% chance each year that your house will burn down.  Assume that all dollar amounts are utilons across all assets.  Bob, once again, is a rational man.  Every year Bob chooses to pay the 1.5% premium even though his expected loss is technically only 1%, because Bob is risk averse.  So risk averse that he prefers a world in which he has less wealth; the extra 0.5% goes to the insurance company's profit.  Once again Bob questions his rationality in purchasing insurance:
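Bob's tradeoff can be sketched numerically; the $500,000 house value here is an assumption of mine, since the post gives only percentages:

```python
house_value = 500_000   # assumed insured value (not given in the post)
p_fire = 0.01           # actuarial chance the house burns down this year
premium_rate = 0.015    # premium: 1.5% of the insured value per year

expected_loss = p_fire * house_value   # average uninsured loss per year
premium = premium_rate * house_value   # guaranteed cost of insuring

# Insuring costs 0.5% of the house value more than the average loss; that
# surplus is the insurer's profit, and the price of Bob's safety net.
print(premium - expected_loss)  # 2500.0
```

On expected value alone insurance loses, but the uninsured worst case is losing the whole house, which Bob's risk-averse PVG rules out.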

Q: "What is my preference?"

A: "I would prefer to sacrifice more than my share of losses (0.5% more) for the safety net of a zero chance of catastrophic loss."

Q: "Are your actions achieving your values?"

A: "Yes, I purchased insurance, maximizing my preference for safety."

Q: "Shall I attack you with the transitivity argument?"

A: "It won't work.  I have already set my PVG: it is the premium price at which I judge the costs to be prohibitive.  I will not pay a 99% premium to protect my house, but I will pay 5%."

Q: "Piling?"

A: "This is a different problem now."

 

Eliezer's post on Torture vs. Dust Specks has generated lots of discussion, as well as what Eliezer describes as interesting ways of avoiding the question.  We will do no such thing in this post; we will answer the question as intended.  I will interpret the premise as saying that the eye specks' suffering is cumulatively greater than the suffering of 50 years of torture.

My PVG tells me that I would rather have a speck in my eye, as well as in the eyes of 3^^^3 people, than risk having one person (perhaps me) suffer torture for 50 years, even though behind the veil of ignorance my chance of being the one tortured is only 1/(3^^^3).  My PVG is what I will maximize, and doing so is the definition of instrumental rationality.
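For a sense of scale, 3^^^3 is written in Knuth's up-arrow notation; a minimal sketch of the definition (3^^^3 itself is far too large for any computer, so only 3^^3 is evaluated):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a^b for n=1; a ^^...^ b recurses one arrow down."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3**3**3 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 7,625,597,484,987 threes.
```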

In short, the rational answer is not TORTURE or SPECKS; it depends on what your preferences, values, and goals are.  You may be one of those whose preference is to let that one person suffer torture for 50 years; as long as your actions steer the future toward outcomes ranked higher in your preferences, you are right too.

Correct me if I am wrong, but I thought rationality did not imply that there are absolutely rational preferences, but rather that there are rational ways to achieve your preferences...

 

I want to emphasize that in no way did I intend for this post to declare anything, and I want to thank everyone in advance for picking apart every single word I have written.  Being wrong is like winning the lottery.  I do not claim to know anything; the assertive manner in which I wrote this post was merely a way to convey my ideas, of which I am not sure.


A question on rationality.

1 D227 10 November 2011 12:20AM

My long runs on Saturdays give me time to ponder the various material on lesswrong.  Recently my attention has been occupied by a question about rationality that I have not yet resolved and would like to present to lesswrong as a discussion.  I will try to be as succinct as possible; please correct me if I make any logical fallacies.


Instrumental rationality is defined as the art of choosing actions that steer the future toward outcomes ranked higher in your preferences/values/goals (PVGs).

 Here are my questions:

1. If rationality is the function of achieving our preferences/values/goals, what is the function of choosing our PVGs to begin with, if we could choose our preferences?  In other words, is there an "inherent rationality" absent any preferences or values?  It seems as if the definition of instrumental rationality is saying that if you have a PVG, there is a rational way to achieve it, but there are not necessarily rational PVGs.


2. If the answer is no, and there is no "inherent rationality" absent a PVG, then what would preclude the possibility that a perfect rationalist, given enough time and resources, will eventually become a perfectly self-interested entity with only one overall goal, which is to perpetuate his own existence, at the sacrifice of everything and everyone else?

Suppose a superintelligence visits Bob and grants him the power to edit his own code.  Bob can now edit or choose his own preferences/values/goals.  

Bob is a perfect rationalist.

Bob is genetically predisposed to abuse alcohol; as such, he has rationally done everything he could to keep alcohol off his mind.

Now, Bob no longer has to do this; he simply goes into his own code and deletes the code/PVG/meme for alcohol abuse.

Bob continues to cull his code of "inefficient" PVGs.   

Soon Bob has only one goal, the most important goal: self-preservation.

3. Is it rational for Bob, having these powers, to rid himself of his humanity and rewrite his code to support only one meme, the meme that ensures his existence?  Everything he does will go to support this meme.  He will drop all his relationships, his hobbies, and all his wants and desires to concentrate on a single objective.  How does Bob not become a monster superintelligence hell-bent on using all the energy in the universe for his own selfish reasons?

 

I have not resolved any of these questions yet, and I look forward to any responses I may receive.  I am very perplexed by Bob's situation.  If there are any Sequences that would help me better understand my questions, please suggest them.