
Virtue, group and individual prestige

0 DeVliegendeHollander 19 February 2015 02:55PM

Let's assume now that people respect other people who have or appear to have high levels of  virtue.  Let's also say that Alice has Level 10 virtue and for this reason she has Level X prestige in other people's eyes, purely based on her individual merits.

Now let's assume that Alice teams up with a lot of other people who also have Level 10 virtue, and they form the League of Extraordinarily Virtuous People. How much prestige would membership in the League confer on its members? Higher or lower than X?

I would say higher, for two reasons. First, you give Alice a really close look and judge that her virtue must be somewhere around Level 10. But you don't trust your own judgement very much, so you discount the prestige points you award her a bit. However, she was also accepted into the League by other people who appear very virtuous themselves. This suggests your estimate was correct, and you can afford to award her more points. Every well-proven virtue a League member has increases the chance that the other members' virtues are not fake either, or he or she would not accept being in the same League with them, and this in turn increases the prestige you award all of them. Second, few people know Alice up close and personally. The greater the distance, the less they know about her; her personal fame radiates only so far. But the combined fame of the League radiates much farther, so more people notice the members' virtuousness and award them prestige points.
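Here is a minimal toy simulation of that first reason (entirely my own sketch, with made-up numbers, not anything from the post): your one noisy reading of Alice's virtue only tells you so much, but conditioning on the additional fact that several independent, strict vetters also rated her highly pulls the estimate of her true virtue upward.

```python
import random

# Toy model of the updating argument above (my own sketch, not from the post):
# you get one noisy reading of a person's virtue; League admission additionally
# requires several other noisy observers to rate them at or above the entry bar.
# Conditioning on admission raises the expected true virtue behind your reading.

random.seed(0)

def noisy_reading(true_virtue, noise=2.0):
    return true_virtue + random.gauss(0, noise)

population = [random.gauss(7, 2) for _ in range(100_000)]   # true virtue levels
threshold = 10.0                                             # League entry bar
vetters = 3                                                  # independent members who must all agree

# People whose single reading, to you, looks like "about Level 10"
looks_level_10 = [v for v in population if 9.5 < noisy_reading(v) < 10.5]

# The subset of those who would also clear the League's vetting
admitted = [v for v in looks_level_10
            if all(noisy_reading(v) >= threshold for _ in range(vetters))]

print(f"mean true virtue, your reading alone      : {sum(looks_level_10)/len(looks_level_10):.2f}")
print(f"mean true virtue, reading + League vetting: {sum(admitted)/len(admitted):.2f}")
```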

In other words, if virtuous people want to maximize the prestige points they have, it is a good idea for them to form famous groups with strict entry requirements.

And suddenly Yale class rings make sense. Members get more prestige for belonging to a group that is famous for having whatever virtues it takes to graduate from Yale than they could get for simply having those virtues as individuals.

The flip side: if you want to motivate people to be more virtuous, and if you think prestige assigned to virtue is a good way to do that, encourage them to form famous groups with strict entry requirements.

One funny thing is that the stricter you make the entry requirements (the minimum level of virtue), the more prestige the group will _automatically_ get. You only design the entry test, essentially the cost to be paid; you don't need to design the reward, because it happens automatically. That is just handy.

Well, the whole thing is fairly obvious as long as the virtue in question is "studying your butt off". It is done all the time. This is what the term "graduated from a prestigious university" means. 

It is less obvious once the virtue in question is something like "stood up for the victims of injustice, even facing danger for it".

Have you ever wondered why the same logic is not applied there? Find a moral cause. Pick out the people who support it most virtuously, who took the most personal risk for the least personal benefit, make them examples, and have them form an elite club. That club will confer a lot of prestige on its members, which suggests other people will take more pains to support the cause in order to get into the club.

Yet it is not really done. When was the last time you saw strict entry requirements for any group, club or association related to a social cause? It is usually the opposite: entry is made easy, just sign up for the newsletter here, which means membership does not convey much prestige.

If there is anything that matters to you, not even necessarily a moral or social cause, just anything you wish more people did, stop for a minute and consider whether such a famous, high-prestige elite group with strict entry requirements should be formed around it.

And now I don't understand why I don't see badges like "Top MIRI donor" beside usernames around here. Has the idea not been thought of before, or am I missing something important?

It can also be useful to form groups of people who are virtuous at _anything_, putting the black belt in the same group as the scholar or the activist who stood up against injustice. "Excel at anything and be one of us." This seems to be the most efficient prestige generator, and thus motivator, because different people notice and reward different kinds of virtue with prestige points. If I mainly respect edge.org-level scientists, and they are willing to be in the same club as a political activist who never published science, I will find that activist curious, interesting and respectable. That is partially why I toy with the idea of knightly orders.

Sidekick Matchmaking

7 diegocaleiro 19 February 2015 12:13AM

Thanks to linkhyrule5 for suggesting this.

Post your request for Sidekicks or your desire to be a sidekick in the comment section below. 

To start communicating, send a personal message to your potential match instead of replying in the thread; this saves space, avoids biases, and preserves privacy.

[edit] Mathias Zamman suggests some questions: 

Questions for both Heroes and Sidekicks (and Dragons, etc.)

  • Post a short description of yourself: personality, skills, general goals.
  • Where do you live?
  • How do you see the contact between the two of you going?
  • What do you require in your counterpart? This can be a bit vague, as it might be too hard for some people to verbalize.

Questions for Heroes:

  • What is your goal?
  • Why are you a Hero?
  • Why do you require a Sidekick?
  • What specific tasks would a Sidekick perform for you?
  • What qualities would you not want in a Sidekick?

Questions for Sidekicks:

  • What sort of goals are you looking for?
  • Why are you Sidekick material?
  • Why do you require a Hero?
  • What sort of tasks could you do for a Hero?
  • What qualities don't you want in a Hero?

The Galileo affair: who was on the side of rationality?

34 Val 15 February 2015 08:52PM

Introduction

A recent survey showed that the LessWrong discussion forums attract readers who are predominantly either atheists or agnostics, and who lean towards the left or far left in politics. As one of the main goals of LessWrong is overcoming bias, I would like to raise a topic which I think has a high probability of challenging some biases held by at least some members of the community. It's easy to fight against biases when the biases belong to your opponents, but much harder when you yourself might be the one holding them. It's also easy to cherry-pick arguments which prove your beliefs and ignore those which would disprove them. It's also common in such discussions that the side calling itself rationalist makes exactly the same mistakes it accuses its opponents of making. Far too often have I seen people (sometimes even Yudkowsky himself) who are very good rationalists but who quickly become irrational and commit several fallacies when arguing about history or religion. This most commonly manifests when we take the dumbest and most fundamentalist young Earth creationists as an example, win easily against them, and then claim to have disproved all arguments ever made by any theist. No, this article will not be about whether God exists, or whether any real-world religion is fundamentally right or wrong. I strongly discourage any discussion of these two topics.

This article has two main purposes:

1. To show an interesting example where the scientific method can lead to wrong conclusions

2. To overcome a certain specific bias, namely the belief that the pre-modern Catholic Church opposed the concept of the Earth orbiting the Sun with the deliberate purpose of hindering scientific progress and keeping the world in ignorance. I hope this will also prove an interesting challenge for your rationality, because it is easy to fight against bias in others, but not so easy to fight against bias in yourself.

The basis of my claims is that I have read the book written by Galilei himself, and that I am very interested (not a professional, but well read) in early modern history, especially that of the 16th-17th centuries.

 

Geocentrism versus Heliocentrism

I assume every educated person knows the name of Galileo Galilei. I won't waste the space of the site and the time of the readers on a full biography of his life; there are plenty of on-line resources where you can find more than enough biographical information about him.

The controversy?

What is interesting about him is how many people have severe misconceptions about him. Far too often he is celebrated as the one sane man in an era of ignorance, the sole propagator of science and rationality while the powers of that era suppressed any scientific thought and ridiculed everyone who tried to challenge the accepted theories about the physical world. Some even go as far as claiming that people believed the Earth was flat. Although the flat Earth theory was not held at all, it's true that the heliocentric view of the Solar System (the Earth revolving around the Sun) was not yet accepted.

However, the claim that the Church was suppressing evidence about heliocentrism "to maintain its power over the ignorant masses" can be disproved easily:

- The common people didn't go to schools where they could have learned about it, and those commoners who did go to school just learned to read and write, not much more, so they couldn't have cared less about what orbits what. This differs from 20th-21st century fundamentalists who want to teach young Earth creationism in schools - back in the 17th century, there were no classes where either the geocentric or the heliocentric view could have been taught to the masses.

- Heliocentrism was not discovered by Galilei. It was first proposed by Nicolaus Copernicus almost 100 years before Galilei. Copernicus never had any trouble with the Inquisition. His theory didn't gain wide acceptance, but he and his followers weren't persecuted either.

- Galilei was only sentenced to house arrest, and mostly for insulting the pope and doing other unwise things. The political climate in 17th century Italy was quite messy, and Galilei made quite a few unfortunate choices regarding his alliances. Actually, Galilei was the one who brought religion into the debate: his opponents were citing Aristotle, not the Bible, in their arguments. Galilei, however, wanted to reinterpret Scripture based on his (unproven) beliefs, and insisted that he should have the authority to push his own views about how people interpret the Bible. Of course this pissed quite a few people off, and his case was not helped by publicly calling the pope an idiot.

- For a long time Galilei was a good friend of the pope, while holding heliocentric views. So were a couple of other astronomers. The heliocentrism-geocentrism debates were common among astronomers of the day, and were not hindered, but even encouraged by the pope.

- The heliocentrism-geocentrism debate was never an atheism-theism debate. The heliocentrists were committed theists, just like the defenders of geocentrism. The Church didn't suppress science; it actually funded the research of most scientists.

- The defenders of geocentrism didn't use the Bible as the basis for their claims. They used Aristotle and, for the time, good scientific reasoning. The heliocentrists were much more prone to use the "God did it" argument when they couldn't defend the gaps in their proofs.

 

The birth of heliocentrism.

By the 16th century, astronomers had plotted the movements of the most important celestial bodies in the sky. Observing the motion of the Sun, the Moon and the stars, it would seem obvious that the Earth is motionless and everything orbits around it. This model (called geocentrism) had only one minor flaw: the planets would sometimes make a loop in their motion, "moving backwards". Modelling this required a lot of very complicated formulas. Thus, by virtue of Occam's razor, a theory was born which could better explain the motion of the planets: what if the Earth and everything else orbited around the Sun? However, this new theory (heliocentrism) had a lot of issues: while it could explain the looping motion of the planets, there were many things which it either couldn't explain at all, or which the geocentric model explained much better.

 

The proofs, advantages and disadvantages

The heliocentric view had only a single advantage over the geocentric one: it could describe the motion of the planets with a much simpler formula.

However, it had a number of severe problems:

- Gravity. Why do objects have weight, and why are they all pulled towards the center of the Earth? Why don't objects fall off the Earth on the other side of the planet? Remember, Newton wasn't even born yet! The geocentric view had a very simple explanation, dating back to Aristotle: it is the nature of all objects to strive towards the center of the world, and the center of the spherical Earth is the center of the world. The heliocentric theory couldn't counter this argument.

- Stellar parallax. If the Earth is not stationary, then the relative positions of the stars should change as the Earth orbits the Sun. No such change was observable with the instruments of that time. Only in the first half of the 19th century did we succeed in measuring it, and only then was the movement of the Earth around the Sun finally proven.
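To give a rough sense of why no parallax was detectable, here is a small back-of-the-envelope sketch. The figures are my own approximations, not from the post: even the nearest stars have annual parallaxes under one arcsecond (by definition, a star one parsec away shifts by one arcsecond), while the best instruments of the period, such as Tycho Brahe's, were only good to roughly an arcminute.

```python
# Rough sketch with approximate modern figures (my own addition, not from the post):
# the predicted annual parallax of even the nearest stars is far below what
# 16th-17th century instruments could resolve.

def parallax_arcsec(distance_parsecs):
    """Annual parallax in arcseconds for a star at the given distance."""
    return 1.0 / distance_parsecs

instrument_limit_arcsec = 60.0   # ~1 arcminute, a rough figure for the era's best instruments

for name, d_pc in [("Alpha Centauri (nearest star system)", 1.3),
                   ("61 Cygni (first parallax measured, Bessel 1838)", 3.5)]:
    p = parallax_arcsec(d_pc)
    print(f"{name}: ~{p:.2f} arcsec "
          f"(~{instrument_limit_arcsec / p:.0f}x smaller than the instruments could resolve)")
```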

- Galilei tried to use the tides as proof. The geocentrists argued that the tides are caused by the Moon, even if they didn't know by what mechanism, but Galilei said that this was just a coincidence and the tides are not caused by the Moon: just as water in a barrel on a cart is still when the cart is stationary and sloshes around when the cart is pulled by a horse, so the tides are caused by the water sloshing around as the Earth moves. If you read Galilei's book, you will discover quite a number of such silly arguments, and you'll see that Galilei was anything but a rationalist. Instead of changing his views in the face of overwhelming proof, he used all possible fallacies to push his view through.

Actually the most interesting author on this topic was Riccioli. If you study his writings you will get definite proof that the heliocentrism-geocentrism debate was handled with scientific accuracy and rationality, and that it was not a religious debate at all. He defended geocentrism and presented 126 arguments on the topic (49 for heliocentrism, 77 against); only two of them (both for heliocentrism) had any religious connotations, and he gave valid responses to both. This means that he, as a rationalist, presented both sides of the debate in a neutral way, and used reasoning instead of appeal to authority or faith in all cases. This was actually what the pope expected of Galilei, and such a book was what he commissioned from him. Galilei instead wrote a book in which he caricatured the pope as a strawman, and instead of presenting arguments for and against both world-views in a neutral way, he produced a book which can be called anything but scientific.

By the way, Riccioli was a Catholic priest. And a scientist. And, it seems to me, also a rationalist. Studying the works of people like him, you might want to change your mind if you perceive a conflict between science and religion, which is part of today's public consciousness only because of a small number of very loud religious fundamentalists, helped by some committed atheists trying to suggest that all theists are like them.

Finally, I would like to copy a short summary about this book:

Journal for the History of Astronomy, Vol. 43, No. 2, p. 215-226
In 1651 the Italian astronomer Giovanni Battista Riccioli published within his Almagestum Novum, a massive 1500 page treatise on astronomy, a discussion of 126 arguments for and against the Copernican hypothesis (49 for, 77 against). A synopsis of each argument is presented here, with discussion and analysis. Seen through Riccioli's 126 arguments, the debate over the Copernican hypothesis appears dynamic and indeed similar to more modern scientific debates. Both sides present good arguments as point and counter-point. Religious arguments play a minor role in the debate; careful, reproducible experiments a major role. To Riccioli, the anti-Copernican arguments carry the greater weight, on the basis of a few key arguments against which the Copernicans have no good response. These include arguments based on telescopic observations of stars, and on the apparent absence of what today would be called "Coriolis Effect" phenomena; both have been overlooked by the historical record (which paints a picture of the 126 arguments that little resembles them). Given the available scientific knowledge in 1651, a geo-heliocentric hypothesis clearly had real strength, but Riccioli presents it as merely the "least absurd" available model - perhaps comparable to the Standard Model in particle physics today - and not as a fully coherent theory. Riccioli's work sheds light on a fascinating piece of the history of astronomy, and highlights the competence of scientists of his time.

The full article can be found at this link. I recommend it to everyone interested in the topic. It shows that the geocentrists of that time had real scientific arguments and real experiments supporting their theories, and that for most of them the heliocentrists had no meaningful answers.

 

Disclaimers:

- I'm not a Catholic, so I have no reason to defend the historic Catholic church due to "justifying my insecurities" - a very common accusation against someone perceived to be defending theists in a predominantly atheist discussion forum.

- Any discussion about any perceived proofs for or against the existence of God would be off-topic here. I know it's tempting to show off your best proofs against your carefully constructed straw-men yet again, but this is just not the place for it, as it would detract from the main purpose of this article, as summarized in its introduction.

- English is not my native language. Nevertheless, I hope that what I wrote is clear enough to be understood. If any part of my article seems ambiguous, feel free to ask.

I have great hopes and expectations that the LessWrong community is suitable for discussing such ideas. I have experience presenting these ideas on other, predominantly atheist internet communities, and most often the reaction was outright flaming, a hurricane of unexplained downvotes, and prejudicial ad hominem attacks based on what affiliations people assumed I subscribed to. It is common for people to decide whether they believe a claim based solely on whether the claim suits their ideological affiliations. The best quality of rationalists, however, should be the ability to change their views when confronted with overwhelming proof, instead of coming up with ever more convoluted explanations. In the time I have spent in the LessWrong community, I have come to respect that people here can argue in a civil manner, listening to the arguments of others instead of discarding them outright.

 

Making a Rationality-promoting blog post more effective and shareable

1 Gleb_Tsipursky 16 February 2015 07:09PM

I wrote a blog post that popularizes the "false consensus effect" and the debiasing strategy of "imagining the opposite" and "avoiding failing at other minds." Thoughts on where the post works and where it can be improved would be super-helpful for improving our content and my writing style. Especially useful would be feedback on how to make this post more shareable on Facebook and other social media, as we'd like people to be motivated to share these posts with their friends. For example, what would make you more likely to share it? What would make others you know more likely to share it?


For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively. This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!

 

Does consciousness persist?

-10 G0W51 14 February 2015 03:52PM

Edit: the below paragraphs are wrong. See the comments for an explanation.

 

Some people believe that the consciousness currently in one's body is the "same" consciousness as the one that was in one's body in the past and the one that will be in it in the future, but a "different" consciousness from those in other bodies. In this post I dissolve the question.

The question is meaningless because the answer doesn't correspond to any physical state of the universe and in no way influences or is influenced by sensory experiences. If one's consciousness suddenly became a totally different one, we know of no quantum particles that would change. Furthermore, swapping consciousnesses would make no change to what is perceived. E.g. if one agent perceives p at time t and p' at the next time t+1, and another agent perceives q at time t and q' at time t+1, then if their consciousnesses are "swapped", the percepts would still be identical: p and q will be perceived at time t, and p' and q' will be perceived at t+1. One could argue that the percepts did change, because the consciousness-swapping changed what a particular consciousness at time t will perceive at t+1, but that presupposes that a future consciousness is in some meaningful way the "same" consciousness as the current one! Thus, the statement that two consciousnesses are the same consciousness is meaningless.

Can you find any flaws in my reasoning?

 

 

AI-created pseudo-deontology

6 Stuart_Armstrong 12 February 2015 09:11PM

I'm soon going on a two-day "AI control retreat", during which I'll be without internet, family or any other contact - just a few books and thinking about AI control. In the meantime, here is one idea I found along the way.

We often prefer leaders to follow deontological rules, because these are harder to manipulate by those whose interests don't align with ours (you could say similar things about frequentist statistics versus Bayesian ones).

What if we applied the same idea to AI control? Not giving the AI deontological restrictions, but programming it with a similar goal: to prevent a misalignment of values from being disastrous. But who could do this? Well, another AI.

My rough idea goes something like this:

AI A is tasked with maximising utility function u - a utility function which, crucially, it doesn't know yet. Its sole task is to create AI B, which will be given a utility function v and act on it.

What will v be? Well, I was thinking of taking u and adding some noise - nasty noise. By nasty noise I mean v=u+w, not v=max(u,w). In the first case, you could maximise v while sacrificing u completely, if w is suitable. In fact, I was thinking of adding an agent C (which need not actually exist). It would be motivated to maximise -u; it would have the code of B and the set of u+noise candidates, and would choose v to be the worst possible option (from the perspective of a u-maximiser) in this set.

So agent A, which doesn't know u, is motivated to design B so that it follows its motivation to some extent, but not to extreme lengths - not in ways that might completely sacrifice some sub-part of its utility function, because that sub-part might be part of the original u.
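A minimal toy version of this intuition, with entirely made-up numbers (my own illustration, not Stuart_Armstrong's formal proposal): B allocates effort between two projects, C picks bounded adversarial noise on the utility coefficients, and a B that refuses to go all-in on one project loses less true utility in the worst case than a B that maximises v ruthlessly.

```python
# Toy sketch of the idea above (my own construction, not the post's formal model).
# B allocates 10 units of effort between two projects. The true utility u values
# project y three times as much as x, but B only sees v = u + w, where adversary C
# picks the noise w (coefficients bounded by 2 in absolute value) to make B's
# choice as bad as possible for u.

import itertools

TOTAL = 10.0
u_coeffs = (1.0, 3.0)          # true (unknown to A) per-unit value of projects x, y
noise_bound = 2.0

def u(alloc):
    x, y = alloc
    return u_coeffs[0] * x + u_coeffs[1] * y

def hard_B(v_coeffs):
    """Puts everything into whichever project v rates higher."""
    return (TOTAL, 0.0) if v_coeffs[0] >= v_coeffs[1] else (0.0, TOTAL)

def moderate_B(v_coeffs, cap=0.7):
    """Follows v, but never puts more than `cap` of the effort into one project."""
    hi, lo = cap * TOTAL, (1 - cap) * TOTAL
    return (hi, lo) if v_coeffs[0] >= v_coeffs[1] else (lo, hi)

def worst_case_u(B):
    """C searches the corner noise vectors to minimise the u of B's chosen allocation."""
    corners = itertools.product([-noise_bound, noise_bound], repeat=2)
    return min(u(B((u_coeffs[0] + wx, u_coeffs[1] + wy))) for wx, wy in corners)

print("worst-case true utility, hard v-maximiser    :", worst_case_u(hard_B))      # 10.0
print("worst-case true utility, moderate v-maximiser:", worst_case_u(moderate_B))  # 16.0
```

In this toy setup the ruthless v-maximiser can be steered into sacrificing the valuable project entirely, while the capped B keeps some effort on every plausible value component, which is the behaviour A would want to build in.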

Do people feel this idea is implementable/improvable?

A rational approach to the issue of permanent death-prevention

-4 Nanashi 11 February 2015 12:22PM
Edit: Removed intro because it adds no value to the post. Left in for posterity.

The vast majority of all ethical and logistical problems revolve around a single inconvenient fact: human beings die unwillingly. "Should we sacrifice one person to save ten?" or "Is it ethical to steal a loaf of bread to feed your starving family?" become irrelevant questions if no one has to die unless they want to. Similarly, almost all altruistic goals have, at their core, the goal of stopping death in some way, shape or form.

The question, "How can we permanently prevent death?" is of paramount importance, and not just to Rationalists. So, it should be a surprise to no one that mystics, crackpots, spiritualists and pseudo-scientists of all walks of life have co-opted this quest as their own. The loftiness of the goal, combined with the cosmic implications of its success, combined with the sheer number of irrational people also seeking to achieve the same goal may make it tempting to apply the non-central fallacy and say, "I'm not interested in stopping death; that's something crazy people do." 

But it's a fallacy for a reason: there is a rational way to approach the problem. Let's start with a pair of general statements:

  • X is the cause of the perception of consciousness. (Current hypothesis: X="human brain").
  • Recreation of X with >Y% fidelity results in the perception of a consciousness functionally indistinguishable from the original to an outside observer. (Original text: "results in the continuation of the perception of consciousness.")
These two statements border on tautological, and so they aren't that helpful by themselves. It doesn't sound nearly as impressive to say "Something causes something else," nor does it sound impressive to say, "If you copy all properties of X, all properties of X are duplicated." 

But it's important because it lays down the basic framework within which an extremely complex question can begin to be solved. In this case, the solution can be broken down into at least two major sub-problems: The Collection Problem ("How do we 'collect' enough information on X in order to be able to recreate it with Y% fidelity?") and The Creation Problem ("Once we have that information, how do we create a physical representation of it?").

Neither of these problems is trivial - quite the opposite. They are ridiculously difficult, and my describing them simplistically should not be mistaken for implying that they are simple.

The Collection Problem

This problem is most pressing, because once we solve it, it buys us time. Once that data is stored securely, you've dramatically extended your effective timeline. Even if you, personally, happen to die, you've still got a copy of yourself in backup that some future generation will hopefully be able to reconstruct. But, more importantly, this also applies to all of humanity. Once the Collection Problem is solved, everyone can be backed up. As long as you can stay alive until the problem is solved, (especially if you live in a first-world country), you have probably got a pretty good shot at living forever. 

The Collection Problem brings to mind a number of non-trivial sub-problems - logistics, data storage, security, and so on - but they are fairly trivial *in comparison* to the monumental task of scanning a brain (assuming the brain alone is the seat of consciousness) with sufficient fidelity. I don't mean to blithely dismiss the difficulties of these problems, but they are problems humanity is already solving: logistics, data storage, and security are all billion-dollar industries.

The Creation Problem

Once the Collection Problem is solved, you have another problem: how to take that data and do something useful with it. There's a pretty big gap between an architect drawing up a plan for a building and actually constructing that building. But once this problem is resolved, it's very likely that its solution will also make life itself much, much more convenient. Any method that can physically create something as complex as a human brain at will can almost certainly be adapted to create other things. Food. Clean water. Shelter. Etc. Those likely benefits are, of course, orthogonal, but they are a nice cherry on top.

One of the potential solutions to the Creation Problem involves simulations. I won't go into a ton of detail there, because whether life in a simulation is as valid or fulfilling as life in the "real world" is a pretty significant discussion unto itself. For the purposes of this thought exercise, though, it is fairly irrelevant. If you consider a simulation to be an acceptable solution, great. If you don't, that's fine too; it just means the Creation Problem will take longer to solve. Either way, it's likely you're going to be in cold storage for quite some time before the problem does get solved.

 

What about the rest of us?

All this theory is fine and good. But what if you get hit by a bus tomorrow and don't live to see the resolution of the Collection Problem? What about all of us who have lost loved ones in the past? This is where this exercise dovetails with traditional ethics. Given this system, it's easy enough to argue that we have a responsibility to try to ensure that as many human beings as possible survive until the Collection Problem is resolved. 

However, for those of us unlucky enough to die before that, there's one final get-out-of-jail free card: The Recreation Problem. This problem may be thoroughly intractable. And to be sure, it is probably the most difficult problem of them all. In extremely simple (and emotionally charged) terms: "How can we bring back the dead?" Or, if you prefer to dress it up in the literary genre of science: "How can we recreate a system that occurred in the past with Y% fidelity using only knowledge of the present system?" 

This may be so improbable as to be effectively impossible. But it's not actually impossible. There's no need for perfect physical fidelity (which is all but proven to be impossible). We only need to achieve Y% fidelity, whatever Y% may be. Conceptually, we do this all the time: a ballistics expert can reconstruct the trajectory of a bullet with no prior knowledge of that trajectory, a two-way (invertible) function can be iterated in reverse for as many steps as you have computing power, etc.
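As a minimal illustration of the "iterate in reverse" idea (a toy of my own, not the author's example, and much cleaner than any real physical system): if each step of a deterministic system is invertible, then a later state plus enough computation recovers any earlier state exactly.

```python
# Toy of the "iterate in reverse" idea (my own example, not from the post): if each
# step of a deterministic system is invertible, a later state plus enough computation
# recovers any earlier state exactly.

def step(state):
    a, b = state
    return (b, a + b)              # a simple invertible update rule

def unstep(state):
    a, b = state
    return (b - a, a)              # its exact inverse

original = (3, 7)

later = original
for _ in range(20):
    later = step(later)            # the "present" system, 20 steps downstream

recovered = later
for _ in range(20):
    recovered = unstep(recovered)  # run the inverse for as many steps as we can afford

assert recovered == original
print(original, "->", later, "->", recovered)
```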

A complex system can be recreated. Is there an upper limit to how far in the past a system can be before it is infeasible to recreate it? Quite possibly. Let's say that upper limit is Z seconds (incidentally, the Collection Problem is actually just a special case of the Recreation Problem where Z is approximately equal to zero). The fact that Z is unknown means you can't simply abandon all your ethical pursuits and say, "It doesn't matter, we're all going to be resurrected anyway!"  Z may in fact be equal to approximately zero. 

The importance of others.

It is most likely that you, individually, will not be able to solve all three problems on your own. Which means that if you truly desire to live forever, you have to rely on other people to a certain extent. But, it does give one a certain amount of peace when contemplating the horror of death: if every human being commits themselves to solving these three problems, it does not matter if you, personally, fail. All of humanity would have to fail. 

Whether that thought actually gives any comfort depends largely on your estimation of humanity and the difficulty of these problems. But regardless of whether you derive any comfort from that, it doesn't diminish the importance of the contributions of others. 

The moral of this story...

As a rationalist, you should take a few things away from this.

  1. You should try as hard as possible to stay alive until the Collection Problem is resolved. 
  2. You should try as hard as possible to make sure everyone else stays alive until that point as well. 
  3. When feasible, you should try to bring other people around to the ways of rationalism. 
  4. Death is a tragedy, but it is conceptually reversible.
  5. Don't despair if you don't make any progress towards resolving these problems in your lifetime.

 

Post Script:

Note: this was added on as an edit due to feedback in the comments. 

The original intent of this article was to explain that there's a rational, scientific way to approach the logistical problem of "living forever". 

 

  • I removed the first introductory paragraph. It was inconsistent in both tone and scope with the rest of the post. 
  • I've changed the title and removed references to "immortality" to try to eliminate some of the "science fiction" vibe.
  • I've tried to update the language so as not to imply that it is universally agreed upon that backing up a brain is a valid method of generating consciousness. 

 

 

How to save (a lot of) money on flying

8 T3t 03 February 2015 06:25PM

I was going to wait to post this for reasons, but realized that was pretty dumb when the difference of a few weeks could literally save people hundreds, if not thousands of collective dollars.

 

If you fly regularly (or at all), you may already know about this method of saving money.  The method is quite simple: instead of buying a round-trip ticket from the airline or a reseller, you hunt down much cheaper one-way flights with layovers at your destination and/or your point of origin.  Skiplagged is a service that does this automatically for you, and it has been in the news recently because its creator was sued by United Airlines and Orbitz.  While Skiplagged will let you click through to purchase the one-way ticket to your destination, they have broken or disabled the redirect to the one-way ticket back (possibly in order to raise more funds for their legal defense).  However, finding the return flight manually is fairly easy, as they provide all the information needed to filter for it on other websites (time, airline, etc.).  I have personally benefited from this - I am flying to Texas from Southern California soon, and instead of a round-trip ticket which would have cost me about $450, I spent ~$180 on two one-way tickets (with the return flight being the "layover" at my point of origin).  These are perhaps larger than usual savings; I think 20-25% is more common, but even that is a fairly significant amount of money.

 

Relevant warnings by gwillen:

You should be EXTREMELY CAREFUL when using this strategy. It is, at a minimum, against airline policy.

If you have any kind of airline status or membership, and you do this too often, they will cancel it. If you try to do this on a round-trip ticket, they will cancel your return. If the airlines have any means of making your life difficult available to them, they WILL use it.

Obviously you also cannot check bags when using this strategy, since they will go to the wrong place (your ostensible, rather than your actual, destination.) This also means that if you have an overhead-sized carryon, and you board late and are forced to check it, your bag will NOT make it to your intended destination; it will go to the final destination marked on your ticket. If you try to argue about this, you run the risk of getting your ticket cancelled altogether, since you're violating airline policies by using a ticket in this way.

 

Additionally, you should do all of your airline/hotel/etc shopping using whatever private browsing mode your web browser has.  This will often let you purchase the exact same product for a cheaper price.

 

That is all.

Update: A failed attempt at rationality testing

-9 SilentCal 30 January 2015 10:43PM

This post was originally a link post to

http://arstechnica.com/business/2015/01/fcc-chairman-mocks-industry-claims-that-customers-dont-need-faster-internet/

together with an instruction to read the article before proceeding, and then the following text rot13'd:

I believe this article is a nice rationality test. Did you notice that you were reading a debate over a definition and try to figure out what the purpose of the classification was? Or did you get carried away in the condemnation of the hated telecoms? If you noticed, how long did it take you?

 

I'm open to feedback on whether this test was worthwhile and also on whether I could have presented it better. There's a tradeoff here where explaining the post's value to Less Wrong undermines that value. Had I put "Rationality Test" in the title, I could have avoided the appearance of posting an inappropriate article but made the test weaker.

and filler so you couldn't see any comments without scrolling.

As you can see from the comments here, it didn't work very well.

I'm mostly editing this now because the apparent outrage-bait link in the discussion section was a bit of a nuisance, but I'll take the chance to list what I've learned:

 

  • Not many LWers are susceptible to this genre of outrage-bait. That is, they don't have the intended gut reaction in the first place, so this didn't test whether they overcame it.
  • The only commenter who admits having had said reaction immediately and effortlessly accounted for the fact that the debate was over a definition. This suggests the test was on the easy side, even for those eligible. (Unless a bunch of people failed and didn't comment, but I doubt that)
  • Most commenters did not indicate finding it obvious that this was a test. The sort of misdirection I employed is quite viable.
  • Feedback on the idea of the test is mixed. People don't seem to mind the concept of being misdirected, but (if I read the top comment correctly) being put through the experience of an outrage-bait link was annoying and the test didn't offer enough value to justify that.

 

Prediction Markets are Confounded - Implications for the feasibility of Futarchy

14 Anders_H 26 January 2015 10:39PM

(tl;dr:  In this post, I show that prediction markets estimate non-causal probabilities, and can therefore not be used for decision making by rational agents following causal decision theory.  I provide an example of a simple situation where such confounding leads to a society which has implemented futarchy making an incorrect decision)

 

It is October 2016, and the US Presidential Elections are nearing. The most powerful nation on earth is about to make a momentous decision about whether being the brother of a former president is a more impressive qualification than being the wife of a former president. However, one additional criterion has recently become relevant in light of current affairs:   Kim Jong-Un, Great Leader of the Glorious Nation of North Korea, is making noise about his deep hatred for Hillary Clinton. He also occasionally discusses the possibility of nuking a major US city. The US electorate, desperate to avoid being nuked, have come up with an ingenious plan: They set up a prediction market to determine whether electing Hillary will impact the probability of a nuclear attack. 

The following rules are stipulated: there are four possible outcomes, "Hillary elected and US nuked", "Hillary elected and US not nuked", "Jeb elected and US nuked", and "Jeb elected and US not nuked". Participants in the market can buy and sell contracts for each of these outcomes; the contract which corresponds to the actual outcome will expire at $100, all other contracts will expire at $0.

Simultaneously, in a country far, far away, a rebellion is brewing against the Great Leader. The potential challenger not only appears to have no problem with Hillary, he also seems like a reasonable guy who would be unlikely to use nuclear weapons. It is generally believed that the challenger will take power with probability 3/7, and will be exposed and tortured in a forced labor camp for the rest of his miserable life with probability 4/7. Let us stipulate that this information is known to all participants - I am adding this clause in order to demonstrate that this argument does not rely on unknown information or information asymmetry.

A mysterious but trustworthy agent named "Laplace's Demon" has recently appeared, and informed everyone that, to a first approximation,  the world is currently in one of seven possible quantum states.  The Demon, being a perfect Bayesian reasoner with Solomonoff Priors, has determined that each of these states should be assigned probability 1/7.     Knowledge of which state we are in will perfectly predict the future, with one important exception:   It is possible for the US electorate to "Intervene" by changing whether Clinton or Bush is elected. This will then cause a ripple effect into all future events that depend on which candidate is elected President, but otherwise change nothing. 

The Demon swears up and down that the choice about whether Hillary or Jeb is elected has absolutely no impact in any of the seven possible quantum states. However, because the Prediction market has already been set up and there are powerful people with vested interests, it is decided to run the market anyways. 

 Roughly, the demon tells you that the world is in one of the following seven states:

 

| State | Kim overthrown | Election winner (if no intervention) | US Nuked if Hillary elected | US Nuked if Jeb elected | US Nuked |
|-------|----------------|--------------------------------------|-----------------------------|-------------------------|----------|
| 1     | No             | Hillary                              | Yes                         | Yes                     | Yes      |
| 2     | No             | Hillary                              | No                          | No                      | No       |
| 3     | No             | Jeb                                  | Yes                         | Yes                     | Yes      |
| 4     | No             | Jeb                                  | No                          | No                      | No       |
| 5     | Yes            | Hillary                              | No                          | No                      | No       |
| 6     | Yes            | Jeb                                  | No                          | No                      | No       |
| 7     | Yes            | Jeb                                  | No                          | No                      | No       |

Let us use this table to define some probabilities. If one intervenes to make Hillary win the election, the probability of the US being nuked is 2/7 (this is seen from the "US Nuked if Hillary elected" column). If one intervenes to make Jeb win the election, the probability of the US being nuked is also 2/7 (from the "US Nuked if Jeb elected" column). In the language of causal inference, these probabilities are Pr[Nuked | do(Elect Clinton)] and Pr[Nuked | do(Elect Bush)]. The fact that these two quantities are equal confirms the Demon's claim that the choice of President has no effect on the outcome. An agent operating under causal decision theory will use this information to correctly conclude that he has no preference about whether to elect Hillary or Jeb.

However, if one were to condition on who actually was elected, we get different numbers: conditional on being in a state where Hillary is elected, the probability of the US being nuked is 1/3, whereas conditional on being in a state where Jeb is elected, the probability of being nuked is 1/4. Mathematically, these probabilities are Pr[Nuked | Clinton elected] and Pr[Nuked | Bush elected]. An agent operating under evidential decision theory will use this information to conclude that he should vote for Bush. Because evidential decision theory is wrong, he will fail to optimize for the outcome he is interested in.

Now, let us ask which probabilities our prediction market will converge to, i.e. which probabilities participants in the market have an incentive to provide their best estimates of. We defined our contract as "Hillary is elected and the US is nuked". The probability of this occurring is 1/7; if we normalize by dividing by the marginal probability that Hillary is elected (3/7), we get 1/3, which is equal to Pr[Nuked | Clinton elected]. In other words, the prediction market estimates the wrong quantities.
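As a sanity check, a few lines of Python (my own, simply re-doing the arithmetic above from the table) reproduce all of these numbers:

```python
# Quick check of the numbers above: enumerate the Demon's seven equally likely
# states exactly as given in the table.

# (kim_overthrown, winner_if_no_intervention, nuked_if_hillary, nuked_if_jeb)
states = [
    (False, "Hillary", True,  True),   # state 1
    (False, "Hillary", False, False),  # state 2
    (False, "Jeb",     True,  True),   # state 3
    (False, "Jeb",     False, False),  # state 4
    (True,  "Hillary", False, False),  # state 5
    (True,  "Jeb",     False, False),  # state 6
    (True,  "Jeb",     False, False),  # state 7
]
n = len(states)

# Interventional ("do") probabilities: force the winner, count nukings over all states.
p_do_hillary = sum(nh for _, _, nh, _ in states) / n          # 2/7
p_do_jeb     = sum(nj for _, _, _, nj in states) / n          # 2/7

# Conditional probabilities: restrict to the states where that candidate wins anyway.
hillary_states = [s for s in states if s[1] == "Hillary"]
jeb_states     = [s for s in states if s[1] == "Jeb"]
p_given_hillary = sum(s[2] for s in hillary_states) / len(hillary_states)   # 1/3
p_given_jeb     = sum(s[3] for s in jeb_states) / len(jeb_states)           # 1/4

# The "Hillary elected and US nuked" contract, normalised by Pr(Hillary elected):
p_market = (sum(1 for s in states if s[1] == "Hillary" and s[2]) / n) / (len(hillary_states) / n)  # 1/3

print(p_do_hillary, p_do_jeb, p_given_hillary, p_given_jeb, p_market)
```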

Essentially, what happens is structurally the same phenomenon as confounding in epidemiologic studies:  There was a common cause of Hillary being elected and the US being nuked.  This common cause - whether Kim Jong-Un was still Great Leader of North Korea - led to a correlation between the election of Hillary and the outcome, but that correlation is purely non-causal and not relevant to a rational decision maker. 

The obvious next question is whether there exists a way to save futarchy, i.e. any way to give traders an incentive to pay a price that reflects their beliefs about Pr[Nuked | do(Elect Clinton)] instead of Pr[Nuked | Clinton elected]. We discussed this question at the Less Wrong meetup in Boston a couple of months ago. The only way we agreed will definitely solve the problem is the following procedure:

 

  1. The governing body makes an absolute pre-commitment that no matter what happens, the next President will be determined solely on the basis of the prediction market 
  2. The following contracts are listed: “The US is nuked if Hillary is elected” and “The US is nuked if Jeb is elected”
  3. At the pre-specified date, the markets are closed and the President is chosen based on the estimated probabilities
  4. If Hillary is chosen,  the contract on Jeb cannot be settled, and all bets are reversed.  
  5. The Hillary contract is expired when it is known whether Kim Jong-Un presses the button. 

 

This procedure will get the correct results in theory, but it has the following practical problems:  It allows maximizing on only one outcome metric (because one cannot precommit to choose the President based on criteria that could potentially be inconsistent with each other).  Moreover, it requires the reversal of trades, which will be problematic if people who won money on the Jeb contract have withdrawn their winnings from the exchange. 

The only other option I can think of for obtaining causal information from a prediction market is to "control for confounding". If, for instance, the only confounder is whether Kim Jong-Un is overthrown, we can control for it by using do-calculus to show that Pr[Nuked | do(Elect Clinton)] = Pr[Nuked | Clinton elected, Kim overthrown] * Pr[Kim overthrown] + Pr[Nuked | Clinton elected, Kim not overthrown] * Pr[Kim not overthrown]. All of these quantities can be estimated from separate prediction markets.
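A quick check (again my own few lines, just re-doing the arithmetic from the table) confirms that this adjustment recovers the interventional probability:

```python
# Stratify on the confounder "Kim overthrown", then average the strata by their
# probabilities; the result matches Pr[Nuked | do(Elect Clinton)] computed directly.

states = [  # (kim_overthrown, winner_if_no_intervention, nuked_if_hillary)
    (False, "Hillary", True), (False, "Hillary", False),
    (False, "Jeb", True), (False, "Jeb", False),
    (True, "Hillary", False), (True, "Jeb", False), (True, "Jeb", False),
]

def pr(pred, pop):
    pop = list(pop)
    return sum(1 for s in pop if pred(s)) / len(pop)

p_kim = pr(lambda s: s[0], states)                                                        # 3/7
p_nuke_given_clinton_kim     = pr(lambda s: s[2],
                                  [s for s in states if s[1] == "Hillary" and s[0]])      # 0
p_nuke_given_clinton_not_kim = pr(lambda s: s[2],
                                  [s for s in states if s[1] == "Hillary" and not s[0]])  # 1/2

adjusted = p_nuke_given_clinton_kim * p_kim + p_nuke_given_clinton_not_kim * (1 - p_kim)
print(adjusted)   # 2/7, matching the direct interventional calculation
```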

 However, this is problematic for several reasons:

 

  1. There will be an exponential explosion in the number of required prediction markets, and each of them will ask participants to bet on complicated conditional probabilities that have no obvious causal interpretation. 
  2. There may be disagreement on what the confounders are, which will lead to contested contract interpretations.
  3. The expert consensus on what the important confounders are may change during the lifetime of the contract, which would require the entire thing to be relisted. Etc.

For practical reasons, therefore, this approach does not seem feasible.

 

I’d like a discussion on the following questions:  Are there any other ways to list a contract that gives market participants an incentive to aggregate information on  causal quantities? If not, is futarchy doomed?

(Thanks to the Less Wrong meetup in Boston and particularly Jimrandomh for clarifying my thinking on this issue)

(I reserve the right to make substantial updates to this text in response to any feedback in the comments)
