Occupy Wall Street: Predictions, Speculations

-5 byrnema 19 October 2011 12:44AM

On reddit today I read 'the-Gandhi-quote' on a post about the Wall Street Occupation Protest:

"First they ignore you. Then they laugh at you…"

I'm transitioning, possibly, from the laughing stage and am beginning to feel the tiniest bit excited that perhaps some actual change is in order. On the one hand, I feel sufficiently skeptical about the probability of a 'revolution'. On the other hand, given how fast the world has been changing (the internet, globalization), maybe change is inevitable and this is the way it happens now.

I know this is a rationality site, not a current affairs site, but when something piques my interest I like to know what "Less Wrong" thinks. Waiting for a spontaneous post could take forever, and finding a 'rationality spin' would be disingenuous, so I'll just ask:

What is possible and what is likely?

What factors are important?

 

 

Bayesian Reasoning Applied to House Selling: Listing Price

1 byrnema 26 August 2011 11:43PM

Like Yvain's parents, I am planning on moving house. Selling a house and buying a house involve making a lot of decisions based on limited information, which I thought would make a good set of exercises for the application of Bayesian reasoning. I need to decide what price to list my house for, determine how much time and money to put into fixing it up, choose a new home, and then there are the two poker games of the final negotiations of the sale.

(I logged onto Less Wrong having just made the decision to consider posting this article, so I was kind of weirded out at first by the title of Yvain's post; but then I was relieved that the topic was somewhat different. I am used to coincidences, but on the other hand they push me a little toward the paranoid end of my spectrum, and I'll feel less stable for a few hours. I already know Google tracks me, and who knows what algorithms could be running given a bunch of computer scientists...?)

 

House Story

tl;dr: We're listing at the appraised value +10%.

A few years ago, we purchased a beautiful house. 'We' is my husband and me, plus my parents. We purchased the house because it includes a guest house where my parents can retire. However, my mom continues to postpone retirement, and in the meantime my husband and I decided we would a) like more light, b) like a shorter commute and c) could purchase two homes we prefer for the price of this one -- my parents would enjoy a house on the water. (Great post and spot on about the features that matter, Yvain!)

I would be happy to sell the house for +5%, covering real estate fees and the new flooring we put in. However, three houses in the cul-de-sac have sold this year for +10%, so we listed ours at that price too. Our house is bigger than theirs but not as nice (they have granite and impressive entrances and we don't). On the other hand, having the guest house makes us special.

Via agent and potential buyer feedback, we're coming to realize that we might be lucky to sell the house for +5%. At this price level, people prefer a house that is impressive and in perfect condition.

 

Primary Bayesian Question

My primary question is the following: how should we decide to modify our listing price as we get more information?

First, I've read that if a house is priced correctly you'll get an average of one offer every 10 showings. So far we've had 2 showings without an offer. After how many showings should we reduce the price?

Second, the other three houses sold in 6 or 7 months. After how many months should we reduce the price?

Keep in mind, we don't have to move, and I estimate that I would be willing to stay in this house for about +3% per year. In other words, I would be willing to wait 2 years for a higher offer if I could sell it for +6% more by doing so.
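One way to formalize the first question is a conjugate beta-binomial update on the per-showing offer probability. The sketch below is my own toy model, not anything from the post: the Beta(1, 9) prior is chosen only because its mean matches the one-offer-per-ten-showings rule of thumb, and it is deliberately weak.

```python
# Rule of thumb: a correctly priced house gets ~1 offer per 10 showings.
# Model the per-showing offer probability p with a weak Beta(1, 9) prior,
# whose mean is 1 / (1 + 9) = 0.1 (an assumption, not data).
alpha_prior, beta_prior = 1.0, 9.0

# Data so far: 2 showings, 0 offers.
showings, offers = 2, 0

# Conjugate beta-binomial update: successes add to alpha, failures to beta.
alpha_post = alpha_prior + offers
beta_post = beta_prior + (showings - offers)
posterior_mean = alpha_post / (alpha_post + beta_post)  # 1/12, about 0.083

# Sanity check: even if the house *is* priced right (p = 0.1), two
# offer-less showings are unsurprising: 0.9 squared = 0.81.
p_no_offers_if_priced_right = (1 - 0.1) ** showings
```

On this toy model, two showings barely move the estimate from the prior, and the no-offer data would be expected 81% of the time even at the correct price -- which suggests waiting for considerably more showings before treating the silence as evidence for a price cut.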

I anticipate that after posting this I will be embarrassed that it is so pecuniary. On the other hand, this makes it concrete and the problem in general doesn't have too many emotional factors. Any money we make over the first +5% can be used as a down payment for our next house after we pay our parents back. (I did feel embarrassed, so I took out the dollar values and replaced with relative percents.)

Most-Moral-Minority Morality

-3 byrnema 27 June 2011 04:28PM

In this post, I discuss a theoretical strategy for finding a morally optimum world in which -- regardless of my intrinsic moral preferences -- I fold to the preferences of the most moral minority.


I don't personally find X to be intrinsically immoral. I know that if some people knew this about me, they might feel shocked, sad and disgusted. I can understand how they would feel because I feel that Y is immoral and not everyone does, even though they should.

These are unpleasant feelings, and combined with the fear that immoral events will happen more frequently due to apathy, I'm willing to fold X into my category of things that shouldn't happen. Not because of X itself, but because I know it makes people feel bad.

This is more than the game-theoretic strategy "I'll be anti-X if they'll be anti-Y." This is a reflection that the most moral world is a world in which people's moral preferences are maximally satisfied, so that no one needs to feel that their morality is marginalized and suffer the feelings of disgust and sadness.

Ideal Application: Nested Morality Model

The sentiment and strategy just described is ideal in the case of a nested model of moralities, in which preferences can be roughly universally ranked from most immoral to least immoral: X1, X2, X3, X4, ... . Everyone has a threshold beyond which they no longer care. For example, all humans consider the first few elements to be immoral, but only the most morally sensitive humans care about the elements after the first few thousand. In a world where this model was accurate, it would be ideal to fold to the morality of the most morally sensitive. Not only would you be satisfying the morality of everyone, you could be certain that you were also satisfying the morality of your most moral, future selves, especially by extending the fold a little further out.

Figure: Hierarchy of Moral Preferences in the Nested Morality Model

Note that in this model it doesn't actually matter if individual humans would rank the preferences differently. Since they're all satisfied, the ordering of preferences doesn't matter. Folding to the most moral minority should solve all moral conflicts that result from varying sensitivity to a moral issue, regardless of differences in relative rankings. For example, by such a strategy I should become a vegetarian (although I'm not).
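The nested model can be made concrete with a toy sketch (the names and sensitivity indices below are invented purely for illustration):

```python
# Nested morality model: issues are universally ranked from most to least
# immoral, and each person cares about every issue up to a personal
# sensitivity index (hypothetical data).
sensitivity = {"A": 3, "B": 7, "C": 12}

# "Folding to the most moral minority" means prohibiting everything up to
# the highest sensitivity index anyone holds.
cutoff = max(sensitivity.values())  # 12, set by the most sensitive person

# Under the nesting assumption, that single cutoff satisfies everyone,
# regardless of how individuals would rank issues among themselves.
everyone_satisfied = all(s <= cutoff for s in sensitivity.values())
```

The point of the sketch is the nesting assumption itself: because each person's concerns are a prefix of the universal ranking, one cutoff at the maximum satisfies all of them at once. The conflicts discussed in the next section are exactly the cases where this assumption fails.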

Real Life Application: Very Limited

However, in reality, moral preferences aren't nested in sensitivity, but conflicting. Someone may have a moral preference for Y, while someone else may have a clear preference for ~Y. Such conflicts are not uncommon and may represent the majority of moral conflicts in the world.

Secondly, even if a person is indifferent about the moral value of Y and ~Y, they may value the freedom or the diversity of having both Y and ~Y in the world.

When it comes to the latter conflicts, I think that the world would be a happier place if freedom and diversity suffered a little bit for very strong (albeit minority) moral preferences. However, freedom and diversity should not suffer too much for very weak preferences, or for preferences held by only a very small number of people. With such a trade-off, an optimum cannot be found, since I don't expect to be able to place relative weights on 'freedom', 'diversity' and an individual's moral preference in the general case.

 

For now, I think I will simply resolve to (consider) folding to the moral preference Z of a fellow human in the simplest case where I am apathetic about Z and also indifferent to the freedom and diversity of Z and ~Z.

Hypotheses For Dualism

1 byrnema 09 January 2010 08:05AM

In this post I present the first few hypotheses that I can think of for why people insist on a metaphysical aspect to consciousness, and develop one in some detail: a "reality is simulated" hypothesis.

Please contribute your own hypotheses.  Why is there a persistent belief that consciousness is metaphysical?


Dissenting Views

19 byrnema 26 May 2009 06:55PM

Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. Certainly the observation of homogeneity is true to the extent that the community shares common views that are minority views in the general population.

Maintaining a High Signal to Noise Ratio

The Less Wrong community shares an ideology that it is calling ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true to itself. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal to noise ratio.

For the most part, Less Wrong is remarkably successful at cultivating this high signal to noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, align reasonably well, or that align with one of the directions that the ideology is reasonably evolving.


Rationalist Role in the Information Age

4 byrnema 30 April 2009 06:24PM

In response to Marketing rationalism, Bystander Apathy, Step N1 in Extreme Rationality: It Could Be Great, and Rationality: Common Interest of Many Causes.

The problem that motivates this post is:
 “Given a controversial question in which there are good and bad arguments on both sides, as well as unreliable and conflicting information, how do you determine the answer when you’re not yourself an expert in the subject?”

Well into the information age, we are still not pooling our resources in the most efficient way to get to the bottom of things. It would be enormously useful to develop some kind of group strategy for answering questions whose solutions are out there somewhere.

The idea I'm presenting is a way to apply our intellectual (and rational) resources in a niche way, which I will describe shortly, to facilitate public (non-expert) understanding of real world problems.

The Niche and the Need

Science, obviously, does the best job of solving problems. I'm confident that epidemiologists are effectively and efficiently working on the best models for pandemics as I write this post.

And journalists do a pretty good job of what they do: providing information about what the scientists are doing. What's best about how journalists do that is that they always provide the source of the information, so that a rational person can judge the truth-value of that information. A qualified and rational person can then put the information in perspective.

Alas, people are not so good at interpreting the information: they are neither rational in weighting the information nor qualified to put it in perspective. (Presumably, the epidemiologists are too busy to do this.)

Interpreting information correctly is a service that rationalists could provide.


Wednesday depends on us.

1 byrnema 29 April 2009 03:47AM

In response to Theism, Wednesday, and Not Being Adopted

It is a theist cliché: you need religion to define morality. The argument doesn't have to be as simplistic as “you need God to impose it”, but at the least it is the belief that your community needs to agree on what is ethical. When a community starts talking about what is ethical, they quickly depart from anything strictly fact-based. As a community, they need to figure out what the morality is (e.g., love your neighbor), construct a narrative using symbols that make sense to everyone (there’s this external entity God, someone like your father, who wants it this way) and enforce it (if you don’t go along, you go to Hell.) This is probably 40% of what religion is.

While the development of a religion is a community effort to some extent (communities choose among competing religions and religions evolve), the main work is done by the priests. The priest is usually an exceptionally good thinker/reasoner/philosopher – maybe 3-5 standard deviations from the mean (corrected from my original estimate of 4-7).

There are a few things that are very confusing to Wednesday when you try to convert her. First of all, she understands on at least a subconscious level that religion is her community’s ethical system. When you say you don’t believe in God, she thinks you’re saying, ‘it’s OK to torture babies’. What’s scary is that she’s somewhat justified here: without an externally applied ethical belief system, individual ethics can vary widely from what she accepts as ethical (and what you accept as ethical).

 If morality is something that humanity protects, can we blame Wednesday for that?


Atheist or Agnostic?

4 byrnema 18 April 2009 09:25PM

If you’re not sure:

Where I come from, if you don’t believe in God and you don’t have a proof that God doesn’t exist, you say you’re agnostic. A typical conversation in polite company would go like this:

Woman: What are your religious views?

Me: Oh, I’m an atheist. You?

Woman: Well, do you know for certain that God doesn’t exist?

Me: I’m pretty sure, that’s what I believe.

Woman: How do you know that God isn’t withholding all evidence that he exists to test your faith? How do you know that’s not the case?

Me: Well, it’s possible that everything is an illusion.

Woman (with finality): You’re agnostic.

 

Every community has its own set of definitions. Here on LW, you are an atheist, simply, if you don’t believe in God. You don’t have to be 100% certain – we understand that nothing is 100% certain – and you believe in God’s non-existence if you believe it with the same conviction with which you believe other things, such as that the Earth orbits the sun. For a fuller explanation, see this comment.

 For the rest of us:

My favorite passage in the Bible is Exodus 4, because this is the part of the Bible that made me suspect that it was written by men; men who were pretty unsophisticated in their sense of justice and reasonable deity behavior. God asks Moses to come be on His side, and Moses agrees. The next thing that happens is that God is trying to kill Moses because his son isn’t circumcised. I guess God already asked Moses to do that? They left that part out of the story. Nevertheless, God seems more peevish than rational here. Presumably, he chose Moses for a reason, so trying to kill him in the very next scene doesn’t make a lot of sense.

As someone who has had some trouble figuring out how things are thought about in atheist circles, I would like to suggest not being like God in Exodus 4 when people ask why we’re atheists even though we can’t prove there’s no God. It’s understandably annoying to repeat yourself, but they need to understand the context of atheism here. You can refer them to this comment again, or to "The Fallacy of Gray", or here.

And steel yourself for the inevitable argument that belief in God is a special case and deserves extra certainty. These are final steps…

 

----

I would like this to be a reference for people coming onto the site that consider themselves agnostic. Any editing suggestions welcome.

GroupThink, Theism ... and the Wiki

-4 byrnema 13 April 2009 05:28PM

In response to The uniquely awful example of theism, I presented myself as a datapoint of someone in the group who disagrees that theism is uncontroversially irrational.

At a cost of considerable time, several karma points and two bad posts, I now retract my position.


Maybe Theism Is OK -- Part 2

-6 byrnema 11 April 2009 06:32AM

In response to: The uniquely awful example of theism

And Maybe Theism Is OK

Finally, I think I understand where gim and others are coming from when they made statements that I thought represented overly intolerant views of religious belief. A good summary of the source of the initial difference in opinion is that while many people in this group aim to eliminate all sources of irrationality, I would like to pick and choose which sources of irrationality to keep, in the optimization of a different problem: general life-hacking.

Probably many people in this group believe that the best life-hack would be to eliminate irrationality. But I'm pretty sure this depends on the person (not everyone is suited for X-rationality), and I'm pretty sure -- though not certain -- that my best life-hack would include some irrationality.

Since my goals are different than that of this forum, many of my views are not relevant here, and there is no need to debate them.

Instead, I would like to present two arguments (1, 2) for why it could be rational to hold an irrational belief, and two arguments (3, 4) as to why someone could be more accepting of the existence of irrational beliefs (i.e., why not to hate them).

(1) It could be rational to hold an irrational belief if you are aware of your irrational belief and choose to hold it because it is grafted to components of your personality/ psyche that are valuable to you. For example, you may find that

  • eschewing your religious beliefs makes you feel depressed and unable to work productively
  • your ability to control unwanted impulses depends on a moral conscience that is inextricably tied to beliefs about God
  • your ability to perform an artistic activity that you enjoy is compartmentalized with spiritual beliefs

I imagine these situations would be the result of an organically developing mind that has made several errors and is possibly unstable. But until we have a full understanding of mental processes/psychology/the physiology of emotions, we cannot expect a rational person to just "tough it out" to optimize rationality while his life falls apart.

Later added: This argument has since been described better, with a better emphasis, in [this comment](http://lesswrong.com/lw/aq/how_much_thought/6zp).

(2) It could be rational to hold an irrational belief if you choose to hold it because you would like to exercise true control of your mind. Put another way, you may find it to be an aesthetic art of some form to choose a set of beliefs and truly believe them. Why would anyone want to do this? Eliminating irrational beliefs and becoming rational is a good exercise in controlling your mind. I hazard that a second exercise would be to believe what you consciously choose to.

(3) I think there is another reason to consciously choose to try to believe something that you don't believe rationally -- true understanding of the 'enemy': the source and the grip of an irrational thought. What irked me most about the negative comments about religious views was the lack of any empathy for those views. It may seem like a contradiction, but while I believe some religious views are irrational, I do not dismiss people who hold them as hopelessly irrational. With empathy, I believe that it is possible to hold religious views and not greatly compromise rationality.

(4) Maybe you are indeed right that any kind of religious view is irrational and that we would be better off without it. However, it is not at all clear that religious views can ever be completely exorcised... Suppose we wanted to create a world in which important parts of people's personalities are never tied to religious views. Are children allowed to daydream? Is a child allowed to daydream that they are omnipotent? Are they allowed to pretend there is a God for a day? How will it affect creativity, motivation and development if there is no empathy for, or understanding of, God?
