In response to The Crackpot Offer
Comment author: Barkley__Rosser 11 September 2007 05:38:01PM 1 point [-]

Not sure if this is cranky or not, but when I was young I noticed that the Lorentz transformation of space-time due to relativistic effects, the square root of one minus v squared over c squared, implies an imaginary solution for any v greater than c, that is, for traveling faster than the speed of light. Now, most sci-fi stories suggest that one would go backwards in time if one exceeded the speed of light, but I deduced that one would go into a second time dimension.
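A quick numeric sketch of the point, in units where c = 1 (purely illustrative):

```python
import cmath

def time_dilation_factor(v, c=1.0):
    # The factor sqrt(1 - v^2/c^2) from the Lorentz transformation:
    # real for v < c, zero at v = c, purely imaginary for v > c.
    return cmath.sqrt(1 - (v / c) ** 2)

print(time_dilation_factor(0.6))  # (0.8+0j): ordinary time dilation
print(time_dilation_factor(2.0))  # ~1.732j: imaginary once v > c
```

For v = 2c the factor is sqrt(1 - 4) = sqrt(-3), which is purely imaginary, the observation that prompts the "second time dimension" speculation.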

Of course the problem is that as long as Einstein is right, it is simply impossible to exceed the speed of light, thereby making the entire speculation irrelevant.

Comment author: Barkley__Rosser 17 August 2007 08:59:55PM 0 points [-]

Eliezer,

This is about to scroll off, but, frankly, I do not know what you mean by "normative" in this context. The usual usage of this term implies statements about values or norms. I do not see how any of this has anything to do with values or norms. Perhaps I do not understand the "whole point of Bayes' Theorem." Then again, I do not see anything in your reply that actually counters the argument I made.

Bottom line: I think your "law" is only true by assumption.

Comment author: Barkley__Rosser 15 August 2007 07:56:41PM 0 points [-]

Eliezer,

I do not necessarily believe that likelihood ratios are fixed for all time. The part of me that is Bayesian tends to the radically subjective form a la Keynes.

Also, I am a fan of nonstandard analysis. So, I have no problem with infinities that are not mere limits.

Comment author: Barkley__Rosser 14 August 2007 08:27:05PM 0 points [-]

Furthermore, I remind one and all that Bayesian convergence is asymptotic. Even if the conditions hold, the "true" probability is approached only in the infinite time horizon. Convergence could occur so slowly that the posterior might stay on the "wrong" side of 50% well past the time that any finite viewer might hang around to watch.

There is also the black swan problem. The posterior could move in the wrong direction until the black swan datum finally shows up and pushes it the other way, which, again, may not occur during the time period someone is observing. This black swan question is exactly the frame of the discussion here, as it is Taleb who has gone on and on about this business of evidence and the absence thereof.
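The slow-convergence point can be illustrated with a toy calculation: two coin hypotheses, a prior badly tilted toward the wrong one, and a data stream matching the true bias. All numbers here are hypothetical.

```python
import math

# Hypothetical setup: H1 says P(heads) = 0.6 (the truth), H0 says 0.5.
# The prior is badly wrong: P(H1) = 0.001.
log_odds = math.log(0.001 / 0.999)
llr_head = math.log(0.6 / 0.5)  # log-likelihood ratio of one head
llr_tail = math.log(0.4 / 0.5)  # ... and of one tail

# A data stream matching H1's frequencies: 3 heads, then 2 tails, repeating.
last_step_below_half = 0
for i in range(1000):
    log_odds += llr_head if i % 5 < 3 else llr_tail
    if log_odds < 0:  # posterior P(H1) still on the "wrong" side of 50%
        last_step_below_half = i + 1

print(last_step_below_half)  # 340: hundreds of observations spent below 50%
```

Even with data that perfectly reflect the true bias, the posterior spends 340 observations favoring the wrong hypothesis before crossing 50% for good.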

Comment author: Barkley__Rosser 14 August 2007 08:24:10PM 0 points [-]

Tom,

Bayes' Theorem has its limits. The support must be continuous and the dimensionality finite. Some of the discussion here has raised issues that could be relevant to these kinds of conditions, such as fuzziness about the truth or falsity of H. This is not as straightforward as you claim it is.

Comment author: Barkley__Rosser 13 August 2007 10:06:38PM -1 points [-]

I would agree that the lack of sabotage cannot be argued as support for an increase in the probability that a fifth column exists. But it may not be sufficient to lower the probability that there is a fifth column by much, and certainly may not be sufficient to lower a prior of greater than 50% to below 50%, even assuming that one is a Bayesian.
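To make that concrete, here is a one-step Bayesian update with hypothetical numbers: the absence of sabotage is evidence against a fifth column, yet a high prior survives it above 50%.

```python
# Hypothetical numbers: prior P(fifth column) = 0.8.
# If a fifth column exists, suppose P(no sabotage yet | H) = 0.9;
# if none exists, no sabotage is certain: P(no sabotage | not H) = 1.0.
prior = 0.8
p_e_given_h = 0.9
p_e_given_not_h = 1.0

# Bayes' Theorem: P(H | E) = P(E | H) P(H) / P(E)
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.783: lowered from 0.8, but still well above 50%
```

The evidence does move the probability in the right direction, just not far enough to flip a strong prior.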

In response to Feeling Rational
Comment author: Barkley__Rosser 26 April 2007 06:09:40PM 0 points [-]

This is one of those rare moments where the usually horribly heterodox economist, me, defends orthodox economic theory. Looked at closely, orthodox microeconomics says nothing at all about people's preferences themselves, which presumably involve their emotional reactions to various things. What is assumed is certain things about these preferences: that people know what they are, that they exhibit continuity, and that they have a degree of internal consistency in the sense of exhibiting transitivity; it also makes people behave more "rationally" and exhibit continuous demand functions if their utility functions exhibit convexity. So, rationality is not about what your preferences are or the degree to which they are based on one's emotions. It is that one knows what they are, that they have a degree of internal coherence or consistency, and, the biggie, that people actually act on the basis of their real preferences.

A lot of the problems regarding "irrationality" involve people behaving in internally inconsistent manners, especially over time. Behavioral economists are now arguing over whether one should deal with this via multiple-personality (or multiple-preference-system) models or via approaches that stress focusing on "rationality" and keeping in mind one's "real" preferences. Thus, hyperbolic discounting involves "time inconsistency." I want things now that I shall regret having wanted so much later. I eat the candy bar now and wake up fat later, etc. Is this a combat between two preference systems or just "irrationality"? People like Matthew Rabin, who tend to use the latter approach, in fact say that the goal is to have people be "rational," to know their own real preferences and to act on them. If they really do not mind being fat, then go ahead and eat the candy bar. But in any case, it is perfectly OK either way to have the caring about being fat, or not caring about being fat, be based on one's emotional reactions. One should understand one's own emotional reactions. That is rationality.
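The preference reversal behind "time inconsistency" can be sketched in a few lines. The amounts, delays, and discount parameter here are all hypothetical.

```python
def hyperbolic(amount, delay, k=1.0):
    # Present value under hyperbolic discounting: V / (1 + k * delay).
    return amount / (1 + k * delay)

# Hypothetical choice: $10 on day 10 vs $15 on day 15.
# Viewed from day 0, the larger-later reward wins...
far_small = hyperbolic(10, delay=10)   # ~0.909
far_large = hyperbolic(15, delay=15)   # ~0.938
# ...but on day 9 (delays now 1 and 6) the preference reverses:
near_small = hyperbolic(10, delay=1)   # 5.0
near_large = hyperbolic(15, delay=6)   # ~2.14
print(far_small < far_large, near_small > near_large)  # True True
```

An exponential discounter (value V * d**delay) would never reverse: shifting both rewards closer by the same number of days multiplies both values by the same factor, leaving the ranking unchanged.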

Comment author: Barkley__Rosser 12 April 2007 05:34:48PM 0 points [-]

Hal,

You are being a bad boy. In his earlier discussion Eliezer made it clear that he did not approve of this terminology of "updating priors." One has posterior probability distributions. The prior is what one starts with. However, Eliezer has also been a bit confusing with his occasional use of such language as a "prior learning." I repeat, agents learn, not priors, although in his view of the post-human computerized future, maybe it will be computerized priors that do the learning.

The only way one is going to get "wrong learning," at least somewhat asymptotically, is if the dimensionality is high and the support is disconnected. Eliezer is right that if one starts off with a prior that is far enough off, one might well have "wrong learning," at least for a while. But, unless the conditions I just listed hold, eventually the learning will move in the right direction and head towards the correct answer, or probability distribution; at least that is what Bayes' Theorem asserts.
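A conjugate Beta-Bernoulli sketch of that eventual correction, with hypothetical numbers throughout:

```python
# A badly mis-specified prior eventually gets overwhelmed by data.
# Beta(a, b) prior on a coin's heads-rate; conjugate updating just
# adds observed heads to a and tails to b.
a, b = 1.0, 99.0            # prior Beta(1, 99): prior mean 0.01
print(a / (a + b))          # 0.01: the prior is far from the truth

# Feed in a balanced stream: 1000 heads, 1000 tails (true rate 0.5).
heads, tails = 1000, 1000
a += heads
b += tails
posterior_mean = a / (a + b)
print(round(posterior_mean, 4))  # 0.4767: near the true 0.5 despite the prior
```

The learning moves in the right direction, but note how much data it took: after 2000 observations the bad prior still drags the estimate a couple of points below the truth.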

OTOH, the reference to "deep Bayesianism" raises another issue, that of fundamental subjectivism. There is this deep divide among Bayesians between the ones that are ultimately classical frequentists but who argue that Bayesian methods are a superior way of getting to the true objective distribution, and the deep subjectivist Bayesians. For the latter, there are no ultimately "true" probability distributions. We are always estimating something derived out of our subjective priors as updated by more recent information, wherever those priors came from.

Also, saying a prior should be the known probability distribution, say of cancer victims, assumes that this probability is somehow known. The prior is always subject to how much information the assumer of a prior has when they begin their process of estimation.

In response to "Inductive Bias"
Comment author: Barkley__Rosser 10 April 2007 06:14:47PM 0 points [-]

Eliezer,

Ah, so you are a constructivist, perhaps even an intuitionist? Even so, the point of such theorems is that they can play out over a long transient within finite constraints, with the biggie here being the non-connectedness of the support. One can get stuck in a cycle going nowhere for a long time, just as in such phenomena as transient chaos. With a suitably large, but finite, dimensionality and a disconnected support, one can wander in a wilderness without much serious convergence for a very long time.

I find the idea of a "prior learning" to be a bit weird. It is an agent who learns, although the prior the agent walks in with will certainly play a role in the ability of the agent to learn. But the problem of inertia that I raised has more to do with the nature of agents than with their priors.

Getting to the raison d'etre of this blog, the question here is whether bias arises from the nature of the prior an agent brings to a decision or analytical process, or whether the agent's open-mindedness, the willingness to adjust posteriors in the face of evidence, is more important. Presumably both play at least some role.

In response to "Inductive Bias"
Comment author: Barkley__Rosser 09 April 2007 08:36:31PM 0 points [-]

Eliezer,

Yes, thank you for correcting my sloppy wording.

So, it is the marginal posterior probabilities that exhibit inertia, or slow updating through learning, not the eternally unvarying "priors."
