Why is the A-Theory of Time Attractive?

6 Tyrrell_McAllister 31 October 2014 11:11PM

I've always been puzzled by why so many people have such strong intuitions about whether the A-theory or the B-theory1 of time is true.  [ETA: I've written "A-theory" and "B-theory" as code for "presentism" and "eternalism", but see the first footnote.]  It seems like nothing psychologically important turns on this question.  And yet, people often have a very strong intuition supporting one theory over the other.  Moreover, this intuition seems to be remarkably primitive.  That is, whichever theory you prefer, you probably felt an immediate affinity for that conception of time as soon as you started thinking about time at all.  The intuition that time is A-theoretic or B-theoretic seems pre-philosophical, whichever intuition you have.  This intuition will then shape your subsequent theoretical speculations about time, rather than vice versa.

Consider, by way of contrast, intuitions about God.  People often have a strong pre-theoretical intuition about whether God exists.  But it is easy to imagine how someone could form a strong emotional attachment to the existence of God early in life.  Can emotional significance explain why people have deeply felt intuitions about time?  It seems like the nature of time should be emotionally neutral2.

Now, strong intuitions about emotionally neutral topics aren't so uncommon.  For example, we have strong intuitions about how addition behaves for large integers.  But usually, it seems, such intuitions are nearly unanimous and can be attributed to our common biological or cultural heritage.  Strong disagreeing intuitions about neutral topics seem rarer.

Speaking for myself, the B-theory has always seemed just obviously true.  I can't really make coherent sense out of the A-theory.  If I had never encountered the A-theory, the idea that time might work like that would not have occurred to me.  Nonetheless, at the risk of being rude, I am going to speculate about how A-theorists got that way.  (B-theorists, of course, just follow the evidence ;).)

I wonder if the real psycho-philosophical root of the A-theory is the following. If you feel strongly committed to the A-theory, maybe you are being pushed into that position by two conflicting intuitions about your own personal identity.

Intuition 1: On the one hand, you have a notion of personal identity according to which you are just whatever is accessible to your self-awareness right now, plus maybe whatever metaphysical "supporting machinery" allows you to have this kind of self-awareness.

Intuition 2: On the other hand, you feel that you must identify yourself, in some sense, with you-tomorrow.  Otherwise, you can give no "rational" account of the particular way in which you care about and feel responsible for this particular tomorrow-person, as opposed to Britney-Spears-tomorrow, say.

But now you have a problem.  It seems that if you take this second intuition seriously, then the first intuition implies that the experiences of you-tomorrow should be accessible to you-now.  Obviously, this is not the case.  You-tomorrow will have some particular contents of self-awareness, but those contents aren't accessible to you-now.  Indeed, entirely different contents completely fill your awareness now — contents which will not be accessible in this direct and immediate way to you-tomorrow.

So, to hold onto both intuitions, you must somehow block the inference made in the previous paragraph.  One way to do this is to go through the following sequence:

  1. Take the first intuition on board without reservation.
  2. Take the second intuition on board in a modified way: "identify" you-now with you-tomorrow, but don't stop there.  If you left things at this point, the relationship of "identity" would entail a conduit through which all of your tomorrow-awareness should explode into your now, overlaying or crowding out your now-awareness.  You must somehow forestall this inference, so...
  3. Deny that you-tomorrow exists!  At least, deny that it exists in the full sense of the word.  Thus, metaphorically, you put up a "veil of nonexistence" between you-tomorrow and you-now.  This veil of nonexistence explains the absence of the tomorrow-awareness from your present awareness. The tomorrow-awareness is absent because it simply doesn't exist!  (—yet!)  Thus, in step (2), you may safely identify you-now with you-tomorrow.  You can go ahead and open that conduit to the future, without any fear of what would pour through into the now, because there simply is nothing on the other side.

One potential problem with this psychological explanation is that it doesn't explain the significance of "becoming".  Some A-theorists report that a particular basic experience of "becoming" is the immediate reason for their attachment to the A-theory.  But the story above doesn't really have anything to do with "becoming", at least not obviously.  (This is because I can't make heads or tails of "becoming".)

Second, intuitions about time, even in their primitive pre-reflective state, are intuitions about everything in time.  Yet the story above is exclusively about oneself in time.  Something more seems required to pass from intuitions about oneself in time to intuitions about how the entire universe is in time.


 

1 [ETA: In this post, I use the words "A-theory" and "B-theory" as a sloppy shorthand for "presentism" and "eternalism", respectively.  The point is that these are theories of ontology ("Does the future exist?"), and not just theories about how we should talk about time.  This shouldn't seem like merely a semantic or vacuous dispute unless, as in certain caricatures of logical positivism, you think that the question of whether X exists is always just the question of whether X can be directly experienced.]

2 Some people do seem to be attached to the A-theory because they think that the B-theory takes away their free will by implying that what they will choose is already the case right now.  This might explain the emotional significance of the A-theory of time for some people.  But many A-theorists are happy to grant, say, that God already knows what they will do.  I'm trying to understand those A-theorists who aren't bothered by the implications of the B-theory for free will.

Rationality Quotes October 2014

4 Tyrrell_McAllister 01 October 2014 11:02PM

Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

 

Link: How Community Feedback Shapes User Behavior

4 Tyrrell_McAllister 17 September 2014 01:49PM

This article discusses how upvotes and downvotes influence the quality of posts on online communities.  The article claims that downvotes lead to more posts of lower quality from the downvoted commenter.

From the abstract:

Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. [...] This paper investigates how ratings on a piece of content affect its author’s future behavior. [...] [W]e find that negative feedback leads to significant behavioral changes that are detrimental to the community.  Not only do authors of negatively-evaluated content contribute more, but also their future posts are of lower quality, and are perceived by the community as such.  In contrast, positive feedback does not carry similar effects, and neither encourages rewarded authors to write more, nor improves the quality of their posts.

The authors of the article are Justin Cheng, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec.

Edited to add: NancyLebovitz already posted about this study in the Open Thread from September 8-14, 2014.

Rationality Quotes June 2014

9 Tyrrell_McAllister 01 June 2014 08:32PM

Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

 

What Bayesianism taught me

62 Tyrrell_McAllister 12 August 2013 06:59AM

David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math[1]:

Bayesianism boils down to “don’t be so sure of your beliefs; be less sure when you see contradictory evidence.”

Now that is just common sense. Why does anyone need to be told this? And how does [Bayes'] formula help?

[...]

The leaders of the movement presumably do understand probability. But I’m wondering whether they simply use Bayes’ formula to intimidate lesser minds into accepting “don’t be so sure of your beliefs.” (In which case, Bayesianism is not about Bayes’ Rule, after all.)

I don’t think I’d approve of that. “Don’t be so sure” is a valuable lesson, but I’d rather teach it in a way people can understand, rather than by invoking a Holy Mystery.

What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
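Bayes' formula itself is just P(H|E) = P(E|H)·P(H) / P(E).  As a minimal sketch of what applying it actually looks like (the numbers below are hypothetical, chosen only for illustration):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical numbers: a test with 90% sensitivity, a 5% false-positive
# rate, and a 1% prior probability that the hypothesis is true.
prior = 0.01
p_e_given_h = 0.90
p_e_given_not_h = 0.05

# Total probability of seeing the evidence at all (law of total probability).
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # → 0.154: a large update, yet far below 0.9
```

Even this toy case shows the formula adding something beyond "be less sure": it says exactly how much a positive result should move you, and the answer (about 15%) is much lower than most people's gut estimate.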

I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.

continue reading »

[SEQ RERUN] Your Strength as a Rationalist

5 Tyrrell_McAllister 07 July 2011 11:46PM

Today's post, Your Strength as a Rationalist, was originally published on 11 August 2007. A summary (taken from the LW wiki):

A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was The Apocalypse Bet, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

An explanation of Aumann's agreement theorem

6 Tyrrell_McAllister 07 July 2011 06:22AM

I've written up a 2-page explanation and proof of Aumann's agreement theorem.  Here is a direct link to the pdf via Dropbox.  The document is also available on Scribd.  (It can be viewed by anyone, but a Scribd login appears to be required to download, so I won't be using Scribd anymore.)  

The proof in Aumann's original paper is already very short and accessible.  (Wei Dai gave an exposition closely following Aumann's in this post.)  My intention here was to make the proof even more accessible by putting it in elementary Bayesian terms, stripping out the talk of meets and joins in partition posets.  (Just to be clear, the proof is just a reformulation of Aumann's and not in any way original.)
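For readers who want the statement before opening the pdf, here is a rough rendering of the theorem in the partition formulation (sketched from Aumann's 1976 paper; the notation is mine, not the write-up's):

```latex
% Agents 1 and 2 share a common prior $P$ and have information partitions
% $\mathcal{P}_1$, $\mathcal{P}_2$.  Aumann's theorem: if, at state $\omega$,
% it is common knowledge that
%   $P\bigl(A \mid \mathcal{P}_1(\omega)\bigr) = q_1$ and
%   $P\bigl(A \mid \mathcal{P}_2(\omega)\bigr) = q_2$,
% then the two posteriors must coincide:
q_1 = q_2 .
```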

I will appreciate any suggestions for improvements.


Update:  I've added an abstract and made one of the conditions in the formal description of "common knowledge" explicit in the informal description.

Update: Here is a direct link to the pdf via Dropbox (ht to Vladimir Nesov).

Update: In this comment, I explain why the definition of "common knowledge" in the write-up is the same as Aumann's.

[SEQ RERUN] The Apocalypse Bet

3 Tyrrell_McAllister 06 July 2011 05:27PM

Today's post, The Apocalypse Bet, was originally published on 09 August 2007. A summary (taken from the LW wiki):

If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen.
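The cash flows of the proposed bet can be sketched directly.  Purely for illustration, suppose the bet starts in 2011 and X = 100 dollars per year:

```python
X = 100                                    # hypothetical yearly stake
start, your_doom, my_doom = 2011, 2020, 2030

# You predict the apocalypse in 2020; I predict 2030.
# I pay you X per year up to your predicted date...
my_payments = X * (your_doom - start)      # 9 years of payments
# ...and if the world survives 2020, you pay me 2X per year until mine.
your_payments = 2 * X * (my_doom - your_doom)  # 10 years of repayment
print(my_payments, your_payments)          # → 900 2000
```

If you are right, you keep my payments and never repay; if I am right, I more than recoup them, which is what makes the bet informative about our real credences.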

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was You Can Face Reality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] You Can Face Reality

6 Tyrrell_McAllister 05 July 2011 03:25PM

Today's post, You Can Face Reality, was originally published on 09 August 2007. A summary (taken from the LW wiki):

This post quotes a poem by Eugene Gendlin, which reads, "What is true is already so. / Owning up to it doesn't make it worse. / Not being open about it doesn't make it go away. / And because it's true, it is what is there to be interacted with. / Anything untrue isn't there to be lived. / People can stand what is true, / for they are already enduring it."

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was The Virtue of Narrowness, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] The Virtue of Narrowness

4 Tyrrell_McAllister 03 July 2011 07:47PM

Today's post, The Virtue of Narrowness, was originally published on 07 August 2007. A summary (taken from the LW wiki):

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was The Proper Use of Doubt, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
