[LINK] The Bayesian Second Law of Thermodynamics

8 shminux 12 August 2015 04:52PM

Sean Carroll et al. posted a preprint with the above title. Sean also discusses it on his blog. 

While I am a physicist by training, statistical mechanics and thermodynamics are not my strong suit, and I hope someone with expertise in the area can give their perspective on the paper. For now, here is my summary; apologies for any errors:

There is a tension between different definitions of entropy. Boltzmann entropy, which counts macroscopically indistinguishable microstates, always increases, except for extremely rare decreases. Gibbs/Shannon entropy, which quantifies our knowledge of a system, can decrease if an observer examines the system and learns something new about it. Jaynes had a paper on this topic, Eliezer discussed it in the Sequences, and spxtr recently wrote a post about it. Now Carroll and collaborators propose the "Bayesian Second Law", which quantifies this decrease in Gibbs/Shannon entropy due to a measurement:

[...] we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times. That relationship makes use of the cross entropy between two distributions [...] 

[...] the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat)

This last point seems to resolve the tension between the two definitions of entropy, and has applications to non-equilibrium processes, where an observer is replaced with an outcome of some natural process, such as RNA self-assembly.
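To make the two entropy notions concrete, here is a toy sketch (my own illustration, with made-up numbers, not the paper's formalism): a uniform prior over four microstates, a hypothetical likelihood for an observed measurement outcome, and the resulting Bayesian update. The Shannon entropy of the updated distribution drops below that of the prior, while the cross entropy of the updated distribution relative to the un-updated one does not:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p log2 p, in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    """H(p, q) = -sum p log2 q: expected surprise of an observer holding q."""
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

# Un-updated (prior) distribution over four microstates.
prior = np.array([0.25, 0.25, 0.25, 0.25])

# Hypothetical likelihood of the observed measurement outcome in each state.
likelihood = np.array([0.9, 0.6, 0.1, 0.1])

# Bayesian update: posterior proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

print(shannon_entropy(prior))            # 2.0 bits
print(shannon_entropy(posterior))        # ~1.50 bits: the measurement lowered it
print(cross_entropy(posterior, prior))   # 2.0 bits: no lower than the prior's entropy
```

This is just Gibbs' inequality H(p, q) ≥ H(p) at work; the paper's actual statement relates the un-updated and updated distributions at both initial and final times, with heat-emission corrections.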

 

Philosophy professors fail on basic philosophy problems

16 shminux 15 July 2015 06:41PM

Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among a million other things) demonstrate proficiency on far harder problems than that.

Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

Abstract:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Some quotes (emphasis mine):

When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.

[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

 

Agency is bugs and uncertainty

10 shminux 06 June 2015 04:53AM

(Epistemic status: often discussed in bits and pieces; I haven't seen it summarized in one place anywhere.)

Do you feel that your computer sometimes has a mind of its own? "I have no idea why it is doing that!" Do you feel that the more you understand and predict someone's actions, the less intelligent and more "mechanical" they appear?

My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions. To Omega in Newcomb's problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the better you can predict the NPCs, the less agenty they appear to you.

Note that randomness is not the same as uncertainty: predicting that someone or something behaves randomly is still a prediction. What I mean is closer to Knightian uncertainty, where one fails to make a useful prediction at all. Something like a tornado may appear to intentionally go after you if you fail to predict where it is going and have trouble escaping.

If you are a user of a computer program and it does not behave as you expect, you often get the feeling of a hostile intelligence opposing you, which occasionally results in aggressive behavior toward it, usually verbal, though sometimes physical, the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you think of the misbehavior as bugs, not intentional hostility, and treat the code by debugging or documenting it. Mostly. Sometimes I personalize especially nasty bugs.

I was told by a nurse that this is also how they are taught to treat difficult patients: you don't get upset at someone's misbehavior and instead treat them not as an agent, but more like an algorithm in need of debugging. Parents of young children are also advised to take this approach.

This seems to also apply to self-analysis, though to a lesser degree. If you know yourself well and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic, not agenty or intelligent. Or maybe not; I am not sure. I think that if I had the capacity for full introspection, not just a surface-level understanding of my thoughts and actions, I would ascribe much less agency to myself, probably because it would cease to be a useful concept. I wonder if this generalizes to a superintelligence capable of perfect or near-perfect self-reflection.

This leads us to the issue of feelings, deliberate choices, free will and ability to consent and take responsibility. These seem to be useful, if illusory, concepts for when you live among your intellectual peers and want to be treated at least as having as much agency as you ascribe to them. But this is a topic for a different post. 

 

A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats

17 shminux 18 April 2015 11:46PM

"This book is awful" => "I dislike this book" => "I dislike this book because it is shallow and is full of run-on sentences." => I dislike this book because I prefer reading books I find deep and clearly written."

"The sky is blue" => ... => "When I look at the sky, the visual sensation I get is very similar to when I look at a bunch of other objects I've been taught to associate with the color blue."

"Team X lost but deserved to win" => ...

"Being selfish is immoral" 

"The Universe is infinite, so anything imaginable happens somewhere"

In general, consider a quick check of whether, in a given context, replacing "is" with "appears to be" leads to something you find non-trivial.

Why? Because it exposes the multiple levels of maps we normally skip. So one might find it illuminating to occasionally walk through the levels and make sure they are still connected as firmly as the last time, and maybe figure out where people who hold a different opinion from yours construct a different chain of maps. Also, to make sure you don't mistake a map for the territory.

That is all. ( => "I think that I have said enough for one short post and adding more would lead to diminishing returns, though I could be wrong here, but I am too lazy to spend more time looking for links and quotes and better arguments without being sure that they would improve the post.")

 

[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying

7 shminux 02 April 2015 04:55PM

Scott Adams, the author of the Dilbert comic and several books (my favorite being How to Fail at Almost Everything and Still Win Big), named his debating format The Rationality Engine because, he claims, it is "the system for turning irrational opinions into rational outcomes". He applies it to several polarizing issues, the kind this site tends to label "Politics" and "Mind-Killer" and shy away from.

His first application, investigating the gender pay gap, seems to have worked pretty well, resulting in several unexpected conclusions. His second, Who is More Anti-Science?, I found slightly less impressive, but it still produced a rather balanced output.

Now he is applying it to the debate about Assisted Dying. Scott's goal is to have a law passed in California that is similar to the ones already in effect in Oregon and several other places.

Scott will debate Jimmy Akin, a prominent contributor to Catholic Answers.

I am quite attracted to Scott's attempts at hands-on instrumental rationality, and on a rather grand scale to boot. They are very much in the spirit of his latest book.

Currently he is accepting suggestions for questions and links for all sides of the issue. Feel free to contribute.

EDIT: I think adding cryonics to the discussion would only complicate the issue and not be helpful, but that's just a guess.

 

In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him?

10 shminux 27 February 2015 08:57PM

Hopefully at least one or two would show a virtue of non-straw rationality.

Episode list

 

 

We live in an unbreakable simulation: a mathematical proof.

-31 shminux 09 February 2015 04:01AM

Actually, the title is a sensationalist lie designed to attract attention. I have no proof. Obviously. I'm not a mathematician. But if I had one, it would go something like the following. 

Step 1: Assume that there are ultimate laws of physics governing everything in the world. Say, the wave function of the Universe, knowledge of which lets one know the Multiverse as it was, is, or will be. Or some other set of laws. 

Step 2: Write these laws as a mathematically consistent formal system representing something akin to the Tegmark Level IV Ultimate ensemble.

Step 3: By Gödel's incompleteness theorems, there are some true statements in this formal system that cannot be proven within it.

Step 4: By construction, these theorems correspond to physical laws whose origins must forever remain a mystery to those inside the Multiverse, because they are a part of it.

Step 5: The consistency of our Multiverse can be proven in a formal system describing the physical laws of a larger world, of which our Multiverse is a small part, essentially a simulation.

Step 6: Since we cannot determine the origins of our own physics, we cannot figure out a way to break out of our simulation. 
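Steps 3–5 lean on Gödel's incompleteness theorems. The second theorem, which does the work in Step 5, can be loosely stated as follows (a standard textbook formulation, not anything specific to this argument):

```latex
% Goedel's second incompleteness theorem: if T is a consistent,
% recursively axiomatized theory interpreting arithmetic, then
% T cannot prove its own consistency:
T \nvdash \mathrm{Con}(T)
% while a strictly stronger theory T' (the "larger world" of
% Step 5, e.g. T' = T + \mathrm{Con}(T)) can prove it:
T' \vdash \mathrm{Con}(T)
```

Whether the laws of physics can actually be cast as such a formal system (Step 2) is, of course, the load-bearing assumption.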

 

On the bright side, there is a Corollary:  Every level above us is also a simulation, so we are not alone!

 

Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later.

19 shminux 28 August 2014 11:37PM

Some of the comments on the link by James_Miller exactly six months ago provided very specific estimates of how the events might turn out:

James_Miller:

  • The odds of Russian intervening militarily = 40%.
  • The odds of the Russians losing the conventional battle (perhaps because of NATO intervention) conditional on them entering = 30%.
  • The odds of the Russians resorting to nuclear weapons conditional on them losing the conventional battle = 20%.
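Chaining these conditional estimates by simple multiplication gives the implied unconditional probabilities (a quick sketch using the numbers quoted above):

```python
# James_Miller's quoted estimates.
p_intervene = 0.40  # P(Russia intervenes militarily)
p_lose_conv = 0.30  # P(loses conventional battle | intervened)
p_nuclear   = 0.20  # P(resorts to nuclear weapons | lost conventional battle)

# Unconditional probabilities via the chain rule.
p_conventional_loss = p_intervene * p_lose_conv  # 0.12
p_nuclear_use = p_conventional_loss * p_nuclear  # 0.024

print(f"P(conventional loss) = {p_conventional_loss:.3f}")
print(f"P(nuclear use)       = {p_nuclear_use:.3f}")
```

So the thread's implied unconditional probability of nuclear use was about 2.4%.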

Me:

"Russians intervening militarily" could be anything from posturing to weapon shipments to a surgical strike to a Czechoslovakia-style tank-roll or Afghanistan invasion. My guess that the odds of the latter is below 5%.

A bet between James_Miller and solipsist:

I will bet you $20 U.S. (mine) vs $100 (yours) that Russian tanks will be involved in combat in the Ukraine within 60 days. So in 60 days I will pay you $20 if I lose the bet, but you pay me $100 if I win.

While it is hard to do any meaningful calibration based on a single event, there must be lessons to learn from it. Given that Russian armored columns are said to have captured key Ukrainian towns today, the first part of James_Miller's prediction has come true, even if it took three times longer than he estimated.

Note that even the most pessimistic person in that conversation (James) was probably too optimistic. My estimate of 5% appears way too low in retrospect, and I would probably bump it to 50% for a similar event in the future.

Now, given that the first prediction came true, how would one reevaluate the odds of the two further escalations he listed? I still feel that there is no way there will be a "conventional battle" between Russia and NATO, but having just been proven wrong makes me doubt my assumptions. If anything, maybe I should give more weight to what James_Miller (or at least Dan Carlin) has to say on the issue. And if I had any skin in the game, I would probably be even more cautious.


[LINK] Could a Quantum Computer Have Subjective Experience?

16 shminux 26 August 2014 06:55PM

Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, points generally overlooked by philosophers, including Eliezer, because they lack the relevant CS/QC expertise. For example:

  • Is an FHE-encrypted sim with a lost key conscious?
  • If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
  • Is the Vaidman brain conscious? (You have to read the blog post to learn what that is; I'm not going to spoil it.)

Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable". 

Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".

There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.

I certainly got the humbling experience that Scott is the level above mine, and I would like to know if other people did, too.

Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already. 

 

[LINK] Physicist Carlo Rovelli on Modern Physics Research

6 shminux 22 August 2014 09:46PM

A blog post in Scientific American, well worth reading. Rovelli is a researcher in Loop Quantum Gravity.

Some quotes:

Horgan: Do multiverse theories and quantum gravity theories deserve to be taken seriously if they cannot be falsified?

Rovelli: No.

Horgan: What’s your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this.   I have admiration for them in other things, but here they have gone really wrong.  Look: Einstein, Heisenberg, Newton, Bohr…. and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly.  You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the “true” philosophy, and do not realize that this has limitations.

Horgan: Can science attain absolute truth?

 

Rovelli: I have no idea what “absolute truth” means. I think that science is the attitude of those who find funny the people saying they know something is absolute truth.  Science is the awareness that our knowledge is constantly uncertain.  What I know is that there are plenty of things that science does not understand yet. And science is the best tool found so far for reaching reasonably reliable knowledge.

Horgan: Do you believe in God?

Rovelli: No.  But perhaps I should qualify the answer, because like this it is a bit too rude and simplistic. I do not understand what “to believe in God” means. The people that “believe in God” seem like Martians to me.  I do not understand them.  I suppose this means that I “do not believe in God”. If the question is whether I think that there is a person who has created Heavens and Earth, and responds to our prayers, then definitely my answer is no, with much certainty.

Horgan: Are science and religion compatible?

Rovelli: Of course yes: you can be great in solving Maxwell’s equations and pray to God in the evening.  But there is an unavoidable clash between science and certain religions, especially some forms of Christianity and Islam, those that pretend to be repositories of “absolute Truths.”

 
