
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Entropy and Temperature

26 spxtr 17 December 2014 08:04AM

Eliezer Yudkowsky previously wrote (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, "if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature," because they don't know what temperature is. This is a common confusion.


To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. For N atoms in the box, that means 6N numbers to completely specify the system.

Let's say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy.* In the absence of any more information it is correct to assign a uniform distribution to this region of phase space. The entropy of a uniform distribution is the logarithm of the number of points, so that's that. If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller.
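As a toy illustration of this counting, here is a minimal Python sketch (the four-"particle" system, the energy levels, and the extra constraint are all made up for the example):

```python
import itertools
import math

# Toy system: four "particles", each in an energy level 0..10.
# The microstates consistent with a fixed total energy:
levels = range(11)
E_total = 20
states = [s for s in itertools.product(levels, repeat=4) if sum(s) == E_total]

# The entropy of the uniform distribution over these states is the log
# of their number:
S = math.log(len(states))

# Learning an extra fact (analogous to also knowing the volume) can only
# shrink the set of consistent states, so the entropy can only decrease:
constrained = [s for s in states if max(s) <= 7]
S_constrained = math.log(len(constrained))
assert S_constrained < S
```

The extra constraint here plays the same role as knowing the volume: more information, fewer consistent microstates, smaller entropy.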

This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it's ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn't the entropy. It's the entropy of the gas if you know nothing more than energy, volume, and number of particles.


The existence of temperature follows from the zeroth and second laws of thermodynamics: thermal equilibrium is transitive, and entropy is maximum in equilibrium. Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.

If two systems are in equilibrium then they cannot increase entropy by transferring energy from one to the other. That means that if we transfer a tiny bit of energy from one to the other (δU1 = -δU2), the entropy change in the first must be the opposite of the entropy change of the second (δS1 = -δS2), so that the total entropy (S1 + S2) doesn't change. For systems in equilibrium, this leads to (∂S1/∂U1) = (∂S2/∂U2). Define 1/T = (∂S/∂U), and we are done.

Temperature is sometimes taught as, "a measure of the average kinetic energy of the particles," because for an ideal gas U/N = (3/2) kBT. This is wrong, for the same reason that the ideal gas entropy isn't the definition of entropy.
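To see that the ideal-gas relation is a consequence of the definition 1/T = (∂S/∂U) rather than a definition itself, here is a small numerical check (a sketch: the U-independent terms of the ideal-gas entropy are dropped, and the values of N and U are arbitrary illustrative numbers):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
N = 1e22          # illustrative particle number

def S(U):
    # Ideal monatomic gas: S(U) = (3N/2) k ln U + (terms independent of U)
    return 1.5 * N * k * math.log(U)

U = 2.0           # illustrative total energy, joules
dU = 1e-6
inv_T = (S(U + dU) - S(U - dU)) / (2 * dU)  # numerical dS/dU = 1/T
T = 1.0 / inv_T

# Recover U/N = (3/2) k T, i.e. average kinetic energy (3/2) kT per particle:
assert abs(U - 1.5 * N * k * T) / U < 1e-6
```

For this particular system the definition reproduces the textbook formula; for other systems it need not, which is the point.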

Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind. Temperature is a derivative of entropy, so temperature is in the mind.

Second Law Trickery

With perfect knowledge of a system, it is possible to extract all of its energy as work. EY states it clearly:

So (again ignoring quantum effects for the moment), if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

Someone who doesn't know the state of the water will observe a violation of the second law. This is allowed. Let that sink in for a minute. Jaynes calls it second law trickery, and I can't explain it better than he does, so I won't try:

A physical system always has more macroscopic degrees of freedom beyond what we control or observe, and by manipulating them a trickster can always make us see an apparent violation of the second law.

Therefore the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a stronger law than this will put one at the mercy of a trickster, who can produce a violation of it.

But recognizing this should increase rather than decrease our confidence in the future of the second law, because it means that if an experimenter ever sees an apparent violation, then instead of issuing a sensational announcement, it will be more prudent to search for that unobserved degree of freedom. That is, the connection of entropy with information works both ways; seeing an apparent decrease of entropy signifies ignorance of what were the relevant macrovariables.


I've actually given you enough information on statistical mechanics to calculate an interesting system. Say you have N particles, each fixed in place to a lattice. Each particle can be in one of two states, with energies 0 and ε. Calculate and plot the entropy if you know the total energy: S(E), and then the energy as a function of temperature: E(T). This is essentially a combinatorics problem, and you may assume that N is large, so use Stirling's approximation. What you will discover should make sense using the correct definitions of entropy and temperature.
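For readers who want to check their answer afterwards, here is a minimal sketch of the computation in Python, in units where kB = ε = 1 (the plotting is left out):

```python
import math

# Units: k_B = 1, epsilon = 1, so E = n, the number of excited particles.
N = 10**6

def S(E):
    # S(E) = ln C(N, n) via Stirling: -N [x ln x + (1-x) ln(1-x)], x = E/N
    x = E / N
    return -N * (x * math.log(x) + (1 - x) * math.log(1 - x))

def E_of_T(T):
    # 1/T = dS/dE = ln((N - E)/E)  =>  E(T) = N / (1 + exp(1/T))
    return N / (1 + math.exp(1 / T))

# Entropy grows with E up to E = N/2, where it is maximal:
assert S(0.5 * N) > S(0.25 * N) > S(0.05 * N)
# As T -> infinity, E -> N/2: the two levels become equally populated.
assert abs(E_of_T(1e6) - N / 2) < 1.0
```

Plot S(E) over the full range 0 < E < N and look at the sign of its slope; the behavior past E = N/2 is the interesting part.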

*: How many combinations of 10²³ numbers between 0 and 10 add up to 5×10²³?

Has LessWrong Ever Backfired On You?

24 Evan_Gaensbauer 15 December 2014 05:44AM

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence if I tried to track users on LessWrong who have received advice on this site that backfired. In other words, to keep the record unbiased, we should notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data on how a plan or advice taken directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. 

Podcast: Rationalists in Tech

12 JoshuaFox 14 December 2014 04:14PM
I'll appreciate feedback on a new podcast, Rationalists in Tech. 

I'm interviewing founders, executives, CEOs, consultants, and other people in the tech sector, mostly software. Thanks to Laurent Bossavit, Daniel Reeves, and Alexei Andreev who agreed to be the guinea pigs for this experiment. 
  • The audience:
Software engineers and other tech workers, at all levels of seniority.
  • The hypothesized need
Some of you are thinking: "I see that some smart and fun people hang out at LessWrong. It's hard to find people like that to work with. I wonder if my next job/employee/cofounder could come from that community."
  • What this podcast does for you
You will get insights into other LessWrongers as real people in the software profession. (OK, you knew that, but this helps.) You will hear the interviewees' ideas on CfAR-style techniques as a productivity booster, on working with other aspiring rationalists, and on the interviewees' own special areas of expertise. (At the same time, interviewees benefit from exposure that can get them business contacts,  employees, or customers.) Software engineers from LW will reach out to interviewees and others in the tech sector, and soon, more hires and startups will emerge. 

Please give your feedback on the first episodes of the podcast. Do you want to hear more? Should there be other topics? A different interview style? Better music?

(Very Short) PSA: Combined Main and Discussion Feed

8 Gondolinian 18 December 2014 03:46PM

For anyone who's annoyed by having to check newest submissions for Main and Discussion separately, there is a feed for combined submissions from both, in the form of Newest Submissions - All (RSS feed).  (There's also Comments - All (RSS feed), but for me at least, it seems to only show comments from Main and none from Discussion.)

Thanks to RichardKennaway for bringing this to my attention, and to Unknowns for asking the question that prompted him.  (If you've got the time, head over there and give them some karma.)  I thought this deserved the visibility of a post in Discussion, as not everyone reads through the Open Thread, and I think there's a chance that many would benefit from this information.

Giving What We Can - New Year drive

8 Smaug123 17 December 2014 03:26PM

If you’ve been planning to get around to maybe thinking about Effective Altruism, we’re making your job easier. A group of UK students has set up a drive for people to sign up to the Giving What We Can pledge to donate 10% of their future income to charity. It does not specify the charities - that decision remains under your control. The pledge is not legally binding, but honour is a powerful force when it comes to promising to help. If 10% is a daunting number, or you don't want to sign away your future earnings in perpetuity, there is a Try Giving scheme in which you may donate less money for less time. I suggest five years (that is, from 2015 to 2020) of 5% as a suitable "silver" option to the 10%-until-retirement "gold medal".


We’re hoping to take advantage of the existing Schelling point of “new year” as a time for resolutions, as well as building the kind of community spirit that gets people signing up in groups. If you feel it’s a word worth spreading, please feel free to spread it. As of this writing, GWWC reported 41 new members this month, which is a record for monthly acquisitions (and we’re only halfway through the month, three days into the event).


If anyone has suggestions about how to better publicise this event (or Effective Altruism generally), please do let me know. We’re currently talking to various news outlets and high-profile philanthropists to see if they can give us a mention, but suggestions are always welcome. Likewise, comments on the effectiveness of this post itself will be gratefully noted.


About Giving What We Can: GWWC is under the umbrella of the Centre for Effective Altruism, was co-founded by a LessWronger, and in 2013 had verbal praise from lukeprog.

Discussion of AI control over at worldbuilding.stackexchange [LINK]

6 ike 14 December 2014 02:59AM

Go insert some rationality into the discussion! (There are actually some pretty good comments in there, and some links to the right places, including LW).

Rationality Jokes Thread

5 Gunnar_Zarncke 18 December 2014 04:17PM

This is an experimental thread. It is somewhat in the spirit of the Rationality Quotes Thread but without the requirements and with a focus on humorous value. You may post insightful jokes, nerd or math jokes or try out rationality jokes of your own invention. 

ADDED: Apparently there has been an earlier Jokes Thread which was fairly successful. Consider this another instance.

Superintelligence 14: Motivation selection methods

5 KatjaGrace 16 December 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the fourteenth section in the reading guide: Motivation selection methods. This corresponds to the second part of Chapter Nine.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Motivation selection methods” and “Synopsis” from Chapter 9.


  1. One way to control an AI is to design its motives. That is, to choose what it wants to do (p138)
  2. Some varieties of 'motivation selection' for AI safety:
    1. Direct specification: figure out what we value, and code it into the AI (p139-40)
      1. Isaac Asimov's 'three laws of robotics' are a famous example
      2. Direct specification might be fairly hard: both figuring out what we want and coding it precisely seem hard
      3. This could be based on rules, or something like consequentialism
    2. Domesticity: the AI's goals limit the range of things it wants to interfere with (140-1)
      1. This might make direct specification easier, as the world the AI interacts with (and thus which has to be thought of in specifying its behavior) is simpler.
      2. Oracles are an example
      3. This might be combined well with physical containment: the AI could be trapped, and also not want to escape.
    3. Indirect normativity: instead of specifying what we value, specify a way to specify what we value (141-2)
      1. e.g. extrapolate our volition
      2. This means outsourcing the hard intellectual work to the AI
      3. This will mostly be discussed in chapter 13 (weeks 23-5 here)
    4. Augmentation: begin with a creature with desirable motives, then make it smarter, instead of designing good motives from scratch. (p142)
      1. e.g. brain emulations are likely to have human desires (at least at the start)
      2. Whether we use this method depends on the kind of AI that is developed, so usually we won't have a choice about whether to use it (except inasmuch as we have a choice about e.g. whether to develop uploads or synthetic AI first).
  3. Bostrom provides a summary of the chapter:
  4. The question is not which control method is best, but rather which set of control methods is best given the situation. (143-4)

Another view


Would you say there's any ethical issue involved with imposing limits or constraints on a superintelligence's drives/motivations? By analogy, I think most of us have the moral intuition that technologically interfering with an unborn human's inherent desires and motivations would be questionable or wrong, supposing that were even possible. That is, say we could genetically modify a subset of humanity to be cheerful slaves; that seems like a pretty morally unsavory prospect. What makes engineering a superintelligence specifically to serve humanity less unsavory?


1. Bostrom tells us that it is very hard to specify human values. We have seen examples of galaxies full of paperclips or fake smiles resulting from poor specification. But these - and Isaac Asimov's stories - seem to tell us only that a few people spending a small fraction of their time thinking does not produce any watertight specification. What if a thousand researchers spent a decade on it? Are the millionth most obvious attempts at specification nearly as bad as the most obvious twenty? How hard is it? A general argument for pessimism is the thesis that 'value is fragile', i.e. that if you specify what you want very nearly but get it a tiny bit wrong, it's likely to be almost worthless. Much like if you get one digit wrong in a phone number. The degree to which this is so (with respect to value, not phone numbers) is controversial. I encourage you to try to specify a world you would be happy with (to see how hard it is, or produce something of value if it isn't that hard).

2. If you'd like a taste of indirect normativity before the chapter on it, the LessWrong wiki page on coherent extrapolated volition links to a bunch of sources.

3. The idea of 'indirect normativity' (i.e. outsourcing the problem of specifying what an AI should do, by giving it some good instructions for figuring out what you value) brings up the general question of just what an AI needs to be given to be able to figure out how to carry out our will. An obvious contender is a lot of information about human values. Though some people disagree with this - these people don't buy the orthogonality thesis. Other issues sometimes suggested to need working out ahead of outsourcing everything to AIs include decision theory, priors, anthropics, feelings about pascal's mugging, and attitudes to infinity. MIRI's technical work often fits into this category.

4. Danaher's last post on Superintelligence (so far) is on motivation selection. It mostly summarizes and clarifies the chapter, so is mostly good if you'd like to think about the question some more with a slightly different framing. He also previously considered the difficulty of specifying human values in The golem genie and unfriendly AI (parts one and two), which is about Intelligence Explosion and Machine Ethics.

5. Brian Clegg thinks Bostrom should have discussed Asimov's stories at greater length:

I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet there has been a lot of thought and dare I say it, a lot more readability than you typically get in a textbook, put into the issues in science fiction than is being allowed for, and it would have been worthy of a chapter in its own right.

If you haven't already, you might consider (sort-of) following his advice, and reading some science fiction.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Can you think of novel methods of specifying the values of one or many humans?
  2. What are the most promising methods for 'domesticating' an AI? (i.e. constraining it to only care about a small part of the world, and not want to interfere with the larger world to optimize that smaller part).
  3. Think more carefully about the likely motivations of drastically augmenting brain emulations
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will start to talk about a variety of more and less agent-like AIs: 'oracles', 'genies', and 'sovereigns'. To prepare, read “Oracles” and “Genies and sovereigns” from Chapter 10. The discussion will go live at 6pm Pacific time next Monday, 22nd December. Sign up to be notified here.

An explanation of the 'Many Interacting Worlds' theory of quantum mechanics (by Sean Carroll and Chip Sebens)

4 Ander 18 December 2014 11:36PM

This is the first explanation of a 'many worlds' theory of quantum mechanics that has ever made sense to me. The animations are excellent.


How many people am I?

4 Manfred 15 December 2014 06:11PM

Strongly related: the Ebborians

Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other. We can call these two mapped out halves Manfred One and Manfred Two. Because neurons are classical, both of these maps change together as I think. They contain the full pattern of my thoughts. (This situation is even more clear in the Ebborians, who can literally split down the middle.)

So how many people am I? Are Manfred One and Manfred Two both people? Of course, once we have two, why stop there - are there thousands of Manfreds in here, with "me" as only one of them? Put like that it sounds a little overwrought - what's really going on here is the question of what physical system corresponds to "I" in english statements like "I wake up." This may matter.

The impact on anthropic probabilities is somewhat straightforward. With everyday definitions of "I wake up," I wake up just once per day no matter how big my head is. But if the "I" in that sentence is some constant-size physical pattern, then "I wake up" is an event that happens more times if my head is bigger. And so using the variable people-number definition, I expect to wake up with a gigantic head.

The impact on decisions is less big. If I'm in this head with a bunch of other Manfreds, we're all on the same page - it's a non-anthropic problem of coordinated decision-making. For example, if I were to make any monetary bets about my head size, and then donate profits to charity, no matter what definition I'm using, I should bet as if my head size didn't affect anthropic probabilities. So to some extent the real point of this effect is that it is a way anthropic probabilities can be ill-defined. On the other hand, what about preferences that depend directly on person-numbers like how to value people with different head sizes? Or for vegetarians, should we care more about cows than chickens, because each cow is more animals than a chicken is?


According to my common sense, it seems like my body has just one person in it. Why does my common sense think that? I think there are two answers, one unhelpful and one helpful.

The first answer is evolution. Having kids is an action that's independent of what physical system we identify with "I," and so my ancestors never found modeling their bodies as being multiple people useful.

The second answer is causality. Manfred One and Manfred Two are causally distinct from two copies of me in separate bodies but the same input/output. If a difference between the two separated copies arose somehow, (reminiscent of Dennett's factual account) henceforth the two bodies would do and say different things and have different brain states. But if some difference arises between Manfred One and Manfred Two, it is erased by diffusion.

Which is to say, the map that is Manfred One is statically the same pattern as my whole brain, but it's causally different. So is "I" the pattern, or is "I" the causal system? 

In this sort of situation I am happy to stick with common sense, and thus when I say "me," I think I am referring to the causal system. But I'm not very sure.


Going back to the Ebborians, one interesting thing about that post is the conflict between common sense and common sense - it seems like common sense that each Ebborian is equally much one person, but it also seems like common sense that if you looked at an Ebborian dividing, there doesn't seem to be a moment where the amount of subjective experience should change, and so amount of subjective experience should be proportional to thickness. But as it is said, just because there are two opposing ideas doesn't mean one of them is right.

On the questions of subjective experience raised in that post, I think this mostly gets cleared up by precise description and anthropic narrowness. I'm unsure of the relative sizes of this margin and the proof, but the sketch is to replace a mysterious "subjective experience" that spans copies with individual experiences of people who are using a TDT-like theory to choose so that they individually achieve good outcomes given their existence.

Weekly LW Meetups

3 FrankAdamek 19 December 2014 05:27PM

This summary was posted to LW Main on December 12th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Munchkining for Fun and Profit, Ideas, Experience, Successes, Failures

3 Username 19 December 2014 05:39AM

A munchkin is someone who follows the letter of the rules of a game while breaking their spirit, someone who wins by exploiting possibilities that others don't see, reject as impossible or unsporting, or just don't believe can possibly work.

If you have done something that everyone around you thought would not work, something that people around you didn't do after they saw it work, please share your experiences. If you tried something and failed or have ideas you want to hear critique of, likewise please share those with us.

Bayes Academy Development Report 2 - improved data visualization

3 Kaj_Sotala 18 December 2014 10:11PM

See here for the previous update if you missed / forgot it.

In this update, no new game content, but new graphics.

I wasn’t terribly happy about the graphical representation of the various nodes in the last update. Especially in the first two networks, if you didn’t read the descriptions of the nodes carefully, it was very easy to just click your way through them without really having a clue of what the network was actually doing. Needless to say, for a game that’s supposed to teach how the networks function, this is highly non-optimal.

Here’s the representation that I’m now experimenting with: the truth table of the nodes is represented graphically inside the node. The prior variable at the top doesn’t really have a truth table, it’s just true or false. The “is” variable at the bottom is true if its parent is true, and false if its parent is false.

You may remember that in the previous update, unobservable nodes were represented in grayscale. I ended up dropping that, because that would have been confusing in this representation: if the parent is unobservable, should the blobs representing its truth values in the child node be in grayscale as well? Both “yes” and “no” answers felt confusing.

Instead the observational state of a node is now represented by its border color. Black for unobservable, gray for observable, no border for observed. The metaphor is supposed to be something like, a border is a veil of ignorance blocking us from seeing the node directly, but if the veil is gray it’s weak enough to be broken, whereas a black veil is strong enough to resist a direct assault. Or something.

When you observe a node, not only does its border disappear, but the truth table entries that get reduced to a zero probability disappear, to be replaced by white boxes. I experimented with having the eliminated entries still show up in grayscale, so you could e.g. see that the “is” node used to contain the entry for (false -> false), but felt that this looked clearer.

The “or” node at the bottom is getting a little crowded, but hopefully not too crowded. Since we know that its value is “true”, the truth table entry showing (false, false -> false) shows up in all whites. It’s also already been observed, so it starts without a border.

After we observe that there’s no monster behind us, the “or” node loses its entries for (monster, !waiting -> looks) and (monster, waiting -> looks), leaving only (!monster, waiting -> looks): meaning that the boy must be waiting for us to answer.

This could still be made clearer: currently the network updates instantly. I’m thinking about adding a brief animation where the “monster” variable would first be revealed as false, which would then propagate an update to the values of “looks at you” (with e.g. the red tile in “monster” blinking at the same time as the now-invalid truth table entries, and when the tiles stopped blinking, those now-invalid entries would have disappeared), and that would in turn propagate the update to the “waiting” node, deleting the red color from it. But I haven’t yet implemented this.

The third network is where things get a little tricky. The “attacking” node is of type “majority vote” - i.e. it’s true if at least two of its parents are true, and false otherwise. That would make for a truth table with eight entries, each holding four blobs each, and we could already see the “or” node in the previous screen being crowded. I’m not quite sure of what to do here. At this moment I’m thinking of just leaving the node as is, and displaying more detailed information in the sidebar.

Here’s another possible problem. Just having the truth table entries works fine to make it obvious where the overall probability of the node comes from… for as long as the valid values of the entries are restricted to “possible” and “impossible”. Then you can see at a glance that, say, of the three possible entries, two would make this node true and one would make this false, so there’s a ⅔ chance of it being true.

But in this screen, that has ceased to be the case. The “attacking” node has a 75% chance of being true, meaning that, for instance, the “is / block” node’s “true -> true” entry also has a 75% chance of being the right one. This isn’t reflected in the truth table visualization. I thought of adding small probability bars under each truth table entry, or having the size of the truth table blobs reflect their probability, but then I’d have to make the nodes even bigger, and it feels like it would easily start looking cluttered again. But maybe it’d be the right choice anyway? Or maybe just put the more detailed information in the sidebar? I’m not sure of the best thing to do here.
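For what it's worth, the probability shown for a node like "attacking" can be computed by enumerating truth-table rows and weighting each by the parents' probabilities. A sketch (this assumes independent parents, and the function names are mine, not the game's):

```python
import itertools

def node_prob(parent_probs, node_fn):
    """P(node = true), enumerating truth-table rows over independent parents."""
    total = 0.0
    for values in itertools.product([True, False], repeat=len(parent_probs)):
        weight = 1.0
        for p, v in zip(parent_probs, values):
            weight *= p if v else (1 - p)
        if node_fn(values):
            total += weight
    return total

# "Majority vote" node from the post: true iff at least two parents are true.
majority = lambda vs: sum(vs) >= 2

# Three parents, each 75% likely to be true:
p = node_prob([0.75, 0.75, 0.75], majority)  # 3*(0.75^2)*0.25 + 0.75^3 = 0.84375
```

These per-row weights are exactly what probability bars or scaled blobs would display, so a sidebar could show them without enlarging the node.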

If anyone has good suggestions, I would be grateful to get advice from people who have more of a visual designer gene than I do!

"incomparable" outcomes--multiple utility functions?

3 Emanresu 17 December 2014 12:06AM

I know that this idea might sound a little weird at first, so just hear me out please?

A couple weeks ago I was pondering decision problems where a human decision maker has to choose between two acts that lead to two "incomparable" outcomes. I thought, if outcome A is not more preferred than outcome B, and outcome B is not more preferred than outcome A, then of course the decision maker is indifferent between both outcomes, right? But if that's the case, the decision maker should be able to just flip a coin to decide. Not only that, but adding even a tiny amount of extra value to one of the outcomes should always make that outcome be preferred. So why can't a human decision maker just make up their mind about their preferences between "incomparable" outcomes until they're forced to choose between them? Also, if a human decision maker is really indifferent between both outcomes, then they should be able to know that ahead of time and have a plan for deciding, such as flipping a coin. And, if they're really indifferent between both outcomes, then they should not be regretting and/or doubting their decision before an outcome even occurs regardless of which act they choose. Right?

I thought of the idea that maybe the human decision maker has multiple utility functions that when you try to combine them into one function some parts of the original functions don't necessarily translate well. Like some sort of discontinuity that corresponds to "incomparable" outcomes, or something. Granted, it's been a while since I've taken Calculus, so I'm not really sure how that would look on a graph.
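One way to make this precise: with several utility functions and a Pareto-style preference rule, incomparability behaves differently from indifference, exactly as described above. A sketch (the outcomes and numbers are illustrative):

```python
def prefers(a, b):
    """A strictly preferred to B iff it is at least as good on every utility
    function and strictly better on at least one (Pareto dominance)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Two outcomes scored by two utility functions (illustrative numbers):
career = (3.0, 1.0)
family = (1.0, 3.0)

# Neither is preferred to the other...
assert not prefers(career, family) and not prefers(family, career)

# ...but this is not indifference: adding a tiny amount of extra value to
# one outcome still does not make it preferred, unlike a coin-flip tie.
sweetened = (3.1, 1.0)
assert not prefers(sweetened, family)
```

Under true indifference the tiny sweetener would break the tie; under incomparability it doesn't, which matches the intuition in the post.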

I had read Yudkowsky's "Thou Art Godshatter" a couple months ago, and there was a point where it said "one pure utility function splintered into a thousand shards of desire". That sounds like the "shards of desire" are actually a bunch of different utility functions.

I'd like to know what others think of this idea. Strengths? Weaknesses? Implications?

Group Rationality Diary, December 16-31

3 therufs 15 December 2014 03:30AM

This is the public group rationality diary for December 16-31.

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: December 1-15

Rationality diaries archive

Velocity of behavioral evolution

2 Dr_Manhattan 19 December 2014 05:34PM

This suggests a major update on the velocity of behavioral trait evolution.

Basically mice transmitted fear of cherry smell reliably into the very next generation (via epigenetics).

This seems pretty important.

Letting it go: when you shouldn't respond to someone who is wrong

2 dhasenan 19 December 2014 03:12PM

I'm requesting that people follow a simple guide when determining whether to respond to a post. This simple algorithm should raise the quality of discussion here.

  • If you care about the answer to a question, you will research it.
  • If you don't care about the answer, don't waste people's time by arguing about it, even if someone's post seems wrong.
  • If you don't care and still want to argue, do the research.

Why should you follow these rules?


It takes very little effort to post a contradictory assertion. You just have to skim a post, find an assertion (preferably one that isn't followed or preceded immediately by paragraphs of backing evidence, but that's an optional filter), and craft a sentence indicating that that assertion is wrong or flawed. Humans can do this almost by instinct. It's magical.

Refuting a contradiction takes effort. I typically spend at least five minutes of research and five minutes of writing to make a reply refuting a bare contradiction when I have already studied the issue thoroughly and know which sources I want to use. I go to this effort because I care about these statements I've made and because I care about what other people believe. I want to craft a reply that is sufficiently thorough to be convincing. And, I'll admit, I want to crush my opponents with my impeccable data. I'm a bit petty sometimes.

If I haven't researched the issue well -- if my sources are second-hand, or if I'm using personal experience -- I might spend two hours researching a simple topic and ten to fifteen minutes creating a response. This is a fair amount of time invested. I don't mind doing it; it makes me learn more. It's a time investment, though.

So, let's compare. Half a second of thought and two minutes to craft a reply containing nothing but a contradiction, versus two hours of unpaid research. This is a huge imbalance. Let's address this by trying to research people's claims before posting a contradiction, shall we?


You are convinced that someone's argument is flawed. This means that they have not looked into the issue sufficiently, or their reasoning is wrong. As a result, you can't trust their argument to be a good example of arguments for their position. You can look for flaws of reasoning, which is easy. You can look for cases where their data is misleading or wrong -- but that requires actual effort. You have to either find a consensus in the relevant authorities that differs from what this other person is saying, or you have to look at their specific data in some detail. That means you have to do some research.


If you want people to stick around, and you're brusquely denying their points until they do hours of work to prove them, they're going to view lesswrong as a source of stress. This is not likely to encourage them to return. If you do the legwork yourself, you seem knowledgeable. If you're careful with your phrasing, you can also seem helpful. (I expect that to be the tough part.) This reduces the impact of having someone contradict you.

Advancing the argument.

From what I've seen, the flow of argument goes something like: argument → contradiction of two or three claims → proof of said claims → criticism of proof → rebuttal → acceptance, analysis of argument. By doing some research on your own rather than immediately posting a contradiction, you are more quickly getting to the meat of the issue. You aren't as likely to get sidetracked. You can say things like: "This premise seems a bit contentious, but it's a widely supported minority opinion for good reasons. Let's take it as read for now and see if your conclusions are supported, and we can come back to it if we need to."

Bonus: "You're contradicting yourself."

Spoiler: they're not contradicting themselves.

We read here a lot about how people's brains fail them in myriad interesting ways. Compartmentalization is one of them. People's beliefs can contradict each other. But people tend to compartmentalize between different contexts, not within the same context.

One post or article probably doesn't involve someone using two different compartments. What looks like a contradiction is more likely a nuance that you don't understand or didn't bother to read, or a rhetorical device like hyperbole. (I've seen someone here say I'm contradicting myself when I said "This group doesn't experience this as often, and when they do experience it, it's different." Apparently "not as often" is the same as "never"?) Read over the post again. Look for rhetorical devices. Look for something similar that would make sense. If you're uncertain, try to express that similar argument to the other person and ask if that's what they mean.

If you still haven't found anything besides a bare contradiction, a flat assertion that they're contradicting themselves is a bad way to proceed. If you're wrong and they aren't contradicting themselves, they will be annoyed at you. That's bad enough. They will have to watch everything they say very carefully so that they do not use rhetorical devices or idioms or anything that you could possibly lawyer into a contradiction. This takes a lot more effort than simply writing an argument in common modes of speech, as everyone who's worked on a journal article knows.

Arguing with you is not worth that amount of effort. Don't make it harder than it needs to be.

[Short, Meta] Should open threads be more frequent?

2 Metus 18 December 2014 11:41PM

Currently open threads are weekly and very well received. However, they tend to fill up quickly. Personally I fear that my contribution will drown unless posted early on, so I tend to wait if I want to add a new top-level post. Does anyone else have this impression? Someone with better coding skills than me could test this statistically by plotting the number of top-level posts and total posts over time: if the curve is convex, people tend to delay their posts.
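The statistical check proposed above could be sketched roughly as follows. A quick proxy for "the cumulative curve is convex" is whether the average posting time falls in the second half of the thread's lifetime. The timestamps below are invented for illustration; real data would have to be scraped from actual open threads.

```python
# Sketch of the proposed test: if people delay their top-level posts,
# the cumulative-posts-over-time curve is convex, and the mean posting
# time lands in the second half of the thread's week. Timestamps (in
# hours since the thread opened) are made up for illustration.

def mean_post_time(post_hours):
    return sum(post_hours) / len(post_hours)

def posters_delay(post_hours, thread_hours):
    """True if the average post lands in the second half of the thread,
    which is what a convex cumulative curve implies."""
    return mean_post_time(post_hours) > thread_hours / 2

# Hypothetical week-long (168 h) open thread where posting accelerates:
late_heavy = [5, 30, 90, 120, 140, 150, 160, 165]
# ...and one where everyone rushes in early:
early_heavy = [1, 2, 4, 8, 15, 30, 60, 100]

assert posters_delay(late_heavy, 168)        # convex curve: people wait
assert not posters_delay(early_heavy, 168)   # concave curve: early rush
```

With scraped timestamps for every top-level comment across past open threads, running `posters_delay` per thread would show whether the delaying behavior described above is widespread.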

So should open threads be more frequent and if so what frequency?

Open thread, Dec. 15 - Dec. 21, 2014

2 Gondolinian 15 December 2014 12:01AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Post Resource Request

1 Minds_Eye 19 December 2014 05:04PM

After viewing a recent post by Gworley, I noticed that the material was deliberately opaque*.  It's not as complicated as it seems, and could be taught to people below "Level 4" on Kegan's Constructive Developmental Theory.  The only serious block I saw was the ridiculous gap in inferential distance. 
With that in mind, I was hoping someone might have recommendations on even tangentially related material, as what I have now appears to be insufficient.  (Simplifying CDT appears to be manageable, but not particularly useful without further material like Kantor's Four Player Model and Subject-Object Notation.)

*edit: Not Gworley's, it was Kegan's material that was opaque.

[Link] The Dominant Life Form In the Cosmos Is Probably Superintelligent Robots

0 Gunnar_Zarncke 20 December 2014 12:28PM

An article on Motherboard reports on Alien Minds by Susan Schneider, who claims that the dominant life form in the cosmos is probably superintelligent robots. The article is crosslinked to other posts about superintelligence and at the end discusses the question of why these alien robots leave us alone. The arguments it puts forth on this don't convince me, though.


How many words do we have and how many distinct concepts do we have?

-4 MazeHatter 17 December 2014 11:04PM

In another message, I suggested that, given how many cultures we have to borrow from, our language may include multiple words from various sources that apply to a single concept.

An example is Reality, or Existence, or Being, or Universe, or Cosmos, or Nature, etc.

Another is Subjectivity, Mind, Consciousness, Experience, Qualia, Phenomenal, Mental, etc.

Is there any problem with accepting these claims so far? Curious what case would be made to the contrary.

(Here's a bit of a contextual aside: between quantum mechanics and cosmology, the words "universe", "multiverse", and "observable universe" mean at least 10 different things, depending on who you ask. People often say the Multiverse comes from Hugh Everett. But what they are calling the multiverse, Everett called the "universal wave function", or "universe". How did Everett's universe become the Multiverse? DeWitt came along and emphasized one part of the wave function branching into different worlds. So, if you're following: one Universe, many worlds. Over the next few decades, this idea was popularized as having "many parallel universes", which is obviously inaccurate. Well, a Scottish chap decided to correct this. He stated that the Universe was the Universal Wave Function, "a complete one", because that's what "uni" means, and that our perceived worlds of various objects form a "multiverse". One Universe, many Multiverses. Again, the "parallel universes" idea seemed cooler, so as it became more popular the Multiverse became one and the universe became many. What's my point? The use of these words is a legitimate fiasco, and I suggest we abandon them altogether.)

If these claims are found to be palatable, what do they suggest?

I propose, respectfully and humbly as I can imagine there may be compelling alternatives presented here, that in the 21st century, we make a decision about which concepts are necessary, which term we will use to describe that concept, and respectfully leave the remaining terms for the domain of poetry.

Here are the words I think we need:

  1. reality
  2. model
  3. absolute
  4. relative
  5. subjective
  6. objective
  7. measurement
  8. observer

With these terms I feel we can construct a concise metaphysical framework, consistent with the great rationalists of history, that accurately describes Everett's "Relative State Formulation of Quantum Mechanics".

  1. Absolute reality is what is. It is relative to no observer. It is real prior to measurement.
  2. Subjective reality is what is, relative to a single observer. It exists at measurement.
  3. Objective reality is the model relative to all observers. It exists post-measurement.

Everett's Relative State Formulation is roughly this:

  1. The wave function is the "absolute state" of the model.
  2. The wave function contains an observer and their measurement apparatus.
  3. The observer makes a measurement and records the result in a memory.
  4. Those measurement records are the "relative state" of the model.

Here we see that the words multiverse and universe are abandoned for absolute and relative states, which is actually the language used in the Relative State Formulation.
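The four numbered steps above can be caricatured in code. This is emphatically a toy, not a real quantum simulation: a "state" here is just a list of (amplitude, system value, memory record) branches, and all the names and values are my own illustrative assumptions. But it does show the sense in which a measurement record singles out a "relative state" of the system.

```python
import math

# Toy illustration of the four steps above (NOT real quantum mechanics):
# a "state" is a list of (amplitude, system, memory) branches. Before
# measurement the observer's memory is empty; measuring copies the
# system's value into the memory record.

def measure(state):
    """Step 3: the observer records the system's value in memory."""
    return [(amp, sys, sys) for (amp, sys, _) in state]

def relative_state(state, memory_record):
    """Step 4: the system's state relative to one memory record."""
    return [sys for (amp, sys, mem) in state if mem == memory_record]

# Steps 1-2: the absolute state contains both the system (a spin) and
# the observer's (so far empty) memory.
absolute = [(1 / math.sqrt(2), 'up', None),
            (1 / math.sqrt(2), 'down', None)]

post = measure(absolute)

# Relative to the memory record 'up', the system is definitely 'up',
# even though the absolute state still contains both branches.
assert relative_state(post, 'up') == ['up']
assert relative_state(post, 'down') == ['down']
assert len(post) == 2
```

Nothing here requires the word "universe" or "multiverse" at all: the absolute state is the whole list, and each relative state is what a given memory record picks out of it.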

My conclusion then, for your consideration and comment, is that a technical view of reality can be attained with a select set of terms, and that this view is not only consistent with themes of philosophy (which I didn't really explain) but is also the proper framework in which to interpret quantum mechanics (à la Everett).

(I'm not sure how familiar everyone here is with Everett specifically. His thesis depended on "automatically functioning machines" that make measurements with sensory gear and record them. After receiving his PhD, he left theoretical physics, and had a lifelong fascination with computer vision and computer hearing. That suggests to me that the reason his papers have been largely confounding to physicists is that they didn't realize the extent to which Everett really thought he could mathematically model an observer.)

I should note, it may clarify things to add another term, "truth", though this would in general be taken as an analog of "real". For example, if something is absolutely true, then it is part of absolute reality. If something is objectively true, then it is part of objective reality. The word "knowledge" in this sense is a poetic word for objective truth, understood on the premise that objective truth is not absolute truth.