
Secrets of the eliminati

93 Post author: Yvain 20 July 2011 10:15AM

Anyone who does not believe mental states are ontologically fundamental - i.e., anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

In a utility-maximizing AI, mental states can be reduced to smaller components. The AI will have goals, and those goals, upon closer examination, will be lines in a computer program.

But in the blue-minimizing robot, its "goal" isn't even a line in its program. There's nothing that looks remotely like a goal in its programming, and goals appear only when you make rough generalizations from its behavior in limited cases.

Philosophers are still very much arguing about whether this applies to humans; the two schools call themselves reductionists and eliminativists (with a third school of wishy-washy half-and-half people calling themselves revisionists). Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

I took a similar tack answering ksvanhorn's question in yesterday's post - how can you get a more accurate picture of what your true preferences are? I said:

I don't think there are true preferences. In one situation you have one tendency, in another situation you have another tendency, and "preference" is what it looks like when you try to categorize tendencies. But categorization is a passive and not an active process: if every day of the week I eat dinner at 6, I can generalize to say "I prefer to eat dinner at 6", but it would be non-explanatory to say that a preference toward dinner at 6 caused my behavior on each day. I think the best way to salvage preferences is to consider them as tendencies currently in reflective equilibrium.


A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.

The problem is that not signing up for cryonics is also a "revealed preference". "You wouldn't sign up for cryonics, which means you don't really fear death so much, so why bother running from a burning building?" is an equally good argument, although no one except maybe Marcus Aurelius would take it seriously.

Both these arguments assume that somewhere, deep down, there's a utility function with a single term for "death" in it, and all decisions just call upon this particular level of death or anti-death preference.

A better explanation of the way people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior. People guess at their opinions about death by analyzing these behaviors, usually with a bit of signalling thrown in. If they desire consistency - and most people do - maybe they'll change some of their other behaviors to conform to their hypothesized opinion.

One more example. I've previously brought up the case of a rationalist who knows there's no such thing as ghosts, but is still uncomfortable in a haunted house. So does he believe in ghosts or not? If you insist on there being a variable somewhere in his head marked $belief_in_ghosts = (0,1) then it's going to be pretty mysterious when that variable looks like zero when he's talking to the Skeptics Association, and one when he's running away from a creaky staircase at midnight.

But it's not at all mysterious that the thought "I don't believe in ghosts" gets reinforced because it makes him feel intelligent and modern, and staying around a creaky staircase at midnight gets punished because it makes him afraid.
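To make this concrete, here is a toy sketch (the model, contexts, and numbers are purely illustrative, not a claim about real neuroscience): instead of a stored belief variable, the agent just has separately reinforced tendencies, one per context, and "his belief about ghosts" is whatever you read off from whichever tendency is currently active.

```python
class ContextualAgent:
    """A toy agent with no $belief_in_ghosts variable at all - only
    context-specific tendencies, each reinforced independently."""

    def __init__(self):
        # Tendency to act as if ghosts exist, tracked per context.
        self.tendency = {"skeptics_meeting": 0.0, "dark_staircase": 0.0}

    def reinforce(self, context, reward, rate=0.5):
        """Nudge the tendency in `context` toward the rewarded direction."""
        self.tendency[context] += rate * (reward - self.tendency[context])

    def acts_as_if_believing(self, context):
        return self.tendency[context] > 0.5

agent = ContextualAgent()
# Saying "no ghosts" at the skeptics' meeting feels intelligent and modern,
# so the ghost-affirming tendency there is pushed toward zero.
for _ in range(10):
    agent.reinforce("skeptics_meeting", reward=0.0)
# Lingering by the creaky staircase at midnight is frightening,
# so fleeing (acting as if ghosts are real) is reinforced.
for _ in range(10):
    agent.reinforce("dark_staircase", reward=1.0)

print(agent.acts_as_if_believing("skeptics_meeting"))  # False
print(agent.acts_as_if_believing("dark_staircase"))    # True
```

Ask this agent whether it believes in ghosts and you get two different answers, with nothing mysterious going on in either case.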

Behaviorism was one of the first and most successful eliminativist theories. I've so far ignored the most modern and exciting eliminativist theory, connectionism, because it involves a lot of math and is very hard to process on an intuitive level. In the next post, I want to try to explain the very basics of connectionism, why it's so exciting, and why it helps justify discussion of behaviorist principles.

Comments (252)

Comment author: [deleted] 18 July 2011 01:14:05AM 25 points [-]

I wonder:

if you had an agent that obviously did have goals (let's say, a player in a game, whose goal is to win, and who plays the optimal strategy) could you deduce those goals from behavior alone?

Let's say you're studying the game of Connect Four, but you have no idea what constitutes "winning" or "losing." You watch enough games that you can map out a game tree. In state X of the world, a player chooses option A over other possible options, and so on. From that game tree, can you deduce that the goal of the game was to get four pieces in a row?

I don't know the answer to this question. But it seems important. If it's possible to identify, given a set of behaviors, what goal they're aimed at, then we can test behaviors (human, animal, algorithmic) for hidden goals. If it's not possible, that's very important as well; because that means that even in a simple game, where we know by construction that the players are "rational" goal-maximizing agents, we can't detect what their goals are from their behavior.

That would mean that behaviors that "seem" goal-less, programs that have no line of code representing a goal, may in fact be behaving in a way that corresponds to maximizing the likelihood of some event; we just can't deduce what that "goal" is. In other words, it's not as simple as saying "That program doesn't have a line of code representing a goal." Its behavior may encode a goal indirectly. Detecting such goals seems like a problem we would really want to solve.

Comment author: Wei_Dai 18 July 2011 03:24:09AM 15 points [-]

From that game tree, can you deduce that the goal of the game was to get four pieces in a row?

One method that would work for this example is to iterate over all possible goals in ascending complexity, and check which one would generate that game tree. How to apply this idea to humans is unclear. See here for a previous discussion.
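As a toy sketch of that method (using a 1-D stand-in for Connect Four; the logged positions and the candidate-goal family are made up for illustration), "ascending complexity" is just ascending n in "make n in a row", and we keep the first candidate consistent with where play stopped and where it continued:

```python
def has_run(board, n):
    """True if `board` (a string of '.' and 'X') contains n X's in a row."""
    return "X" * n in board

# Hypothetical observations: positions at which play stopped,
# and positions at which play continued.
terminal     = ["XXXX...", ".XXXXX.", "..XXXX."]
non_terminal = ["XXX....", ".XX.XX.", "X.X.X.X"]

def infer_goal(terminal, non_terminal, max_n=7):
    # Iterate over candidate goals in ascending complexity (here, ascending n)
    # and return the first one that explains the observed game tree.
    for n in range(1, max_n + 1):
        stops_when_met = all(has_run(b, n) for b in terminal)
        never_met_early = all(not has_run(b, n) for b in non_terminal)
        if stops_when_met and never_met_early:
            return n
    return None

print(infer_goal(terminal, non_terminal))  # 4
```

With a real game the space of candidate goals is vastly larger and not neatly parameterized, which is where the trouble starts.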

Comment author: [deleted] 18 July 2011 03:29:54AM 2 points [-]

Ok, computationally awful for anything complicated, but possible in principle for simple games. That's good, though; that means goals aren't truly invisible, just inconvenient to deduce.

Comment author: chatquitevoit 18 July 2011 03:37:10PM 3 points [-]

I think, actually, that because we hardly ever play with optimal strategy, goals are going to be nigh impossible to deduce. Would such an end-from-means deduction even work if the actor was not using the optimal strategy? Because humans only do so in games on the level of tic-tac-toe (the more rational ones maybe in more complex situations, but not by much), and as for machines that could utilize optimal strategy, we've just excluded them from even having such 'goals'.

Comment author: Error 05 September 2013 08:42:07PM 1 point [-]

If each game is played to the end (no resignations, at least in the sample set) then presumably you could make good initial guesses about the victory condition by looking at common factors in the final positions. A bit like zendo. It wouldn't solve the problem, but it doesn't rely on optimal play, and would narrow the solution space quite a bit.

e.g. in the connect-four example, all final moves create a sequence of four or more in a row. Armed with that hypothesis, you look at the game tree, and note that all non-final moves don't. So you know (with reasonably high confidence) that making four in a row ends the game. How to figure out whether it wins the game or loses it is an exercise for the reader.

(mental note, try playing C4 with the win condition reversed and see if it makes for an interesting game.)

Comment author: printing-spoon 18 July 2011 04:56:57AM 2 points [-]

There are always heuristics. For example, seeing that the goal of making three in a row fits the game tree well suggests considering goals of the form "make n in a row", or at least "make diagonal and orthogonal versions of some shape".

Comment author: sixes_and_sevens 18 July 2011 10:30:00AM 9 points [-]

Human games (of the explicit recreational kind) tend to have stopping rules isomorphic with the game's victory conditions. We would typically refer to those victory conditions as the objective of the game, and the goal of the participants. Given a complete decision tree for a game, even a messy stochastic one like Canasta, it seems possible to deduce the conditions necessary for the game to end.

An algorithm that doesn't stop (such as the blue-minimising robot) can't have anything analogous to the victory condition of a game. In that sense, its goals can't be analysed in the same way as those of a Connect Four-playing agent.

Comment author: Khaled 18 July 2011 11:51:49AM 2 points [-]

So if the blue-minimising robot were to stop after 3 months (the stop condition is measured by a timer), can we say that the robot's goal is to stay "alive" for 3 months? I cannot see a necessary link between deducing goals and stopping conditions.

A "victory condition" is another thing; but from a decision tree, can you deduce who loses? (For Connect Four, perhaps it is the first player to get four in a row who loses.)

Comment author: sixes_and_sevens 18 July 2011 01:05:31PM 2 points [-]

By "victory condition", I mean a condition which, when met, determines the winning, losing and drawing status of all players in the game. A stopping rule is necessary for a victory condition (it's the point at which it is finally appraised), but it doesn't create a victory condition, any more than imposing a fixed stopping time on any activity creates winners and losers in that activity.

Comment author: Khaled 19 July 2011 10:02:33AM 1 point [-]

Can we know the victory condition from just watching the game?

Comment author: sixes_and_sevens 22 July 2011 11:23:22AM 4 points [-]

Just to underscore a broader point: recreational games have various characteristics which don't generalise to all situations modelled game-theoretically. Most importantly, they're designed to be fun for humans to play, to have consistent and explicit rules, to finish in a finite amount of time (RISK notwithstanding), to follow some sort of narrative and to have means of unambiguously identifying winners.

Anecdotally, if you're familiar with recreational games, it's fairly straightforward to identify victory conditions in games just by watching them being played, because their conventions mean those conditions are drawn from a considerably reduced number of possibilities. There are, however, lots of edge- and corner-cases where this probably isn't possible without taking a large sample of observations.

Comment author: kurokikaze 21 July 2011 03:23:14PM 1 point [-]

Well, even if we have conditions to end the game, we still don't know if the player's goal is to end the game (poker) or to avoid ending it for as long as possible (Jenga). We can try to deduce it empirically (if it's possible to end the game on the first turn effortlessly, then the goal is to keep going), but I'm not sure if that applies to all games.

Comment author: sixes_and_sevens 22 July 2011 11:42:00AM 2 points [-]

If ending the game quickly or slowly is part of the objective, in what way is it not included in the victory conditions?

Comment author: kurokikaze 25 July 2011 09:15:12AM 1 point [-]

I mean it might not be visible from a game log (for complex games). We will see the combination of pieces when the game ends (the ending condition), but that may not be enough.

Comment author: sixes_and_sevens 25 July 2011 09:33:29AM 2 points [-]

I don't think we're talking about the same things here.

A decision tree is an optimal path through all possible decisions in a game, not just the history of any given game.

"Victory conditions" in the context I'm using are the conditions that need to be met in order for the game to end, not simply the state of play at the point when any given game ends.

Comment author: DanielLC 18 July 2011 03:45:55AM 6 points [-]

What I've heard is that, for an intelligent entity, it's easier to predict what will happen from their goals than from what they do.

For example, with the connect four game, if you manage to figure out that they always seem to get four in a row, and you never do when you play against them, before you can figure out what their strategy is, you know their goal.

Comment author: orthonormal 18 July 2011 05:38:31AM 8 points [-]

Although you might have just identified an instrumental subgoal.

Comment author: Pavitra 03 August 2011 02:20:32AM 4 points [-]

I suspect that "has goals" is ultimately a model, rather than a fact. To the extent that an agent's behavior maximizes a particular function, that agent can be usefully modeled as an optimizer. To the extent that an agent's behavior exhibits signs of poor strategy, such as vulnerability to dutch books, that agent may be better modeled as an algorithm-executer.

This suggests that "agentiness" is strongly tied to whether we are smart enough to win against it.
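A minimal sketch of the dutch-book test alluded to here (my own illustrative code; the items and the agents' choices are made up): an agent whose pairwise choices form a cycle can be money-pumped, which marks it as an algorithm-executer rather than a coherent optimizer.

```python
from itertools import permutations

def has_preference_cycle(prefers):
    """prefers[(a, b)] is True if the agent will trade b away to get a.
    A cycle a > b > c > a means the agent can be money-pumped."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
            return True
    return False

# A context-driven agent: each pairwise choice is made by a different
# "module", so the choices never get reconciled into one ranking.
cyclic = {("apple", "pear"): True, ("pear", "plum"): True, ("plum", "apple"): True}
# A coherent optimizer: choices are consistent with apple > pear > plum.
coherent = {("apple", "pear"): True, ("pear", "plum"): True, ("apple", "plum"): True}

print(has_preference_cycle(cyclic))    # True
print(has_preference_cycle(coherent))  # False
```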

Comment author: wedrifid 03 August 2011 09:46:46AM 2 points [-]

I suspect that "has goals" is ultimately a model, rather than a fact. To the extent that an agent's behavior maximizes a particular function, that agent can be usefully modeled as an optimizer. To the extent that an agent's behavior exhibits signs of poor strategy, such as vulnerability to dutch books, that agent may be better modeled as an algorithm-executer.

This suggests that "agentiness" is strongly tied to whether we are smart enough to win against it.

This principle is related to (a component of) the thing referred to as 'objectified'. That is, if a person is aware that another person can model it as an algorithm-executor then it may consider itself objectified.

Comment author: lythrum 18 July 2011 11:40:07PM 3 points [-]

If you had lots of end states and lots of non-end states, and you assume the game ends when someone's won and that a player only moves into an end state if he's won (neither of which is necessarily true, even in nice pretty games), then you could treat it like a classification problem. In that case, you could throw your favourite classifier learning algorithm at it. I can't think of any publications on someone machine-learning a winning condition, but that doesn't mean it's not out there.

Dr. David Silver used temporal difference learning to learn some important spatial patterns for Go play, using self-play. Self play is basically like watching yourself play lots of games with another copy of yourself, so I can imagine similar ideas being used to watching someone else play. If you're interested in that, I suggest http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-170.pdf

On a sadly less published (and therefore mostly unreliable) but slightly more related note, we did have a project once in which we were trying to teach bots to play a Mortal Kombat style game only by observing logs of human play. We didn't tell one of the bots the goal, we just told it when someone had won, and who had won. It seemed to get along ok.
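A rough sketch of the temporal-difference idea mentioned above (tabular TD(0) on made-up game logs, not Silver's actual feature-based method): given only who won at the end, state values propagate backward from the outcome, so the learner ends up valuing the states that lead to wins.

```python
from collections import defaultdict

def td0_from_logs(games, alpha=0.1, sweeps=200):
    """games: list of (state_sequence, final_reward) pairs; the reward is
    observed only at the end of each game."""
    V = defaultdict(float)
    for _ in range(sweeps):
        for states, reward in games:
            for i, s in enumerate(states):
                # Bootstrap: a state's target is the next state's value,
                # except the last state, whose target is the observed reward.
                target = reward if i == len(states) - 1 else V[states[i + 1]]
                V[s] += alpha * (target - V[s])
    return V

# Two tiny logged games: paths through abstract states, winner known at the end.
games = [(["start", "mid_a", "win_state"], 1.0),
         (["start", "mid_b", "loss_state"], 0.0)]
V = td0_from_logs(games)
print(V["mid_a"] > V["mid_b"])  # True: values propagate back from the outcome
```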

Comment author: Will_Newsome 23 July 2011 10:32:10PM 1 point [-]

One of my 30 or so Friendliness-themed thought experiments is called "Implicit goals of ArgMax" or something like that. In general I think this style of reasoning is very important for accurately thinking about universal AI drives. Specifically it is important to analyze highly precise AI architectures like Goedel machines where there's little wiggle room for a deus ex machina.

Comment author: Vladimir_Nesov 23 July 2011 11:21:28PM *  1 point [-]

Compare with only ever seeing one move made in such a game, but being able to inspect in detail the reasons that played a role in deciding what move to make, looking for explanations for that move. It seems that even one move might suffice, which goes to show that it's unnecessary for behavior itself to somehow encode the agent's goals, since we can also take into account the reasons for the behavior being so and so.

Comment author: kybernetikos 21 July 2011 05:10:39AM *  10 points [-]

eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour - even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.

The information in a jpeg file that indicates a particular pixel should be red cannot be analysed down to a single bit that doesn't do anything else, but that doesn't mean there isn't a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron or not is completely irrelevant to their reality.

Comment author: Logos01 21 July 2011 07:27:19PM 8 points [-]

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory.

I have often stated that, as a physicalist, the mere fact that something does not independently exist -- that is, it has no physically discrete existence -- does not mean it isn't real. The number three is real -- but does not exist. It cannot be touched, sensed, or measured; yet if there are three rocks there really are three rocks. I define "real" as "a pattern that proscriptively constrains that which exists". A human mind is real; but there is no single part of your physical body you can point to and say, "this is your mind". You are the pattern that your physical components conform to.

It seems very often that objections to reductionism are founded in a problem of scale: the inability to recognize that things which are real from one perspective remain real at that perspective even if we consider a different scale.

It would seem, to me, that "eliminativism" is essentially a redux of this quandary but in terms of patterns of thought rather than discrete material. It's still a case of missing the forest for the trees.

Comment author: kybernetikos 22 July 2011 09:14:51AM *  0 points [-]

I agree. In particular I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the 'reality' of things when in fact they're arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher level systems are implemented on). I don't think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts not the parts themselves. (Also - why would we expect any biological system to do one thing and one thing only?).

I regard almost all these questions as very similar to the demarcation problem. A higher level abstraction is real if it provides predictions that often turn out to be true. It's acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.

This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.

Comment author: Khaled 18 July 2011 09:04:53AM 8 points [-]

But if, whenever I eat dinner at 6, I sleep better than when eating dinner at 8, can I not say that I prefer dinner at 6 over dinner at 8? That would be one step up from saying I prefer sleeping well to not.

I think we could have a better view if we consider many preferences in action. Taking your cryonics example, maybe I prefer to live (to a certain degree), prefer to conform, and prefer to procrastinate. In the burning-building situation, the living preference is playing more or less alone, while in the cryonics situation, preferences interact somewhat like opposite forces, and then motion happens on the winning side. Maybe this is what makes preferences seem to vary?

Comment author: MaoShan 27 July 2011 05:27:38AM *  0 points [-]

Or is it that preferences are what you get when you consider future situations, in effect removing the influence of your instincts? If I consistently applied the rationale to both situations (cryonics, burning building), and came up with the conclusion that I would prefer not to flee the burning building, that might make me a "true rationalist", but only until the point that the building was on fire. No matter what my "preferences" are, they will (rightly so) be over-ridden by my survival instincts. So, is there any practical purpose to deciding what my preferences are? I'd much rather have my instincts extrapolated and provided for.

Comment author: [deleted] 27 July 2011 06:11:08AM 0 points [-]

Depends on the extent to which you consider your instincts a part of you. Equally, if you cannot afford cryonics, you could argue that your preferences to sign up or not are irrelevant. No matter what your "preferences" are, they will be overridden by your budget.

Comment author: Eugine_Nier 17 July 2011 11:15:48PM 6 points [-]

Eliminativism is all well and good if all one wants to do is predict. However, it doesn't help answer questions like "What should I do?", or "What utility function should we give the FAI?"

Comment author: Yvain 18 July 2011 12:12:11AM *  34 points [-]

The same might be said of evolutionary psychology. In which case I would respond that evolutionary psychology helped us stop thinking in a certain stupid way.

Once, we thought that men were attracted to pretty women because there was some inherent property called "beauty", or that people helped their neighbors because there was a universal Moral Law to which all minds would have access. Once it was the height of sophistication to argue whether people were truly good but corrupted by civilization, or truly evil but restrained by civilization.

Evolutionary psychology doesn't answer "What utility function should we give the FAI?", but it gives good reasons to avoid the "solution": 'just tell it to look for the Universal Moral Law accessible to all minds, and then do that.' And I think a lot of philosophy progresses by closing off all possible blind alleys until people grudgingly settle on the truth because they have no other alternative.

I am less confident in my understanding of eliminativism than of evo psych, so I am less willing to speculate on it. But since one common FAI proposal is "find out human preferences, and then do those", if it turns out human preferences don't really exist in a coherent way, that sounds like an important thing to know.

I think many people have alluded to this problem before, and that the people seriously involved in the research don't actually expect it to be that easy, but a clear specification of all the different ways in which it is not quite that easy is still useful. The same is true for "what should I do?"

Comment author: Vaniver 20 July 2011 12:22:54AM *  3 points [-]

But since one common FAI proposal is "find out human preferences, and then do those", if it turns out human preferences don't really exist in a coherent way, that sounds like an important thing to know.

I would think that knowing evo psych is enough to realize this is a dodgy approach at best.

Comment author: TimFreeman 12 August 2011 08:53:02PM *  1 point [-]

I would think that knowing evo psych is enough to realize [having an FAI find out human preferences, and then do them] is a dodgy approach at best.

I don't see the connection, but I do care about the issue. Can you attempt to state an argument for that?

Human preferences are an imperfect abstraction. People talk about them all the time and reason usefully about them, so either an AI could do the same, or you found a counterexample to the Church-Turing thesis. "Human preferences" is a useful concept no matter where those preferences come from, so evo psych doesn't matter.

Similarly, my left hand is an imperfect abstraction. Blood flows in, blood flows out, flakes of skin fall off, it gets randomly contaminated from the environment, and the boundaries aren't exactly defined, but nevertheless it generally does make sense to think in terms of my left hand.

If you're going to argue that FAI defined in terms of inferring human preferences can't work, I hope that isn't also going to be an argument that an AI can't possibly use the concept of my left hand, since the latter conclusion would be absurd.

Comment author: Vaniver 14 August 2011 09:43:22PM 2 points [-]

Can you attempt to state an argument for that?

Sure. I think I should clarify first that I meant evo psych should have been sufficient to realize that human preferences are not rigorously coherent. If I tell a FAI to make me do what I want to do, its response is going to be "which you?", as there is no Platonic me with a quickly identifiable utility function that it can optimize for me. There's just a bunch of modules that won the evolutionary tournament of survival because they're a good way to make grandchildren.

If I am conflicted between the emotional satisfaction of food and the emotional dissatisfaction of exercise combined with the social satisfaction of beauty, will a FAI be able to resolve that for me any more easily than I can resolve it?

If my far mode desires are rooted in my desire to have a good social identity, should the FAI choose those over my near mode desires which are rooted in my desire to survive and enjoy life?

In some sense, the problem of FAI is the problem of rigorously understanding humans, and evo psych suggests that will be a massively difficult problem. That's what I was trying to suggest with my comment.

Comment author: TimFreeman 16 August 2011 05:57:37PM 0 points [-]

In some sense, the problem of FAI is the problem of rigorously understanding humans, and evo psych suggests that will be a massively difficult problem.

I think that bar is unreasonably high. If you have a conflict between enjoying eating a lot vs being skinny and beautiful, and the FAI helps you do one or the other, then you aren't in a position to complain that it did the wrong thing. Its understanding of you doesn't have to be more rigorous than your understanding of you.

Comment author: Vaniver 17 August 2011 02:28:35AM 0 points [-]

Its understanding of you doesn't have to be more rigorous than your understanding of you.

It does if I want it to give me results any better than I can provide for myself. I also provided the trivial example of internal conflicts- external conflicts are much more problematic. Human desire for status is possibly the source of all human striving and accomplishment. How will a FAI deal with the status conflicts that develop?

Comment author: TimFreeman 18 August 2011 03:51:14AM 0 points [-]

Its understanding of you doesn't have to be more rigorous than your understanding of you.

It does if I want it to give me results any better than I can provide for myself.

No. For example, if it develops some diet drug that lets you safely enjoy eating and still stay skinny and beautiful, that might be a better result than you could provide for yourself, and it doesn't need any special understanding of you to make that happen. It just makes the drug, makes sure you know the consequences of taking it, and offers it to you. If you choose to take it, that tells the AI more about your preferences, but there's no profound understanding of psychology required.

I also provided the trivial example of internal conflicts- external conflicts are much more problematic.

Putting an inferior argument first is good if you want to try to get the last word, but it's not a useful part of problem solving. You should try to find the clearest problem where solving that problem solves all the other ones.

How will a FAI deal with the status conflicts that develop?

If it can do a reasonable job of comparing utilities across people, then maximizing average utility seems to do the right thing here. Comparing utilities between arbitrary rational agents doesn't work, but comparing utilities between humans seems to -- there's an approximate universal maximum (getting everything you want) and an approximate universal minimum (you and all your friends and relatives getting tortured to death). Status conflicts are not one of the interesting use cases. Do you have anything better?

Comment author: Vaniver 18 August 2011 04:47:19PM 0 points [-]

For example, if it develops some diet drug that lets you safely enjoy eating and still stay skinny and beautiful, that might be a better result than you could provide for yourself, and it doesn't need any special understanding of you to make that happen.

It might not need special knowledge of my psychology, but it certainly needs special knowledge of my physiology.

But notice that the original point was about human preferences. Even if it provides new technologies that dissolve internal conflicts, the question of whether or not to use the technology becomes a conflict. Remember, we live in a world where some people have strong ethical objections to vaccines. An old psychological finding is that oftentimes, giving people more options makes them worse off. If the AI notices that one of my modules enjoys sensory pleasure, offers to wirehead me, and I reject it on philosophical grounds, I could easily become consumed by regret or struggles with temptation, and wish that I never had been offered wireheading in the first place.

Putting an inferior argument first is good if you want to try to get the last word, but it's not a useful part of problem solving. You should try to find the clearest problem where solving that problem solves all the other ones.

I put the argument of internal conflicts first because it was the clearest example, and you'll note it obliquely refers to the argument about status. Did you really think that, if a drug were available to make everyone have perfectly sculpted bodies, one would get the same social satisfaction from that variety of beauty?

If it can do a reasonable job of comparing utilities across people, then maximizing average utility seems to do the right thing here.

I doubt it can measure utilities, as I argued two posts ago; and simple average utilitarianism is so wracked with problems I'm not even sure where to begin.

Comparing utilities between arbitrary rational agents doesn't work, but comparing utilities between humans seems to -- there's an approximate universal maximum (getting everything you want) and an approximate universal minimum (you and all your friends and relatives getting tortured to death).

A common tactic in human interaction is to care about everything more than the other person does, and explode (or become depressed) when they don't get their way. How should such real-life utility monsters be dealt with?

Status conflicts are not one of the interesting use cases.

Why do you find status uninteresting?

Comment author: NancyLebovitz 18 August 2011 05:37:11PM 2 points [-]

I haven't heard of people having strong ethical objections to vaccines. They have strong practical (if ill-founded) objections-- they believe vaccines have dangers so extreme as to make the benefits not worth it, or they have strong heuristic objections-- I think they believe health is an innate property of an undisturbed body or they believe that anyone who makes money from selling a drug can't be trusted to tell the truth about its risks.

To my mind, an ethical objection would be a belief that people should tolerate the effects of infectious diseases for some reason such as that suffering is good in itself or that it's better for selection to enable people to develop innate immunities.

Comment author: TimFreeman 23 August 2011 08:28:59PM 0 points [-]

A common tactic in human interaction is to care about everything more than the other person does, and explode (or become depressed) when they don't get their way. How should such real-life utility monsters be dealt with?

If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.
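A toy sketch of the normalization described here, with every name and number invented purely for illustration: squashing each person's inferred utilities into [0, 1] means that claiming an enormous stake in one outcome necessarily shrinks your stake in everything else.

```python
def normalize(raw_utilities):
    """Rescale one person's raw utilities so their worst outcome maps to 0
    and their best maps to 1 -- putting everyone on the same [0, 1] scale."""
    lo, hi = min(raw_utilities.values()), max(raw_utilities.values())
    return {k: (v - lo) / (hi - lo) for k, v in raw_utilities.items()}

# The "utility monster" claims to care enormously about the thermostat...
monster = normalize({"thermostat_at_25": 100.0, "movie_choice": 1.0,
                     "worst_case": 0.0})
# ...so after normalization its stake in the movie choice is tiny, and an
# average-utility maximizer lets the milder person pick the movie.
mild = normalize({"thermostat_at_25": 1.0, "movie_choice": 0.9,
                  "worst_case": 0.0})
```

With these made-up figures, `monster["movie_choice"]` comes out at 0.01 against the mild person's 0.9: the monster wins the thermostat but loses the movie, which is the "fine outcome" described above.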

I doubt it can measure utilities

I think it can, in principle, estimate utilities from behavior. See http://www.fungible.com/respect.

simple average utilitarianism is so wracked with problems I'm not even sure where to begin.

The problems I'm aware of have to do with creating new people. If you assume a fixed population and humans who have comparable utilities as described above, are there any problems left? Creating new people is a more interesting use case than status conflicts.

Why do you find status uninteresting?

As I said, because maximizing average utility seems to get a reasonable result in that case.

Comment author: chatquitevoit 18 July 2011 03:43:27PM 0 points [-]

This may be a bit naive, but can a FAI even have a really directive utility function? It would seem to me that by definition (caveats to using that aside) it would not be running with any 'utility' in 'mind'.

Comment author: JGWeissman 18 July 2011 02:58:46AM 18 points [-]

Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Surely you mean that eliminativists take actions which, in their typical contexts, tend to result in proving that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Comment author: Yvain 18 July 2011 11:46:51PM 7 points [-]

Surely you mean that there are just a bunch of atoms which, when interpreted as a human category, can be grouped together to form a being classifiable as "an eliminativist".

Comment author: printing-spoon 18 July 2011 03:12:25AM 3 points [-]

A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper.

nitpick: Burning to death is painful and it can happen at any stage of life. "You want to live a long life and die peacefully with dignity" can also be derived but of course it's more complicated.

Comment author: Kaj_Sotala 18 July 2011 08:31:44AM 6 points [-]

More explanatory of the way people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior.

YES. This so much.

Comment author: juped 18 July 2011 11:44:47PM 6 points [-]

Contemplating death from old age does activate fleeing behavior, though (at least in me), which is another of those silly bugs in the human brain. If I found a way to fix it to activate cryonics-buying behavior instead, I would probably have found a way to afford life insurance by now.

Comment author: JGWeissman 21 July 2011 07:05:50PM 4 points [-]

Three suggestions:

  1. When you notice that your fleeing behavior has been activated, ask "Am I fleeing a problem I can solve?", and if the answer is yes, think "This is silly, I should turn and face this solvable problem".

  2. Focus more on the reward of living forever than the punishment of death from old age.

  3. Contact Rudi Hoffman today.

Comment author: DSimon 21 July 2011 05:39:14PM -2 points [-]

If you can predict what a smarter you would think, why not just think that thought now?

Comment author: gwern 21 July 2011 06:55:51PM 3 points [-]

There are also problems with incompleteness; if I can think everything a smarter me would think, then in what sense am I not that smarter me? If I cannot think everything, so there is a real difference between the smarter me and the current me, then that incompleteness may scuttle any attempt to exploit my stolen intelligence.

For example, in many strategy games, experts can play 'risky' moves because they have the skill/intelligence to follow through and derive advantage from the move, but a lesser player, even if they know 'an expert would play here' would not know how to handle the opponent's reactions and would lose terribly. (I commented on Go in this vein.) Such a lesser player might be harmed by limited knowledge.

Comment author: MixedNuts 21 July 2011 06:21:23PM 3 points [-]

Not applicable here. If you can predict what a stronger you would lift, why not lift it right now? Because it's not about having correct beliefs about what you want the meat robot to do, it's about making it do it. It involves different thoughts, about planning rather than goals, which aren't predicted; and resources, which also need planning to obtain.

Comment author: DSimon 21 July 2011 07:29:25PM *  3 points [-]

Good points.

I wrote my comment with the purpose in mind of providing some short-term motivation to juped, since it seems that that's currently the main barrier between them and one of their stated long-term goals. That might or might not have been accomplished, but regardless you're certainly right that my statement wasn't, um, actually true. :-)

Comment author: Torben 18 July 2011 08:04:08AM *  5 points [-]

Interesting post throughout, but don't you overplay your hand a bit here?

There's nothing that looks remotely like a goal in its programming, [...]

An IF-THEN piece of code comparing a measured RGB value to a threshold value for firing the laser would look at least remotely like a goal to my mind.
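For concreteness, such an IF-THEN could be as minimal as the following sketch (the threshold value and function names are invented; this is not the robot's actual program):

```python
BLUE_THRESHOLD = 200  # invented RGB threshold

def control_step(rgb):
    """Fire the laser if the measured blue channel exceeds a fixed
    threshold. Nothing here is labeled a 'goal', yet an observer would
    naturally describe the robot as 'wanting' to eliminate blue."""
    r, g, b = rgb
    if b > BLUE_THRESHOLD:
        return "fire_laser"
    return "idle"
```

Whether that comparison counts as "remotely like a goal" is exactly the point under dispute: the goal-talk lives in the observer's description, not in any single line of the code.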

Comment author: ShardPhoenix 21 July 2011 03:40:03AM 1 point [-]

Consider a robot where the B signal is amplified and transmitted directly to the laser (so brighter blue equals strong laser firing). This eliminates the conditional logic while still keeping approximately the same apparent goal.
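A sketch of this conditional-free variant (the gain constant is invented): even the IF-THEN disappears, yet the behavioral regularity an observer would call a "goal" survives.

```python
LASER_GAIN = 0.5  # invented amplifier gain

def laser_power(rgb):
    """Laser power is just the amplified blue channel -- no comparison,
    no branch, yet the robot still acts as if it 'wants' to shoot blue."""
    _, _, b = rgb
    return LASER_GAIN * b
```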

Comment author: RobertLumley 24 July 2011 06:39:17PM *  4 points [-]

if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.

I don't think this section bolsters your point much. The obvious explanation for this behaviour, to me, is the utility functions for each situation.

For the fire: Expected Utility = p(longer life | Leaving fire) * Utility(longer life) - Cost(Running)

For cryonics: Expected Utility = p(longer life | Signing up for cryonics) * Utility(longer life) - Cost(Cryonics)

It's pretty safe to assume that almost everyone assigns a value almost equal to one for p(longer life | Leaving fire), and a value that is relatively insignificant to Cost(Running) which would mainly be temporary exhaustion. But those aren't necessarily valid assumptions in the case for cryonics. Even the most ardent supporter of cryonics is unlikely to assign a probability as large as that of the fire. And the monetary costs are quite significant, especially to some demographics.
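Plugging invented numbers into those two formulas shows how a single utility function can endorse fleeing the fire while remaining lukewarm on cryonics (every figure below is made up for the sake of the comparison):

```python
def expected_utility(p_survival, utility_longer_life, cost):
    """EU = p(longer life | action) * U(longer life) - Cost(action)."""
    return p_survival * utility_longer_life - cost

U_LIFE = 1_000_000  # invented utility of a longer life

# Fleeing a fire: survival is near-certain, cost (exhaustion) is tiny.
eu_fire = expected_utility(0.99, U_LIFE, cost=100)

# Cryonics: even an optimist assigns a modest probability of revival,
# and the monetary cost is significant.
eu_cryo = expected_utility(0.05, U_LIFE, cost=30_000)
```

With these assumptions `eu_fire` dwarfs `eu_cryo`, so running from the fire while declining cryonics involves no inconsistency at all, just different probabilities and costs.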

Comment author: HoverHell 25 July 2011 04:06:58AM 0 points [-]

I wonder if the behaviour in the ghosts example can be explained similarly by utility functions.

Comment author: RobertLumley 25 July 2011 04:16:53AM 1 point [-]

That's a good question. I didn't really think about it when I read it, because I am personally completely dismissive of and not scared by haunted houses, whereas I am skeptical of cryonics, and couldn't afford it even if I did the research and decided it was worth it.

I'm not sure it can be, but I'm not sure a true rationalist would be scared by a haunted house. The only thing I can come up with for a rational utility function is someone who suspended his belief because he enjoyed being scared. I feel like this example is far more related to irrationality and innate, irrepressible bias than it is rationality.

Comment author: BobTheBob 21 July 2011 08:31:01PM 2 points [-]

Thanks for this great sequence of posts on behaviourism and related issues.

Anyone who does not believe mental states are ontologically fundamental - ie anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

Here's what I take it you're committed to:

  • by 'mental states' we mean things like beliefs and desires.
  • an eliminativist has both to stop talking about them and also using them in explanations.
  • whither go beliefs and desires also goes rationality. You can't have a rational agent without what amount to beliefs and desires.
  • you are advocating eliminativism.

Can you say a bit about the implications of eliminating rationality? How do we square doing so with all the posts on this site about what is and isn't rational? Are these claims all meaningless or false? Do you want to maintain that they all can be reformulated in terms of tendencies or the like?

Alternately, if you want to avoid this implication, can you say where you dig in your heels? My prejudices lead me to suspect that the devil lurks in the details of those 'higher level abstractions' you refer to, but am interested to hear how that suggestion gets cashed-out. Apols if you have answered this and I have missed it.

Comment author: TheOtherDave 21 July 2011 08:58:02PM 2 points [-]

Can you say more about how you got that second bullet item?

It's not clear to me that being committed to the idea that mental states can be reduced to smaller components (which is one of the options the OP presented) commits one to stop talking about mental states, or to stop using them in explanations.

I mean, any economist would agree that dollars are not ontologically fundamental, but no economist would conclude thereby that we can't talk about dollars.

Comment author: BobTheBob 21 July 2011 09:32:06PM 1 point [-]

This may owe to a confusion on my part. I understood from the title of the post and some of its parts (incl the last par.) that the OP was advocating elimination over reduction (ie, contrasting these two options and picking elimination). I agree that if reduction is an option, then it's still ok to use them in explanation, as per your dollar example.

Comment author: lukeprog 20 July 2011 11:21:45PM 2 points [-]

Excellent post!

I hope that somewhere along the way you get to the latest neuroscience suggesting that the human motivational system is composed of both model-based and model-free reinforcement mechanisms.

Keep up the good work.

Comment author: Threedee 19 July 2011 08:48:11AM 2 points [-]

Without my dealing here with the other alternatives, do you Yvain, or does any other LW reader think that it is (logically) possible that mental states COULD be ontologically fundamental?

Further, why is that possibility tied to the word "soul", which carries all sorts of irrelevant baggage?

Full disclosure: I do (subjectively) know that I experience red, and other qualia, and try to build that in to my understanding of consciousness, which I also know I experience (:-) (Note that I purposely used the word "know" and not the word "believe".)

Comment author: scav 20 July 2011 02:39:25PM 5 points [-]

Hmm. Unless I'm misunderstanding you completely, I'll assume we can work from the example of the "red" qualium (?)

What would it mean for even just the experience of "red" to be ontologically fundamental? What "essence of experiencing red" could possibly exist as something independent of the workings of the wetware that is experiencing it?

For example, suppose I and a dichromatic human look at the same red object. I and the other human may have more or less the same brain circuitry and are looking at the same thing, but since we are getting different signals from our eyes, what we experience as "red" cannot be exactly the same. A bee or a squid or a duck might have different inputs, and different neural circuitry, and therefore different qualia.

A rock next to the red object would have some reflected "red" light incident upon it. But it has no eyes and as far as I know no perception or mental states at all. Does it make sense to say that the rock can also see its neighbouring object as "red"? I wouldn't say so, outside the realm of poetic metaphor.

So if your qualia are contingent on the circumstances of certain inputs to certain neural networks in your head, are they "ontologically fundamental"? I'd say no. And by extension, I'd say the same of any other mental state.

If you could change the pattern of signals and the connectivity of your brain one neuron at a time, you could create a continuum of experiences from "red" to "intuitively perceiving the 10000th digit of pi" and every indescribable, ineffable inhuman state in between. None of them would be more fundamental than any other; all are sub-patterns in a small corner of a very richly-patterned universe.

Comment author: fubarobfusco 21 July 2011 06:08:09PM 2 points [-]

qualium

"Quale", by the way.

Comment author: Hul-Gil 21 July 2011 09:12:58PM 0 points [-]

How do you know? Do you know Latin, or just how this word works?

I'm not doubting you - just curious. I've always wanted to learn Latin so I can figure this sort of thing out (and then correct people), but I've settled for just looking up specific words when a question arises.

Comment author: fubarobfusco 21 July 2011 11:19:03PM 0 points [-]

Comment author: Threedee 21 July 2011 08:02:01AM *  1 point [-]

I apologize for being too brief. What I meant to say is that I posit that my subjective experience of qualia is real, and not explained by any form of reductionism or eliminativism. That experience of qualia is fundamental in the same way that gravitation and the electromagnetic force are fundamental. Whether the word ontological applies may be a semantic argument.

Basically, I am reprising Chalmers' definition of the Hard Problem, or Thomas Nagel's argument in the paper "What is it like to be a bat?"

Comment author: lessdazed 21 July 2011 10:40:46PM 4 points [-]

Do qualia describe how matter interacts with matter? For example, do they explain why any person says "I have qualia" or "That is red"? Would gravity and electromagnetism, etc. fail to explain all such statements, or just some of them?

If qualia cause such things, is there any entropy when they influence and are influenced by matter? Is energy conserved?

If I remove neurons from a person one by one, is there a point at which qualia no longer are needed to describe how the matter and energy in them relates to the rest of matter and energy? Is it logically possible to detect such a point? If I then replace the critical neuron, why ought I be confident that merely considering, tracking, and simulating local, physical interactions would lead to an incorrect model of the person insofar as I take no account of qualia?

How likely is it that apples are not made of atoms?

Comment author: scav 22 July 2011 08:42:26AM 1 point [-]

You may posit that your subjective experience is not explained by reduction to physical phenomena (including really complex information processes) happening in the neurons of your brain. But to me that would be an extraordinary claim requiring extraordinary evidence.

It seems to me that until we completely understand the physical and informational processes going on in the brain, the burden of proof is on anyone suggesting that such complete understanding would still be in principle insufficient to explain our subjective experiences.

Comment author: Dreaded_Anomaly 22 July 2011 01:52:45AM *  1 point [-]

You should check out the recent series that orthonormal wrote about qualia. It starts with Seeing Red: Dissolving Mary's Room and Qualia.

Comment author: DSimon 21 July 2011 05:38:11PM 0 points [-]

That experience of qualia is fundamental in the same way that gravitation and the electromagnetic force are fundamental.

I don't understand what you mean by this. Could you elaborate?

Comment author: Threedee 24 July 2011 09:12:44PM 1 point [-]

There is no explanation of HOW mass generates or causes gravity, similarly for the lack of explanation of how matter causes or generates forces such as electromagnetism. (Yes I know that some sort of strings have been proposed to subserve gravity, and so far they seem to me to be another false "ether".) So in a shorthand of sorts, it is accepted that gravity and the various other forces exist as fundamentals ("axioms" of nature, if you will accept a metaphor), because their effects and interactions can be meaningfully applied in explanations. No one has seen gravity, no one can point to gravity--it is a fundamental force. Building on Chalmers in one of his earlier writings, I am willing to entertain the idea the qualia are a fundamental force-like dimension of consciousness. Finally every force is a function of something: gravity is a function of amount of mass, electromagnetism is a function of amount of charge. What might qualia and consciousness be a function of? Chalmers and others have suggested "bits of information", although that is an additional speculation.

Comment author: DSimon 24 July 2011 10:02:41PM 0 points [-]

I don't think "[T]heir effects and interactions can be meaningfully applied in explanations" is a good way of determining if something is "fundamental" or not: that description applies pretty nicely to aerodynamics, but aerodynamics is certainly not at the bottom of its chain of reductionism. I think maybe that's the "fundamental" you're going for: the maximum level of reductionism, the turtle at the bottom of the pile.

Anyways: (relativistic) gravity is generally thought not to be a fundamental, because it doesn't mesh with our current quantum theory; hence the search for a Grand Unified Whatsit. Given that gravity, an incredibly well-studied and well-understood force, is at most questionably a fundamental thingie, I think you've got quite a hill to climb before you can say that about consciousness, which is a far slipperier and more data-lacking subject.

Comment author: lessdazed 21 July 2011 02:19:16AM *  6 points [-]

Further, why is that possibility tied to the word "soul", which carries all sorts of irrelevant baggage?

It's just the history of some words. It's not that important.

I experience red, and other qualia

People frequently claim this. One thing missing is a mechanism that gets us from an entity experiencing such fundamental mental states or qualia and that being's talking about it. Reductionism offers an account of why they say such things. If, broadly speaking, the reductionist explanation is true, then this isn't a phenomenon that is something to challenge reductionism with. If the reductionist account is not true, then how can these mental states cause people to talk about them? How does something not reducible to physics influence the world, physically? Is this concept better covered by a word other than "magic"? And if these mental states are partly the result of the environment, then the physical world is influencing them too.

I don't see why it's desirable to posit magic; if I type "I see a red marker" because I see a red marker, why hypothesize that the physical light, received by my eyes and sending signals to my brain, was magically transformed into pure mentality, enabling it to interact with ineffable consciousness, and then magicked back into physics to begin a new physical chain of processes that ends with my typing? Wouldn't I be just as justified in claiming that the process has interruptions at other points?

As the physical emanation "I see red people" may be caused by laws of how physical stuff interacts with other physical stuff, we don't guess it isn't caused by that, particularly as we can think of no coherent other way.

We are used to the good habit of not mistaking the limits of our imaginations for the limits of reality, so we won't say we know it to be impossible. However, if physics is a description of how stuff interacts with stuff, I don't see how it's logically possible for stuff to do something ontologically indescribable even as randomness. Interactions can either be according to a pattern, or not, and we have the handy description "not in a pattern, indescribable by compression" to pair with "in a pattern, describable by compression", and how matter interacts with matter ought to fall under one of those. So apparent or even actual random "deviation from the laws of physics" would not be unduly troubling; systematic deviation from the laws of physics isn't either.

Do you think your position is captured by the statement, "matter sometimes interacts with matter neither a) in a pattern according to rules, nor b) not in a pattern, in deviation from rules"?

Photons go into eyes, people react predictably to them (though this is a crude example, too macro)...something bookended by the laws of physics has no warrant to call itself outside of physics, if the output is predictable from the input. That's English, as it's used for communication, no personal definitions allowed.

Comment author: handoflixue 24 July 2011 01:11:46PM 3 points [-]

if I type "I see a red marker" because I see a red marker, why hypothesize that the physical light, received by my eyes and sending signals to my brain, was magically transformed into pure mentality, enabling it to interact with ineffable consciousness

There's a fascinating psychological phenomenon called "blindsight" where the conscious mind doesn't register vision - the person is genuinely convinced they are blind, and they cannot verbally describe anything. However, their automatic reflexes will still navigate the world just fine. If you ask them to put a letter in a slot, they can do it without a problem. It's a very specific sort of neurological damage, and there have been a few studies on it.

I'm not sure if it quite captures the essence of qualia, but "conscious experience" IS very clearly different from the experience which our automatic reflexes rely on to navigate the world!

Comment author: lessdazed 24 July 2011 04:18:33PM 2 points [-]

What if you force them to verbally guess about what's in front of them, can they do better than chance guessing colors, faces, etc.?

Can people get it in just one eye/brain side?

Comment author: handoflixue 25 July 2011 10:27:00PM *  1 point [-]

I've only heard of that particular test once. They shined a light on the wall and forced them to guess where. All I've heard is that they do "better than should be possible for someone who is truly blind", so I'm assuming worse than average but definitely still processing the information to some degree.

Given that it's a neurological condition, I'd expect it to be impossible to have it in just one eye/brain side, since the damage is occurring well after the signal from both eyes is put together.

EDIT: http://en.wikipedia.org/wiki/Blindsight is a decent overview of the phenomenon. Apparently it can indeed affect just part of your vision, so I was wrong on that!

Comment author: Alexei 18 July 2011 12:24:01AM 2 points [-]

"Preference is a tendency in a reflective equilibrium." That gets its own Anki card!

Comment author: Vladimir_Nesov 18 July 2011 01:12:08AM *  5 points [-]

Some preferences don't manifest as tendencies. You might not have been given a choice, or weren't ready to find the right answer.

Comment author: Alexei 18 July 2011 05:35:20PM 1 point [-]

I'm not sure I understand. Can you please provide an example?

Comment author: ShardPhoenix 21 July 2011 03:43:05AM 0 points [-]

Then you could include tendency to want something as well as tendency to do something.

Comment author: Vladimir_Nesov 21 July 2011 10:47:58AM 0 points [-]

Or tendency to be yourself, perhaps tendency to have a certain preference. If you relax a concept that much, it becomes useless, a fake explanation.

Comment author: HoverHell 25 July 2011 04:03:59AM 0 points [-]

Again, congratulations on realizing that humans' values are not consistent and especially are not consistent with their claims.

Also, “does not believe mental states are ontologically fundamental - ie … denies the reality of something like a soul” — I do believe that what you (supposedly) call “mental states” is ontologically fundamental; yet I do not believe in supernatural souls (so-called “cartesian dualism”).

Comment author: [deleted] 07 August 2015 05:50:06PM 1 point [-]

So if someone stays in the haunted house despite the creaky stairwell, his preferences are revealed as rationalist?

Personally, I would have run away precisely because I would not think the sound came from a non-existent, and so harmless, ghost!

Comment author: AbyCodes 24 July 2011 07:19:52AM *  1 point [-]

if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper.

Won't it be the case that someone who tries to escape from a burning building does so just to avoid the pain and suffering it inflicts? It would be such a drag to be burned alive rather than to die a peaceful, painless death by poison.

Comment author: Caravelle 24 July 2011 09:49:49PM 4 points [-]

That doesn't help much. If people were told they were going to be murdered in a painless way (or something not particularly painful - for example, a shot for someone who isn't afraid of needles and has no problem getting vaccinated) most would consider this a threat and would try to avoid it.

I think most people's practical attitude towards death is a bit like Syrio Forel from Game of Thrones - "not today". We learn to accept that we'll die someday, we might even be okay with it, but we prefer to have it happen as far in the future as we can manage.

Signing up for cryonics is an attempt to avoid dying tomorrow - but we're not that worried about dying tomorrow. Getting out of a burning building means we avoid dying today.

(whether this is a refinement of how to understand our behaviour around death, or a potential generalized utility function, I couldn't say).

Comment author: MixedNuts 25 July 2011 06:53:34AM 3 points [-]

Should be noted that "tomorrow" stands in for "in enough time that we operate in Far mode when thinking about it", as opposed to actual tomorrow, when we very much don't want to die.

Come to think of it, a lot of people are all "Yay, death!" in Far mode (I'm looking at you, Epictetus), but much fewer in Near mode (though those who do are famous). Anecdotal evidence: I was born without an aversion for death in principle, was surprised by sad funerals, thought it was mostly signalling (and selfish mourning for lost company), was utterly baffled by obviously sincere death-bashers. I've met a few other people like that, too. Yet we (except some of the few I met in history books) have normal conservation reflexes.

There's no pressure to want to live in Far mode (in an environment without cryonics and smoking habits, anyway), and there's pressure to say "I don't care about death, I only care about $ideal which I will never compromise" (hat tip Katja Grace).

Comment author: AbyCodes 25 July 2011 09:00:50AM 1 point [-]

I was just pointing to the opinion that, not everyone who tries to escape from death are actually afraid of death per se. They might have other reasons.

Comment author: LeibnizBasher 24 July 2011 09:22:11PM 1 point [-]

Death from old age often involves drowning in the fluid that accumulates in your lungs when you get pneumonia.

Comment author: andrewk 21 July 2011 03:53:37AM 1 point [-]

Interesting that you chose the "burning building" analogy. In the Fire Sermon, the Buddha argued that being incarnated in samsara was like being in a burning building, and that the only sensible thing to do was to take steps to ensure the complete ending of the process of reincarnation in samsara (and dying just doesn't cut it in this regard). The burning building analogy in this case is a terrible one, as we are talking about the difference between a healthy person seeking to avoid pain and disability and the cryonics argument, which is all about preserving a past-its-use-by-date body, at considerable expense and loss of enjoyment of this existence, with no guarantee at all that there will ever be a payoff for the expenditure.

Comment author: zslastman 27 July 2013 06:35:02AM 0 points [-]

This is an excellent post Yvain. How can I socially pressure you into posting the next one? Guilt? Threats against my own wellbeing?

Comment author: [deleted] 05 August 2012 05:57:25PM 0 points [-]

I like to enforce reductionist consistency in my own brain. I like my ethics universal and contradiction-free, mainly because other people can't accuse me of being inconsistent then.

The rest is akrasia.

Comment author: Curiouskid 26 December 2011 07:19:04PM 0 points [-]

Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

I don't really see how these two philosophies contradict.

Comment author: TylerJay 20 July 2011 11:00:22PM 0 points [-]

Absolutely fantastic post. Extremely clearly written, and made the blue-minimizing robot thought experiment really click for me. Can't wait for the next one.