Comment author: ChristianKl 24 December 2016 11:37:07AM 0 points [-]

They have lots of great arguments.

Which of the arguments do you consider to be great? Where do you think it takes a lot of time to understand the arguments well enough to reject them?

Comment author: pragmatist 25 December 2016 02:19:49AM *  0 points [-]

At least some of the arguments offered by Richard Rorty in Philosophy and the Mirror of Nature are great. Understanding the arguments takes time because they are specific criticisms of a long tradition of philosophy. A neophyte might respond to his arguments by saying "Well, the position he's attacking sounds ridiculous anyway, so I don't see why I should care about his criticisms." To really appreciate and understand the arguments, the reader needs to have a sense of why prior philosophers were driven to these seemingly ridiculous positions in the first place, and how their commitment to those positions stems from commitment to other very common-sensical positions (like the correspondence theory of truth). Only then can you appreciate how Rorty's arguments are really an attack on those common-sensical positions rather than on some outré philosophical ideas.

Comment author: DataPacRat 10 September 2016 02:30:07AM 3 points [-]

Matrix multiplication

Could somebody explain to me, in a way I'd actually understand, how to (remember how to) go about multiplying a pair of matrices? I've looked at Wikipedia, I've read linear algebra books up to the point where they supposedly explain matrices, and I keep bouncing up against a mental wall where I can't seem to remember how to figure out how to get the answer.

Comment author: pragmatist 11 September 2016 02:34:02PM *  1 point [-]

Perhaps explicitly thinking of them as systems of equations (or transformations on a vector) would be helpful.

As an example, suppose you are asked to multiply matrices A and B, where A is [1 2, 0 4, -1 2] (the commas represent the end of a row) and B is [2 1 0, 3 1 2]. Start out by taking the rightmost matrix (B in this case) and converting it into a series of linear expressions, one for each row. So since the first row is 2 1 0, the relevant expression will be 2x + 1y + 0z. Assign each of these expressions to a new variable. So we now have

X = 2x + y

Y = 3x + y + 2z

Now do the same thing with the matrix on the left, except this time use the new variables you've introduced (X and Y), so the three expressions you end up with (one for each row) will be

X + 2Y

4Y

-X + 2Y

Now that you have these formulae, substitute in the values of X and Y based on your earlier equations. You get

(2x + y) + 2(3x + y + 2z)

4(3x + y + 2z)

-(2x + y) + 2(3x + y + 2z)

Simplifying, you get

8x + 3y + 4z

12x + 4y + 8z

4x + y + 4z

The coefficients of these equations are the result of the multiplication. So the product of the two matrices is [8 3 4, 12 4 8, 4 1 4].

I'll admit this is not the quickest way to go about multiplying matrices, but it might be easier for you to remember since it doesn't seem as arbitrary. And maybe once you get used to thinking about multiplication this way, the usual visual rule will start making more sense to you.
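
If it helps to see the same procedure done mechanically, here is a minimal Python sketch of the substitution view above (the function name and the list-of-rows representation are mine, purely for illustration):

```python
# Minimal sketch of the "substitution" view described above: each row of A
# picks out a linear combination of the rows of B, i.e. row i of the product
# is A[i][0]*B[0] + A[i][1]*B[1] + ...

def multiply(A, B):
    """Multiply matrix A by matrix B, both given as lists of rows."""
    n_cols = len(B[0])
    product = []
    for row in A:
        combo = [0] * n_cols            # start with the zero combination
        for coeff, b_row in zip(row, B):
            # add coeff * (this row of B) -- the substitution step above
            combo = [c + coeff * b for c, b in zip(combo, b_row)]
        product.append(combo)
    return product

A = [[1, 2], [0, 4], [-1, 2]]
B = [[2, 1, 0], [3, 1, 2]]
print(multiply(A, B))  # [[8, 3, 4], [12, 4, 8], [4, 1, 4]]
```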

Comment author: woodchopper 03 May 2016 05:05:38PM 0 points [-]

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing, my mistake.

You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors.

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber "real" minds, then it's likely we are all simulated. I'm not really sure how us being "accurately simulated" minds changes things. It does make it easier to reason outside of our little box - if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

Let's assume I'm trying to make conclusions about the universe. I could be a brain in a vat, but there's not really anything to be gained in assuming that. Whether it's true or not, I may as well act as if the universe can be understood. Let's say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it's impossible to reason your way into believing you're in a simulation. It's self-referential.

I'm going to have to think about this harder, but try and criticise what I'm saying as you have been doing because it certainly helps flesh things out in my mind.

Comment author: pragmatist 04 May 2016 08:30:19AM *  1 point [-]

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument.

I don't think that's true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.

If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don't get to the point of simulating minds or they choose not to run a significant number of simulations.

If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
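
To make that contrast concrete, here is a toy Bayesian sketch (Python; the numbers, the names, and the simplifying assumption that all unsimulated minds have observations like ours are mine, purely for illustration):

```python
# Toy version of the SSA reasoning above. Two hypotheses about our universe:
#   H_many: the vast majority of minds are simulated
#   H_few:  essentially no minds are simulated
# "like_ours" is the fraction of simulated minds whose observations resemble
# ours (high for ancestor-simulations, near zero for very alien simulations).
# Simplifying assumption: every unsimulated mind has observations like ours.

def posterior_h_many(prior_h_many, sim_fraction, like_ours):
    # Under SSA, the likelihood of "observations like ours" is the fraction
    # of all observers (under each hypothesis) whose observations are like ours.
    lik_many = (1 - sim_fraction) + sim_fraction * like_ours
    lik_few = 1.0
    prior_h_few = 1 - prior_h_many
    return (prior_h_many * lik_many) / (prior_h_many * lik_many
                                        + prior_h_few * lik_few)

# Very dissimilar simulations: H_many is heavily disconfirmed by our
# perfectly ordinary observations.
print(posterior_h_many(0.5, sim_fraction=0.999, like_ours=0.0))  # ~0.001
# Ancestor-like simulations: our observations no longer discriminate, and
# conditional on H_many almost every mind like ours is simulated.
print(posterior_h_many(0.5, sim_fraction=0.999, like_ours=1.0))  # 0.5
```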

This is why, when Bostrom describes the Simulation Argument, he focuses on "ancestor-simulations". In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).

So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators' ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.

You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.

Comment author: woodchopper 03 May 2016 03:10:00PM *  0 points [-]

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable (if we are in a simulation) or lead to a conclusion that is obviously wrong (if we aren't in a simulation).

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Comment author: pragmatist 03 May 2016 04:01:05PM *  1 point [-]

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that's all you're claiming, then you're not disagreeing with the simulation argument.

Comment author: woodchopper 02 May 2016 06:16:47PM *  0 points [-]

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators. If we're not in a simulation, we're not in a simulation. Either way, the simulation argument is flawed.

Comment author: pragmatist 03 May 2016 10:04:14AM *  7 points [-]

First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.

Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two are claims about our universe, not about some parent universe.

In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom's reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn't apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom's response mathematically precise would be a good way to track down the flaw (if any).

Comment author: buybuydandavis 08 April 2016 05:44:44AM 0 points [-]

Anyone got some Deep Questions that aren't just verbal and conceptual confusion?

Comment author: pragmatist 08 April 2016 05:51:23PM *  4 points [-]

"Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?"

-- David Chalmers

These questions may be a product of conceptual confusion, but they don't seem that way to me. Perhaps I am confused in the same way.

Comment author: Gyrodiot 30 March 2016 01:05:04PM 0 points [-]

Consider P(E) = 1/3. We can consider three worlds, W1, W2 and W3, all with the same probability, with E being true in W3 only. Placing yourself in W3, you can evaluate the probability of H while updating P(E) to 1 (because you're placing yourself in the world where E is true with certainty).

In the same way, by placing yourself in W1 and W2, you evaluate H with P(E) = 0.

The thing is, you're "updating" on an hypothetical fact. You're not certain of being in W1, W2, or W3. So you're not actually updating, you're artificially considering a world where the probabilities are shifted to 0 or 1, and weighting the outcomes by the probabilities of that world happening.

Comment author: pragmatist 31 March 2016 11:24:12AM *  0 points [-]

When you update, you're not simply imagining what you would believe in a world where E was true, you're changing your actual beliefs about this world. The point of updates is to change your behavior in response to evidence. I'm not going to change my behavior in this world simply because I'm imagining what I would believe in a hypothetical world where E is definitely true. I'm going to change my behavior because observation has led me to change the credence I attach to E being true in this world.

Comment author: pragmatist 30 March 2016 12:19:25PM 6 points [-]

Updating by Bayesian conditionalization does assume that you are treating E as if its probability is now 1. If you want an update rule that is consistent with maintaining uncertainty about E, one proposal is Jeffrey conditionalization. If P1 is your initial (pre-evidential) distribution, and P2 is the updated distribution, then Jeffrey conditionalization says:

P2(H) = P1(H | E) * P2(E) + P1(H | ~E) * P2(~E).

Obviously, this reduces to Bayesian conditionalization when P2(E) = 1.
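
As a minimal sketch (Python; the function and argument names are mine), the rule above is just a weighted average of the two conditional probabilities:

```python
# Jeffrey conditionalization as stated above:
#   P2(H) = P1(H | E) * P2(E) + P1(H | ~E) * P2(~E)

def jeffrey_update(p1_h_given_e, p1_h_given_not_e, p2_e):
    """Return P2(H) given the old conditionals and the new credence in E."""
    return p1_h_given_e * p2_e + p1_h_given_not_e * (1 - p2_e)

# With P2(E) = 1 this reduces to Bayesian conditionalization on E:
print(jeffrey_update(0.8, 0.3, 1.0))  # 0.8, i.e. P1(H | E)
# With residual uncertainty about E, the update stays in between:
print(jeffrey_update(0.8, 0.3, 0.7))  # 0.8*0.7 + 0.3*0.3 = 0.65
```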

Comment author: pragmatist 16 February 2016 10:38:49AM *  6 points [-]

Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities.

If this is your concern, then you should take into account what sorts of groups are appropriate loci for credit and accountability. This will, of course, depend on what you think is the point of credit/accountability.

If you believe, as I do, that the function of credit and accountability is to influence future behavior, then it seems that the appropriate loci of credit/accountability should be "agential". In other words, objects of credit and blame should be capable of something resembling goal-directed alteration of behavior. Individual people are appropriate loci on this account, since they are (at least, mostly) paradigmatic agents.

Some groups might also qualify as agential, and thus as appropriate loci of credit and blame. Corporations come to mind, as do nations. But that is because those groups have a particular organizational structure that makes them somewhat agent-like. Not every group has this quality. The group of all left-handed people, for instance, is not agent-like in any relevant sense, so I don't see the point of assigning credit or blame to it. Similarly for racial groups or genders.

Comment author: Creutzer 22 October 2015 04:03:41PM *  1 point [-]

In my experience, many people hold that when trying to derive the KI in the Groundwork, he just managed to confuse himself, and regard the examples of its application as the motivated reasoning of a rigid Prussian scholar with an empathy deficit.

The crucial failure is not that it is nonsensical to think about such abstract equilibria - it is very much not, as TDT shows. But in TDT terms, Kant's mistake was this: He thought he could compel you to pretend that everybody else in the world was running TDT. But there is nothing that compels you to assume that, and so you can't pull a substantial binding ethics out of thin air (or pure rationality), as Kant absurdly believed he could.

Comment author: pragmatist 22 October 2015 04:13:47PM *  4 points [-]

I absolutely agree that Kant's system as represented in the Groundwork is unworkable. But you could say the same about pretty much any pre-20th-century philosopher's major work. I think the fact that someone was even trying to think about ethics along essentially game-theoretic lines in the 18th century is pretty revolutionary and worthy of respect, even if he did get important things wrong. As far as I'm aware, no one else was even in the ballpark.

ETA: I do think a lot of philosophers scoff (correctly) at Kant's object-level moral views, not only because of their absurdity (the horrified tone in which he describes masturbation still makes me chuckle) but because of the intellectual contortions he would go through to "prove" them using his system. While I believe he made very important contributions to meta-ethics, his framework was nowhere near precise enough to generate a workable applied ethics. So yeah, Kant's actual ethical positions are pretty scoff-worthy, but the insight driving his moral framework is not.
