Comment author: simplicio 29 September 2014 01:39:56PM 3 points [-]

Yay for personal finance, boo for ethics, which is liable to become a mere bully pulpit for teachers' own views.

Comment author: tslarm 29 September 2014 02:17:46PM 1 point [-]

It might be possible (and useful) to design an ethics curriculum that helps students to think more clearly about their own views, though, without giving their teachers much of an excuse to preach.

Comment author: Jayson_Virissimo 21 September 2014 11:02:24PM 1 point [-]

Would philosophers of mathematics agree with physicists on the foundations of mathematics? If not, should they dismiss their views on physics?

Comment author: tslarm 22 September 2014 01:14:04AM *  1 point [-]

I don't think it's as simple as 'agreement = competent; disagreement = incompetent', for at least a couple of reasons.

First, when judging the credibility of a source, their views on a given issue will be weighted according to the confidence with which they're expressed (i.e. the source's level of claimed expertise in that area). Second, disagreement will have more weight the closer the matter is to being one of settled objective fact.

I'm by no means an expert on the philosophy of mathematics, but I imagine that at the very least it's an area in which thoughtful, intelligent, honest people can disagree, and at the most it's one in which there simply isn't a single set of correct answers. So disagreement need not seriously undermine one's confidence in a source, but that doesn't mean that all answers are equally sensible or valid, nor that Hegel can't have been talking credibility-destroying nonsense.

Comment author: Anders_H 13 September 2014 08:53:20PM *  4 points [-]

This discussion and a previous conversation with Nate have helped me crystallize my thoughts on why I prefer CDT to any of the attempts to "fix" it using timelessness. Most of the material on TDT/UDT is too technical for me, so it is entirely possible that I am wrong; if there are errors in my reasoning, I would be very grateful if someone could point them out:

Any decision theory depends on the concept of choice: If there is no choice, there is no need for a decision theory. I have seen a quote attributed to Pearl to the effect that we can only talk about "interventions" at a level of abstraction where free will is apparent. This seems true of any decision theory. (Note: From looking at Google, it appears that the only verified source for this quotation is on Less Wrong).

CDT and TDT differ in how they operationalize choice, and therefore in whether they are consistent with free will. In Causal Decision Theory, agents choose actions from a choice set. In contrast, from my limited understanding of TDT/UDT, it seems as if agents choose their own source code. This is not only inconsistent with my (perhaps naive) subjective experience of free will; it also seems likely to lead to an incoherent concept of "choice" due to recursion.

Have I misunderstood something fundamental?

Comment author: tslarm 14 September 2014 08:08:25AM 5 points [-]

I don't think you've misunderstood; in fact I share your position.

Do you also reject compatibilist accounts of free will? I think the basic point at issue here is whether or not a fully determined action can be genuinely 'chosen', any more than the past events that determine it.

The set of assumptions that undermines CDT also ensures that the decision process is nothing more than the deterministic consequence (give or take some irrelevant randomness) of an earlier state of the world + physical law. The 'agent' is a fully determined cog in a causally closed system.

In the same-source-code PD, at the beginning of the decision process each agent knows that the end result will be either mutual cooperation or mutual defection, and also that the following propositions must be either all true or all false:

  1. 'I was programmed to cooperate'
  2. 'the other agent was programmed to cooperate'
  3. 'I will cooperate'
  4. 'the end result will be mutual cooperation'

The agent wants Proposition 4 -- and therefore all of the other propositions -- to be true.

Since all of the propositions are known to share the same truth value, choosing to make Proposition 3 true is equivalent to choosing to make all four propositions true -- including the two that refer to past events (Propositions 1 and 2). So either the agent can choose the truth value of propositions about the past, or else Proposition 3 is not really under the agent's control.
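The linkage between the four propositions can be sketched in code. This is only an illustrative toy, not any formal TDT construction: the function name `decide` and the payoff-free setup are my own invention, and I simply assume two deterministic agents running byte-identical source code.

```python
# Two agents in a Prisoner's Dilemma who run the *same* deterministic
# source code must produce the same action, so the only possible
# outcomes are mutual cooperation or mutual defection.

def decide(own_source, opponent_source):
    # A deterministic decision procedure: if the opponent provably runs
    # my exact code, its output must equal mine, so "cooperating" here
    # coincides with the end result being mutual cooperation.
    if own_source == opponent_source:
        return "cooperate"
    return "defect"

source = "the shared program text"
a = decide(source, source)
b = decide(source, source)

# Propositions 1-4 stand or fall together: both agents were programmed
# identically, so their actions and the end result necessarily coincide.
assert a == b
outcome = (a, b)
print(outcome)  # → ('cooperate', 'cooperate')
```

The sketch makes the determinism vivid: once the shared source is fixed, `a`, `b`, and `outcome` are all settled at once, which is exactly why singling out Proposition 3 as the uniquely "chosen" one looks arbitrary.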

I'd be interested to know whether those who disagree with me/us see a logical error above, or simply have a concept of choice/agency/free will/control that renders the previous paragraph either false or unproblematic (presumably because it allows you to single out Proposition 3 as uniquely under the agent's control, or it isn't so fussy about temporal order). If the latter, is this ultimately a semantic dispute? (I suspect that some will half-agree with that, but add that the incompatibilist notion of free will is at best empirically false and at worst incoherent. I think the charge of incoherence is false and the charge of empirical falsity is unproven, but I won't go into that now.)

In any case, responses would be appreciated. (And if you think I'm completely mistaken or confused, please bear in mind that I made a genuine attempt to explain my position clearly!)

Comment author: jsteinhardt 14 July 2014 01:18:54AM 5 points [-]

Tangential, but:

usually the best (funded) PhD program you got into is a good choice for you. But only do it if you enjoy research/learning for its own sake.

I'm not sure I agree with this, except insofar as any top-tier or even second-tier program will pay for your graduate education, at least in engineering fields, and so if they do not then that is a major red flag. I would say that research fit with your advisor, caliber of peers, etc. is much more important.

Comment author: tslarm 16 July 2014 03:21:14AM *  8 points [-]

I interpreted "the best (funded) PhD program you got into" to mean 'the best PhD program that offered you a funded place', rather than 'the best-funded PhD program that offered you a place'. So Algernoq's advice need not conflict with yours, unless he did mean 'best' in a very narrow sense.

Comment author: tslarm 30 March 2014 01:30:34AM *  6 points [-]

"It is, however, proper application of Bayesian evidence."

Nonsense.

If the only relevant pieces of information you had were the race of each man, and the average intelligence of each race, then of course it would be rational to estimate that the man from the 'smarter' race was the smarter of the two. But this is very far from the truth. In the Obama-Bush example, there is more than enough evidence on the public record to swamp any racially determined prior.
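To make "swamping the prior" concrete, here is a hypothetical Bayesian update in odds form. Every number is invented purely for illustration; the point is only the arithmetic, namely that a modest stream of individual-level evidence overwhelms even a heavily skewed group-based prior.

```python
# Hypothetical numbers: a prior of 0.9 that person A is smarter than
# person B, updated on ten independent pieces of individual evidence,
# each 4x more likely if B is actually the smarter one.

prior_a_smarter = 0.9
likelihood_ratio_per_item = 4.0  # P(evidence | B smarter) / P(evidence | A smarter)
n_items = 10

# Odds form of Bayes' theorem: posterior odds = prior odds / LR^n,
# dividing because each item of evidence favours B.
prior_odds = prior_a_smarter / (1 - prior_a_smarter)  # 9.0
posterior_odds = prior_odds / likelihood_ratio_per_item ** n_items
posterior_a_smarter = posterior_odds / (1 + posterior_odds)

print(posterior_a_smarter)  # roughly 8.6e-06: the prior is swamped
```

A 9-to-1 prior sounds imposing, but ten 4x likelihood ratios multiply to over a million, which is the sense in which a public record full of individual evidence leaves the group prior with essentially no influence.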

I think the principle of 'treating people as individuals' exists to combat a couple of things. One is the tendency to form stereotypes on flimsy or non-existent evidence, to over-estimate the generality and force of those stereotypes that are factually based, and to treat prejudice (i.e. group membership-based priors) as a substitute for even very easily-gathered and reliable evidence about the individual. The other is the direct emotional harm done to people by treating them as members of a group first, and individuals second (if at all). It is possible for this harm to outweigh the benefits of otherwise-rational discrimination.

Comment author: tslarm 30 March 2014 01:36:30AM 2 points [-]

Note: I honestly have no idea about the relationship between race and intelligence, so I deliberately set aside the question of who, if anyone, would have the higher prior in the Obama-Bush comparison. These aren't politically correct weasel words; I would have a hard time properly defining intelligence, let alone measuring it in a culturally neutral way, but if I do see good evidence for a racial intelligence gap then I will readily accept it. All of this is beside the point of the dispute between TheAncientGreek and Eugine_Nier, though, for the reasons I gave above.

Comment author: Eugine_Nier 29 March 2014 10:41:53PM -4 points [-]

Someone once told me that Obama must be dumber than GWB because he is black. That is what treating someone as an individual isn't.

It is, however, proper application of Bayesian evidence.

Comment author: CCC 16 March 2014 04:06:34AM 1 point [-]

A tricky question.

The obvious, and trivially true, answer is that he who does both does more good than either. But that's not what you asked.

So. It can be hard to compare the two options when considering the actions of a single person, since the beneficiaries of the actions do not overlap. Therefore I shall employ a simple heuristic; I shall assume that the option which does the most good when one person does it is also the option that does the most good when everyone does it.

So, the first option; everyone (who can afford it) makes large donations to efficient charities, while everyone avoids those nearby and is unpleasant when forced to deal with someone else directly.

If I make a few assumptions about the effectiveness (and priorities) of the charities and the sum of the donations, I find myself considering a world where everyone is sufficiently fed, clothed, sheltered, medically cared for and educated. However, the fact that everyone is unpleasant to everyone else leads to everyone being grumpy, irritated, and mildly unhappy.

Considering the second option: charitable donations drastically decrease, but everyone is pleasant and helpful to everyone they meet face-to-face. In this possible world, there are people who go hungry, naked, and homeless. But probably fewer than in our current world, because everyone they meet will be helpful, aiding them in their plight if they can. And because everyone is pleasant and tries to uplift the mood of those they meet, a large majority of people consider themselves happy.

Comment author: tslarm 16 March 2014 04:25:48AM *  5 points [-]

Therefore I shall employ a simple heuristic; I shall assume that the option which does the most good when one person does it is also the option that does the most good when everyone does it.

This assumption seems trivially false to me, and despite being labeled as a mere 'heuristic', it is the crucial step in your argument. Can you explain why I should take it seriously?
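One way to see why "best when one person does it = best when everyone does it" fails is through a toy counterexample. All of the payoffs below are invented; the scenario (standing up at a stadium) is just a standard illustration of an action whose value depends on how many others take it.

```python
# Toy counterexample (all payoffs invented): standing up at a stadium.
# When one spectator stands, they see better; when everyone stands,
# nobody's view improves and everyone pays a comfort cost.

def total_welfare(n_standing, n_people=100):
    # Standing only yields a view bonus while some people remain seated.
    view_bonus = n_standing if n_standing < n_people else 0
    comfort_cost = n_standing  # each stander is slightly less comfortable
    return view_bonus * 2 - comfort_cost

one = total_welfare(1)        # one person stands: net gain
everyone = total_welfare(100) # all stand: net loss

print(one, everyone)  # prints: 1 -100
assert one > 0 > everyone
```

The option that does the most good when one person takes it (standing) does the most harm when universalized, so the heuristic can point in exactly the wrong direction.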

Comment author: hairyfigment 08 September 2013 06:37:22AM *  1 point [-]

At one point I thought I recalled reading about a series of purported experiments by one person. Sadly, I couldn't find it then and I don't intend to try tonight. According to my extremely fallible memory:

  • The Gatekeeper players likely all came from outside the LW community, assuming the AI/blogger didn't make it all up.

  • The fundamentalist Christian woman refused to let the AI out or even discuss the matter past a certain point, saying that Artificial Intelligence (ETA: as a field of endeavor) was immoral. Everyone else let the AI out.

  • The blogger tried to play various different types of AIs, including totally honest ones and possibly some that s/he considered dumber-than-human. The UnFriendly ones got out more quickly on average.

Comment author: tslarm 08 February 2014 05:29:24AM 2 points [-]

I think this is the post you remember reading: http://www.sl4.org/archive/0207/4935.html