
Comment author: bryjnar 24 September 2014 09:56:09PM 6 points [-]

Fantastic post, I think this is right on the money.

Many more Newcomblike scenarios simply don't feel like decision problems: people present ideas to us in specific ways (depending upon their model of how we make choices) and most of us don't fret about how others would have presented us with different opportunities if we had acted in different ways.

I think this is a big deal. Part of the problem is that the decision point (if there was anything so firm) is often quite temporally distant from the point at which the payoff happens. The time when you "decide" to become unreliable (or the period in which you become unreliable) may be quite a while before you actually feel the ill effects of being unreliable.

Comment author: CalmCanary 22 June 2014 07:27:35PM 16 points [-]

You cannot possibly gain new knowledge about physics by doing moral philosophy. At best, you have shown that any version of utilitarianism which adheres to your assumptions must specify a privileged reference frame in order to be coherent, but this does not imply that this reference frame is the true one in any physical sense.

Comment author: bryjnar 23 June 2014 09:07:22PM 0 points [-]

You cannot possibly gain new knowledge about physics by doing moral philosophy.

This seems untrue. If you have high credence in the two premisses:

  • If X were a correct physical theory, then Y.
  • Not Y.

then that should decrease your credence in X. It doesn't matter whether Y is a proposition about the behaviour of gases or about moral philosophy (although the implication is likely to be weaker in the latter case).
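
To put rough numbers on it (a sketch using only the product rule; the figures are made up for illustration): since P(X ∧ Y) = P(Y|X)·P(X) and P(X ∧ Y) ≤ P(Y),

    P(X) ≤ P(Y) / P(Y|X)

So with P(Y|X) = 0.9 and P(Y) = 0.05, your credence in X is capped at about 0.056. A weaker implication (a lower P(Y|X)) loosens the bound, which is why the moral-philosophy case shifts your credence in X less.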

Comment author: whpearson 28 February 2013 09:16:49PM 0 points [-]

I tend to like constructivist maths because it makes sense from a computational point of view. A Boolean function may return True or False, or it may fail to return at all (bottom, ⊥, in type-theoretic terms) if it gets stuck in an infinite loop.

So I'm going to look at things from a computational point of view, to see if anything shakes loose. What might ~(A and ~A) represent computationally? The one thing I can think of is that A isn't a pure function but a computation in an unknown monad of some variety (there is some hidden state): a race condition or memory corruption means that when A is evaluated at different points in time it comes to different values.
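
Here's a minimal Haskell sketch of that situation (the IORef and the interleaved write are my invention, standing in for a race condition): when A consults hidden mutable state, the two evaluations inside "A and not A" can disagree, so ~(A and ~A) fails for the run even though it holds for any fixed Boolean value.

    import Data.IORef

    -- "A" is not a pure Bool but an action that consults hidden state.
    a :: IORef Bool -> IO Bool
    a ref = readIORef ref

    -- Evaluate "A and not A", with a write interleaved between the two
    -- evaluations of A, standing in for a race or memory corruption.
    contradiction :: IO Bool
    contradiction = do
      ref <- newIORef True
      x <- a ref            -- first evaluation sees True
      writeIORef ref False  -- hidden state changes under our feet
      y <- a ref            -- second evaluation sees False
      return (x && not y)   -- True: "A and not A" held on this run

    main :: IO ()
    main = contradiction >>= print  -- prints True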

Going by the MWI hint, it also seems to be able to represent questions that aren't meaningful to ask. Did the particle travel through that path, or did it not? Well, it travelled through all possible paths, which is neither a yes nor a no. You may want a logic that can represent such meaningless questions, because you can't know a priori that a question is meaningless.

Comment author: bryjnar 01 March 2013 02:08:27AM 1 point [-]

Constructivist logic works great if you interpret it as saying which statements can be proven, or computed, but I would say it doesn't hold up when interpreted as showing which statements are true (given your axioms). It's therefore not really appropriate for mathematics, unless you want to look at mathematics in the light of its computational or proof-theoretic properties.
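
To illustrate the proof/computation reading (a minimal Haskell sketch via the Curry-Howard correspondence; the names are mine): a constructive proof just is a program, so ~(A and ~A) is "provable" because its type is inhabited, while excluded middle is not.

    import Data.Void (Void)

    -- Curry-Howard: propositions are types, proofs are programs,
    -- and "not P" is modelled as P -> Void.

    -- ~(A and ~A) is constructively provable: the type has an inhabitant.
    nonContradiction :: (a, a -> Void) -> Void
    nonContradiction (x, notX) = notX x

    -- Excluded middle, A or ~A, is not: no total program can be
    -- written at this type for an arbitrary a.
    -- excludedMiddle :: Either a (a -> Void)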

Comment author: passive_fist 28 February 2013 08:25:36PM 3 points [-]

Does your new concept have anything to do with dialetheism?

Comment author: bryjnar 01 March 2013 02:06:02AM 1 point [-]

Dialetheism requires paraconsistent logic, as you have to be able to reason in the presence of contradictions, but paraconsistent logic can be used to model things other than truth. For example, constructive logic is often given a semantics of which statements can be proven, rather than which statements are true. There are similar interpretations for paraconsistent logic.

OTOH, if you think that paraconsistent logic is the correct logic for truth, then you probably do have to be a dialetheist.
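
For a concrete paraconsistent semantics, here is a sketch of the truth tables of Priest's Logic of Paradox (LP), one standard paraconsistent logic (the Haskell encoding is mine): a third value B marks a "glut", a statement counted as both true and false, and a glutted contradiction doesn't entail arbitrary statements.

    -- Priest's LP: three values ordered F < B < T, where B ("both")
    -- is a truth-value glut. Designated values are T and B.
    data LP = F | B | T deriving (Eq, Ord, Show)

    lpNot :: LP -> LP
    lpNot T = F
    lpNot B = B
    lpNot F = T

    lpAnd :: LP -> LP -> LP
    lpAnd = min  -- conjunction is the meet in the order F < B < T

    designated :: LP -> Bool
    designated v = v /= F

    -- A glutted A makes "A and not A" designated, but an unrelated
    -- F-valued Q stays undesignated: no explosion.
    example :: (Bool, Bool)
    example = (designated (lpAnd B (lpNot B)), designated F)
    -- = (True, False)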

Comment author: JMiller 07 February 2013 09:21:13PM 4 points [-]

In my intermediate-level course, we barely talk about history at all. It is supposed to focus on "developments" in the last thirty years or so. The problem I have is that most profs think that philosophy can go about figuring out the truth without things like empiricism, scientific study, neuroscience, probability, and decision theory. Everything is very "intuitive", and I find that difficult to grasp.

For example, when discussing deontology, I asked why there should be absolute "requirements" as an argument against consequentialism, seeing that if it's true that taking these requirements into account produces the best outcomes, then that is what a consequentialist would (should) say as well! The professor's answer, and that of many students, was: "That's just the way it is. Some things ought not be done, simply because they ought not be done." That is a hard pill for me to swallow. In this case I am much more comfortable with Eliezer's Ethical Injunctions.

(The prof was not necessarily promoting deontology but was arguing on its behalf.)

Comment author: bryjnar 08 February 2013 04:18:09AM 1 point [-]

That's pretty weird, considering that so-called "sophisticated" consequentialist theories (where you can say something like: although in this instance it would be better for me to do X than Y, overall it would be better to have a disposition to do Y than X, so I shall have such a disposition) have been a huge area of discussion recently. And yes, it's bloody obvious and it's a scandal it took so long for these kinds of ideas to get into contemporary philosophy.

Perhaps the prof meant that such a consequentialist account appears to tell you to follow certain "deontological" requirements, but for the wrong reason in some way. In much the same way that the existence of a vengeful God might make acting morally also selfishly rational, but if you acted morally out of self-interest then you would be doing it for the wrong reasons, and wouldn't have actually got to the heart of things.

Alternatively, they're just useless. Philosophy has a pretty high rate of that, but don't throw out the baby with the bathwater! ;)

Comment author: bryjnar 24 January 2013 03:05:09AM 1 point [-]

I agree that "right for the wrong reasons" is an indictment of your epistemic process: it says that you made a prediction that turned out to be correct, but that actually you just got lucky. What is important for making future predictions is being able to pick the option that is most likely, since "being lucky" is not a repeatable strategy.

The moral for making better decisions is that we should not praise people who predict prima facie unlikely outcomes without presenting a strong rationale, even when they happen to be correct. Amongst those who have made unusual but successful predictions, we have to distinguish people who are reliably capable of insight from those who were just lucky. Pick your contrarians carefully.

There's a more complex case where your predictions are made for the "wrong" reasons, but they are still reliably correct. Say you have a disorder that makes you feel nauseous in proportion to the unlikeliness of an option, and you habitually avoid options that make you nauseous. In that case, it seems more that you've hit upon a useful heuristic than anything else. Gettier cases aren't really like this, because they are usually more about luck than about reliable heuristics that aren't explicitly "rational".

Comment author: bryjnar 06 January 2013 02:05:10AM 2 points [-]

Great post! I wish Harsanyi's papers were better known amongst philosophers.

Comment author: Qiaochu_Yuan 05 January 2013 11:41:54AM *  25 points [-]

it is not logic “all the way down,” it is anchored by certain contingent facts about humanity, bonoboness and so forth.

When we talk about morality, we are talking about those contingent facts, and once we've pinned down precisely what the consequences of those contingent facts are, we have picked out a logical object. We are not trying to explain why we picked this logical object and not some other logical object - that is anchored by contingent facts about humanity, evolutionary biology, etc. We are just trying to describe this logical object.

This point might be made more clearly by Sorting Pebbles Into Correct Heaps. Why the pebblesorting people choose to sort pebbles one way and not another way is anchored by contingent facts about pebblesorting people, evolutionary biology, etc. But the algorithm that decides how the pebblesorting people sort pebbles is a logical object.

It doesn't matter where our morality comes from (except insofar as this helps us figure out what it is); wherever it came from, it's still the same morality.

Comment author: bryjnar 06 January 2013 02:00:24AM 8 points [-]

Mainstream philosophy translation: moral concepts rigidly designate certain natural properties. However, precisely which properties these are was originally fixed by certain contingent facts about the world we live in and human history.

Hence the whole "If the world had been different, then what is denoted by "morality" would have been different, but those actions would still be immoral (given what "morality" actually denotes)" thing.

This position is sometimes referred to as "synthetic ethical naturalism".

Comment author: bryjnar 06 January 2013 01:47:02AM 5 points [-]

I'm still worried about the word "model". You talk about models of second-order logic, but what is a model of second-order logic? Classically speaking, it's a set, and you do talk about ZF proving the existence of models of SOL. But if we need to use set theory to reason about the semantic properties of SOL, then are we not working within a first-order set theory? And hence we're vulnerable to unexpected "models" of that set theory affecting the theorems we prove about SOL within it: for instance, by the Löwenheim-Skolem theorem, if first-order ZF has a model at all then it has a countable one, whose internal "powersets" omit almost all of the real subsets that second-order semantics is supposed to quantify over.

It seems like you're treating "model" as if it were a fundamental concept, when in fact the way it's used in mathematics is normally embedded within some set theory. But this then means you can't robustly talk about "models" all the way down: at some point your notion of model bottoms out. I don't think I have a solution to this, but it feels like it's a problem worth addressing.

Comment author: bryjnar 04 January 2013 11:26:18AM 4 points [-]

It's like the opposite of considering the Least Convenient Possible World: the Most Convenient Possible World, where everything on my side turns out as well as possible, and everything on yours turns out as badly as possible!
