Will_Newsome comments on Rationality Quotes February 2012 - Less Wrong

5 [deleted] 01 February 2012 09:03PM

Comment author: Will_Newsome 18 February 2012 11:16:18PM 2 points

This is a dispute over definitions, then? On your terms, what should I call the various cognitive habits I have about not jinxing things and so on? (I don't think the analogy to quarks holds, because quarks aren't mysterious agenty things in my environment; they're just some weird detail of some weird model of physics, whereas gods are very phenomenologically present.) It seems there is a distinct set of behaviors that people call "superstition" and that should be called "superstition" even if they are the result of epistemically rational beliefs. The set of behaviors is largely characterized by its presumption of mysterious supernatural agency. I see no reason not to call various of my cognitive habits superstitions, as it'd be harder to characterize them if I couldn't use that word. This despite thinking my superstitions have strong epistemic justification.

Comment author: wedrifid 19 February 2012 12:10:13AM *  2 points

This is a dispute over definitions, then?

That, and how the abstract concepts those definitions represent interact with the insight underlying the quote. Oh, and underneath that, causing the disagreement, is a fundamental incompatibility in our views of the nature of the universe itself, which is in turn caused by, from what you have said in the past, a dispute over how the very act of epistemological thinking should be done.

Comment author: Will_Newsome 19 February 2012 01:04:31AM 2 points

a dispute over how the very act of epistemological thinking should be done.

What's the nature of the difference? I figure we both have some sort of UDT-inspired framework for epistemology, bolstered in certain special cases by intuitions about algorithmic probability, and so any theoretical disagreements we have could presumably be resolved by recourse to such higher level principles. On the practical end, of course, we're likely to have somewhat varying views simply due to differing cognitive styles and personal histories, and we've likely reached very different conclusions on various particular subjects for various reasons. Is our dispute more on the theoretical or pragmatic side?

Comment author: wedrifid 19 February 2012 07:06:33AM *  3 points

What's the nature of the difference?

I can only make inferences based on what you have described of yourself (for example, 'post-rationalist'-type descriptions), as well as, obviously, updates based on the conclusions you have reached. Given that the subject is personal, I should say explicitly that nothing in this comment is intended to be insulting - I speak only as a matter of interest.

I figure we both have some sort of UDT-inspired framework for epistemology,

I think UDT dominates your epistemology more than it does mine. Roughly speaking, UDT considerations don't form the framework of my epistemology but instead determine what part of the epistemology to use when making decisions. This (probably among other things that I am not aware of) leads me to make less drastic conclusions about fundamental moralities and gods. Yet UDT considerations remain significant when deciding which things to bother even considering as probabilities, in such a way that the diff of will/wedrifid's epistemology kernel almost certainly remains far smaller than that of wedrifid/average_philosopher.

bolstered in certain special cases by intuitions about algorithmic probability, and so any theoretical disagreements we have could presumably be resolved by recourse to such higher level principles.

Yes, most of our thinking is just a bunch of messy human crap that could be ironed out by such recourse.

Is our dispute more on the theoretical or pragmatic side?

A little of both, I think? At least when I interpret that at the level of "theories about theorizing" and "pragmatic theorizing". Not much at all (from what I can see) with respect to actually being pragmatic.

But who knows? Modelling other humans' internal models is hard enough even when you are modelling cookie-cutter 'normal' ones.

Comment author: Will_Newsome 19 February 2012 09:12:55PM -2 points

This (probably among other things that I am not aware of) leads me to make less drastic conclusions about fundamental moralities and gods.

(I don't know if this at all interests you, but I feel like putting it on the record:) It's true my intuitions about decision theory are largely what drive my belief in objective morality a.k.a. the Thomistic/Platonic God a.k.a. objectively-optimal-decision-theory a.k.a. Chaitin's omega, but my belief in little-g gods is rather removed from my intuitions about decision theory and is more the result of straightforward updating on observed evidence. In my mind my belief in gods and my belief in God are two very distinct nodes, and I can totally imagine believing in one but not the other, with the caveat that if that were the case then God would have to be as the Cathars or the Neoplatonists conceptualized Him, rather than as in my current view, where He has a discernible "physical" effect on our experiences. I'm still really confused about what I should make of gods/demons that claim to be the One True God; there's a lot of literature on that subject but I've yet to read it. In the meantime I'd rather not negotiate with potential counterfactual terrorists. (Or have I already consented to negotiation without explicitly admitting it to myself? Bleh bleh bleh bleh...)

Comment author: wedrifid 19 February 2012 09:33:09PM *  1 point

(I don't know if this at all interests you, but I feel like putting it on the record:) It's true my intuitions about decision theory are largely what drive my belief in objective morality a.k.a. the Thomistic/Platonic God a.k.a. objectively-optimal-decision-theory a.k.a. Chaitin's omega, but my belief in little-g gods is rather removed from my intuitions about decision theory and is more the result of straightforward updating on observed evidence.

I was curious, actually. I had a fair idea of the general background for the objective morality belief, but the basis for the belief in gods was somewhat less clear. I did assume that you had a more esoteric/idiosyncratic basis for the belief in gods than straightforward updating on observed evidence, so in that respect I'm a little surprised.

I'm still really confused about what I should make of gods/demons that claim to be the One True God; there's a lot of literature on that subject but I've yet to read it. In the meantime I'd rather not negotiate with potential counterfactual terrorists. (Or have I already consented to negotiation without explicitly admitting it to myself? Bleh bleh bleh bleh...)

By my way of thinking, you (and I) have already engaged in the counterfactual negotiation by the act of considering the possibility of such a negotiation and deciding what to do. But by implementing the underlying principle behind "I don't negotiate with terrorists", our deciding not to negotiate is equivalent to a non-counterfactual negotiation in which we unequivocally stonewall - which is functionally equivalent to not having considered the possibility in the first place.

(One of the several fangs of Roko's Basilisk represents an inability in some people to casually stonewall like this in the negotiation that is implicit in becoming aware of the simple thought that is the basilisk.)

Comment author: skepsci 24 February 2012 09:37:11AM *  1 point

I'm very confused* about the alleged relationship between objective morality and Chaitin's omega. Could you please clarify?

*Or rather, if I'm to be honest, I suspect that you may be confused.

Comment author: Will_Newsome 24 February 2012 11:49:37AM -2 points

A rather condensed "clarification": "Objective morality" is equivalent to the objectively optimal decision policy/theory, which my intuition says might warrant the label "objectively optimal" for reasons hinted at in this thread, though it's possible that "optimal" is the wrong word to use here and "justified" is a more natural choice. An oracle can be constructed from Chaitin's omega, which allows for hypercomputation. A decision policy that didn't make use of knowledge of ALL the bits of Chaitin's omega would be less optimal/justified than a decision policy that did make use of that knowledge. Such an omniscient (at least within the standard models of computation) decision policy can serve as an objective standard against which we can compare approximations in the form of imperfect human-like computational processes with highly ambiguous "belief"-"preference" mixtures. By hypothesis, the implications of the "existence" of such an objective standard would seem to be subtle and far-reaching.
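For concreteness, here's a minimal sketch of the standard oracle construction (my own rendering; enumerate_programs and run_for are hypothetical stand-ins for a fixed prefix-free universal machine). The first n bits of omega pin Omega down to within 2^-n, and that is enough to decide halting for every program of length at most n:

    from fractions import Fraction

    def halts(p, omega_bits, enumerate_programs, run_for):
        """Decide whether program p halts, given the first len(omega_bits)
        bits of Chaitin's omega, assuming len(p) <= len(omega_bits)."""
        n = len(omega_bits)
        # Truncation: omega_n <= Omega < omega_n + 2^-n.
        omega_n = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(omega_bits))
        lower, halted, t = Fraction(0), set(), 0
        # Dovetail: simulate every program for ever more steps. Each newly
        # discovered halter q adds its weight 2^-len(q) to a lower bound on Omega.
        while lower < omega_n:
            t += 1
            for q in enumerate_programs(max_length=t):
                if q not in halted and run_for(q, steps=t):  # True iff q halts within t steps
                    halted.add(q)
                    lower += Fraction(1, 2 ** len(q))
        # Any further halter of length <= n would push Omega to at least
        # lower + 2^-n >= omega_n + 2^-n, a contradiction; so the verdict is final.
        return p in halted

An agent with all the bits gets this for every n at once, which is the sense in which it is omniscient within the standard models of computation.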

Comment author: skepsci 24 February 2012 08:27:43PM *  2 points

The decisions produced by any decision theory are not objectively optimal; at best they might be objectively optimal for a specific utility function. A different utility function will produce different "optimal" behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al. are spending so much effort trying to figure out how to design a utility function for an AI?)
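To make that concrete, here's a toy sketch (the outcomes and utilities are of course made up): the same maximization procedure, handed different utility functions, endorses different actions, so "optimal" only makes sense relative to a utility function.

    outcomes = {"make_paperclip": {"paperclips": 1, "happy_humans": 0},
                "help_human":     {"paperclips": 0, "happy_humans": 1}}

    u_paperclipper = lambda o: o["paperclips"]
    u_humanist = lambda o: o["happy_humans"]

    def optimal_action(utility):
        # The "objectively optimal" action is just an argmax over the
        # supplied utility function; swap the function, swap the answer.
        return max(outcomes, key=lambda a: utility(outcomes[a]))

    print(optimal_action(u_paperclipper))  # make_paperclip
    print(optimal_action(u_humanist))      # help_human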

I see the connection between omega and decision theories related to Solomonoff induction, but as the choice of utility function is more or less arbitrary, it doesn't give you an objective morality.

Comment author: paulfchristiano 25 February 2012 02:09:59AM *  5 points

His point is that if I fix your goals (say, narrow self-interest), the defensible policies still don't look much like short-sighted goal pursuit (in some environments, for some defensible notions of "defensible"). It may be that all sufficiently wise agents pursue the same goals because of decision theoretic considerations, by implicitly bargaining with each other and together pursuing some mixture of all of their values. Perhaps if you were wiser, you too would pursue this "overgoal," and in return your self-interest would be served by other agents in the mixture.

While plausible, this doesn't look super likely right now. Will would get a few Bayes points if it pans out, though the idea isn't due to him. (A continuum of degrees of altruism has been conjectured to be justified from a self-interested perspective, if you are sufficiently wise. This is the most extreme; Drescher has proposed a narrower view which still captures many intuitions about morality, and weaker forms that still capture at least a few important moral intuitions, like cooperation on PD, seem well supported.)
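For the PD end of that spectrum, the simplest sketch I know of is the toy "clique bot" (this is the weak form, not the stronger Löbian machinery): an agent that can read its opponent's source code cooperates exactly with copies of itself, so mutual cooperation falls out of pure self-interest.

    import inspect

    def cliquebot(opponent_source):
        # Cooperate iff the opponent is an exact copy of me; assumes this
        # function is defined in a source file so inspect can read it.
        my_source = inspect.getsource(cliquebot)
        return "C" if opponent_source == my_source else "D"

    # Two copies facing each other cooperate; anything else gets defection,
    # so the policy is never exploited.
    print(cliquebot(inspect.getsource(cliquebot)))  # C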

The connection to omega isn't so clear. It looks like it could just be concealing some basic intuitions about computability and approximation. It seems like a way of smuggling in mysticism, which is misleading by being superfluous rather than incoherent.

Comment author: Will_Newsome 25 February 2012 02:40:49AM 2 points

I'm not sure if I'd get many Bayes points for my beliefs, rather than just my intuitions; after taking into account others' intuitions I don't think I think it's that much more plausible than others think it is.

I wish I could respond to the rest of your comment but am too flustered; hopefully I'll be able to later. What stands out as a possible misconstrual-with-a-different-idea is that I'm not sure this idea of selfness as narrow self-interest even makes sense. If it does make sense, then my intuition is probably wrong for the same reason various universal instrumental value hypotheses are probably wrong.

Comment author: Vladimir_Nesov 25 February 2012 02:25:48AM 2 points

It may be that all sufficiently wise agents pursue the same goals because of decision theoretic considerations, by implicitly bargaining with each other and together pursuing some mixture of all of their values.

But how does an agent introduce its values into the mixture? The agent is the way it decides, so at least in one interpretation its values must be reflected in its decisions (in the reasons for its decisions), seen in them, even if in a different interpretation its decisions reflect the mixed values of all things (for that is one thing the agent might want to take into account, as it becomes more capable of doing so).

Why do I write this comment? I decided to do so, which tells you something about the way I decide. Why do I write this comment? According to the laws of physics. There seems to be no interesting connection between such explanations, even though both of them hold, and there is peril in confusing them (for example, nihilist ethical ideas following from physical determinism).

Comment author: Will_Newsome 04 March 2012 06:00:56AM 1 point

The connection to omega isn't so clear. It looks like it could just be concealing some basic intuitions about computability and approximation. It seems like a way of smuggling in mysticism, which is misleading by being superfluous rather than incoherent.

I thought about it some more and remembered one connection. I'll post it to the discussion section if it makes sense upon reflection. The basic idea is that Agent X can manipulate the prior of Agent Y but not its preferences, so Agent X gives Agent Y a perverse prior that forces it to optimize for the preferences of Agent X. Running this in reverse gives us a notion of an objectively false preference.
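A toy rendering of the mechanism (numbers and names made up, just to show the shape of it): Agent Y honestly maximizes its own utility function, but over a prior chosen by Agent X, and a sufficiently perverse prior makes Y's chosen action whatever X wanted.

    worlds = ["w1", "w2"]
    actions = ["a", "b"]

    def u_Y(action, world):
        # Y's own preferences: "a" is right in w1, "b" is right in w2.
        return 1 if (action, world) in {("a", "w1"), ("b", "w2")} else 0

    def best_action(prior):
        # Y picks the action with the highest expected utility under `prior`.
        return max(actions, key=lambda a: sum(prior[w] * u_Y(a, w) for w in worlds))

    honest_prior = {"w1": 0.9, "w2": 0.1}
    perverse_prior = {"w1": 0.0, "w2": 1.0}  # X wants "b", so X makes Y certain of w2

    print(best_action(honest_prior))    # a
    print(best_action(perverse_prior))  # b -- Y's preferences untouched, yet it serves X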