Grognor comments on Our Phyg Is Not Exclusive Enough - Less Wrong

25 [deleted] 14 April 2012 09:08PM

Comment author: Grognor 14 April 2012 09:41:29PM *  5 points [-]

Upvoted.

I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else. Edit, after many replies: This claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.

There are those who think rationality needs to be learned through osmosis or whatever. That's fine, but I don't want it lowering the quality of discussion here.

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.

An IRC conversation I had a while ago left me with a powerful message: people will pay lip service to keeping the gardens, but when it comes time to actually do it, nobody is willing to.

Comment author: [deleted] 14 April 2012 10:08:00PM 20 points [-]

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.

This is a pretty hardcore assertion.

I am thinking of lukeprog's and Yvain's stuff as counterexamples.

Comment author: Grognor 14 April 2012 10:12:31PM *  11 points [-]

I think of them (and certain others) as exceptions that prove the rule. If you take away the foundation of the sequences and the small number of awesome people (most of whom, mind you, came here because of Eliezer's sequences), you end up with a place that's indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.

Standard disclaimer edit because I have to: The exceptions don't prove the rule in the sense of providing evidence for the rule (indeed, they are technically evidence contrariwise), but they do allow you to notice it. This is what the phrase really means.

Comment author: [deleted] 14 April 2012 10:30:00PM 3 points [-]

Your edit updated me in favour of me being confused about this exception-rule business. Can you link me to something?

Comment author: Grognor 14 April 2012 10:32:22PM *  5 points [-]

"The exception [that] proves the rule" is a frequently confused English idiom. The original meaning of this idiom is that the presence of an exception applying to a specific case establishes that a general rule existed.

-Wikipedia (!!!)

(I should just avoid this phrase from now on, if it's going to cause communication problems.)

Comment author: komponisto 16 April 2012 02:36:43AM *  2 points [-]

I suspect the main cause of misunderstanding (and subsequent misuse) is omission of the relative pronoun "that". The phrase should always be "[that is] the exception that proves the rule", never "the exception proves the rule".

Comment author: thomblake 16 April 2012 08:42:35PM 1 point [-]

Probably even better to just include "in cases not so excepted" at the end.

Comment author: David_Gerard 14 April 2012 11:04:04PM *  3 points [-]

you end up with a place that's indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.

Considering how it was subculturally seeded, this should not be surprising. Remember that LW has proceeded in a more or less direct subcultural progression from the Extropians list of the late '90s, with many of the same actual participants.

It's an online community. As such, it's a subculture and it's going to work like one. So you'll see the behaviour of an internet forum, with a bit of the topical stuff on top.

How would you cut down the transhumanist subcultural assumptions in the LW readership?

(If I ever describe LW to people these days it's something like "transhumanists talking philosophy." I believe this is an accurate description.)

Comment author: [deleted] 14 April 2012 11:21:56PM 5 points [-]

Transhumanism isn't the problem. The problem is that when people don't read the sequences, we are no better than any other forum of that community. Too many people are not reading the sequences, and not enough people are calling them out on it.

Comment author: [deleted] 14 April 2012 10:18:38PM *  1 point [-]

Exceptions don't prove rules.

You are mostly right, which is exactly what I was getting at with the "promoted is the only good stuff" comment.

I do think there is a lot of interesting, useful stuff outside of promoted, though; it's just mixed with the usual programmer/atheist/transhumanist/etc.-level stuff.

Comment author: TheOtherDave 16 April 2012 03:14:12AM 0 points [-]

I'd always thought they prove the rule in the sense of testing it.

Comment author: [deleted] 14 April 2012 09:50:29PM 5 points [-]

Thanks.

I was going to include something along those lines, but then I didn't. But really, if you haven't read the sequences, and don't care to, the only thing that separates LW from r/atheism, rationalwiki, whatever that place is you linked to, and so on is that a lot of people here have read the sequences, which isn't a fair reason to hang out here.

Comment author: wedrifid 14 April 2012 11:56:56PM 12 points [-]

I agree pretty much completely and I think if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else.

I don't consider myself a particularly patient person when it comes to tolerating ignorance or stupidity but even so I don't much mind if people here contribute without having done much background reading. What matters is that they don't behave like an obnoxious prat about it and are interested in learning things.

I do support enforcing high standards of discussion. People who come here straight from their high school debate club and Introduction to Philosophy 101 and start throwing around sub-lesswrong-standard rhetoric should be downvoted. Likewise for confident declarations of trivially false things. There should be more correction of errors that would probably be accepted (or even rewarded) in many other contexts. These are the kinds of things that don't actively exclude but do have the side effect of raising the barrier to entry. A necessary sacrifice.

Comment author: [deleted] 15 April 2012 12:04:34AM 1 point [-]

The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.

Comment author: Vaniver 15 April 2012 01:14:19AM 12 points [-]

The core-sequence fail gets downvoted pretty reliably. I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.

Isn't the metaethics sequence not liked very much? I haven't read it in a while, and so I'm not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche's moral thought.

Downvoting people for not understanding that beliefs constrain expectation I'm okay with. Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.

Comment author: Will_Newsome 15 April 2012 06:21:13AM 11 points [-]

Downvoting people for not understanding that beliefs constrain expectation I'm okay with.

Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when they interpret "belief" in such a naive fashion; giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs "should" be about anticipation, but LessWrong, like everybody else, can't reliably separate descriptive and normative claims, which is exactly why this "beliefs constrain anticipation" thing is misleading. ...There's a neat level-crossing thingy in there.

Downvoting people for not agreeing with EY's moral intuitions seems... mistaken.

EY thinking of meta-ethics as a "solved problem" is one of the most obvious signs that he's very spotty when it comes to philosophy and can't really be trusted to do AI theory.

(Apologies if I come across as curmudgeonly.)

Comment author: wedrifid 15 April 2012 06:29:51AM 2 points [-]

EY thinking of meta-ethics as a "solved problem" is one of the most obvious signs that he's very spotty when it comes to philosophy and can't really be trusted to do AI theory.

He does? I know he doesn't take it as seriously as other knowledge required for AI but I didn't think he actually thought it was a 'solved problem'.

Comment author: Will_Newsome 15 April 2012 09:23:14AM 6 points [-]
Comment author: wedrifid 15 April 2012 09:47:40AM 4 points [-]

From my favorite post and comments section on Less Wrong thus far:

Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics?

Yes, it looks like Eliezer is mistaken there (or speaking in hyperbole).

I agree with:

what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics?

... but would weaken the claim drastically to "Take metaethics, a clearly reducible problem with many technical details to be ironed out". I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label "Deeply Mysterious". This places me approximately equidistant from your respective positions.

Comment author: Will_Newsome 15 April 2012 09:57:40AM 4 points [-]

I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn't characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls "right"; I still don't know what argument, technical or non-technical, could justify such an intuition, and I don't know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it.

(Mildly inebriated, apologies for errors.)

Comment author: gjm 15 April 2012 11:19:56PM 4 points [-]

Are you sure Eliezer does equate reflective consistency with alignment with what-he-calls-"right"? Because my recollection is that he doesn't claim either (1) that a reflectively consistent alien mind need have values at all like what he calls right, or (2) that any individual human being, if made reflectively consistent, would necessarily end up with values much like what he calls right.

(Unless I'm awfully confused, denial of (1) is an important element in his thinking.)

I think he is defining "right" to mean something along the lines of "in line with the CEV of present-day humanity". Maybe that's a sensible way to use the word, maybe not (for what it's worth, I incline towards "not") but it isn't the same thing as identifying "right" with "reflectively consistent", and it doesn't lead to a risk of confusion if the two turn out to have different referents (because they can't).

Comment author: thomblake 16 April 2012 09:26:01PM 1 point [-]

Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls "right"

He most certainly does not.

Comment author: wedrifid 15 April 2012 10:09:27AM 1 point [-]

One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls "right"; I still don't know what argument, technical or non-technical, could justify such an intuition, and I don't know how Eliezer would make tradeoffs if the two did in fact have different referents.

If I understand you correctly then this particular example I don't think I have a problem with, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing.

This strikes me as a significant problem in itself, and there are many more problems like it.

I suspect I agree with some of your objections to various degrees.

Comment author: Eugine_Nier 15 April 2012 08:15:47PM 2 points [-]

To take Eliezer's statement one meta-level down:

what are the odds that someone who still thought ethics was a Deep Mystery could come up with a correct metaethics?

Comment author: XiXiDu 15 April 2012 11:01:56AM 1 point [-]

Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn't work.

What did he mean by "I tried that..."?

Comment author: Will_Newsome 15 April 2012 11:05:29AM 1 point [-]

I'm not at all sure, but I think he means CFAI.

Comment author: Mitchell_Porter 15 April 2012 11:24:48AM 2 points [-]

Possibly he means this.

Comment author: whowhowho 30 January 2013 06:57:46PM 0 points [-]

He may have solved it, but if only he or someone else could say what the solution was.

Comment author: Vaniver 15 April 2012 06:24:05AM 1 point [-]

Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when they interpret "belief" in such a naive fashion

Can you give examples of beliefs that aren't about anticipation?

Comment author: wedrifid 15 April 2012 06:59:54AM *  8 points [-]

Can you give examples of beliefs that aren't about anticipation?

Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don't relate to things that leave historical footprints. If you'll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.

In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of 'belief' apply even though it has incidentally fallen outside the scope of what I am able to influence or verify even in principle.

Comment author: Will_Newsome 15 April 2012 10:47:32AM *  5 points [-]

Beliefs that aren't easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. "communism is good" with "correctly implemented communism is good", or "whites and blacks have equal average IQ" with "whites and blacks would have equal average IQ if they'd had the same cultural privileges/disadvantages". (Apologies for the necessary political examples. Please don't use this as an opportunity to talk about communism or race.)

Many "beliefs" that aren't politically relevant—which excludes most scientific "knowledge" and much knowledge of your self, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like "do I have one hand, two hands, or three hands?" or "how do I get back to my house from my workplace?" aren't generally beliefs so much as knowledge, and in my opinion "knowledge" is not only epistemologically but cognitively-neurologically a more accurate description, though I don't really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn't try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn't meant to describe or solve, even if it's technically possible to do so.

Comment author: Eugine_Nier 15 April 2012 08:04:42PM 1 point [-]

Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. "communism is good" with "correctly implemented communism is good", or "whites and blacks have equal average IQ" with "whites and blacks would have equal average IQ if they'd had the same cultural privileges/disadvantages".

I believe the common term for that mistake is "no true Scotsman".

Comment author: Vaniver 15 April 2012 05:06:17PM *  1 point [-]

Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don't relate to things that leave historical footprints. If you'll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.

What do we lose by saying that doesn't count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don't separate out ones we can measure and ones we can't, but reality does separate those, and our terminology fits reality)? Something else?

Comment author: Eugine_Nier 15 April 2012 08:06:24PM 2 points [-]

So if someone you cared about is leaving your future light cone, you wouldn't care if he gets horribly tortured as soon as he's outside of it?

Comment author: Vaniver 15 April 2012 08:14:24PM 1 point [-]

I'm not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they're out of my future light cone whatever happens to them is a sunk cost; I don't see what I (or they) get from my preferring or believing things about them.

Comment author: Will_Newsome 15 April 2012 10:22:55AM 5 points [-]

The best illustration I've seen thus far is this one.

(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it's usually really bad: proffered explanations are variations on "memetic selection pressures", "confirmation bias", or other fully general "explanations"/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others' beliefs is apparent in posts like "Our Phyg Is Not Exclusive Enough".)

Comment author: Vaniver 15 April 2012 04:26:56PM 3 points [-]

I agree that that post is the sort of thing that I want more of on LW.

It seems to me like Steve_Rayhawk's comment is all about anticipation: I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you're talking about: the position one takes on global warming is based on anticipations one has about politics, not the climate, but it's necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.

I don't think public stated beliefs have to be about anticipation, but I do think that private beliefs have to be (should be?) about anticipation. I also think I'm much more sympathetic to the view that rationalizations can use the "beliefs are anticipation" argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don't think that implies that "beliefs are anticipation" is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word "beliefs".

Comment author: Will_Newsome 16 April 2012 06:14:56AM 2 points [-]

it's necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.

I don't think public stated beliefs have to be about anticipation

You seem to be modeling the AGW disputant's decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes; as opposed to having "actual belief about AGW" as a latent node that isn't introspectively accessible. That's surely the case sometimes, but I don't think that's usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies), I'm not sure it's wise to use "belief" to refer to only the (in many cases unidentifiable) "actual anticipation" part of decision policies, either for others or ourselves, especially when we don't have enough time to be abnormally reflective about the causes and purposes of others'/our "beliefs".

(Areas where such caution isn't as necessary are e.g. decision science modeling of simple rational agents, or large-scale economic models. But if you want to model actual people's policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn't work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)

Another more theoretical reason I encourage caution about the "belief as anticipation" idea is that I don't think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The 'pure' "anticipation" aspect of beliefs only seems relevant in certain cases, e.g. when you don't have "anthropic" uncertainty (e.g. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately people like me always have a substantial amount of "anthropic" uncertainty, and it's mostly only in counterfactual/toy problems where I can use the naive Bayesian approach to epistemology.

(Note that taking the general decision theoretic perspective doesn't lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)

Comment author: Vaniver 16 April 2012 08:52:06AM 1 point [-]

You seem to be modeling the AGW disputant's decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes; as opposed to having "actual belief about AGW" as a latent node that isn't introspectively accessible.

I'm describing it that way but I don't think the introspection is necessary- it's just easier to talk about as if he had full access to his mind. (Private beliefs don't have to be beliefs that the mind's narrator has access to, and oftentimes are kept out of its reach for security purposes!)

But if you want to model actual people's policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn't work or is way too cumbersome. Does your experience differ from mine?

I don't think I've seen any Bayesian modeling of that sort of thing, but I haven't gone looking for it.

Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it's hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn't have a person traverse them unaided.)

If you wanted to code a narrow AI that determined someone's mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
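
(As a rough illustration of that approach: below is a minimal sketch in Python of a tiny hand-built Bayes net with a hidden mood node and two conditionally independent observed features. The feature names and probability tables are invented for the example; a real system would extract the features from video and learn the tables from labeled frames, but the inference step is just this kind of multiplication and renormalization.)

    # Toy Bayes net with naive-Bayes structure: Mood -> smiling, Mood -> furrowed_brow.
    # All variable names and numbers are made up for illustration only.

    PRIOR = {"happy": 0.6, "sad": 0.4}  # P(Mood)

    # P(feature is present | Mood)
    CPT = {
        "smiling":       {"happy": 0.8, "sad": 0.1},
        "furrowed_brow": {"happy": 0.1, "sad": 0.7},
    }

    def posterior_mood(observations):
        """Return P(Mood | observations), e.g. observations = {"smiling": True}."""
        joint = {}
        for mood, prior in PRIOR.items():
            p = prior
            for feature, present in observations.items():
                p_present = CPT[feature][mood]
                p *= p_present if present else (1.0 - p_present)
            joint[mood] = p
        total = sum(joint.values())
        return {mood: p / total for mood, p in joint.items()}

    # Hypothetical frame: the vision pipeline reports a smile and no furrowed brow.
    print(posterior_mood({"smiling": True, "furrowed_brow": False}))
    # -> approximately {'happy': 0.97, 'sad': 0.03}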

Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don't see analysis on the level of Steve_Rayhawk's post coming out of a computer-run Bayes net anytime soon, and I don't think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely; we've got pretty sophisticated dedicated hardware for very similar things.

Another more theoretical reason I encourage caution about the "belief as anticipation" idea is that I don't think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination

Hmm. I'm going to need to sleep on this, but this sort of coordination still smells to me like anticipation.

(A general comment: this conversation has moved me towards thinking that it's useful for the LW norm to be tabooing "belief" and using "anticipation" instead when appropriate, rather than trying to equate the two terms. I don't know if you're advocating for tabooing "belief", though.)

Comment author: Will_Newsome 16 April 2012 06:31:32AM 1 point [-]

(Complement to my other reply: You might not have seen this comment, where I suggest "knowledge" as a better descriptor than "belief" in most mundane settings. (Also I suspect that people's uses of the words "think" versus "believe" are correlated with introspectively distinct kinds of uncertainty.))

Comment author: [deleted] 15 April 2012 06:40:53AM 0 points [-]

Beliefs about primordial cows, etc. Most people's beliefs. He's talking descriptively, not normatively.

Comment author: Vaniver 15 April 2012 05:03:45PM 1 point [-]

Don't my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species?

I think "most people's beliefs" fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the 'belief=anticipation' approach is that it helps resist compartmentalization, which is generally positive.

Comment author: wedrifid 15 April 2012 01:19:12AM 2 points [-]

Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.

Comment author: Will_Newsome 15 April 2012 11:18:28AM *  0 points [-]

(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don't yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))

Comment author: gjm 15 April 2012 11:25:43PM 0 points [-]

BTW, that post alone gets you half the way to my variant of theism.

I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren't already convinced? (It seems like there could be non-question-begging reasons to think that -- e.g., it might turn out that people who've read and understood it quite commonly end up agreeing with you about God.)

Comment author: Will_Newsome 16 April 2012 04:46:11AM -1 points [-]

I think most of the disagreement would be about the use of the "God" label, not about the actual decision theory. Wei Dai asks:

Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well?

This is very close to my variant of theism / objective morality, and gets you to the First and Final Cause of morality—the rest is discerning the attributes of said Cause, which we can do to some extent with algorithmic information theory, specifically the properties of Chaitin's number of wisdom, omega. I think I could argue quite forcefully that my God is the same God as the God of Aquinas and especially Leibniz (who was in his time already groping towards algorithmic information theory himself). Thus far the counterarguments I've seen amount to: "Their 'language' doesn't mean anything; if it does mean something then it doesn't mean what you think it means; if it does mean what you think it means then you're both wrong, traitor." I strongly suspect rationalization due to irrational allergies to the "God" word; most people who think that theism is stupid and worthless have very little understanding of what theology actually is. This is pretty much unrelated to the actual contents of my ideas about ethics and decision theory, it's just a debate about labels.

Anyway what I meant wasn't that reading the post halfway convinces the attentive reader of my variant of theism; I meant it allows the attentive reader to halfway understand why I have the intuitions I do, whether or not the reader agrees with those intuitions.

(Apologies if I sound curmudgeonly, really stressed lately.)

Comment author: Wei_Dai 16 April 2012 09:02:58AM *  5 points [-]

Will, may I suggest that you try to work out the details of your objective morality first and explain it to us before linking it with theism/God? For example, how are we supposed to use Chaitin's Omega for "discerning the attributes of said Cause"? I really have no idea at all what you mean by that, but it seems like it would make for a more interesting discussion than whether your God is the same God as the God of Aquinas and Leibniz, and also less likely to trigger people's "allergies".

Comment author: Will_Newsome 16 April 2012 09:52:15AM -3 points [-]

Actually for the last few days I've been thinking about emailing you, because I've been planning on writing a long exegesis explaining my ideas about decision theory and theology, but you've stated that you don't think it's generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions. Although I've independently noticed various ideas about decision theory (probably due to Steve's influence), I haven't at all contributed any new insights, and the only thing I would accomplish with my apologetics is to convince other people that I'm not obviously crazy. You, Nesov, and Steve have made comments that indicate that you recognize that various of my intuitions might be correct, but of course that in itself isn't anything noteworthy: it doesn't help us build FAI. (Speaking of which, do you have any ideas about a better name than "FAI"? 'Friendliness' implies "friendly to humans", which itself is a value judgment. Justified Artificial Intelligence, maybe? Not Regrettable Artificial Intelligence? I was using "computational axiology" for a while a few years ago, but if there's not a fundamental distinction between epistemology and axiology then that too is sort of misleading.)

Now, I personally think that certain results about decision theory should actually affect what we think of as morally justified, and thus I think my intuitions are actually important for not being damned (whatever that means). But I could easily be wrong about that.

The reason I've made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others' contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of "we only have so much time, we can't check out every crackpot thesis that comes our way": in my experience such arguments are always, without exception, the result of motivated cognition.) I would hold this position about normative epistemology even if my intuitions about decision theory didn't happen to support various theological hypotheses.

Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won't be seen as unfairly proselytizing about my own justifiably-ignorable ideas.

(I'm pretty drunk right now, apologies for errors. I might respond to your comment again when I'm sober.)

Comment author: gjm 16 April 2012 09:35:27AM 0 points [-]

I may well be being obtuse, but it seems to me that there's something very odd about the phrase "theism / objective morality", with its suggestion that basically the two are the same thing.

Have you actually argued forcefully that your god is also Aquinas's and Leibniz's? I ask because first you say you could, which kinda suggests you haven't actually done it so far (at least not in public), but then you start talking about "counterarguments", which kinda suggests that you have and people have responded.

I agree with Wei_Dai that it might be interesting to know more about your version of objective morality and how one goes about discerning the attributes of its alleged cause using algorithmic information theory.

Comment author: Will_Newsome 16 April 2012 10:17:29AM 1 point [-]

I may well be being obtuse, but it seems to me that there's something very odd about the phrase "theism / objective morality", with its suggestion that basically the two are the same thing.

This reflects a confusion I have about how popular philosophical opinion is in favor of moral realism, yet against theism. It seems that getting the correct answer to all possible moral problems would require prodigious intelligence, and so I don't really understand the conjunction of moral realism and atheism. This likely reflects my ignorance of the existing philosophical literature, though to be honest like most LessWrongers I'm a little skeptical of the worth of the average philosopher's opinion, especially about subjects outside of his specialty. Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism. Also, there's the algorithm from music appreciation, which is like "look at what good musicians like", which I think would strongly favor theism. Still, I admit I'm confused.

Have you actually argued forcefully that your god is also Aquinas's and Leibniz's? I ask because first you say you could, which kinda suggests you haven't actually done it so far (at least not in public), but then you start talking about "counterarguments", which kinda suggests that you have and people have responded.

I've kinda argued it on the meta-level, i.e. I've argued about when it is or isn't appropriate to assume that you're actually referring to the same concept versus just engaging in syncretism. But IIRC I haven't yet forcefully argued that my god is Leibniz's God. So, yeah, it's a mixture.

I replied to Wei Dai's comment here.

BTW, realistically, I won't be able to reply to your comment re CEV/rightness, though as a result of your comment I do plan on re-reading the meta-ethics sequence to see if "right" is anywhere (implicitly or explicitly) defined as CEV.

(Inebriated, apologies for errors or omissions.)

Comment author: thomblake 16 April 2012 09:13:54PM 1 point [-]

That is an excellent point.

Comment author: [deleted] 15 April 2012 01:40:57AM 4 points [-]

Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.

After I read it I was like, "Oh, ok. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it's OK that it's not supernatural or 'objective', and we don't have to 'justify' it to an ideal philosophy student of perfect emptiness". Fake utility functions, and Recursive justification stuff helped.

Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and have no reason to suspect it. Most of the material that contradicts it consists of obvious mistakes from just not having read and understood the sequences, not enlightened counter-analysis.

Comment author: buybuydandavis 15 April 2012 04:43:15AM 8 points [-]

Metaethics sequence is a bit of a mess, but the point it made is important, and it doesn't seem like it's just some weird opinion of Eliezer's.

Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact demonstrated it?

He seems to reach a conclusion, but I don't believe he demonstrated it, and I never got the sense that he carried the day in the peanut gallery.

Comment author: Vaniver 15 April 2012 01:42:52AM 11 points [-]

Hm. I think I'll put on my project list "reread the metaethics sequence and create an intelligent reply." If that happens, it'll be at least two months out.

Comment author: [deleted] 15 April 2012 01:45:50AM 0 points [-]

I look forward to that.

Comment author: wedrifid 15 April 2012 06:41:11AM 2 points [-]

Maybe there's something wrong with Eliezer's metaethics, but I haven't seen anyone point it out, and have no reason to suspect it.

The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.

Comment author: [deleted] 15 April 2012 06:42:36AM 2 points [-]

What would you like covered? Or is it just that vague "this isn't enough" feeling?

Comment author: wedrifid 15 April 2012 06:47:18AM 5 points [-]

What would you like covered? Or is it just that vague "this isn't enough" feeling?

I can't fully remember - it's been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A 'preferences are subjectively objective' post. A post that explains more completely what he means by 'should' (he has discussed and argued about this in comments).

Comment author: Eugine_Nier 15 April 2012 03:00:46AM 4 points [-]

Maybe there's something wrong with Eliezer's metaethics

Try actually applying it to some real life situations and you'll quickly discover the problems with it.

Comment author: [deleted] 15 April 2012 03:28:40AM 1 point [-]

such as?

Comment author: Eugine_Nier 15 April 2012 03:42:17AM 11 points [-]

Well, for starters determining whether something is a preference or a bias is rather arbitrary in practice.

Comment author: [deleted] 15 April 2012 04:06:27AM 2 points [-]

I struggled with that myself, but then figured out a rather nice quantitative solution.

Eliezer's stuff doesn't say much about that topic, but that doesn't mean it fails at it.

Comment author: Eugine_Nier 15 April 2012 05:04:03AM 3 points [-]

I don't think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.

Comment author: orthonormal 16 April 2012 04:56:05PM *  1 point [-]

There's a difference between a metaethics and an ethical theory.

The metaethics sequence is supposed to help dissolve the false dichotomy "either there's a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right". It's not immediately supposed to solve "So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?"

For the second question, we'd want to add an Ethics Sequence (in my opinion, Yvain's Consequentialism FAQ lays some good groundwork for one).

Comment author: whowhowho 30 January 2013 07:00:20PM 0 points [-]

Metaethics sequence is a bit of a mess

It's much worse than that. Nobody on LW seems to be able to understand it at all.

Oh, ok. Morality is easy. Just do the right thing. Where 'right' is some incredibly complex set of preferences that are only represented implicitly in physical human brains.

Nah. Subjectivism. Euthyphro.

Comment author: wedrifid 15 April 2012 12:11:55AM 1 point [-]

I can't say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.

Point taken. There is certainly a lack along those lines.

Comment author: David_Gerard 14 April 2012 11:00:51PM *  11 points [-]

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This [people not reading them] is probably one of the reasons why.

Um, after I read the sequences I ploughed through every LW post from the start of LW to late 2010 (when I started reading regularly). What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored. (Though it's probably just me.)

At this point A Group Is Its Own Worst Enemy is apposite. Note the description of the fundamentalist smackdown as a stage communities go through. Note it also usually fails when it turns out the oldtimers have differing and incompatible ideas on what the implicit constitution actually was in the good old days.

tl;dr declarations of fundamentalism heuristically strike me as inherently problematic.

edit: So what about this comment rated a downvote?

edit 2: ah - the link to the Shirky essay appears to be giving the essay in the UK, but Viagra spam in the US o_0. I've put a copy up here.

Comment author: Eugine_Nier 15 April 2012 01:50:50AM 5 points [-]

What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored.

I suspect that's because it's poorly indexed. This should be fixed.

Comment author: [deleted] 15 April 2012 01:57:38AM 4 points [-]

This is very much why I have only read some of it.

If the more recent LW stuff was better indexed, that would be sweet.

Comment author: Eugine_Nier 15 April 2012 03:08:26AM 8 points [-]
Comment author: wedrifid 15 April 2012 06:39:32AM 0 points [-]

Exchanges the look two people give each other when they each hope that the other will do something that they both want done but which neither of them wants to do.

Hey, I think "Dominions" should be played but do want to play it and did purchase the particular object at the end of the link. I don't understand why you linked to it though.

Comment author: Eugine_Nier 15 April 2012 06:46:41AM 2 points [-]

The link text is a quote from the game description.

Comment author: wedrifid 15 April 2012 06:48:46AM 0 points [-]

Ahh, now I see it. Clever description all around!

Comment author: David_Gerard 15 April 2012 08:09:13AM 1 point [-]

Yeah, I didn't read it from the wiki index; I read it by going to the end of the chronological list and working forward.

Comment author: [deleted] 14 April 2012 11:57:53PM *  2 points [-]

Am I in some kind of internet black-hole? That link took me to some viagra spam site.

Comment author: David_Gerard 15 April 2012 12:04:04AM *  4 points [-]

It's a talk by Clay Shirky, called "A Group Is Its Own Worst Enemy".

I get the essay ... looking in Google, it appears someone's done some scurvy DNS tricks with shirky.com and the Google cache is corrupted too. Eegh.

I've put up a copy here and changed the link in my comment.

Comment author: TimS 15 April 2012 12:06:59AM *  0 points [-]

shirky.com/writings/group_enemy.html

???

Comment author: TimS 14 April 2012 11:51:06PM 0 points [-]

I thought it was great. Very good link.

Comment author: David_Gerard 14 April 2012 11:54:59PM *  3 points [-]

It's a revelatory document. I've seen so many online communities, of varying sizes, go through precisely what's described there.

(Mark Dery's Flame Wars (1994) - which I've lost my copy of, annoyingly - has a fair bit of material on similar matters, including one chapter that's a blow-by-blow description of such a crisis on a BBS in the late '80s. This was back when people could still seriously call this stuff "cyberspace." This leads me to suspect the progression is some sort of basic fact of online subcultures. This must have had serious attention from sociologists, considering how rabidly they chase subcultures ...)

LW is an online subcultural group and its problems are those of online subcultural groups; these have been faced by many, many groups in the past, and if you think they're reminiscent of things you've seen happen elsewhere, you're likely right.

Comment author: TimS 15 April 2012 12:01:38AM 3 points [-]

Maybe if you reference Evaporative Cooling, which is the converse of the phenomenon you describe, you'd get a better reception?

Comment author: David_Gerard 15 April 2012 12:06:29AM *  -1 points [-]

I'm thinking it's because someone appears to have corrupted DNS for Shirky's site for US readers ... I've put up a copy myself here.

I'm not sure it's the same thing as evaporative cooling. At this point I want a clueful sociologist on hand.

Comment author: TimS 15 April 2012 12:22:39AM 6 points [-]

Evaporative cooling is change to average belief from old members leaving.

Your article is about change to average belief from new members joining.

Comment author: David_Gerard 15 April 2012 12:25:49AM *  0 points [-]

Sounds plausibly related, and well spotted ... but it's not obvious to me how they're functionally converses in practice to the degree that you could talk about one in place of talking about the other. This is why I want someone on hand who's thought about it harder than I have.

(And, more appositely, the problem here is specifically a complaint about newbies.)

Comment author: TimS 15 April 2012 12:33:12AM 2 points [-]

I wasn't suggesting that one replaced the other, but that one was conceptually useful in thinking about the other.

Comment author: David_Gerard 15 April 2012 12:35:42AM 1 point [-]

Definitely useful, yes. I wonder if anyone's sent Shirky the evaporative cooling essay.

Comment author: Manfred 15 April 2012 10:59:00PM 2 points [-]

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.

<shameless self-promotion> My recent post explains how to get true beliefs in situations like the anthropic trilemma, which post begins with the words "speaking of problems I don't know how to solve." </shameless self-promotion>

However, there is a bit of a remaining problem, since I don't know how to model the wrong way of doing things (naive application of Bayes' rule to questionable interpretations) well enough to tell whether it's fixable or not, so although the problem is solved, it is not dissolved.

Comment author: Grognor 15 April 2012 11:15:52PM *  0 points [-]

I quietly downvoted your post when you made it for its annoying style and because I didn't think it really solved any problems, just asserted that it did.

Comment author: Manfred 16 April 2012 02:31:29AM 4 points [-]

What could I do to improve the style of my writing?

Comment author: Bugmaster 15 April 2012 01:00:56AM 2 points [-]

I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.

How do you measure "progress", exactly ? I'm not sure what the word means in this context.

Comment author: David_Gerard 15 April 2012 08:39:54AM -1 points [-]

Yes, this needs clarification. Is it "I like it better/I don't like it better" or something a third party can at least see?

Comment author: faul_sname 14 April 2012 10:20:27PM 2 points [-]

Where, specifically, do you not see progress? I see much better recognition of, say, regression to the mean here than in the general population, despite it never being covered in the sequences.

Comment author: Grognor 14 April 2012 10:30:24PM 0 points [-]

This is a very interesting question. I cannot cite a lack of something. But maybe what I'm saying here will be obvious-in-retrospect if I put it like this:

This post is terrible. But some of the comments pointing out its mistakes are great. On the other hand, it's easier to point out the mistakes in other people's posts than to be right yourself. Where are the new posts saying thunderously correct things, rather than mediocre posts with great comments pointing out what's wrong with them?

Comment author: David_Gerard 15 April 2012 08:50:04AM *  5 points [-]

That terrible post is hardly an example of a newbie problem - it's clearly a one-off by someone who read one post and isn't interested in anything else about the site, but was sufficiently angry to create a login and post.

That is, it's genuine feedback from the outside world. As such, trying really hard to eliminate this sort of post strikes me as something you should be cautious about.

Also, such posts are rare.

Comment author: Multiheaded 15 April 2012 07:32:01AM *  2 points [-]

LW has made zero progress in general

I'm insulted (not in an emotional way! I just want to state my strong personal objection!). Many of us challenge the notion of "progress" being possible or even desirable on topics like Torture vs Specks. And while I've still much to learn, there are people like Konkvistador, who's IMO quite adept at resisting the lure of naive utilitarianism and can put a "small-c conservative" (meaning not ideologically conservative, but technically so) approach to metaethics to good use.

Comment author: Konkvistador 11 June 2012 06:29:32PM *  0 points [-]

Oh, I would agree progress here is questionable, but I agree with Grognor in the sense that LessWrong, at least in top-level posts, isn't as intellectually productive as it could be. Worse, it seems to be a rather closed thing, unwilling to update on information from outside.

Comment author: Alsadius 15 April 2012 03:46:55PM 0 points [-]

Demanding that people read tomes of text before you're willing to talk to them seems about the easiest way imaginable to silence any possible dissent. Anyone who disagrees with you won't bother to read your holy books, and anyone who hasn't will be peremptorily ignored. You're engaging in a pretty basic logical fallacy in an attempt to preserve rationality. Engage the argument, not the arguer.

Comment author: paper-machine 15 April 2012 04:07:05PM 7 points [-]

Expecting your interlocutors to have a passing familiarity with the subject under discussion is not a logical fallacy.

Comment author: Alsadius 15 April 2012 06:14:33PM 0 points [-]

There are ways to have a passing familiarity with rational debate that don't involve reading a million words of Eliezer Yudkowsky's writings.

Comment author: paper-machine 15 April 2012 06:40:35PM 0 points [-]

That has nothing to do with whether or not what you believe to be a logical fallacy actually is one.

Comment author: Alsadius 15 April 2012 07:31:59PM -3 points [-]

The fallacy I was referring to is basic ad hominem - "You haven't read X, therefore I can safely ignore your argument". It says nothing about the validity of the argument, and everything about an arbitrary set of demands you've made.

I'm all for demanding a high standard of discussion, and for asking people to argue rationally. But I don't care about how someone goes about doing so - a good argument from someone who hasn't read the sequences is still better than a bad argument from someone who has. You're advocating a highly restrictive filter, and it's one that I suspect will not be nearly as effective as you want it to be (other than at creating an echo chamber, of course).

Comment author: paper-machine 15 April 2012 08:15:58PM *  3 points [-]

We're more or less talking past each other at this point. I haven't advocated for a highly restrictive filter, nor have I made any arbitrary demands on my interlocutors -- rather, very specific ones. Very well.

The fallacy I was referring to is basic ad hominem - "You haven't read X, therefore I can safely ignore your argument".

I can't say for certain that that situation doesn't occur, but I haven't seen it in recent memory. The situation I've seen more frequently runs like this:

A: "Here's an argument showing why this post is wrong."

B: "EY covers this objection here in this [linked sequence]."

A: "Oh, I haven't read the sequences."

At this point, I would say B is justified in ignoring A's argument, and doing so doesn't constitute a logical fallacy.