Every few months, I post a summary of my beliefs to my blog. This has several advantages:

  1. It helps to clarify where I'm "coming from" in general.
  2. It clears up reader confusion arising from the fact that my beliefs change.
  3. It's really fun to look back on past posts and assess how my beliefs have changed, and why.
  4. It makes my positions easier to criticize, because they are clearly stated and organized in one place.
  5. It's an opportunity for people to very quickly "get to know me."

To those who are willing: I invite you to post your own web of beliefs. I offer my own, below, as an example (previously posted here). Because my world is philosophy, I frame my web of beliefs in those terms, but others need not do the same:

My Web of Beliefs (Feb. 2011)

Philosophy

Philosophy is not a matter of opinion. As in science, some positions are much better supported by reasons than others are. I do philosophy as a form of inquiry, continuous with science.

But I don’t have patience for the pace of mainstream philosophy. Philosophical questions need answers, and quickly.

Scientists know how to move on when a problem is solved, but philosophers generally don’t. Scientists don’t still debate the fact of evolution or the germ theory of disease just because alternatives (1) are logically possible, (2) appeal to many people’s intuitions, (3) are “supported” by convoluted metaphysical arguments, or (4) fit our use of language better. But philosophers still argue about Cartesian dualism and theism and contra-causal free will as if these weren’t settled questions.

How many times must the universe beat us over the head with evidence before we will listen? Relinquish your dogmas; be as light as a feather in the winds of evidence.

Epistemology

My epistemology is one part cognitive science, one part probability theory.

We encounter reality and form beliefs about it by way of our brains. So the study of how our brains do that is central to epistemology. (Quine would be pleased.) In apparent ignorance of cognitive science and experimental psychology, most philosophers make heavy use of intuition. Many others have failed to heed the lessons of history about how badly traditional philosophical methods fare compared to scientific methods. I have little patience for this kind of philosophy, and see myself as practicing a kind of ruthlessly reductionistic naturalistic philosophy.

I do not care whether certain beliefs qualify as “knowledge” or as being “rational” according to varying definitions of those terms. Instead, I try to think quantitatively about beliefs. How strongly should I believe P? How should I adjust my probability for P in the face of new evidence X? There is a single, exactly correct answer to each such question, and it is provided by Bayes’ Theorem. We may never know the correct answer, but we can plug estimated numbers into the equation and update our beliefs accordingly. This may seem too subjective, but remember that you are always giving subjective, uncertain probabilities. Whenever you use words like “likely” and “probable”, you are doing math. So stop pretending you aren’t doing math, and do the math correctly, according to the proven theorem of how probable P given X is – even if we are always burdened by uncertainty.1
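To make this concrete, here is a minimal sketch in Python of the kind of update I have in mind; the prior and likelihood numbers are invented purely for illustration:

```python
# A minimal Bayes' Theorem update: compute P(P | X) from a prior on P and
# the likelihood of the evidence X under P and under not-P.
def bayes_update(prior_p, p_x_given_p, p_x_given_not_p):
    numerator = p_x_given_p * prior_p
    marginal_x = numerator + p_x_given_not_p * (1.0 - prior_p)
    return numerator / marginal_x

# Example: I start out 30% confident in P, and the evidence X is three
# times as likely if P is true as if it is false.
posterior = bayes_update(prior_p=0.3, p_x_given_p=0.6, p_x_given_not_p=0.2)
print(posterior)  # ~0.56: this evidence should roughly double my confidence
```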

Language

Though I was recently sympathetic to the Austin / Searle / Grice / Avramides family of approaches to language, I now see that no simple theory of meaning can capture every use (and hypothetical use) of human languages. Besides, categorizing every way in which humans use speech and writing to have an effect on themselves and others is a job for scientists, not armchair philosophers.

However, it is useful to develop an account of language that captures most of our discourse systematically – specifically for use in formal argument and artificial intelligence. To this end, I think something like the Devitt / Sterelny account may be the most useful.

A huge percentage of Anglophone philosophy is still done in service of conceptual analysis, which I see as a mostly misguided attempt to build a Super Dictionary full of definitions for common terms that (1) are self-consistent, (2) fit the facts if they are meant to, and (3) agree with our use of and intuitions about each term. But I don’t think we should protect our naive use of words too much – rather, we should use our words to carve reality at its joints, because that allows us to communicate more effectively. And effective communication is the point of language, no? If your argument doesn’t help us solve problems when you play Taboo with your key terms and replace them with their substantive meaning, then what is the point of the argument if not to build a Super Dictionary?

A Super Dictionary would be nice, but humanity has more urgent and important problems that require a great many philosophical problems to be solved. Conceptual analysis is something of a lost purpose.

Normativity

The only source of normativity I know how to justify is the hypothetical imperative: “If you desire that P, then you ought to do Y in order to realize P.” This reduces (roughly) to the prediction: “If you do Y, you are likely to objectively satisfy your desire that P.”2

For me, then, the normativity of epistemology is: “If you want to have more true beliefs and fewer false beliefs, engage in belief-forming practices X, Y, and Z.”

The normativity of logic is: “If you want to be speaking the same language as everyone else, don’t say things like ‘The ball is all green and all blue at the same time in the same way.’”

Ethics, if there is anything worth calling by that name (not that it matters much; see the language section), must also be a system of hypothetical imperatives of some kind. Alonzo Fyfe and I are explaining our version of this here.

Focus

Recently, the focus of my research efforts has turned to the normative (not technical) problems of how to design the motivational system of a self-improving superintelligent machine. My work on this will eventually be gathered here. A bibliography on the subject is here.

Comments (30)

I think this paragraph reflects a very serious confusion that is seen on LW regularly:

How strongly should I believe P? How should I adjust my probability for P in the face of new evidence X? There is a single, exactly correct answer to each such question, and it is provided by Bayes’ Theorem. We may never know the correct answer, but we can plug estimated numbers into the equation and update our beliefs accordingly.

Most of your beliefs are not produced by some process that you can break into its component parts and analyze mathematically so as to assign a numerical probability. Rather, they are produced by opaque black-box circuits in your brain, about whose internal functioning you know little or nothing. Often these circuits function very well and let you form very reliable judgments, but without the ability to reverse-engineer and analyze them in detail, which you presently don't have, you cannot know what would be the correct probability (by any definition) assigned to their outputs, except for the vague feeling of certainty that they typically produce along with their results.

If instead of relying on your brain's internal specialized black-box circuits you use some formal calculation procedure to produce probability estimates, then yes, these numbers can make sense. However, the important points are that: (1) the numbers produced this way do not pertain to the outputs of your brain's opaque circuits, but only to the output of the formal procedure itself, and (2) these opaque circuits, as little as we know about how they actually work, very often produce much more reliable judgments than any formal models we have. Assigning probability numbers produced by explicit formal procedures to beliefs produced by opaque procedures in one's head is a total fallacy, and discarding the latter in favor of the former makes it impossible to grapple with the real world at all.

I meant to capture some of what you've said here in the footnote included above, but let me see if I can get clear on the rest of what you're saying...

I agree that beliefs are formed by a process that is currently almost entirely opaque to us. But I'm not sure what you mean when you say that "the numbers produced this way do not pertain to the outputs of your brain's opaque circuits, but only to the output of the formal procedure itself." Well of course, but the point of what I'm saying is that I can try to revise my belief strength to correspond to the outputs of the formal process. Or, less mysteriously, I can make choices on the basis of personal utility estimates and the probabilistic outputs of the formal epistemological process. (That is, I can make some decisions on the basis of a formal decision procedure.)

You write that "Assigning probability numbers produced by explicit formal procedures to beliefs produced by opaque procedures in one's head is a total fallacy..." But again, I'm not trying to say that I take the output of a formal procedure and then "assign" that value to my beliefs. Rather, I try to adjust my beliefs to the output of the formal procedure.

Again, I'm not trying to say that I use Bayes' Theorem when guessing which way Starbucks is on the basis of three people's conflicting testimony. But Bayes' Theorem can be useful in a great many applications where one has time to use it.

But before I continue, let me check... perhaps I've misunderstood you?

It seems like I misunderstood your claim as somewhat stronger than what you actually meant. (Perhaps partly because I missed your footnotes -- you might consider making them more conspicuous.)

Still, even now that I (hopefully) understand your position better, I disagree with it. The overwhelming part of our beliefs is based on opaque processes in our heads, and even in cases where we have workable formal models, the ultimate justification for why the model is a reliably accurate description of reality is typically (and arguably always) based on an opaque intuitive judgment. This is why, despite the mathematical elegance of a Bayesian approach, epistemology remains messy and difficult in practice.

Now, you say:

Whenever you use words like “likely” and “probable”, you are doing math. So stop pretending you aren’t doing math, and do the math correctly, according to the proven theorem of how probable P given X is – even if we are always burdened by uncertainty.

But in reality, it isn't really "you" who's doing the math -- it's some black-box module in your brain, so that you have access only to the end-product of this procedure. Typically you have no way at all to "do the math correctly," because the best available formal procedure is likely to be altogether inferior to the ill-understood and opaque but effective mechanisms in your head, and its results will buy you absolutely nothing.

To take a mundane but instructive example, your brain constantly produces beliefs based on its modules for physics calculations, whose internals are completely opaque to you, but whose results are nevertheless highly accurate on average, or otherwise you'd soon injure or kill yourself. (Sometimes of course they are inaccurate and people injure or kill themselves.) In the overwhelming majority of cases, trying to supplement the results of these opaque calculations with some formal procedure is useless, since the relevant physics and physiology are far too complex. Most beliefs of any consequence are analogous to these, and even those that involve a significant role of formal models must in turn involve beliefs about the connection between the models and reality, themselves a product of opaque intuitions.

With this situation in mind, I believe that reducing epistemology to Bayesianism in the present situation is at best like reducing chemistry to physics: doable in principle, but altogether impractical.

I'm not sure how much we disagree. Obviously it all comes back to opaque brain processes in the end, and thus epistemology remains messy. I don't think anything I said in my original post denies this.

As for a black-box module in my brain doing math, yes, that's part of what I call "me." What I'm doing there is responding to a common objection to Bayesianism - that it's all "subjective." Well yes, it requires subjective probability assessments. So does every method of epistemology. But at least with Bayesian methods you can mathematically model your uncertainty. That's all I was trying to say, there, and I find it hard to believe that you disagree with that point. As far as I can tell, you're extrapolating what I said far beyond what I intended to communicate with it.

As for reducing epistemology to Bayesianism, my footnote said it was impractical, and I also said it's incomplete without cognitive science, which addresses the fact that, for example, our belief-forming processes remain mostly opaque to this day.

Fair enough. We don't seem to disagree much then, if at all, when it comes to the correctness of what you wrote.

However, in that case, I would still object to your summary in that, given the realistic limitations of our current position, we have to use all sorts of messy and questionable procedures to force our opaque and unreliable brains to yield workable and useful knowledge. With this in mind, saying that epistemology is reducible to cognitive science and Bayesian probability, however true in principle, is definitely not true in any practically useful sense. (The situation is actually much worse than in the analogous example of our practical inability to reduce chemistry to physics, since the insight necessary to perform the complete and correct reduction of epistemology, if it ever comes, will have to be somehow obtained using the tools of our present messy and unreliable epistemology.)

Therefore, what is missing from your summary is the statement of the messy and unreliable parts currently incorporated into your epistemology, which is a supremely relevant question precisely because they are so difficult to analyze and describe accurately, since their imperfections will interfere with the very process of their analysis. Another important consideration is that a bold reductionist position may lead one to dismiss too quickly various ideas that can offer a lot of useful insight in this present imperfect position, despite their metaphysical and other baggage.

The list of "what is missing from [my] summary" is indeed long! Hence, a "summary."

I recently had an insight about this while taking a shower or something like that: the opaque circuits can get quite good at identifying the saliencies in a situation. For example, oftentimes the key to a solution just pops into my awareness. Other times, the 3 or so keys or clues I need to arrive at a solution just make themselves known to me through some process opaque to me.

These "saliency identification routines" are so reliable that in domains I am expert in, I can even arrive at a high degree of confidence that I have identified all the important considerations on which a decision turns without my having searched deliberately through even a small fraction of the factors and combinations of factors that impinge on the decision.

The observation I just made takes some of the sting out of Vladimir M's pessimistic observations (most of the brain's being opaque to introspection, the opaque parts' not outputting numerical probabilities) because although a typical decision you or I face is impinged on by millions of factors, it usually turns on only 2 or 3.

Of course, you still have to train the opaque circuits (and ensure feedback from reality during training).

I'd like to see a post on this, especially if you have any insights or knowledge on how we can make those black-box circuits work better, or how to best combine formal probability calculations with those black-box circuits.

Well, that would be a very ambitious idea for an article! One angle I think might be worth exploring would be a classification of problems with regards to how the outputs of the black-box circuits (i.e. our intuitions) perform compared to the formal models we have. Clearly, among the problems we face in practice, we can point out great extremes in all four directions: problems can be trivial for both intuition and formal models, or altogether intractable, or easily solvable with formal models but awfully counterintuitive (e.g. the Monty Hall problem), or easily handled by intuition but outside of the reach of our present formal models (e.g. many AI-complete problems). I think a systematic classification along these lines might open the way for some general insight about how to best reconcile, and perhaps even combine productively, our intuitions with the best available formal calculations. But this is just a half-baked idea I have, which may or may not evolve into more systematic thoughts worth posting.

I'll post this here and put a copy on my desktop so I remember to check it in a few months. My beliefs tend to change very often and very quickly.

Beliefs that I think are most likely to change:

  • Existence is very tied up with relative causal significance. Classical subjective anticipation makes little sense. Quantum immortality should be replaced with causal immortality.
  • Reflective consistency is slippery due to impermanence of agency and may be significantly less compelling than I had previously thought.
  • Something like human 'morality' (possibly more relevant to actions pre-Singularity than Eliezer's conception of 'good' as humanity-CEV) might be important for reasons having to do with acausal control and, to a lesser extent, smarter-than-human intelligences in the multiverse looking at humanity and other alien civilizations as evidence of what patterns of action could be recognized as morally justified.
  • Building a seed AI that doesn't converge on a design something like the one outlined in Creating Friendly AI may be impossible due to convergent decision theoretic reflective self re-engineering (and the grounding problem). Of course, for all purposes this intuition doesn't matter, as we still have to prove something like Friendliness.
  • Solving Friendliness (minus the AGI part (which would be rather integrated so this is kind of vague)) is somewhat easier than cleanly engineered seed AI, independent of the previous bullet point.
  • Death the way it is normally conceptualized is a confusion. The Buddhist conception of rebirth is more accurate. (And it doesn't mean the transparently stupid thing that most Westerners imagine.)
  • Most of Less Wrong's intuitions about how the world works are based on an advanced form of naive realism that just doesn't work. "It all adds up to normality" is either tautologous or just plain wrong. What you thought of as normality normally isn't.
  • Ensemble universe theories are almost certainly correct.
  • Suicide with the intent of ending subjective experience is downright impossible. (The idea of a 'self' that is suffering is also a confusion, but anyway...) The only form of liberation from suffering is Enlightenment in the Buddhist sense.
  • I am the main character to the extent that 'I' 'am'. At the very least I should act as if I am for instrumentally rational reasons.
  • People that are good at 'doing things' will never have the epistemic rationality necessary to build FAI or seed AI, due to limits of human psychology. Whoever does build AI will probably have some form of schizoid personality disorder or, ironically somewhat oppositely, autistic spectrum disorder.
  • My intuition is totally awesome and can be used reliably to see important themes in scientific, philosophical, and spiritual fields.

Could you explain/link to an explanation of the Buddhist bits?

I am the main character to the extent that 'I' 'am'. At the very least I should act as if I am for instrumentally rational reasons.

I'd be interested to hear about this one in more detail. There are a lot of possible interpretations of it, but most of them seem egoist in a way that doesn't seem to mesh well with the spirit of your other comments.


Death the way it is normally conceptualized is a confusion. The Buddhist conception of rebirth is more accurate. (And it doesn't mean the transparently stupid thing that most Westerners imagine.)

Seconded for Buddhist clarifications, particularly the one that addresses the quote above.

Nice.

Causal immortality seems more and more true to me over time (I would be surprised if any of the major SIAI donors, including older ones, die before the Singularity) but could definitely use some explanation. Though I'm not sure of the consequences of encouraging people to maximize their causal significance. Almost definitely not good.

"But philosophers still argue about ... theism ... as if these weren’t settled questions."

If this is really what you think, then why do you continue with your blog?

For outreach, mostly. And for occasional curiosity. But really, philosophy of religion is thunderously boring to me now. I have to force myself to churn out the occasional post on religion to avoid losing 95% of my audience, but if I stop caring about audience size, you'd probably see me write one post on religion every 6 months. :)

Already, I usually only post on religion once a week, whereas in the past it was usually 5 times a week.

Well.

Why hold it off.

There, I did it.

thunderously boring

I'd like to note that this phrase cracks me up horribly.

My Web of Opinions, FWIW:

Ontology and Epistemology: Philosophy is very much a matter of opinion. But that doesn't mean that one philosophical position is as good as another. Some are better than others - but not because they are better supported by reasons. "Support" has nothing whatever to do with the goodness of a philosophical position. Instead, positions should be judged by their merits as vantage points. Philosophers shouldn't seek positions based on truth; they should seek positions based on fruitfulness.

Language and Logic: Bertrand Russell and W.V.O. Quine have much to answer for. Michael Dummett, Saul Kripke, and David Lewis have repaired some of the damage. Eliezer's naive realism and over-enthusiasm for reductionism set my teeth on edge. We need a more pluralist account of theories, models, and science. Type theory and constructivism are the wave of the future in foundational mathematics.

Ethics: Ethics answers the question "What actions deserve approval or disapproval?" (Not the question "What ought I to do?") The question is answered by Nash (1953) in the two-person case. Actions that do not at least conform to the (unique, correct) bargain deserve disapproval and punishment. Notice that the question presumes rational agents with perfect information - that is why what you ought (ethically) to do may sometimes differ from what you ought (practically) to do. Future utilities should be discounted at a rate of at least 1% per year (50% per lifetime).
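A quick sanity check on that parenthetical, assuming a lifetime of roughly 70 years (the 70-year figure is my assumption, used only for illustration):

```python
# 1% per year, compounded over an assumed ~70-year lifetime
annual_factor = 0.99
lifetime_years = 70
print(annual_factor ** lifetime_years)  # ~0.495, i.e. about 50% per lifetime
```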

Rationality and decision theory: I suspect that TDT/UDT are going in the wrong direction, but perhaps I just don't understand them yet. The most fruitful issues for research here seem to be in modeling agents as coalitions of subagents, creating models in which it can be rational to change one's own utility function, biology-inspired modeling using Price equations and Hamilton's rule, and the question of coalition formation and structure in multi-agent Nash bargaining. Oh yeah: And rational creation/destruction of one agent by another.

Futurism: The biggest existential risk facing mankind is uFAI. Trying to build a FOOMing AI which has the fixed goal of advancing extrapolated human values seems incredibly dangerous to me. Instead, I think we should try to avoid a singleton and do all we can to prevent the creation of AIs with long-term goals. But at this stage, that is just a guess. There is a 50%+ probability, though, that there is no big AI risk at all, and that super-intelligences will not be all that much more powerful than smart people and organizations.

Note: posting this feels very self-indulgent. Though I do see value in setting it down as a milestone for later comparison.

Perplexed,

I'm curious to know what you mean by saying that philosophy is a matter of opinion. From your paragraph, it appears you would say that even the most highly confirmed and productive theories of physics and chemistry are also matters of "opinion." For me, that's an odd way to use the term "opinion", but unfortunately, I don't own a trademark on the term! :)

Have I understood you correctly?

Have I understood you correctly?

I don't think you have completely misunderstood. It is certainly possible to think of General Relativity or Molecular Orbital Theory as simply a very "fruitful vantage point". But that is not really what I intended. After all, the final arbiter of the goodness of these scientific theories is experiment.

The same cannot be said about philosophical positions in metaphysics, ontology, metaethics, etc. There is no experiment which can confirm or refute the statements of Quine, or Kripke, or Dummett, or Chalmers, or Dennett. IMHO, it is useless to judge based on who has exhibited the best arguments. Instead, I believe, you need to try to understand their systems, to see the world through their eyes, and then to judge whether doing so makes things seem clearer.

Most philosophy is 'a matter of opinion' simply because there are no experiments to appeal to, and no proofs to analyze. But it is not completely meaningless, either, even though you cannot 'pay the rent in expected experience'. Because you can sometimes 'pay the rent' in insight gained.

I guess, then, the reason I have difficulty understanding your position is that I don't see a sharp distinction between science and philosophy, for standard Quinean reasons. The kind of philosophy that interests me is very much dependent on experiment. For example, my own meta-ethical views consist of a list of factual propositions that are amenable to experiment. (I've started listing them here.)

But of course, a great deal of philosophy is purely analytic, like mathematics. Surely the theorems of various logics are not mere opinion?

As for those (admittedly numerous) synthetic claims for which decent evidence is unavailable, I'm not much interested in them, either. Perhaps this is the subset of philosophical claims you consider to be "opinion"? Even then, I think the word "opinion" is misleading. This class of claims contains many that are either confused and incoherent, or else coherent and factual but probably unknowable.

If I told you, I might get stuck in it, like a fly.

I haven't. Posting one's web of beliefs is not an immunization against viewquakes, I don't think.

Yes, I guess if I'm not worrying about it for individual beliefs there's no reason why I should be worrying about it for the set of all of them.

Nice post!

By the way, "superdictionary" made me think of OmegaWiki, a sort-of-fork from Wiktionary to build a multilingual translating dictionary. (No, it's not about Omega ;-)

"The normativity of logic is: “If you want to be speaking the same language as everyone else, don’t say things like ‘The ball is all green and all blue at the same time in the same way.’”"

You surely don't mean this: everyone else is logical, why not me?

For a start, is everyone else logical? And even if they are, is that the best justification we have for logic?

I don't understand your question.

Logic begins with a chosen set of axioms, and they're not the only axioms you could choose as the basis of a formal system. If you reject the axioms, I can't condemn you for failing a categorical imperative. Instead, I'll just note that you're not talking the same language as the rest of us are.


I can't imagine how anything short enough to be bearable by others could possibly qualify as a summary of my beliefs. There are too many things to have beliefs about. After reading Luke's list, I don't feel that I know what Luke thinks about anything much, other than a tiny set of highly-meta things (which are, yes, interesting and important).

I like the idea of publicly summarizing one's beliefs, but if I were going to do that I'd need something much more comprehensive before I'd feel at all like calling it "my web of beliefs" or "a summary of my beliefs".