Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I'm wrong.
Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.
Looking for someone similar to myself to be my new best friend:
❖ Close friendship, preferably sharing a house
❖ Rationalist-appreciating epistemology; a love of accuracy and precision to the extent it is useful or important (but not excessively pedantic)
❖ Geeky, curious, and interested in improving the world
❖ Liberal/humanist values, such as a dislike of extreme inequality based on minor or irrelevant differences in starting points, and a like for ideas that may lead to solving such inequality. (OTOH, minor inequalities are certainly necessary and acceptable, and a high floor is clearly better than a low ceiling: an "equality" in which all are impoverished would be very bad)
❖ A love of freedom
❖ Utilitarian/consequentialist-leaning; preferably negative utilitarian
❖ High openness to experience: tolerance of ambiguity, low dogmatism, unconventionality, and again, intellectual curiosity
❖ I'm a nudist and would like someone who can participate at least sometimes
❖ Agnostic, atheist, or at least feeling doubts
Bayesianism says that we should ideally reason in terms of:
Where is it defined this way?
I read the six volumes of Yudkowsky's Rationality A-Z and nodded along, then saw somebody treating "bayesianism" as basically "subjective degrees of belief plus subjective updating"―which struck me as a dumb watering-down. Reading through this list I was uncomfortable with #1 (because good reasoning can be somehow richer than binary, and as Richard says, fuzzy), #4 (we want our subjective credences to behave like real probabilities, but I don't really expect them to), and #5 (again we'd like to, but can at best approximate). Now, at the top it says we should "ideally" reason this way, which accounts for such human failings, but #5 also requires a strong sense of priors and where they come from, and I never got that by reading Rationality A-Z.
Re: #2/#5 I read a nice article at some point that I can no longer find, which introduced a concept whose name I forgot. The concept was a sense of solidity or justification of belief, where if an expert on country X gives you a 50% chance that event E happens in X in the next year, that could (in principle) be a really solid 50%, way better than 50% from a layman. ChatGPT tells me this distinction is "credence"―a terrible name, as "credence" is often used to refer to subjective probability itself (the 50%). Claude OTOH offered "resilience of credence" or "robustness of probability", but I think in the article I read it was just a single word. Anyone remember this? It's weird how little we talk about this, for it is required for a proper bayesian update.
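To make the distinction concrete, here's a toy model (my own, not from the article I'm failing to find): represent each agent's 50% as a Beta prior, where the expert's 50% sits on top of much more data. The same ten new observations then barely move the expert but swing the layman.

```python
# A minimal sketch of "resilience of credence": two agents both start at a
# 50% credence, but the expert's 50% is backed by far more (pseudo-)data,
# so the same new evidence moves it much less. Beta priors are a stand-in
# here, not anything from the article in question.

def posterior_mean(prior_pro, prior_con, pro, con):
    """Mean of the Beta posterior after observing pro/con evidence."""
    return (prior_pro + pro) / (prior_pro + prior_con + pro + con)

layman = (1, 1)      # 50%, but fragile: almost no data behind it
expert = (100, 100)  # 50%, but resilient: equivalent to 200 observations

evidence = (8, 2)    # ten new observations, eight favorable

print(posterior_mean(*layman, *evidence))  # 0.75   — swings hard
print(posterior_mean(*expert, *evidence))  # ~0.514 — barely moves
```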
They distinguish between the syntactic content of a theory (the axioms of the theory) and its semantic content (the models for which those axioms hold).
How confusing. I do see this terminology used in a Wikipedia article that I don't really understand, but I would call sequences like [For, any, two, points, exactly, one, line, lies, on, both] the syntax. The axioms would be the "semantic" content (...or simply "the axioms"), and the models for which the axioms hold the "domain" or "scope".
Facts may or may not be beliefs; there are infinitely many facts that no one knows (in addition to the facts that most people disbelieve, which tend to be the sort that, if I were to mention them, would make some people angry, so I won't).
Hmm: you say "Chalmers denies that he's an epiphenomenalist" and has a "core objection to interactionism". But Chalmers said he endorses "the thesis (Z) that zombies are logically possible", and Bentham's Bulldog says the zombie argument is an argument for non-physicalism, which he implies means "Consciousness is its own separate thing that is not explainable just in terms of the way matter behaves", and which is subdivided into dualism (epiphenomenalism and interactionism) and "niche views". Does Chalmers, then, endorse one of the "niche views", like "idealism and panpsychism"?
Not exactly, as Chalmers said:
substance dualism (in its epiphenomenalist and interactionist forms) and Russellian monism (in its panpsychist and panprotopsychist forms) are the two serious contenders in the metaphysics of consciousness, at least once one has given up on standard physicalism. (I divide my own credence fairly equally between them.)
So I'm pretty confused about what Chalmers' opinion is. Maybe it changed over time? Even so, I think it's pretty bad that when Chalmers tried to correct EY's 2008 post 4 days after it was posted, EY not only failed to understand Chalmers' argument (instead arguing that Chalmers was thinking incorrectly), he didn't even register that Chalmers had said he doesn't believe what EY claimed he believes. So when EY reposted nearly the same article 8 years later, he misrepresented Chalmers' beliefs in the same way again.
In the short term, yes. In the medium term, the entire economy would be transformed by AGI or quasi-AGI, likely increasing broad stock indices (I also expect there will be various other effects I can't predict, maybe including factors that mute stock prices, whether dystopian disaster or non-obvious human herd behavior).
I've seen a lot of finance videos talking about the stock market and macroeconomics/future trends that never once mention AI/AGI. And many who do talk about AI think it's just a bubble and/or that AI ≅ LLMs/DALL·E; to such a person, prices look too high. And as Francois Chollet noted, "LLMs have sucked the oxygen out of the room", which I think could slow progress toward AGI enough that a traditional Gartner hype cycle plays out, leading to temporarily cooler investment/prices... hope so, fingers crossed.
I'm curious, what makes it more of an AI stock than... whatever you're comparing it to?
Well, yeah, it bothers me that the "bayesian" part of rationalism doesn't seem very bayesian―otherwise we'd be having a lot of discussions about where priors come from, how best to do the necessary mental arithmetic, and how to count evidence and deal with ambiguous counts (if my friends Alice and Bob both tell me X, it could be two pieces of evidence for X or just one, depending on what generated the claims; how should I count evidence by default, and are there things I should be doing to find the underlying evidence?).
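For example, here's the kind of default I have in mind, in odds form (my own toy framing, with made-up numbers): each independent report contributes its likelihood ratio, but if Alice and Bob are both repeating one shared source, only one update is warranted.

```python
import math

# A toy model of counting evidence in odds form: each independent piece of
# evidence adds its log-likelihood-ratio to your log-odds. If Alice's and
# Bob's reports trace back to one shared source, count one update, not two.

def update(prior_prob, likelihood_ratios):
    """Return the posterior probability after applying each likelihood ratio."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

prior = 0.25
lr_per_report = 3.0  # assume a report is 3x likelier if X is true (made up)

# Independent witnesses: two separate updates.
print(update(prior, [lr_per_report, lr_per_report]))  # 0.75
# Both repeating one rumor: a single update.
print(update(prior, [lr_per_report]))                 # 0.50
```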
So―vulnerable in the current culture, but rationalists should strive to be the opposite of the "gishy" dark-epistemic people I have in mind. Having many reasons to think X isn't necessarily a sin, but dark-epistemic people gather many reasons and commit many sins, and those sins are a good guide to what not to do.
TBH, a central object of interest to me is people using Dark Epistemics. People with a bad case of DE typically have "gishiness" as a central characteristic, and use all kinds of fallacies, of which motte-and-bailey (hidden or not) is just one. I describe them together just because I haven't seen LW articles on them before. If I were naming major DE syndromes, I might propose the "backstop of conspiracy" (a sense that whatever the evidence at hand doesn't explain is probably still explained by some kind of conspiracy) and projection (a tendency to loudly complain that one's political enemies have whatever negative characteristics you yourself, or your political heroes, are currently exhibiting). Such things seem very effective at protecting a person's beliefs from challenge. I think there's also a social element ("my friends all believe the same thing"), but this is kept well-hidden. EDIT: other telltale signs include refusing to acknowledge that one got anything wrong or made any mistake, no matter how small; refusing to acknowledge that the 'opponent' is right about anything, no matter how minor; an allergy to detail (refusing to look at the details of any subtopic); and shifting the playing field repeatedly (changing the topic when one appears to be losing the argument).
To think about this more clearly, we should split propositions into syntax and semantics (in the usual sense, not in this article's sense).
"Is there any water in the refrigerator" is syntax. Your brain has in mind a meaning (the semantics) that includes free-flowing liquid, and this, not the original statement, is the "real" proposition for the purposes of reasoning, including "bayesian" reasoning. You assign a low probability that such water is in the refrigerator, but you have also temporarily retained a memory of the syntax. Then, when you hear "In the cells of the eggplant", your brain re-evaluates the syntax to produce a different meaning, a different proposition which you evaluate again to get a new probability (this time near 1).
Syntax is important for communicating but I wouldn't count it as part of reasoning.
A good brain can track multiple separate meanings for a statement (ambiguity), but these can and should be reasoned about separately.
Propositions themselves can refer to vague/fuzzy concepts, but cannot be ambiguous in my way of thinking. They can "be" ambiguous when written down, but then are not propositions. For example, "The urn contains only blue eggs or cubes" is ambiguous and decodes to four separate propositions: "The urn contains cubes or blue eggs, but not both", "The urn contains blue cubes or blue eggs, but not both", "Each object in the urn is either a blue egg or a blue cube", "Each object in the urn is either a cube or a blue egg".
With this split in place, propositions can incorporate vagueness and approximation, but "nonsense propositions" are not propositions because they cannot be decoded, and "context" is a temporary state that modulates how syntax is decoded into propositions (context may also color our reasoning, but shouldn't, though it may reasonably color how we encode our reasoning back into words).
That just leaves vagueness vs approximation. I'm not sure it's worth splitting hairs between those two.
P.S. "water" is naturally vague (single-handled), but can be reframed as ambiguous (multi-handled) by enumerating its senses (pure and free-flowing vs water in a mixture vs water vapor vs ice); its vague form is just H₂O. Vagueness always seems to fit more compactly in the mind than ambiguity, unless you have a special handle for the ambiguity (like ICE, which is either frozen water emphasized, or U.S. immigration officers).