When it comes to many issues in theoretical physics I don't have an opinion, and I know that any opinion I formed wouldn't be worth much. I don't want to dive deep enough into the subject. I'm okay with not knowing.
I care a bit more about AI risk, but it's still not a topic I want to be an expert in, and thus I don't try to form big hypotheses about the subject.
This is more important in cases where you care about being right eventually, less important in cases where it pretty much doesn't matter if you're acting based on mistaken beliefs.
I don't think my personal actions have a huge influence on how AI risk gets solved. I think it's fine if other people focus on having a really good understanding and solving it.
Meta: I really like the three top-level comments you made to structure the conversation. Is this something you came up with, or perhaps picked up elsewhere? Does it have a name?
Thanks! I came up with it (during a conversation with the Arbital team, so some credit's due to them), post forthcoming to explain it.
Claim 1: "Be wrong." Articulating your models and implied beliefs about the world is an important step in improving your understanding. The simple act of explicitly constraining your anticipations so that you'll be able to tell if you're wrong will lead to updating your beliefs in response to evidence.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
I'm not sure exactly what you meant, so not ultimately sure whether I disagree, but I at least felt uncomfortable with this claim.
I think it's because:
On the other hand I do support generating explicit hypotheses, and articulating concrete models.
I think this clarifies an important area of disagreement:
I claim that there are lots of areas where people have implicit strong beliefs, and it's important to make those explicit to double-check. Credences are important for any remaining ambiguity, but for cognitive efficiency, you should partition off as much as you can as binary beliefs first, so you can do inference on them - and change your mind when your assumptions turn out to be obviously wrong. This might not be particularly salient to you because you're already very good at this in many domains.
This is what I was trying to do with my series of blog posts on GiveWell, for instance - partition off some parts of my beliefs into a disjunction I was confident enough in to treat as a set of beliefs I could reason about logically. (For instance, Good Ventures either has increasing returns to scale, or diminishing, or constant, at its given endowment.) What remains is substantial uncertainty about which branch of the disjunction we're in, and that should be parsed as a credence - but scenario analysis requires crisp scenarios, or at least crisp axes to simulate variation along.
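To make the shape of that concrete (a minimal sketch - the notation is mine, not anything from the GiveWell posts): once the scenarios are crisp and jointly exhaustive, the logical reasoning happens within each branch, and credence only needs to live on the branch weights:

$$P(\text{outcome}) = \sum_i P(\text{outcome} \mid S_i)\,P(S_i), \qquad S_1 \vee S_2 \vee S_3,$$

where $S_1$, $S_2$, $S_3$ stand for increasing, constant, and diminishing returns to scale at the given endowment. If the disjunction turns out not to be exhaustive, that's an assumption being obviously wrong, and the credences get rebuilt rather than nudged.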
Another way of saying this is that from many epistemic starting points it's not even worth figuring out where you are in credence-space on the uncertain parts, because examining your comparatively certain premises will lead to corrections that fundamentally alter your credence-space.
This was helpful to me, thanks.
I think I'd still endorse a bit more of a push towards thinking in credences (where you're at a threshold of that being a reasonable thing to do), but I'll consider further.
I'm all about epistemology (my blog is at pancrit.org). But in order to engage in or start a conversation, it's important to take one of the things you place credence in and advocate for it. If you're wishy-washy, in many circumstances, people won't actually engage with your hypothesis, so you won't learn anything about it. Take a stand, even if you're on slippery ground.
Per my reply to Owen, I think it's fine to say "X% A, 100-X% not-A" as a way to start a discussion, and even to be fuzzy about the %, but it's then important to be pretty clear about the structure of A and not-A, and to have some clear "A OR not-A" belief, and beliefs about what it looks like if A is true vs false.
I think I've gotten a lot out of trying to make my confusions public, because a lot of the time when I'm confused the source is also confused, and if not then I get my confusion resolved quickly instead of slowly.
I typically hesitate before recommending this to other people, because I don't know what the cost/benefit ratio looks like at different base rates of confusion. If you actually are wrong nine times out of ten, what does being open about this look like?
There is a trade-off because people (at least those who are not spherical cows in a vacuum) tend to stick to their initial positions. So if you picked a model/belief early, you might find abandoning it difficult. Plus there is the whole feedback loop where having an existing belief will affect which evidence you choose to see.
So it's not a case of "the more the better"; declaring a flimsily-supported position has its costs, which have to be taken into account. In practice I tend to delay articulating my models until I need to -- there is no point in formulating a position when it will not affect anything.
I think this objection is comparatively strong for non-operationalized "shoulds," post-dictions, and highly general statements that haven't yet been cashed out to specific anticipations. I think that there's very little harm in making a model more explicit, and a lot of benefit, when you get all the way down to expecting to observe specific things in a hard-to-mistake way.
That's a separate skill that's needed in order to make this advice beneficial, and it's important to keep in mind the overall skill tree, so thanks for bringing up this issue.
One problem with this is that we are not perfectly rational beings.
If you don't think you have enough evidence yet to form an opinion on something, it may be better to hold off and not form an opinion yet. Because once you form an opinion, it will inherently bias the way you perceive new information, and even the most rational of us tend to be biased towards wanting to think that our current opinion is already right.
One corollary is that even when you don't have enough evidence to form an opinion, you can create and start to test a hypothesis for yourself without actually deciding (even in secret) that it is "your opinion". That way you can get the advantages you are talking about without precommitting yourself to something that might bias you.
I feel like this goes hand in hand with epistemic humility (often showcased by Yvain) that I try to emulate. If you expose the inner wiring of your argument, including where you're uncertain, and literally invite people to shoot holes in it, you're going to learn a lot faster.
You could argue that Slate Star Codex is one big exercise in "Here are some thoughts I'm mildly/moderately/completely sure about, tell me what's wrong with them and I'll actually pay attention to your feedback."
"Not if they change their minds when confronted with the evidence."
"Would you do that?"
"Yeah."
This is where I think the chain of logic makes a misstep. It is assumed that you will be able to distinguish evidence which should change your mind from evidence that is not sufficient to change your mind. But doing so is not trivial. Especially in complicated fields, simply being able to understand new evidence enough to update on it is a task that can require significant education.
I would not encourage a layperson to have an opinion on the quantization of gravity, regardless of how willing they might be to update based on new evidence, because they're not going to be able to understand new evidence. And that's assuming they can even understand the issue well enough to have a coherent opinion at all. I do work pretty adjacent to the field of quantized gravity and I barely understand the issue well enough to grasp the different positions. I wouldn't trust myself to meaningfully update based on new papers (beyond how the authors of the papers tell me to update), let alone a layperson.
The capacity to change a wrong belief is more than just the will to do so. And in cases where one cannot reliably interpret data well enough to reject wrong beliefs, it is incredibly important not to hold beliefs. Instead, cultivate good criteria for trusting relevant authority figures or, lacking trusted authority figures, simply acknowledge your ignorance and that any decision you make will be rooted in loose guesswork.
I like this post, because I have always been nervous expressing opinions online, and have kind of had to work up to it. The ideas in here sound like good steps. I'm getting a little bit past the 'be secretly wrong' stage at last now, and one helpful step beyond that can be to get a tumblr or similar throwaway blog and just experiment, starting with topics you feel reasonably comfortable talking about. Then slowly crank up the bold claims quotient :)
Another thing you can do is, once you've secretly articulated your bold claims, and in that space figured out which you believe and to what extent, publicly express your more endorsed, moderated judgment. It's often important to have some workspace where you make bold but dubious claim X, in order to seriously consider it as a hypothesis and explore what the evidence is and what it predicts. But this can help you figure out what qualifiers to add, or at least be able to confidently say something like, "I'm not sure this is right, but if it's wrong, I have no idea how or in what direction."
Claim 2: The social inhibition against making strong claims can interfere with the learning process by making people reluctant to articulate their beliefs, for reasons mostly unrelated to epistemic humility.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
One thing I dislike about public debates (outside of LW) is when I make a statement other people disagree with, and some of them keep bringing it up repeatedly later, in completely unrelated topics. My most extreme experience of this: on some website I once criticized a politician for doing something controversial, and one of his fans kept stalking me for a few months, and whenever I posted an unrelated article, he always posted a comment saying "hey, you are the guy who said that the politician X did a bad thing, but you are completely wrong!". (And I was like: "dude, how exactly is this relevant to my Gimp tutorial? can't we just keep the discussion on-topic?". Unfortunately, there was no way for me to remove comments, and the admins refused to classify this as spam.)
Similarly, since I've posted a few LW links on Facebook, some people start to prefix their disagreements with me by: "If you call yourself a rationalist, you shouldn't believe X." (They refuse my offers to make bets about X, though. They just believe that it is inappropriate for a rational person to disagree with them. And no, I don't call myself a rationalist in such situations.)
So it feels like if you try to keep your identity small, you better register a separate nick for each article you post. Which of course would be too inconvenient. (Or maybe a nick per topic. Or rather, per opinion.)
Contributions to Less Wrong generally either add value or subtract value. It's hard for a user to predict in advance which will be the case. I think Benquo is correct that the average Less Wrong user is excessively worried about subtracting value.
However, the law of equal and opposite advice applies. Just because the average Less Wrong user needs to hear Benquo's message doesn't mean you do.
The strategy of making confident though erroneous claims in order to get others to explain things to you is one that demonstrates almost no concern for the possibility that you are subtracting value.
I like reading objections that are friendly, coherent, non-obvious, plausible, interesting. But sometimes I get the impression you are making an objection just for the sake of making an objection, without thinking very hard about whether it's useful or accurate, and that gets tiresome. Especially since you never make toplevel posts yourself.
Adding or subtracting value is at least a three-argument function (what, to whom, in which context) :-)
demonstrates almost no concern for the possibility that you are subtracting value
So don't dance around so much and say outright whether you think I do. I promise not to faint.
toplevel posts
You seem to consider posts much more valuable than comments. Why so? I usually find commentary more interesting and useful than the post itself.
Well, this is how I react to your debating style:
If I want to write a meaningful comment explaining something that feels important to me, I usually get a feeling "and now Lumifer will immediately reply with something predictably useless, and that will kill the whole debate". About half of the time I later feel I was right about this. This discourages me from writing meaningful comments.
On the other hand, when writing Facebook-style comments (more or less like this one, but shorter), I don't give a fuck about your reaction, because throwing inane stuff at each other is how the game is played.
To me it feels like you are defecting in some kind of Prisoner's Dilemma we have here, and then the only rational move for me is to also defect, or avoid the game.
It's not just the kind of the comments you write, but also that you reply to pretty much everything, whether you have something useful to contribute or not. It doesn't seem like you plan to change this, so... uhm, I actually can't even downvote you anymore... sigh.
You seem to consider posts much more valuable than comments. Why so?
Because posts containing as little value as your comments would immediately get downvoted.
Okay, that was a bit too aggressive. But your debating style is about attacking other people's ideas, and exposing as little as possible of your own (obviously, so you can't be attacked back in the same style). This is only possible because other people keep bringing new ideas. If everyone switched to your style, it would be like Facebook. So other people are providing value, and you are giving them some negative reinforcement in return. Maybe you believe that this kind of predictable negative reinforcement is actually super valuable. I disagree.
and now Lumifer will immediately reply with something predictably useless, and that will kill the whole debate
Why would it kill the debate? My shadow isn't so large and so dense that nothing will grow in it. I'll repeat what I said before: I'm ignorable. If you think my comment is useless, just skip past it.
Or do you think that there's something poisonous/infectious in my comments so that they create a zone of creep around them?
you are defecting in some kind of Prisoner's Dilemma we have here
What kind of a Prisoner's Dilemma do we have here? I've noticed that expression tends to be heavily overused to mean THOU SHALL ALWAYS COOPERATE OTHERWISE YOU ARE BAD and I'm not a very cooperative creature. Is there a Prisoner's Dilemma, technically speaking?
a bit too aggressive
Fails a reality check, too :-P
obviously, so you can't be attacked back in the same style
You are making the assumption that I'm mostly interested in collecting Internet Debate Points. That is not the case -- if I were, I wouldn't hang around at LW which isn't a terribly convenient place for such activities. And anyway, a bit upthread I'm being chided for "[t]he strategy of making confident though erroneous claims in order to get others to explain things to you". So what is it, am I making too few claims or too many?
So other people are providing value, and you are giving them some negative reinforcement in return.
That's a general-purpose argument against any criticism, isn't it?
LW's problem isn't only that good posts became scarce, it is also that nature abhors a vacuum, and so shit started to flow in to fill that empty space. If you want any content, that's easy, but you'd better set up some filters before the place gets filled with "Video using humor to spread rationality" and "This one equation may be the root of intelligence".
Is this supposed to be yet another "confident though erroneous claim in order to get others to explain things to you"?
You see, I am unable to say when you are playing games and when you are not. I just have a rule of thumb of avoiding debates with people whom I suspect of playing games. I simply don't enjoy that kind of game.
The problem here is my lack of trust that you are debating in good faith (as opposed to trolling for a reaction). Maybe I completely misjudge you. Maybe you are doing something that contributes to such misjudgement.
I would argue this claim understates the problem. In my experience, claims can be made at one of two levels, either strongly held or weakly held. One cannot perfectly phrase a claim such that nobody confuses one for the other, and other people tend to ignore weakly held claims.
It would be really useful to be able to say "I am moderately confident that the Purple party's policy will lead to Bad Things" and get feedback that raises or lowers my confidence in that claim. Unfortunately, most responses will assume that I was more confident than I actually was. (This issue might be summarized as "Screwtape needs to spend less time on Facebook.")
Something that might help lower this inhibition is an ability to retract or update on claims usefully. Technically, a blog lets you make a statement and then later edit the post or post an update, but you can't rely on people rereading the post or following your blog to see updates. This might be solved socially (a norm of checking someone still believes a claim before responding to that claim or propagating what they said?) or with technology (RSS feeds and edit buttons on posts), but it will probably need to be solved with a combination of the two.
More time on non-Facebook venues that you already perceive to be better for this seems like a good partial solution.
Other things you can do, instead of just focusing on the claim and probability, include listing what you see as the major considerations each way, and their evidential strength. Sometimes reporting the structure of your beliefs can be additionally helpful.
Specifying what you see as the major considerations will help you and others avoid double-counting evidence or giving arguments that aren't actually relevant (because you already believe them or don't accept their premises).
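As a sketch of why double-counting matters (this is standard Bayesian bookkeeping, not anything specific to this thread): if you treat the listed considerations $E_1, \dots, E_n$ as roughly independent pieces of evidence bearing on a claim $H$, the update multiplies their likelihood ratios together, so listing the same consideration under two names multiplies its factor in twice:

$$\frac{P(H \mid E_1, \dots, E_n)}{P(\lnot H \mid E_1, \dots, E_n)} = \frac{P(H)}{P(\lnot H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)},$$

assuming the $E_i$ are conditionally independent given $H$ and given $\lnot H$.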
I think the point in time when an individual stops keeping a belief secret and is willing to declare it in public is an interesting one. There are two main variables: how confident is the individual in their belief being true? and how much does the individual care about the social and cultural stigma against being wrong? These thresholds would be different for different individuals, and would govern the point at which they are willing to make a public claim.
Related to this, I see two major reasons a person would form a secret belief: they are unsure if they are correct, or they are afraid of social or cultural backlash of being wrong (and certainly it could be a combination of both).
I wonder, does advocating for 'secret beliefs' create a system in which public beliefs are put on a higher pedestal, thus increasing the social and cultural barriers to making a public claim? You advocate for pursuing truth through public debate, yet by identifying the importance of secret beliefs you help to affirm the social and cultural barriers that impede public debate in the first place. It's a bit of a conundrum.
In any case, it seems that publicly presenting a belief leads to better outcomes than harbouring it as a secret, in the sense that in doing so the individual will maximize the amount of information they receive as feedback, thus helping to either confirm or refute their stated claim. So it follows that a culture that embraces public debate will be superior, as a lower barrier to the acceptability of 'wrongness' will allow individuals to most efficiently discover logical beliefs.
Obviously eliminating the 'secret space' of the mind is impossible - it is the dialogue of the self! So it seems that in an ideal case, once an ontology has been built in the mind, an individual can join the public discourse and have the tools to understand what constitutes truth, using collective knowledge to discover it more efficiently than they could on their own.
afraid of social or cultural backlash of being wrong
Fear of backlash is not a function of being wrong, but of being seen to be wrong.
"Do you think other people think less of those who do that?"
That's not the problem though, right? People don't engage in fact-checking, evidence presenting or steelmanning. People engage in tribal signalling, strawmanning and outgroup shaming.
That's what to fear. I do endorse expressing your opinion secretly, but at what point does it become "being wrong" if you avoid confrontation?
Claim 3: "In secret." Because of claims 1 and 2, if you notice yourself unable or unwilling to articulate and test your beliefs in front of others, you can get much of the benefit for comparatively little of the difficulty, by doing so in secret.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
I have a "secret blog" that I share with only a handful of friends. It is extremely psychologically useful to have a place where I can post stupid, half-formed thoughts. It gets the thoughts out of my head, allows me to either advance them or abandon them. Doing that "in public" would merely waste everyone's time.
I see this as being related to the creative phenomenon of "churning" and the practice of writing "morning pages". You do benefit from having a space to just vomit thoughts without restraint or filters so you can pin them down and move on. You won't usually want those thoughts to be publicly attached to your name.
That makes a lot of sense - I have a lot of unfinished drafts, but I think my "morning pages" workspace is just my brain. I often find myself rehearsing little speeches about various topics, refining them, saying stuff in my head and then assessing whether it's right. This isn't something I talk about a lot, or something that would have occurred to me as relevant when advising someone who's trying to get better at writing, but I effectively get huge amounts of practice "writing" that people who don't do this don't get.
I suspect this is a big source of variation from person to person in writing/communication ability, that "morning pages" is sort of a hack for. If you don't automatically verbalize a bunch of your thoughts before they're ready to share, making time to do so (e.g. "morning pages") can help close the practice gap.
(A friend has reported an analogous "superpower" where they tend to automatically imagine future scenarios such as how an interpersonal interaction might go, which lets them rapidly iterate on plans using "inner simulator" before taking the comparatively expensive step of trying one out in practice. (ETA: The commonality here is that we both have mental processes that seem to automatically, effortlessly, as background processes, precalculate a lot of stuff in ways that just doesn't happen for other people unless they make a deliberate effort. This leads to other people seeming inexplicably bad at a thing, when the truth is we just unknowingly put a lot more work into it.))
My recommendation is that once you get past the idle verbalization (if you're me) or morning pages stage, making clearer more specific claims is something that you can also do in private until you're ready to do it in public.
One advantage of physical capture versus using my brain is that my brain will entertain the same thoughts for years, until I try to write them out and finally see how absurd and incoherent they are. Or if they aren't incoherent, then once I write them down, they actually advance to the next mental "paragraph".
It's similar to how I might have some grand idea for an app that I'm sure will change the world, and within five minutes of opening a blank code editor I realize that my idea is trivial, or AI-hard, or that I don't really care about it as much as I imagined, or dumb in some other way. I never really find that out until I start coding, though.
I also do the 'saying stuff in my head' thing a lot and it is definitely a useful form of writing practice - my main one, in fact, as I'm relatively new to actually writing things down frequently.
I find it's mainly good for practice at the sentence/paragraph level, though, at least at my level of discipline. I tend to end up with fragments that sound good locally, but drift around pretty aimlessly at the global level. Trying to write something down makes me notice that. It's helped me realise that I have a lot to work on when it comes to focus and structure.
"(A friend has reported an analogous "superpower" where they tend to automatically imagine future scenarios such as how an interpersonal interaction might go, which lets them rapidly iterate on plans using "inner simulator" before taking the comparatively expensive step of trying one out in practice.)"
Hmm...I tend to do this too, and assumed it was a common cognitive ability. Getting accurate sims of the future is nontrivial and hard though...a problem that gets worse if one starts extending the simulation temporally. I agree this is a good idea generally speaking...but there is a tradeoff here...every sim needs data to constrain future versions and for model refinement, so I suspect there would be situational constraints on how many such iterations one can run internally. After all, paralysis by analysis is a thing.
A lot of my otherwise very smart and thoughtful friends seem to have a mental block around thinking on certain topics, because they're the sort of topics Important People have Important Opinions around. There seem to be two very different reasons for this sort of block:
Be wrong
If you don't have an opinion, you can hold onto the fantasy that someday, once you figure the thing out, you'll end up having a right opinion. But if you put yourself out there with an opinion that's unmistakably your own, you don't have that excuse anymore.
This is related to the desire to pass tests. The smart kids go through school and are taught - explicitly or tacitly - that as long as they get good grades they're doing OK, and if they try at all they can get good grades. So when they bump up against a problem that might actually be hard, there's a strong impulse to look away, to redirect to something else. So they do.
You have to understand that this system is not real, it's just a game. In real life you have to be straight-up wrong sometimes. So you may as well get it over with.
If you expect to be wrong when you guess, then you're already wrong, and paying the price for it. As Eugene Gendlin said:
What you would be mistaken about, you're already mistaken about. Owning up to it doesn't make you any more mistaken. Not being open about it doesn't make it go away.
"You're already "wrong" in the sense that your anticipations aren't perfectly aligned with reality. You just haven't put yourself in a situation where you've openly tried to guess the teacher's password. But if you want more power over the world, you need to focus your uncertainty - and this only reliably makes you righter if you repeatedly test your beliefs. Which means sometimes being wrong, and noticing. (And then, of course, changing your mind.)
Being wrong is how you learn - by testing hypotheses.
In secret
Getting used to being wrong - forming the boldest hypotheses your current beliefs can truly justify so that you can correct your model based on the data - is painful and I don't have a good solution to getting over it except to tough it out. But there's a part of the problem we can separate out, which is - the pain of being wrong publicly.
When I attended a Toastmasters club, one of the things I liked a lot about giving speeches there was that the stakes were low in terms of the content. If I were giving a presentation at work, I had to worry about my generic presentation skills, but also whether the way I was presenting it was a good match for my audience, and also whether the idea I was pitching was a good strategic move for the company or my career, and also whether the information I was presenting was accurate. At Toastmasters, all the content-related stakes were gone. No one with the power to promote or fire me was present. Everyone was on my side, and the group was all about helping each other get better. So all I had to think about was the form of my speech.
Once I'd learned some general presentation skills at Toastmasters, it became easier to give talks where I did care about the content and there were real-world consequences to the quality of the talk. I'd gotten practice on the form of public speaking separately - so now I could relax about that, and just focus on getting the content right.
Similarly, expressing opinions publicly can be stressful because of the work of generating likely hypotheses, and revealing to yourself that you are farther behind in understanding things than you thought - but also because of the perceived social consequences of sounding stupid. You can at least isolate the last factor, by starting out thinking things through in secret. This works by separating epistemic uncertainty from social confidence. (This is closely related to the dichotomy between social and objective respect.)
Of course, as soon as you can stand to do this in public, that's better - you'll learn faster, you'll get help. But if you're not there yet, this is a step along the way. If the choice is between having private opinions and having none, have private opinions. (Also related: If we can't lie to others, we will lie to ourselves.)
Read and discuss a book on a topic you want to have opinions about, with one trusted friend. Start a secret blog - or just take notes. Practice having opinions at all, that you can be wrong about, before you worry about being accountable for your opinions. One step at a time.
Before you're publicly right, consider being secretly wrong. Better to be secretly wrong, than secretly not even wrong.
(Cross-posted at my personal blog.)