All of Stephen Bennett's Comments + Replies

IIRC, dentists have some of the highest rates of depression and suicide of any profession.

I'm pretty sure this is incorrect when compared to healthcare workers more broadly, although the best I can come up with is this meta-analysis: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0226361&type=printable

Which has this to say:

As there are few exploitable studies about dental surgeons, nurses and other health-care workers, we didn’t treat them in that meta-analysis

Congratulations! I wish we could have collaborated while I was in school, but I don't think we were researching at the same time. I haven't read your actual papers, so feel free to answer "you should check out the paper" to my comments.

For chapter 4: From the high level summary here it sounds like you're offloading the task of aggregation to the forecasters themselves. It's odd to me that you're describing this as arbitrage. Also, I have frequently seen the scoring rule be used with some intermediary function to determine monetary rewards. For example, whe... (read more)

7Eric Neyman
Thanks! Here are some brief responses: Here's what I say about this anticipated objection in the thesis:

This would indeed be arbitrage-free, but likely not proper: it wouldn't necessarily incentivize each expert to report their true belief; instead, an expert's optimal report is going to be some sort of function of the expert's belief about the joint probability distribution over the experts' beliefs. (I'm not sure how much this matters in practice -- I defer to you on that.)

In Chapter 4, we are thinking of experts as having immutable beliefs, rather than beliefs that change upon hearing other experts' beliefs. Is this a silly model? If you want, you can think of these beliefs as each expert's belief after talking to the other experts a bunch. In theory(?) the experts' beliefs should converge (though I'm not actually clear what happens if the experts are computationally bounded); but in practice, experts often don't converge (see e.g. the FRI adversarial collaboration on AI risk).

Yup -- in my summary I described "robust aggregation" as "finding an aggregation strategy that works as well as possible in the worst case over a broad class of possible information structures." In fact, you can't do anything interesting in the worst case over all information structures. The assumption I make in the chapter in order to get interesting results is, roughly, that experts' information is substitutable rather than complementary (on average over the information structure). The sort of scenario you describe in your example is the type of example where Alice and Bob's information might be complementary.

Soft downvoted for encouraging self-talk that I think will be harmful for most of the people here. Some people might be able to jest at themselves well, but I suspect most will have their self image slightly negatively affected by thinking of themselves as an idiot.

Most of the individual things you recommend considering are indeed worth considering.

Interesting work, congrats on achieving human-ish performance!


I expect your model would look relatively better under other proper scoring rules. For example, logarithmic scoring would severely punish the human crowd for giving <1% probabilities to events that do sometimes happen. Under the Brier score, the worst possible score is either a 1 or a 2 depending on how it's formulated (from skimming your paper, it looks like 1 to me), so such near-certain misses are only mildly penalized; under a logarithmic score, they would be punished much more harshly. I don't think this is something you should lead with, since ... (read more)
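For concreteness, here's a minimal sketch (toy numbers, not from the paper) of how the two scoring rules treat a confident-but-wrong forecast:

```python
import math

def brier(p, outcome):
    # Squared error between forecast probability p and the 0/1 outcome.
    # Bounded: the worst possible score is 1 under this one-term formulation.
    return (p - outcome) ** 2

def log_score(p, outcome):
    # Negative log-likelihood of the realized outcome.
    # Unbounded: blows up as the forecast approaches certainty on the wrong side.
    return -math.log(p if outcome == 1 else 1 - p)

# A 1% forecast on an event that then happens:
print(brier(0.01, 1))      # 0.9801 -- close to the Brier worst case of 1
print(log_score(0.01, 1))  # ~4.61  -- vs ~0.69 for an uninformative 50% forecast
```

So a crowd that occasionally puts near-certain probabilities on the wrong side loses only slightly more than an uninformative forecaster under Brier, but enormously more under the log score, which is the asymmetry being pointed at here.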

4Fred Zhang
We will be updating the paper with log scores. I think human forecasters collaborating with their AI counterparts (in an assistance / debate setup) is a super interesting future direction. I imagine the strongest possible system we can build today will be of this sort. This related work explored this direction with some positive results.

Definitely both. But more coming from the fact that the models don't like to say extreme values (like, <5%), even when the evidence suggests so. This doesn't necessarily hurt calibration, though, since calibration only cares about the error within each bin of the predicted probabilities.

Yes, so we didn't do all of the multiple choice questions, only those that are already split into binary questions by the platforms. For example, if you query the Metaculus API, some multiple choice questions are broken down into binary subquestions (each with their own community predictions etc). Our dataset is not dominated by such multiple-choice-turned-binary questions.

No, and we didn't try very hard to fully improve this. Similarly, if you ask the model the same binary question, but in the reverse way, the answers in general do not sum to 1. I think future systems should try to overcome this issue by enforcing the constraints in some way.

By accuracy, we mean 0-1 error: you round the probabilistic forecast to whichever of 0 or 1 is nearest, and measure the 0-1 loss. This means that as long as you are directionally correct, you will have good accuracy. (This is not a standard metric, but we choose to report it mostly to compare with prior works.) So this kind of hedging behavior doesn't hurt accuracy, in general.

This is a good point! We'll add a bit more on how to interpret these qualitative examples. To be fair, these are hand-picked and I would caution against drawing strong conclusions from them.

1-indexed.
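A minimal sketch (toy numbers, not from the paper) of the 0-1 accuracy metric described above -- round each forecast to the nearest of 0 or 1, then count matches:

```python
def zero_one_accuracy(forecasts, outcomes):
    # Round each probability to the nearest of 0 or 1, then count exact matches.
    # (Avoid p == 0.5 exactly: Python's round() ties to even.)
    return sum(round(p) == y for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Directionally correct but heavily hedged forecasts still score well:
print(zero_one_accuracy([0.55, 0.05, 0.6], [1, 0, 0]))  # 2/3
```

This shows why a model that hedges toward 0.5 isn't hurt on accuracy, even though a proper scoring rule would distinguish a 0.55 from a 0.95.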

The second thing that I find surprising is that a lie detector based on ambiguous elicitation questions works. Again, this is not something I would have predicted before doing the experiments, but it doesn’t seem outrageous, either.

I think we can broadly put our ambiguous questions into 4 categories (although it would be easy to find more questions from more categories):

 

Somewhat interestingly, humans who answer nonsensical questions (rather than skipping them) generally do worse at tasks: pdf. There are some other citations in there of nonsensical... (read more)

See this comment.

You edited your parent comment significantly in such a way that my response no longer makes sense. In particular, you had said that Elizabeth summarizing this comment thread as someone else being misleading was itself misleading.

In my opinion, editing your own content in this way without indicating that this is what you have done is dishonest and a breach of internet etiquette. If you wanted to do this in a more appropriate way, you might say something like "Whoops, I meant X. I'll edit the parent comment to say so." and then edit the p... (read more)

3Natália
Hi, that was an oversight, I've edited it now.

I took Tristan to be using "sustainability" in the sense of "lessened environmental impact", not "requiring little willpower".

4Tristan Williams
While I think the environmental sustainability angle is also an active thing to think about here (because beef potentially involves less suffering for the animals, but relatively more harm to the environment), I did actually intend sustainability in the spirit of "able to stick with it for a long period of time" or something like that. Probably could have been clearer. 

The section "Frame control" does not link to the conversation you had with wilkox, but I believe you intended for there to be one (you encourage readers to read the exchange). The link is here: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=uh8w6JeLAfuZF2sxQ

3Elizabeth
Okay, it looks like the problem mostly occurred when I copy/pasted from google docs to Wordpress, which lost a lot of the links (but not all? maybe the problem was that it lost some images, and when I copied them over I lost the links?). Lightcone just launched a resync-to-RSS feature that has hopefully worked and updated this post. If it hasn't I am currently too battered and broken by Wordpress's shitty editor that apparently can't gracefully handle posts of this size to do more tonight.
3Elizabeth
oh god damn it. lesswrong doesn't manually pick up edits in response to updates on my own blog, so I copy pasted, and it looks like all the image links were lost. This isn't feasible to fix so for now I've put up a warning and bugged the lesswrong team about it Thanks for catching this, would have been a huge issue. 

In the comment thread you linked, Elizabeth stated outright what she found misleading: https://forum.effectivealtruism.org/posts/3Lv4NyFm2aohRKJCH/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=mYwzeJijWdzZw2aAg

Getting the paper author on EAF did seem like an unreasonable stroke of good luck.

I wrote out my full thoughts here, before I saw your response, but the above captures a lot of it. The data in the paper is very different from what you described. I think it was especially misleading to give all the caveats you did without

... (read more)
2Natália
See this comment.

I don't think that's the central question here.

So far as I can tell, the central question Elizabeth has been trying to answer is "Do the people who convert to veganism because they get involved in EA have systemic health problems?" Those health problems might be easily solvable with supplementation (Great!), systemic to having a fully vegan diet but only requires some modest amount of animal product, or something more complicated. She has several self-reported people coming to her saying they tried veganism, had health problems, and stopped. So, "At wha... (read more)

4Matthew Barnett
As I argued in my original comment, self-reported data is unreliable for answering this question. I simply do not trust people's ability to attribute the causal impact of diets on their health. Separately, I think people frequently misreport their motives. Even if vegan diets caused no health effects, a substantial fraction of people could still report desisting from veganism for health reasons. I'm honestly not sure why you think that self-reported data is more reliable than proper scientific studies like RCTs when trying to shed light on this question. The RCTs should be better able to tell us the actual health effects of adopting veganism, which is key to understanding how many people would be forced to abandon the diet for health reasons.

I'm aware that people have written scientific papers that include the word vegan in the text, including the people at Cochrane. I'm confused why you thought that would be helpful. Does a study that relates health outcomes in vegans with vegan desistance exist, such that we can actually answer the question "At what rate do vegans desist for health reasons?"

2Matthew Barnett
I don't think that's the central question here. We were mostly talking about whether vegan diets are healthy. I argued that self-reported data is not reliable for answering this question. The self-reported data might provide reliable evidence regarding people's motives for abandoning vegan diets, but it doesn't reliably inform us whether vegan diets are healthy. Analogously, a survey of healing crystal buyers doesn't reliably tell us whether healing crystals improve health. Even if such a survey is useful for explaining motives, it's clearly less valuable than an RCT when it comes to the important question of whether they actually work.

Does such a study exist?

From what I remember of Elizabeth's posts on the subject, her opinion is that the literature surrounding this topic is abysmal. To resolve the question of why some veg*ns desist, we would need a study that records objective clinical outcomes of health and veg*n/non-veg*n diet compliance. What I recall from Elizabeth's posts was that no study even approaches this bar, and so she used other less reliable metrics.

-3Natália
[deleted]

I took your original comment to be saying "self-report is of limited value", so I'm surprised that you're confused by Elizabeth's response. In your second comment, you seem to be treating your initial comment to have said something closer to "self-report is so low value that it should not materially alter your beliefs." Those seem like very different statements to me.

1Matthew Barnett
In the original comment I said "I'm highly skeptical that you can get reliable information about the causal impacts of diets by asking people about their self-reported health after trying the diets". It's subjective whether that means I'm saying self-report data has "limited value" vs. "very little value" but I assumed Elizabeth had interpreted me as saying the latter, and that's what I meant.

Thanks!

If you're taking UI recommendations, I'd have been more decisive with my change if it said it was a one-time change.

Could I get rid of the (Previously GWS) in my username? I changed my name from GWS to this, and planned on changing it to just Stephen Bennett after a while, then as far as I can tell you removed the ability to edit your own username.

3Raemon
We let users edit their name once but not multiple times to avoid users doing shenanigany impersonation things. I’ll change it

Obviously one trial isn’t conclusive, but I’m giving up on the water pick. Next step: test flossing.

Did you follow through on the flossing experiment?

8Elizabeth
yeah, that side maybe looked slightly better but not to the point where the dentist spontaneously noticed a difference. And I've had a (different) dentist spontaneously notice when I started using oral antibiotics, even though that can't be constrained to half the mouth, so I think that's a thing they're capable of.

The coin does not have a fixed probability on each flip.

Boy howdy was I having trouble with spoiler text on markdown.

I didn't provide quotes from my text when the mismatch was obvious enough from any read/skim of the text.

It was not obvious to me, although that's largely because after reading what you've written I had difficulty understanding what your position was at all precisely. It also definitely wasn't obvious to jimrandomh, who wrote that Elizabeth's summary of your position is accurate. It might be obvious to you, but as written this is a factual statement about the world that is demonstrably false.

My proposal is not suppressing public discussion of plant-ba

... (read more)

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

Oh whoops, I misunderstood the UI. I saw your name under the confusion tag and thought it was a positive vote. I didn't realize it listed emote-downvotes in red.

4Neel Nanda
Oh huh, I also misunderstood that, I thought red meant OP or something

For the record, I also misunderstood the UI in the same way. Perhaps it should be made clearer somehow.

Since I'm getting a fair number of confused reactions, I'll add some probably-needed context:

Some of Elizabeth's frustration with the EA Vegan discourse seems to stem from general commenting norms of lesswrong (and, relatedly, the EA forums). Specifically, the frustrations remind me of those of Duncan Sabien, who left lesswrong in part because he believed there was an asymmetry between commenters and posters wherein the commenters were allowed to take pot-shots at the main post, misrepresent the main post, and put forth claims they don't really endorse tha... (read more)

6Elizabeth
TBC I voted against confusion because I found your comment easy to understand. But seems like lots of people didn't, and I'm glad they had an easy way to express that. I have some hope for emojis doing exactly what you describe here, cheaply and without much inflammation. Elsewhere in the comments I've been able to mark specific claims as locally invalid, or ask for examples, or express appreciation, without it being a Whole Thing, and that's been great.

I encourage you to respond to any comment of mine that you believe...

  • ...actively suppresses inconvenient questions with "fuck you, the truth is important."
  • ...ignores the arguments you made with "bro read the article."
  • ...leaves you in a fuzzy daze of maybe-disagreement and general malaise with "?????"
  • ...is hostile without indicating a concrete disagreement of substance with "that's a lot of hot air"
  • ...has citations that are of even possibly dubious quality with "legit?". And if you dig through one of my citations and think either I am misleading by in
... (read more)

This seems like a fairly hot take on a throwaway tangent in the parent post, so I'm very confused why you posted it. My current top contender is that it was a joke I didn't get, but I'm very low confidence in that.

2Ansel
The parent post amusingly equated "accurately communicating your epistemic status", which is the value I selected in the poll, with eating babies. So I adopted that euphemism (dysphemism?) in my tongue-in-cheek response. Also, this: https://en.wikipedia.org/wiki/A_Modest_Proposal

I'm not Steven, but I know a handful of people who have no care for the truth and will say whatever they think will make them look good in the short term or give them immediate pleasure. They lie a lot. Some of them are sufficiently sophisticated to try to only tell plausible lies. For them debates are games wherein the goal is to appear victorious, preferably while defending the stance that is high status. When interacting with them, I know ahead of time to disbelieve nearly everything they say. I also know that I should only engage with them in debates/d... (read more)

7Sabiola
They're bullshitters.  "Both in lying and in telling the truth people are guided by their beliefs concerning the way things are. These guide them as they endeavour either to describe the world correctly or to describe it deceitfully. For this reason, telling lies does not tend to unfit a person for telling the truth in the same way that bullshitting tends to. ...The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are." —Harry G. Frankfurt, On Bullshit

That's the wrong search query, you're asking google to find pages about the Ukraine War that also include mentions of the term "rationalist"; you're not asking google to search for rationalist discussions of the Ukraine War. Instead I'd do something like this.

2Conor
Yes, but instead of searching one domain (lesswrong), it would search ~100+ curated domains. Google currently limits the domains to ten.

In the paper, they claim to be responding to people such as Charles Moser and Scott Alexander, and as I said Charles Moser and Scott Alexander are talking about AGP in trans women.

From my understanding, they're talking about AGP in natal males of any kind as compared to AGP in cis women. Scott and others found evidence of "yes, cis women have some AGP", whereas they find that the degree to which cis women have AGP is much less than those for whom AGP is a major component of their sexual life. I don't think it's crazy to then go on to say "no, really, wh... (read more)

4tailcalled
Yes, that's my point. Charles Moser and Scott Alexander made a claim about autogynephilia in trans women and cis women, Michael Bailey decided that he could just ignore the "trans women" part and replace it with "highly active members of online erotic AGP communities". They found similar, arguably lower rates of AGP in ordinary male samples compared to ordinary female samples. It is when they filter for highly active members of online erotic AGP communities that they find the highest degrees of AGP. I did actually make the argument:

Even if the specific point of AGP in cis women doesn't move you much (I don't think it should[2]), this dysfunctional discourse might make you tempted to infer that Blanchardians do a lot of other shenanigans to make their theories look better than they really are. And I think you would be right to make that inference, because I have a lot of points of critique on my gender blog that go unaddressed.[3] But my critiques aren't the core point I'm raising here, rather I'm pointing out that people have good reasons to be exhausted with autogynephilia the

... (read more)
5tailcalled
In the paper, they claim to be responding to people such as Charles Moser and Scott Alexander, and as I said Charles Moser and Scott Alexander are talking about AGP in trans women. Furthermore, elsewhere on social media they claim that their paper is a rebuttal of these papers they are responding to. I don't understand why you consider my description in the post dishonest when it seems to me that it is basically the same as how you describe it: As I described in the post, I think it's dishonest because of the greater context of the debate.

If you're coming from the Rest Of The Internet, you may be surprised by hard far LessWrong takes this.

 

I believe this should say "surprised by how far"

Counterpoint while working within the metaphor: early speedruns usually look like exceptional runs of the game played casually, with a few impressive/technical/insane moves thrown in.

Would you actually prefer that all the jesters left (except the last one)?

I believe you when you say that interacting with the jesters is annoying in the moment. I trust that you do indeed anticipate having to drudge through many misconceptions of your writing when your mouse hovers over "publish". If you'll indulge an extended metaphor: it seems as though you're expressing displeasure at engaging in sorties to keep the farmland from burning even though it's the fortress you actually care about. People would question the legitimacy of the fortress if the s... (read more)

4DirectedEvolution
You're pointing out that part of what makes criticism so frustrating for Duncan is that, as a popular writer, he gets so many nitpicks that it becomes overwhelming. A less-popular writer might welcome any attention at all, even brief and critical, as long as it wasn't overtly hostile. Fear of posting and receiving no response might inhibit new writers finding their voice as much as the fear of overwhelming nitpickiness might inhibit more established writers.

It's interesting to consider the dynamic this would create if it's a pattern. Newer writers eventually find the confidence to post anyway. They get a bit of attention, probably fairly negative, because they're new and figuring out how to express themselves. But they appreciate it and hopefully keep writing. If they get too successful, though, they get overwhelmed by the nitpicks, and eventually leave. This could also happen if they find they have less of a sustained capacity for dealing with 1-2 comments that are consistently negative and nitpicky if there's little else that's positive or more engaged. This would tend to generate an evaporative cooling dynamic for writers, of the kind Duncan describes.

I increasingly think that established writers who are bothered by negative comments should use the tools at their disposal to insulate themselves - returning the burden onto the nitpicker, downvotes, or user-specific bans from commenting on their posts. That seems to mostly solve the problem of overwhelm that Duncan describes, provided he's right that the problem is a gang of Socrati rather than a problem among the entire user base, without impacting the experience of anybody else, including the Socrati. After all, if the choice is between not posting or banning Socrati from commenting, the Socrati face the same set of options either way.

Is amelia currently able to respond to your comment, or is she unable to respond to comments on her post because she posted this? If so, that seems like a rather large flaw in the system. I realize you're working on a solution tailored to this, but perhaps a less clunky system could be used, such as a 7/week limit?

3Raemon
I think they can actually make one more comment (there's a separate rate limit for comments and posts in the current system). The effort involved in making it 7/week is roughly the same as the effort to just allow unlimited commenting on your own posts, so I'll just try and fix that soon.

Yeah I agree, I think your post points at something distinct from Eternal September, but what Raemon was talking about seemed very similar.

5Raemon
Yes, Eternal September is basically the name of the problem I outline; the thing that made it seem relevant to this post is that the solution is sort of the same as the one for dealing with Lizardmen.
5CronoDAS
It's the same problem that "this is not a feminism 101 space" was a complaint about.
8Duncan Sabien (Deactivated)
Feels very closely related in my mind, as well. The reason I didn't run with Eternal September as title or as major example is that Eternal September is about cultures being changed by an influx of unacculturated people, whereas I suspect the problem I'm describing is present in every culture regardless of immigration/emigration. Like, I think that even quiet towns of 5000 people out in the middle of nowhere have their 200 lizardmen, and avoid the problems gestured at above primarily via everybody knowing who they are and discounting accordingly. (A similarity with Eternal September is "that kind of high-context solution failing at scale.") EDIT: Oh, I'm dumb; I thought this was generically responding to the essay and I missed that it's responding to Ray; the problem Ray describes is VERY Eternal September.

One of my friends studied humor for a bit during his PhD, and my goodness is it difficult to get the average person to be funny with just "hey, tell me a joke" type prompts. Even when you hold their hand, and give them lots of potentially humorous pieces to work with (a-la cards against humanity), they really struggle. So, I'm honestly reasonably impressed with GPT-4's ability to occasionally tell a funny joke.

By the way, I disagree with the assumption that Aumann's theorem vindicates any such "standpoint epistemology".

That also stood out to me as a bit of a leap. It seems to me that for Aumann's theorem to apply to standpoint epistemology, everyone would have to share all their experiences and believe everyone else about their own experiences.

2tailcalled
My take on Aumann's Agreement Theorem is Don't Get Distracted By The Boilerplate. Yes, it's usually phrased with certain technical conditions that might not fully apply in the real world, but the basic implications of the theorem constantly appear in everyday interactions, for spiritually similar reasons to the technical conditions required by the theorem, even if those technical conditions don't literally hold.

Fair enough. If I were to pay attention to them, that is probably what I would do. Fortunately I do not have to pay attention to them, so I can take their mockery at face value and condemn it for being mockery.

Yes, I even find most criticism useful.

I have never clicked on a link to sneerclub and then been glad I did so, so I'll pass.

9the gears to ascension
Hmm. I'd like to change that via attempting to make tools or skills for extracting value from potentially high-conflict contexts like that subreddit; I am consistently glad to have read it, though unhappy in the moment until I can get their attitude out of my head. It does often take me a while meditating to integrate what I think of their takes. Eg, I think their critique here is expressing worry about worker treatment. They are consistently negative in ways that mean you can only rely on them to give direction (edit: as in, relative direction of their critique on a given post compared to their other critiques), not magnitude. But those directions are very often insight that was missing from the LW perspective, and I think that that's the case here.

Sneerclub is interested in sneering at me, it is not interested in bettering me. Why should I interpret their mockery as legitimate criticism?

3Vladimir_Nesov
Because it's the useful way of interpreting it? Other ways are less useful. If you succeed, there is something to learn. If you don't try, there is nothing to learn. The problem is not the impossibility of this step, but that it is not a good use of effort.
1TAG
Do you accept any criticism as legitimate?

If I'm reading this right, you object to Jensen's initial comment that uses "cringy", and your objection is largely due to the fact that "cringy" is a property mostly of the observer (as opposed to the thing itself).

Do you think the same is true of "mind-killy" from logan's comment?

This seems hypocritical to me. I think that your real objection is something else, possibly that you just really don't like "cringy" for some other reason (perhaps you cringe at its usage?)

(I wrote a bunch more words but deleted them - let's see how nondefensive {offensiv... (read more)

1Jensen
No, I would agree with Logan that calling something "cringy" is mindkilly, since it instills a strong sense of defensiveness in the accused. I'm not even sure that the cringiness I felt was rooted in the fact the post seemed fake, but it was real nonetheless. For this particular post, it seems that the average lesswronger doesn't think it seems cringy but I doubt I am alone in thinking this way. 

I used to have a lot more fun writing, enjoying the vividness of language, and while I thank LessWrong for improving many aspects of my thinking, it has also stripped away almost all my verve for language. I think that's coming from the defensiveness-nuance complex I'm describing, and since the internet is what it is, I guess I'd like to start by changing myself. But my own self-advice may not be right for others.

I have about a 2:1 ratio of unsubmitted to submitted comments. The most common source of deletion is no longer really caring about what I have... (read more)

3DirectedEvolution
An open-faced shit sandwich. That's some standup comedy gold :D At least filter them! You're trying to draw a signal from yourself and the world, then condition and analyze it. Good critics help you troubleshoot the circuit, or test the limits of the device you've built. A successful critic understands who the author was trying to help, and bases their criticism on helping the author achieve that goal. I like the framework of "true, helpful, and kind." Usually, I've seen it as "strive for at least two." Another way to look at it is "be at least OK at all three."

I'm not sure what happened here, but if I had to guess (in order of likelihood, not all are mutually exclusive):

  • Bad joke (accident)

  • Got flustered, said the first thing that popped into her head

  • Bad joke (on purpose)

  • Flirting

  • Was actually watching porn, and thought that coming clean would in some way be better, or that saying the truth but in a weird way would mask the truth

  • Wanted to get fired but didn't want to quit, somehow this was more socially acceptable than quitting

Ben5046

I think an important theory is missing here:

  • She was telling him "None of your business!" in an intentionally rude way. He walked in, she obviously snapped her computer shut. Then he asked what she was intentionally keeping from him. I think it's plausible she was saying "I snapped the computer shut because it is private. Why would you ask after something when I signalled so clearly it was private? Do you really expect me to just tell you the answer immediately after making it clear you are not supposed to know?" This means it could be her doing work o
... (read more)

I'm missing "is deeply committed to saying the truth in all circumstances" from your list. Seems at least possible, right?

[Quote removed at Trevor1's request, he has substantially changed his comment since this one].

I expect that the opposite of this is closer to the truth. In particular, I expect that the more often power bends to reason, the easier it will become to make it do so in the future.

the gears to ascension
I agree with this strongly with some complicated caveats I'm not sure how to specify precisely.

This post does three things simultaneously, and I think those things are all at odds with one another:

  • Summarizes Duncan Sabien's post.

  • Provides commentary on the post.

  • Edits the post.

First, what is a summary and what are its goals? A summary should be a condensed and context-less version of the original that is shorter to read while still getting the main points across. A reader coming into a summary is therefore not expected to have any knowledge of the reference material. That reader shouldn't expect the level of detail that the source material... (read more)

I expect that if you actually ran this experiment, the answer would be a point, because the ice cube would stop swinging before all that much melting had occurred. Additionally, even in situations where the ice cube swings indefinitely along an unchanging trajectory, warm sand evaporates drops of water quite quickly, so a trajectory that isn't a line would probably end up as a fairly odd shape.

This is all because ice melting is by far the slowest of the things that are relevant for the problem.

I was feeling the beginning of sickness (slight fever, runny nose, scratchy throat) while at the airport around a year ago when returning from a trip. I made the same decision you did: prioritized masking, distance where feasible, and getting home as quickly as possible instead of taking on ~$1k of hotels/food to wait until I was healthy. I think I made the right decision and agree with yours here.

It turned out I had the ordinary flu, not covid. I don't think the prosocial decision making is substantially different between the flu & covid at this point in time.

Answer by Stephen Bennett

It is possible for a lottery to be +EV in dollars and -EV in utility because of diminishing marginal utility. As you get more of something, the value of gaining another of that thing goes down. The difference between owning 0 homes and owning your first home is substantial, but the difference between owning 99 homes and 100 homes is barely noticeable despite costing just as much money. This is as true of money as it is of everything else since the value of money is in its ability to purchase things (all of which have diminishing marginal utility).... (read more)
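A toy calculation makes the dollars-vs-utility split concrete. Every figure here is made up (a small starting wealth, a huge jackpot, log utility as the standard stand-in for diminishing marginal utility); the point is only that the same bet can have opposite signs under the two measures:

```python
import math

# All figures are hypothetical, chosen so the sign flip is visible.
wealth = 10_000        # bettor's starting wealth in dollars
ticket = 2             # ticket price
prize = 10_000_000     # jackpot
p = 3e-7               # win probability

# Expected value in dollars: p * prize - ticket = 3 - 2 = +1
ev_dollars = p * prize - ticket

# Expected value under log utility, a common model of
# diminishing marginal utility of wealth.
eu_play = p * math.log(wealth - ticket + prize) + (1 - p) * math.log(wealth - ticket)
eu_pass = math.log(wealth)

print(ev_dollars > 0)          # True: +EV in dollars
print(eu_play - eu_pass > 0)   # False: -EV in utility
```

Note the flip only happens when the stakes are large relative to wealth; with a $2,000 prize the log-utility bettor would still happily play.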

jmh
I agree that both DMV/DMU of money units is true and worth considering. However, I think it might be a bit more complex than that since I think one can make a case for network effects/economies of scale type aspects. For example, the marginal value of the next dollar I add to my wealth today is pretty small. Clearly if I had 300 million additional dollars the MV of the next dollar absolutely will be smaller. But consider the MV/MU of having 50 million to put into my "this pays for my day to day life" bucket, while this other 100 million goes into some research projects I would like to see done but in no way could pursue effectively now, and this other 100 million can go to some other useful (in my assessment) efforts that I might want to support, and the remainder could be "wasted" on gifts and assistance to those I think deserve more than life has given them. So I think understanding just what the margin is matters a lot in this type of view.

it's confusing other people don't have this objection

For me, the cow has left the barn on "reality" referring only to the physical world I inhabit, so it doesn't register as inaccurate (although I would agree it's imprecise). "Reality" without other qualifiers points me towards "not fictional".

"emotional resonance" ... "shared facts" or "shared worldview"

I notice I'm resistant to these proposals, but was pretty happy about the term "shared reality". Here are some things I like about "shared reality" that I would be giving up if I adopted one of your... (read more)

No one clicks on links, maybe ~25% of users click even one in a giant post.

Two comments, with detail below: (1) make sure you have the relevant denominator and (2) be careful about taking action based on this information.

(1) What counts as a user in this context? Someone who comes to the page, reads a sentence, and then closes the page wouldn't even have time to click a link, for example, but they don't represent who your readership actually is. Similarly, users can end up double counted where, for example, they read through the post on their phone, and th... (read more)
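The denominator point can be made concrete with a toy calculation; the event log below is entirely made up:

```python
# Hypothetical analytics log: (visitor_id, seconds_on_page, clicked_a_link)
events = [
    ("a", 3,   False),  # bounced before any link was on screen
    ("a", 240, True),   # same person on a second device: double counted
    ("b", 200, False),
    ("c", 180, True),
    ("d", 2,   False),  # another bounce
]

clicks = sum(1 for _, _, clicked in events if clicked)

# Denominator 1: every recorded pageview, bounces and duplicates included.
ctr_all = clicks / len(events)

# Denominator 2: only visits long enough to plausibly count as "reading".
engaged = [e for e in events if e[1] >= 30]
ctr_engaged = sum(1 for _, _, clicked in engaged if clicked) / len(engaged)

print(ctr_all)      # 0.4
print(ctr_engaged)  # ~0.67: same data, very different "click rate"
```

Same log, and the "click rate" moves from 40% to two-thirds depending solely on who you decide counts as a reader.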
