I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Interesting, I've actually gone in the opposite direction.
I now write posts in a Google doc first, then copy them over to wherever I'm going to post (either Substack, which then gets imported to LessWrong, or LessWrong directly).
Reasons for doing that:
Plus then I have an extra backup should anything happen to LessWrong or Substack.
Advaita begins from an entirely different starting point. According to the Upanisads (c. 600 BCE), the deepest truth is that the self (atman) and ultimate reality (Brahman) are not two but one. Sankara's commentaries on the Brahma Sutras and the Bhagavad Gita (8th c. CE) develop this insight into a rigorous philosophical system. A great, more recent introduction to non-dualism can be found in I Am That by Nisargadatta Maharaj.
This seems quite misleading to me.
You seem to be using "non-dual" here to mean simply "not dualism." Advaita argues for monism, not non-dualism as we commonly understand it in Buddhist philosophy, where "non-dualism" means "neither monism nor dualism but both and neither" (the tetralemma).
I'm also generally pro-alcohol-in-moderation, but I seem to recall studies showing that most of alcohol's social effects are psychosomatic and can be achieved by simply tricking someone into believing they are consuming alcohol. I've experienced this myself: at times when I couldn't drink but was hanging out with drinkers, having mocktails or NA beer, I could feel myself getting permission to behave in ways that are allowed when drunk, even though I was consuming no alcohol.
Now of course maybe these studies didn't survive the replication crisis, I'm not sure, but I think there is something to be said for alcohol's positive effects being more socially programmed than physiologically created.
Yes, just as the choice of an exponential should be justified.
Most research papers at least do this as far as explaining why they pick an exponential over a linear regression, which I think is a good practice, because in many applications picking something other than a linear regression results in a better fit of past data but a worse fit of future data. But the point stands that needing to justify a sigmoidal or sigmoid-like model is the same problem as justifying the choice of an exponential.
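To make that concrete, here's a toy sketch (my own illustration, not from any paper; the logistic "truth" and every number in it are invented): on convex early data, an exponential fits the past better than a line, then does far worse on held-out future points.

```python
# Toy sketch: the "truth" is the early part of a logistic curve.
# In-sample, an exponential fits the convex early data better than a
# line; out-of-sample, once growth slows, the line errs modestly and
# the exponential errs wildly. All values here are made up.
import numpy as np

t = np.arange(30.0)
y = 100.0 / (1 + np.exp(-0.4 * (t - 15.0)))  # logistic "truth", ceiling 100

train, test = slice(0, 12), slice(12, 30)

# Linear fit, and exponential fit via linear regression on log(y).
lin = np.polyval(np.polyfit(t[train], y[train], 1), t)
b, log_a = np.polyfit(t[train], np.log(y[train]), 1)
exp_ = np.exp(log_a + b * t)

def rmse(pred, s):
    return float(np.sqrt(np.mean((pred[s] - y[s]) ** 2)))

print("linear      in/out:", rmse(lin, train), rmse(lin, test))
print("exponential in/out:", rmse(exp_, train), rmse(exp_, test))
# Expected shape of the output: the exponential beats the line
# in-sample, then blows up out-of-sample because the fitted model
# never lets growth end.
```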
Yes, some people are more careful than others.
I'm on Twitter all day. People love charts that go up and to the right. It feels like every week I see multiple tweets from people modeling some growth process, usually related to AI, with a naive exponential and no consideration of the fact that growth ends.
I saw enough of these that yesterday it finally clicked that I should write this post.
Yes, this all seems quite reasonable, and I think it's a failure if we don't at least acknowledge that the model is going to break down at some point and give some guesses about when. That's what I see happening a lot when I read about exponential growth models: the modeler presents a curve, but no model or even a theory of how growth might end. To me that feels like only half a model, and it makes the model not very useful, because it has such limited predictive power and isn't even attempting to quantify its limitations.
Like I'm okay with saying we don't know how to quantify something at all, but once we start quantifying, I expect to see the quantifying carried through. Making a bet is a great way to quantify!
Why does it seem good to you to provide models that we know are incomplete while failing to acknowledge that incompleteness? I ask because that's how most exponential growth models look to me.
Yes, let's not fit to noise, but fitting an exponential is also fitting to "noise" in the sense that it's fitting a growth pattern that is wrong in the long run.
If not the sigmoid, there are other functions that produce S-curves, such as the Gompertz function and the Bass diffusion model. These address some of the issues with sigmoids.
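For reference, here's a minimal sketch of both curves in their standard textbook forms (the parameter names are the conventional ones; the values below are arbitrary, just to show the shapes):

```python
import numpy as np

def gompertz(t, a, b, c):
    """Gompertz curve: ceiling a, displacement b, growth rate c.
    Saturates like a sigmoid, but asymmetrically."""
    return a * np.exp(-b * np.exp(-c * t))

def bass_adopters(t, m, p, q):
    """Bass diffusion: cumulative adopters out of a market of size m,
    with innovation coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.linspace(0, 20, 5)
print(gompertz(t, a=100, b=5, c=0.4))          # arbitrary illustrative values
print(bass_adopters(t, m=100, p=0.03, q=0.4))  # likewise arbitrary
```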
But arguments that modeling is hard fall flat for me. Yes, of course it's hard. Exponential models make one kind of extremely obvious error; sigmoids make another. It's not that hard to at least have a piece-wise model that says "yes, our model predicts growth will end around here," even if that's hard to fit into a single function, and I see little benefit to not doing that (other than that it's hard and people don't want to, or they're incentivized to do the easy thing and show a model that keeps going up).
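Here's roughly what I mean, as a toy sketch (my own construction, with made-up numbers), not a serious model:

```python
import numpy as np

def piecewise_growth(t, a, r, t_break):
    """Hypothetical piece-wise model: exponential growth until t_break
    (the modeler's explicit guess at when growth ends), flat afterward.
    Crude, but it forces the forecast to say when the exponential stops."""
    t = np.asarray(t, dtype=float)
    cap = a * np.exp(r * t_break)
    return np.where(t < t_break, a * np.exp(r * t), cap)

# Arbitrary illustrative values: growth predicted to end around t = 10.
print(piecewise_growth([0, 5, 10, 15, 20], a=1.0, r=0.5, t_break=10.0))
```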
There are special cases like disease modeling (though even there predictions can be wildly off), but in general I think an exponential fit is almost always better than a sigmoid (or any other function that doesn't grow without bound) unless you have strong reason to believe you've identified both the dampening term and the upper bound.
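A quick sketch of why that caveat matters (my illustration; invented numbers, and the exact output will vary with the noise seed): fit a logistic to data from before the inflection point and the estimated ceiling comes back essentially unconstrained.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(12.0)
# A pre-inflection slice of a logistic whose true ceiling is K = 100.
y = logistic(t, 100.0, 0.4, 15.0) + rng.normal(0, 0.5, t.size)

popt, pcov = curve_fit(logistic, t, y, p0=[50.0, 0.3, 10.0], maxfev=20000)
print(f"estimated ceiling: {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f}")
# Before the bend, the data barely constrains where growth ends, so the
# fitted ceiling and its error bar both tend to be all over the place.
```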
I find these arguments about fit strange, like they are failing to remember what we're trying to do with predictions.
We want to have an accurate model of the future. That exponentials fit past data better is nice, and maybe they make it easier to predict the next point in many circumstances, but they leave something out.
Like suppose I'm trying to predict if I'll be alive tomorrow. I could have a naive model that predicts I will be because I was alive every previous day. But this model is wrong in a very important way: one day I will die! That the model fails to account for this fact makes it less useful, because even if it's right for a long time, eventually it won't be, and it's a failure of the model that my death would come as a surprise.
Ah, right. I'm opposed to footnotes, so I don't use them, and I rarely use image captions, preferring to address any images directly in the text. Those would change the calculus, though, if I wanted them.