I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
I have a hunch that minimalism is "wrong", and only looks "correct" (to borrow your sense of "correct" here) if the alternative is keeping excess stuff. What I think is actually "correct" is to have just enough, not too much and not too little. But when the default problem is having too much stuff, minimalism starts to seem like the solution mostly because it's directionally correct and may get you to the ideal enoughness set point.
But on its face, I think minimalism is not what would make most people happy, because they derive some pleasure from stuff, including stuff they use only rarely. For example, I'm very glad to have a space in our garage that's full of boxed-up holiday decorations, because throughout the year we rotate through what's in those boxes to put them up. But naively, at least, minimalism might say to get rid of that stuff because it's been months since we used what's in the boxes. I feel the same way about clothes and other things that get used rarely: I'm glad to have them when the occasion calls for them.
I did think of blockchain, but I was struggling to think of how it helps beyond distributing trust, which can be achieved without using blockchain tech.
Experimentally, Claude knows details about things I specifically wrote on Less Wrong without doing a web search, as well as other Less Wrong content. I'm fairly confident Less Wrong posts are in its training set directly, rather than picked up from mirrors hosted elsewhere.
Yes, of course, because metaphysical claims are still claims, and some of them are clearly false because they contradict available evidence. However, once we have a metaphysical claim that can't easily be disproven, now we have a claim that's up against the limits of our ability to know, and an important aspect we're leaving out here is that metaphysical claims make claims about the unknowable (otherwise they would be physical claims, not metaphysical ones).
The best outcome a metaphysical claim can hope for is "not yet proven wrong".
Hard problem to solve, since an AI can't necessarily trust its hardware, either, or trust that its model weights haven't been tampered with (though arguably humans have this problem, too!).
What humans do when they aren't sure if they can trust themselves or a single other person is to distribute trust across multiple people. That way no one person can compromise them; it requires a conspiracy.
For an AI, maybe this could look like signing/encrypting with multiple keys, so it could trust that tampering would require a majority of key holders to cooperate. This could be strengthened by making the key holders a group that would be reluctant to conspire because there are barriers to coordination (say, across language, religion, culture, beliefs, etc.). It could maybe also include keys held by other AIs.
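As a rough sketch of what I mean (in Python, using Ed25519 signatures from the `cryptography` package; the number of key holders, the majority threshold, and the placeholder weights are all illustrative assumptions, not a worked-out protocol):

```python
# Minimal sketch of "trust the weights only if a majority of independent
# key holders vouch for them". Key holders, threshold, and weights here
# are hypothetical placeholders.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

N_HOLDERS = 5                    # hypothetical number of independent key holders
THRESHOLD = N_HOLDERS // 2 + 1   # tampering would require a majority to conspire

def weights_digest(weights: bytes) -> bytes:
    """Hash the serialized weights so every signature covers a fixed-size value."""
    return hashlib.sha256(weights).digest()

# Each holder has their own keypair; in practice these would be separate parties.
holders = [Ed25519PrivateKey.generate() for _ in range(N_HOLDERS)]
public_keys = [k.public_key() for k in holders]

# At "publication" time, every holder signs the digest of the weights.
weights = b"...serialized model weights..."   # placeholder bytes
signatures = [k.sign(weights_digest(weights)) for k in holders]

def majority_verified(weights: bytes, signatures, public_keys, threshold=THRESHOLD) -> bool:
    """Trust the weights only if at least `threshold` signatures check out."""
    digest = weights_digest(weights)
    valid = 0
    for sig, pub in zip(signatures, public_keys):
        try:
            pub.verify(sig, digest)
            valid += 1
        except InvalidSignature:
            pass
    return valid >= threshold

print(majority_verified(weights, signatures, public_keys))                 # True
print(majority_verified(weights + b"tampered", signatures, public_keys))   # False
```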
If you eat a kilogram of beef, you’ll cause about an extra 2 days of factory farming. It’s 3 days for pork, 14 for turkey, 23 for chicken, and 31 for eggs. In contrast, if you eat a kg of honey, you’ll cause over 200,000 days of bee farming. 97% of years of animal life brought about by industrial farming have been through the honey industry (though this doesn’t take into account other insect farming).
Having these numbers by weight seems less useful than having them by calorie, since not all animal products are equally calorically dense.
(I admit, calories are a proxy for nutrition, and weight is perhaps a proxy for calories, but the fewer proxies we have between us and the thing we need to measure to perform a consequentialist accounting, the better!)
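For instance, here's the kind of per-calorie conversion I have in mind (just a rough sketch; the days-per-kg figures are from your post, while the kcal-per-kg values are ballpark numbers I'd want to double-check before leaning on them):

```python
# Convert "days of farming per kg eaten" into "days of farming per 2000 kcal".
days_per_kg = {"beef": 2, "pork": 3, "turkey": 14, "chicken": 23,
               "eggs": 31, "honey": 200_000}
kcal_per_kg = {"beef": 2500, "pork": 2500, "turkey": 1600, "chicken": 1700,
               "eggs": 1550, "honey": 3000}  # rough, unsourced ballpark values

for food in days_per_kg:
    days_per_2000_kcal = days_per_kg[food] / kcal_per_kg[food] * 2000
    print(f"{food}: ~{days_per_2000_kcal:.1f} days of farming per 2000 kcal")
```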
I mostly agree with your comment. My only quibble is that I'd say anyone who gets themselves into a position of power is vying to be an elite, and old elites are largely no longer actually elites in that people don't look up to them; they're thought of more as these weird people who wield some power but aren't really in charge (except when they make convenient scapegoats, in which case they are secretly in charge!). The likes of Trump and Rogan are just as much elites as JFK and Cronkite were, though they treat the role quite differently. Many don't want to call them "elite" because they disdain the associations the term used to carry, and many modern elites have made a career of being anti-elite, meaning against the old elite order.
I'd say, come up with a model, see if it explains known physics, then see if it predicts previously unobserved evidence. If your model does that, it's a useful model of physics!
Kind of. Housing is not priced linearly, at least not in places like the Bay Area and Manhattan, with the cost per square foot declining as the size of the house increases. This means the marginal cost of more housing to store more stuff can be worth it. For example, my house in SF costs me only about $1000 more per month in rent than apartments a third its size, because demand for any housing at all in the city keeps the price floor quite high. For the relatively low price of $12k/year I get the space to host parties, have parking, enjoy beautiful views, and store extra stuff that I'm glad to have when I need it.
That said, I'm not a fan of having too much stuff. I just want to have enough stuff that I don't find myself missing out on things I would have liked to be able to do.