IMO this argument also wants, like, support vector machines and distributional shift? The 'tails come apart' version feels too much like "political disagreement" and not enough like "uncertainty about the future". You probably know whether you're a Catholic or utilitarian, and don't know whether you're a hyper-Catholic or a cyber-Catholic, because you haven't come across the arguments or test cases that differentiate them.
What do you think of authentic relating / circling w/ relationalists, as opposed to rationalists? I don't think they have particularly good epistemic hygiene (or rather, I think the ones that do also become rationalists), but I think they might have a way to embody philanthropy that's easier to tune into (than one based on, like, respecting the competence or thoughtfulness of humans as they are).
My spouse and I donated $100k, iirc twice what we did last year. This is mostly downstream of having less of our wealth tied up in private equity, rather than being twice as impressed with Lightcone's output; I also didn't take a salary for my work on Inkhaven, which amounts to roughly a $10k contribution.
LessWrong has been my online home for a long time now; Lighthaven continues to be an impressive space that runs great events.
Sure; I'm not sure what fraction of the relevant innovations were additions instead of replacements (which might not impact total memory burden much).
Several people who worked at MIRI thought the book had new and interesting content for them; I don't remember having the "learned something new" experience myself, but I nevertheless enjoyed reading it.
I think it's called a reverse sear because the 'sear' step happens second--after the low-and-slow cooking--whereas the more common technique in cooking is to start with high heat to get the browning, and then lower the temperature.
Note that bacteria grow faster at hotter temperatures, up until the temperature at which they die. (125°F, one of the temperatures mentioned in the article, is not hot enough to kill bacteria, and is thus one of the worst parts of the Danger Zone.) For large cuts of meat like a steak, you're mostly worried about stuff on the outside, so a quick sear at high temperature will kill whatever is on the surface, and then you can comfortably cook at a lower temperature. My best guess is this is not a major problem at the times discussed here (30 minutes in the danger zone is within USDA guidelines) but probably was a worse idea when food safety was worse. Also note that when you put the steak in the oven, the oven itself is at a safe temperature, so you don't need to worry about the exterior of the steak or about contamination from the oven.
[As mentioned in a linked article, the commonly stated justification was to "lock in the juices", which isn't true, but it wouldn't surprise me if food safety was the actual impetus behind that advice.]
[[edit: I should also note that lots of recipes, like stew, start off with something that you want to fry (cook at temperatures higher than the boiling point of water) and then later add something that you want to boil or steam (cook at the boiling point of water). It is way easier to fry the meat and then add it to the boiling water than it is to boil the stew for a while, separate out the meat, and then fry it at the end.]]
mmm, beef tallow is pretty 'in' these days? I also think there's got to be some mileage from optimizing to find the bliss point.
I don't think the decision theory described here is correct. (I've read Planecrash.)
Specifically, there's an idea in glowfic that it should be possible for lawful deities to follow a policy wherein counterparties can give them arbitrary information, on the condition that the information is not used to harm the information-provider. This could be as drastic as "I am enacting my plan to assassinate you now, and would like you to propose edits that we both would want to make to the plan"!
I think this requires agreement ahead of time, and is not the default mode of conversation. ("Can I tell you something, and you won't get mad?" is a request, not a magic spell to prevent people from getting mad at you.) I also think it's arguably something that people should rarely agree to. Many people don't agree to the weaker condition of secrecy, because the information they're about to receive is probably less valuable than the costs of partitioning their mind or keeping information secret. In situations where you can't use the information against your enemies (like two glowfic gods interacting), the value of the information is going to be even lower, and the situations where it makes sense to do such an exchange even rarer. (Well, except for the part where glowfic gods can very cheaply partition their minds, so keeping secrets or doing pseudohypothetical reasoning is in fact much cheaper for them than it is for humans.)
That is, I think this is mostly a plot device that allows for neat narratives, not a norm that you should expect people to follow or to get called out for violating.
[This is not a complete treatment of the issue; I think most treatments of it only handle one pathway, the "this lets you get information you can use for harm reduction" pathway, and in fact in order to determine whether or not an agent should do it, you must consider all relevant pathways. But I think the presumption should not be "the math pencils out here", and I definitely don't think the math pencils out in interacting with Oli. I think characterizing that as "Oli is a bad counterparty" instead of something like "Oli doesn't follow glowfic!lawful deity norms" or "I regret having Oli as a counterparty" is impolite.]
Specifically, this is the privacy policy inherited from when LessWrong was a MIRI project; to the best of my knowledge, it hasn't been updated.
Also, I think it's worth considering the position that AIs will do better than humans at figuring out philosophical dilemmas; to the extent philosophical maturity involves careful integration of many different factors, models might be superhuman at that as well.
[I think there's significant reason to think human judgment is worthwhile here, but the case for it is not particularly straightforward and requires building out some other models.]