A counterpoint: when I skip showers, my cat appears strongly in favor of the smell of my armpits, occasionally going so far as to burrow into my shirt sleeves and bite my armpit hair (which, to both my and my cat's distress, is extremely ticklish). Since studies suggest that cats have a much more sensitive olfactory sense than humans (see https://www.mdpi.com/2076-2615/14/24/3590), it stands to reason that their judgement regarding whether smelling nice is good or bad should hold more weight than our own. And while my own cat's preference for me smelling bad is only anecdotal evidence, it does seem to suggest at least that more studies are required to fully resolve the question.
I think it's a very bad idea to dismiss the entirety of news as a "propaganda machine". Certainly some sources are almost entirely propaganda. More reputable sources like the AP and Reuters will combine some predictable bias with largely trustworthy independent journalism. Identifying those more reliable sources and compensating for their bias takes effort and media literacy, but I think that effort is quite valuable- both individually and collectively for society.
Of course, we have to be very careful with our news consumption- even the most sober, reliable sources will drive engagement by cherry-picking stories, which can skew our understanding of the frequency of all kinds of problems. But availability bias is a problem we have to learn to compensate for in all sorts of different domains- it would be amazing if we were able to build a rich model of important global events by consuming only purely unbiased information, but that isn't the world we live in. The news is the best we've got, and we ought to use it.
So, the current annual death rate for an American in their 30s is about 0.2%. That probably increases by another 0.5% or so when you consider black swan events like nuclear war and bioterrorism. Let's call "unsafe" a roughly 3x increase in that expected death rate, to about 2%.
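To make the arithmetic explicit, here's a quick sketch (these are just the rough figures above, not actuarial data):

```python
# Rough figures from the estimates above - not actuarial data.
baseline_rate = 0.002     # ~0.2% annual death rate for an American in their 30s
black_swan_bump = 0.005   # ~0.5% for nuclear war, bioterrorism, etc.

expected_rate = baseline_rate + black_swan_bump   # ~0.7%
unsafe_threshold = 3 * expected_rate              # the ~3x increase, ~2%

print(f"expected rate: {expected_rate:.1%}, 'unsafe' threshold: {unsafe_threshold:.1%}")
```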
An increase that large would take something a lot more dramatic than the kind of politics we're used to in the US. Political changes that dramatic are rare historically, but I think we're at a moment where the risk is elevated enough that we ought to think about the odds.
I might, for example, give odds for a collapse of democracy in the US over the next couple of years at ~2-5%: if the US were to elect 20 presidents similar to the current one over a century, I'd expect better than even odds of at least one of them making themselves into a Putinesque dictator. A collapse like that would substantially increase the risk of war, I'd argue, including raising a real possibility of nuclear civil war. That alone might increase the expected death rate for young and middle-aged adults in that scenario by a point or two. It might also introduce a small risk of extremely large atrocities against minorities or political opponents, which could increase the expected death rate by a few tenths of a percent.
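As a sanity check on that 2-5% range against the twenty-presidents intuition, here's a quick sketch (assuming, purely for simplicity, that each presidency is an independent trial):

```python
# P(at least one of 20 independent presidencies ends in dictatorship)
for p in (0.02, 0.035, 0.05):
    at_least_one = 1 - (1 - p) ** 20
    print(f"p = {p:.1%} per president -> P(at least one of 20) = {at_least_one:.0%}")
# ~33%, ~51%, ~64% - so "better than even odds" lands in the upper half of the 2-5% range.
```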
There's also a small risk of economic collapse. Something like a political takeover of the Fed, combined with expensive, poorly considered populist policies, might trigger hyperinflation of the dollar. When that sort of thing happens overseas, you'll often see worse health outcomes and a breakdown in civil order increasing the death rate by up to a percent- and, of course, it would introduce new tail risks, increasing the expected death rate further.
I should note that I don't think the odds of any of this are high enough to worry about my safety now- but needing to emigrate is a much more likely outcome than actually being threatened, and that's a headache I am mildly worried about.
That's a crazy low probability.
Honestly, my odds of this have been swinging anywhere from 2% to 15% recently. Note that this would be the odds of our democratic institutions deteriorating enough that fleeing the country would seem like the only reasonable option- p(fascism) more in the sense of a government that most future historians would assign that or a similar label to, rather than just a disturbingly cruel and authoritarian administration still held somewhat in check by democracy.
I wonder: what odds would people here put on the US becoming a somewhat unsafe place to live even for citizens in the next couple of years due to politics? That is, what combined odds should we put on things like significant erosion of rights and legal protections for outspoken liberal or LGBT people, violent instability escalating to an unprecedented degree, the government launching the kind of war that endangers the homeland, etc.?
My gut says it's now at least 5%, which seems easily high enough to start putting together an emigration plan. Is that alarmist?
More generally, what would be an appropriate smoke alarm for this sort of thing?
One interesting example of humans managing to do this kind of compression in software: .kkrieger is a fully functional first-person shooter with varied levels, detailed textures and lighting, multiple weapons and enemies, and a full soundtrack. Replicating it in a modern game engine would probably produce a program at least a gigabyte in size, but thanks to some incredibly clever procedural generation, .kkrieger managed it in under 100 KB.
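To give a flavor of the idea (this is just a toy illustration of procedural generation, not anything like .kkrieger's actual techniques): a texture that would take a megabyte to store as raw pixels can be regenerated on demand from a few lines of code and a seed.

```python
import numpy as np

def procedural_texture(size=1024, seed=7):
    """Generate a simple interference-pattern texture from a seed."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, 8, size), np.linspace(0, 8, size))
    phases = rng.uniform(0, 2 * np.pi, 3)
    tex = np.sin(3 * x + phases[0]) + np.sin(5 * y + phases[1]) + np.sin(2 * (x + y) + phases[2])
    return ((tex - tex.min()) / (tex.max() - tex.min()) * 255).astype(np.uint8)

tex = procedural_texture()
print(f"{tex.nbytes / 1e6:.1f} MB of pixels from ~10 lines of code")  # ~1.0 MB
```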
Could how you update your priors depend on which concepts you choose to represent the situation with?
I mean, suppose the parent says "I have two children, at least one of whom is a boy. So, I have a boy and another child whose gender I'm not mentioning". It seems like that second sentence doesn't add any new information- it parses to me like just a rephrasing of the first sentence. But now you've been presented with two seemingly incompatible ways of conceptualizing the scenario- either as two children of unknown gender, of whom at least one is a boy (suggesting a 1/3 chance of both being boys), or as one boy and one child of unknown gender (suggesting a 1/2 chance of both being boys). Having been prompted with both models, which should you choose?
It seems like one ought to have more predictive power than the other, and therefore ought to be chosen regardless of exactly how the parent phrases the statement. But it's hard to think of a way to determine which would be more predictive in practice. If I were to select all of the pairs of two siblings in the world, discard the pairs of sisters, choose one pair at random and ask you to bet on whether they were both boys, you'd be wise to bet at 1/3 odds. But if I were to select all of the brothers with one sibling in the world and choose one along with their sibling at random, you'd want to bet at 1/2 odds. In the scenario above, are the unknown factors determining whether both children are boys more like that first randomization process, or more like the second? Or, maybe we have so little information about the process generating the statement that we really have no basis for deciding which is more predictive, and should just choose the simpler model?
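For what it's worth, the two randomization processes are easy to simulate, and they really do give different answers (a toy sketch, assuming boys and girls are equally likely and independent):

```python
import random

N = 100_000
pairs = [(random.random() < 0.5, random.random() < 0.5) for _ in range(N)]  # True = boy

# Process 1: keep pairs with at least one boy, ask how often both are boys.
with_a_boy = [p for p in pairs if p[0] or p[1]]
p1 = sum(a and b for a, b in with_a_boy) / len(with_a_boy)

# Process 2: sample a random boy (each boy counted once), look at his sibling.
by_boy = [(a, b) for a, b in pairs for child in (a, b) if child]
p2 = sum(a and b for a, b in by_boy) / len(by_boy)

print(f"P(both boys | pair has at least one boy) ~ {p1:.2f}")  # ~0.33
print(f"P(both boys | picked a random boy)       ~ {p2:.2f}")  # ~0.50
```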
I've been wondering: is there a standard counter-argument in decision theory to the idea that these Omega problems are all examples of an ordinary collective action problem, only between your past and future selves rather than separate people?
That is, when Omega is making its prediction, you rationally want to be the kind of person who one-boxes/pulls the lever; later, when the decision actually arrives, you rationally want to be the kind of person who two-boxes/doesn't. Just like with a multi-person collective action problem, everyone acting rationally according to their own interests results in a worse outcome than the alternative, and the solution is to come up with some kind of enforcement mechanism to change the incentives, like a deontological commitment to one-box/lever-pull.
I mean, situations where the same utility function with the same information disagrees with itself about the same decision, just because it's evaluated at different times, are pretty counter-intuitive. But it does seem like examples of that sort of thing exist- if you value two things with different discount rates, for example, then as you get closer to a decision between them, which one you prefer may flip. So, like, you wake up in the morning determined to get some work done rather than play a video game, but that preference later predictably flips, since the prospect of immediate fun is much more appealing than the earlier prospect of future fun. That seems like a conflict that requires a strong commitment to act against your incentives to resolve.
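A minimal sketch of that kind of flip, assuming (purely for illustration) exponential discounting with different rates for the two goods: a work payoff of 10 utils arriving 30 days after the decision, discounted gently, versus 8 utils of immediate fun, discounted steeply.

```python
import math

def values(days_until_decision, r_work=0.01, r_fun=0.2):
    """Present value of each option, viewed some days before the decision."""
    work = 10 * math.exp(-r_work * (days_until_decision + 30))  # payoff arrives 30 days after deciding
    fun = 8 * math.exp(-r_fun * days_until_decision)            # fun arrives at the moment of deciding
    return work, fun

for d in (10, 5, 0):
    work, fun = values(d)
    print(f"{d:2d} days out: work={work:.2f}, fun={fun:.2f} -> prefer {'work' if work > fun else 'game'}")
# 10 and 5 days out you prefer work; at the moment of decision the preference flips to the game.
```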
Or take commitments in general. When you agree to a legal contract or internalize a moral standard, you're choosing to constrain your own future decisions. Doesn't that suggest a conflict? And if so, couldn't these Omega scenarios represent another example of that?
If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic? And, of course, the second sister will give 100% odds to it being Monday.
Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of it being Monday are the same as in the thirder position- a one-third chance of the odds being 100%, plus a two-thirds chance of the odds being 50%, for two-thirds.
If instead they reason that there's a one-half chance that they're the first sister, since they have no information to update on, then their odds of it being Monday should be one half of 100% plus one half of 50%, for 75%. Which is a really odd result.
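Spelling out the two weighted averages from the last two paragraphs:

```python
# 2/3 chance of being the first sister (P(Monday) = 50% for her),
# 1/3 chance of being the second (P(Monday) = 100% for her):
thirder_style = (2/3) * 0.5 + (1/3) * 1.0   # = 0.667, matching the thirder answer

# 1/2 chance of being either sister:
halfer_style = (1/2) * 0.5 + (1/2) * 1.0    # = 0.75, the odd result

print(f"{thirder_style:.3f} {halfer_style:.3f}")
```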
I don't think that would help much, unfortunately. Any accurate model of the world will also model malicious agents, even if the modeller only ever learns about them second-hand. So the concepts would still be there for the agent to use if it was motivated to do so.
Censoring anything written by malicious people would probably make it harder to learn about some specific techniques of manipulation that aren't discussed much by non-malicious people and don't appear much in fiction- but I doubt that would be much more than a brief speed bump for a real misaligned ASI, and it would probably come at the expense of reducing useful capabilities in earlier models, like the ability to identify maliciousness, which would give an advantage to competitors.