The F-35 isn't the crucial component for winning the kinds of wars fought in Iraq or Afghanistan. It also isn't the kind of weapon that matters for defending Taiwan. It's just what Air Force culture wants, rather than a choice made by a hypercompetent military.
I mostly agree with your perception of state (or something) competence, but this seems to me like a sloppy argument? True, the US does have to prepare for the most likely wars, but it also has to be prepared for all the other wars that don't happen precisely because it was prepared, a.k.a. deterrence...
I immediately recognize the pattern playing out in this post and in the comments. I've seen it so many times, in so many forms.
Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.
Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.
Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and ob...
(Meta: writing this in separate comment to enable voting / agreement / discussion separately)
If you want to make the case for tactical nuclear deployment not happening (which I hope is the likely outcome), I want to see a model of how you see things developing differently
I'll list a few possible timelines. I don't think any of these is particularly likely, but they are plausible, and together with many other similar courses of events they account for significant chunks of probability mass.
On Nord Stream sabotage:
Thus I claim we don't know whether people see dreams.
That's a pretty bold claim just a few sentences after claiming to have aphantasia.
Some of my dreams have no visuals at all, just a vague awareness of the setting and plot points. Others are as vivid and detailed as waking experience (or even more, honestly), at least as far as vision is concerned. Dreams can fall anywhere on a spectrum between these extremes, and sometimes they can even be a mixture (e.g. a visual experience of the place and an awareness of characters in that place that don't appear visually).
Yes, people do see dreams. I'm fairly certain I can tell the difference.
Yes, I'm aware of all that, and I agree with your premises, but your argument doesn't prove what you think it does. Let's try to reductio it ad absurdum, and turn the same argument against the possibility of fast technological or scientific feedback cycles.
If you live in a technologically backwards society (think bronze age), you can't become more advanced technologically yourself, because you'll starve spending your time trying to do science. The technology of society (including agriculture, communication, tools, etc.) needs to progress as a whole. ...
It seems pretty likely that moral and social progress are just inherently harder problems, given that you can't [...] have fast feedback cycles from reality (like you do when trying to make scientific, technological and industrial progress).
We can't? Have we tried? Have you tried? Is there some law of physics I'm missing? What would a real, genuine attempt to do just that even look like? Would you recognize it if it was done right in front of you?
There are multiple meanings of "progress" afoot here. Tabooing the word, my reading of your point is "moving toward any specific imagined future state of the world we all agree is good is good, therefore moving forward is good".
(Another non-native having a go at it...)
When your advice both ways seems fine,
Calibrate, then make it rhyme.
more transparent to outsiders
There is the danger of it being more transparency-illuding instead. (Yeah, I just invented that term, but what did I mean by it?)
My gut feeling is that attracting more attention to a metric, no matter how good, will inevitably Goodhart it.
That is a good gut feeling to have, and Goodhart certainly does need to be invoked in the discussion. But the proposal is about using a different metric with a (perhaps) higher level of attention directed towards it, not just directing more attention to the same metric. Different metrics create different incentive landscapes to optimizers (LessWrongers, in this case), and not all incentive landscapes are equal relative to the goal of a Good LessWro...
The way this topic appears to me is that there are different tasks or considerations that require different levels of conscientiousness for the optimal solution. In this frame, one should just always apply the appropriate level of conscientiousness in every context, and the trait conscientiousness is just a bias people have in one direction or the other that one should try to eliminate.
This frame is useful, because it opens up the possibility to do things like "assess required conscientiousness for task", "become aware of bias", "reduce bias", etc. But it ...
I think it's an empirical observation.
The world doesn't just happen to behave in a certain way. The probability that all examples point in a single direction without some actual mechanism causing it is negligible.
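For intuition on "negligible": treating each example as an independent coin flip (an illustrative assumption, with 20 as a made-up example count), the chance that every one lands the same way by accident is:

```python
# Illustrative assumption: 20 independent examples, each equally likely
# to point either way. Probability they all line up by pure accident:
n = 20
p_same_direction = 2 * 0.5 ** n   # x2: either all one way or all the other
print(p_same_direction)           # ~1.9e-06
```

Any shared underlying mechanism is a far better explanation than a coincidence of that size.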
I ended up using mathematical language because I found it really difficult to articulate my intuitions. My intuition told me that something like this had to be true mathematically, but the fact that you don't seem to know about it makes me consider this significantly less likely.
If we have a collection of variables X_1, ..., X_n, and V = f(X_1, ..., X_n), then V is positively correlated in practice with most U expressed simply in terms of the variables.
Yes, but V also happens to be very strongly correlated with most U that are e...
You have a true goal, V. Then you take the set of all potential proxies U that have an observed correlation with V; let's call this set S. By Goodhart's law, this set has the property that any U in S will with probability 1 be uncorrelated with V outside the observed domain.
Then you can take any subset S' of S. This set will have the property that any U in S' will with probability 1 be uncorrelated with V outside the observed domain. This is Goodhart's law, and it still applies.
Your claim is that ...
Your U is correlated with V, and that's cheating for all practical purposes. The premise of Goodhart's law is that you can't measure your true goal well. That's why you need a proxy in the first place.
If you select a proxy at random with the only condition that it's correlated with your true goal in the domain of your past experiences, Goodhart's law claims that it will almost certainly not be correlated near the optimum. Emphasis on "only condition". If you specify further conditions, like, say, that your proxy is your true goal, then, wel...
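To make the regressional flavor of this concrete, here's a minimal simulation (my own sketch, not from the thread): even the "cheating" proxy U = V + independent noise looks well correlated in the bulk of the distribution, yet selecting hard on U systematically overstates V near the optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

v = rng.normal(size=n)       # true goal V
u = v + rng.normal(size=n)   # proxy U: V plus independent noise

# In the bulk of the distribution, U looks like a decent proxy:
print(round(float(np.corrcoef(u, v)[0, 1]), 2))   # ~0.71

# Optimize hard on the proxy: among the top 1% by U, the realized V
# falls far short of what U promised (roughly half, since the noise
# variance equals the signal variance here).
top = u >= np.quantile(u, 0.99)
print(round(float(u[top].mean()), 2), round(float(v[top].mean()), 2))
```

The proxy doesn't become anti-correlated; it just stops delivering once you select on it, which is the tame version of the phenomenon under discussion.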
Some frames worth considering:
The first layer of internal visual experience I have when reading is a degree of synesthesia (letters have colors). Most of the time I'm not aware that this is happening. It does make recalling writing easier (I sometimes deduce missing letters, words or numbers from the color).
Then there is the "internal blackboard", which I use for equations or formulas. I use conscious effort to make the equation appear as a visual experience (in its written form). I can then manipulate this image as if the individual symbols or symbol groups were physical objects that ...
Absence of evidence of X is evidence of absence of X.
A claim about the absence of evidence of X is evidence of:
No paradox to resolve here.
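Quantitatively, the first statement is a one-line Bayes update. The numbers below are illustrative assumptions, not claims about any particular X:

```python
# Illustrative assumptions: a 50% prior on X, and evidence E that is
# likely to be found if X is true and rarely found if X is false.
p_x = 0.5
p_e_given_x = 0.8       # if X is true, we probably find evidence
p_e_given_not_x = 0.1   # false positives are rare

# Observing *no* evidence updates us downward on X:
p_no_e = (1 - p_e_given_x) * p_x + (1 - p_e_given_not_x) * (1 - p_x)
p_x_given_no_e = (1 - p_e_given_x) * p_x / p_no_e
print(round(p_x_given_no_e, 3))  # 0.182 < 0.5: absence of evidence lowered P(X)
```

How much it lowers P(X) depends entirely on how hard we looked, i.e. on p_e_given_x.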
Non sequitur. Buying isn't the inverse operation of selling. Both cost positive amounts of time and both have risks you may not have thought of. But it probably is a good idea to go back in time and unsell your soul. Except that going back in time is probably a bad idea too. Never mind. It's probably a good investment to turn your attention to somewhere other than the soul market.
These rituals are inefficient in cases where there is mutual trust between all participants. But sticking to formality is a great Schelling fence against those trying to gain an advantage by exploiting unwitting bureaucrats.
The basis of the original post isn't existential threats, but narratives - ways of organizing the exponential complexity of all the events in the world into a comparatively simple story-like structure.
Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take
Memetic tribes are only tangentially relevant here. I didn't really intend to present any argument, just a set of narratives present in some other communities you probably haven't encountered.
The above narratives seem to be extremely focused into a tiny part of narrative-space, and it's actually a fairly good representation of what makes LessWrong a memetic tribe. I will try to give some examples of narratives that are... fundamentally different, from the outside view; or weird and stupid, from the inside view. (I'll also try to do some translation between conceptual frameworks.) Some of these narratives you already know - just look around the political spectrum, and notice what narratives people live in. There are also some narratives I find b...
You'd also have to consider the long-term effects on the incentive landscape of e.g. establishing the precedent of companies getting $4B deals in case of a pandemic regardless of whether their vaccine works or not. In general, doing things the reasonable way has the downside of incentivizing bad actors to extract any free energy you put into the system by being reasonable until you're potentially no better off than the way Delenda Est Club is handling the situation right now. In any case, I don't see any long-term systemic effects even being considered here, so I'd be surprised if the suggestions didn't have some significant fallout further down the line.
Lockdown incentivized politicians to establish positions on a lockdown, which has led to people having strong opinions about it. Even assuming no damage from further polarization, we have a roughly 50% chance of having an anti-lockdown government when the next pandemic hits, with a 10% chance of this new incentive being the deciding factor in not enacting a lockdown (or failing to implement it). Even if we assume that only 10% of the effects of this polarization is the result of the lockdown actually happening, with a 1% yearly chance of a pandemic more da...
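Multiplying out the probabilities stated above (all of them the comment's own assumptions, not data):

```python
# The comment's stated numbers, taken at face value:
p_anti_lockdown_gov = 0.5   # anti-lockdown government at the next pandemic
p_deciding_factor = 0.1     # this incentive is what blocks the next lockdown
p_caused_by_lockdown = 0.1  # share of the polarization due to the lockdown itself

p_marginal_failure = p_anti_lockdown_gov * p_deciding_factor * p_caused_by_lockdown
print(round(p_marginal_failure, 4))  # 0.005 per future pandemic
```

So under these assumptions, each future pandemic carries roughly a 0.5% chance that today's lockdown is the marginal cause of a failed response then; whether that outweighs the benefit depends on the relative severity of the two pandemics.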
I live in a social environment where expressing opinions or otherwise giving information about myself could have negative consequences, ranging from mild inconvenience to serious discrimination. I have no intention to hide my real identity from those who know the account, but I do want to hide my account from those who know my real identity (and aren't close friends). I use this name for most online activity.
I've been aware for a while now that having enough awareness to notice being trapped is not enough to step outside the pattern, but I can't step outside this pattern. I also believe that admitting that there is no substitute for practice isn't going to be causally linked to me actually practicing (due to a special case of the same trap), so I'll just go on staying trapped for now I guess.
Being self-sufficient and robust as a national economy is accepting a competitive disadvantage relative to a global just-in-time supply chain in times of prosperity in exchange for a competitive advantage during a crisis. Selection pressures will push economies accepting this tradeoff towards being actively interested in a world with more crises.
Question: how do postrationality and instrumental rationality relate to each other? To me it appears that you are simply arguing for instrumental rationality over epistemic rationality, or am I missing something?
However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.
It feels like calling someone's philosophy poisonous and harmful doesn't advance the conversation, regardless of its truth value, and this proves the point of the main post well.
If a philosophy is poisonous and harmful, I think it commendable and necessary to say so.
Two points:
Advancing the conversation is not the only reason I would write such a thing; it also serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.
It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comm
Being able to speak is probably more important than being as smart as a human. Cultural / memetic evolution is orders of magnitude faster than biological evolution, but its ability to function depends on having a memory better than mortal minds. Speech gives some limited non-mortal memory, as do writing, the printing press, and the internet. These inventions enable more efficient evolution. AI will ramp up evolution to even higher speeds, since external memory will be replaced with internal 1) lossless and 2) intelligent memory. As such I am unconvinced that th...
People being unable to come up with any idea other than that diseases are a curse of the gods is strong evidence not for diseases being a curse of the gods, but for the ignorance of those people. The most likely answer to that question is either something no one will think of for centuries to come, or simply that the model of separating objects into "sorts of things" is not useful for deciphering the mysteries of the universe, despite being an evolutionary advantage on the ancestral savanna.
You might have gone too far with speculation - your theory can be tested. If your model was true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems. It is not immediately obvious how to run such an experiment, though.
I'm not an expert either, and I won't try to end the F-35 debate in a few sentences. I maintain my position that the original argument was sloppy. "The F-35 isn't the best for specific wars X, Y and Z, therefore it wasn't a competent military decision" is a non sequitur. "Experts X, Y and Z believe that the F-35 wasn't a competent decision" would be better in this case, because that seems to be the real reason why you believe what you believe.