A minor point, perhaps a nitpick: both biological systems and electronic ones depend on directed diffusion. In our bodies diffusion is often directed by chemical potentials, and in electronics it is directed by electric or vector potentials. It's the strength of the 'direction' versus the strength of the diffusion that makes the difference. (See: https://en.m.wikipedia.org/wiki/Diffusion_current)
Except in superconductors, of course.
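To make the "strength of the direction versus strength of the diffusion" point concrete, here is a minimal sketch of a 1D biased random walk (the parameters and function names are purely illustrative, not from any of the linked material): when the directed drift term dominates, endpoints cluster tightly around the drift distance; when the random term dominates, the drift is buried in the spread.

```python
import random
import statistics

def biased_walk(steps, drift, diffusion):
    """Final position of a 1D walk: each step adds a fixed directed 'drift'
    plus a symmetric random kick of scale 'diffusion'."""
    x = 0.0
    for _ in range(steps):
        x += drift + random.gauss(0.0, diffusion)
    return x

def summarize(drift, diffusion, walks=200, steps=1000):
    finals = [biased_walk(steps, drift, diffusion) for _ in range(walks)]
    return (f"drift={drift}, diffusion={diffusion}: "
            f"mean={statistics.mean(finals):.1f}, sd={statistics.stdev(finals):.1f}")

random.seed(0)
print(summarize(drift=0.1, diffusion=0.1))   # direction dominates: mean ~100, spread ~3
print(summarize(drift=0.01, diffusion=1.0))  # diffusion dominates: mean ~10, spread ~30
```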
So the reason the time value of money works, and the reason it makes sense to say that the utility of $1000 today and $1050 in a year are about the same, is the existence of the wider financial system. In other words, this isn't necessarily true in a vacuum; but if I want $1050 in a year, I can invest the $1000 I have right now into 1-year treasuries. The converse is more complex: if I am guaranteed $1050 in a year, I may not be able to get a $1000 loan right now from a bank, because I'm not the Fed and loans to me carry a higher interest rate, though perhaps I could pull off something similar with some tricks on the options market. At any rate, I can get pretty close with an asset-backed loan, such as a mortgage.
Note that I'm not saying actors are indifferent to which option they get, but that the two are viewed as having equal utility (when discounted by your cost of financing, basically).
This is a bit of a cop-out, but I would say modelling the utility of money without considering the wider world is a bit silly anyway, because money only has value due to its use as a medium of exchange and as a store of value, both of which depend on the existence of the rest of the world. The utility of money thus cannot be truly divorced from the influence of, e.g., finance.
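As a minimal numeric sketch (assuming the 5% one-year rate implied by the $1000/$1050 example; the function name is mine), discounting the guaranteed future payment at the rate the financial system offers recovers today's amount:

```python
def present_value(future_amount, annual_rate, years=1.0):
    """Discount a guaranteed future payment back to today at the given annual rate."""
    return future_amount / (1.0 + annual_rate) ** years

# At a 5% one-year treasury rate, $1050 a year from now is worth ~$1000 today,
# which is why the two options carry roughly equal utility.
print(present_value(1050, 0.05))  # ~1000.0
```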
Is the fifth requirement not a little vague, in the context of agents with external memory and/or few-shot learning?
I haven't heard of this, but I definitely do this.
I'm not sure why you keep bringing up social media; I haven't, so it's quite irrelevant to my point.
Your specific point was that LW is better than predicting "96 of the last one civil wars and two depressions."
I'm curious if you just think that, or if you actually have evidence demonstrating that LW as a community has a quantifiably better track record than social media. That's completely beside my point though, since I was never talking about social media.
Regarding overconfidence, GPT-4 is actually very well calibrated before RLHF post-training (see the paper's Fig. 8). I would not be surprised if the RLHF process imparted other biases too, perhaps even in the human direction.
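For concreteness, "well calibrated" here means stated confidence matches empirical accuracy. A rough sketch of the standard expected-calibration-error measurement (my own toy implementation, not anything from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bucket predictions by stated confidence, then average |confidence - accuracy|
    across buckets, weighted by how many predictions fall in each bucket."""
    buckets = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        buckets[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in buckets:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Well calibrated: answers given with 70% confidence are right about 70% of the time.
print(expected_calibration_error([0.7] * 10, [True] * 7 + [False] * 3))  # ~0.0
# Overconfident: answers given with 90% confidence are right only half the time.
print(expected_calibration_error([0.9] * 10, [True] * 5 + [False] * 5))  # ~0.4
```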
How?
Edit:
Also, are you asking me for sources that people have been worried about democratic backsliding for over 5 years? I mean, sure, but I'm genuinely a little surprised that this isn't common knowledge. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=democratic+backsliding+united+states&btnG=&oq=democratic+ba
A few specific examples of both academic and non-academic articles:
How has the discourse on LW about democratic backsliding been better than these ~5-year-old articles?
Remember, the "exception throwing" behavior involves taking the entire space of outcomes and splitting it into two things: "Normal" and "Error." If we say this is what we ought to do in the general case, that's basically saying this binary property is inherent in the structure of the universe.
I think it works in the specific context of programming because, for a lot of functions (in the functional context, for simplicity), behaviour is essentially bimodal: well behaved for some inputs, and completely misbehaving (according to specification) for others. In the former category you still don't have perfect performance; you could have quantisation/floating-point errors, for example, but it's a tightly clustered region of performing mostly to-spec. In the latter, the results would almost never be just a little wrong; instead, you'd often get unspecified behaviour or results that aren't even correlated with the correct ones. Behaviours in between are quite rare.
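A toy illustration of that bimodality (the function and inputs here are made up for the example, not taken from the discussion): in-spec inputs are off by at most a rounding error, while out-of-spec inputs don't produce answers that are merely a little wrong; they fail outright.

```python
def mean_reciprocal(xs):
    """Average of 1/x over the inputs. Spec: xs is a non-empty list of non-zero floats."""
    return sum(1.0 / x for x in xs) / len(xs)

# In-spec input: the result differs from the exact value 7/24 only by floating-point rounding.
print(mean_reciprocal([2.0, 4.0, 8.0]))  # ~0.2916..., essentially to-spec

# Out-of-spec inputs don't give "slightly wrong" answers; they misbehave entirely.
for bad in ([], [1.0, 0.0]):
    try:
        mean_reciprocal(bad)
    except ZeroDivisionError as err:
        print(f"out of spec {bad!r}: {err}")
```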
I think you're also saying that when you predict people are limited or stunted in some capacity, we have to intervene to limit or stunt them even more, because there is some danger in letting them operate at their original capacity.
It's like, "Well they could be useful, if they believed what I wanted them to. But they don't, and so, it's better to prevent them from working at all."
If you were right, we'd all be hand-optimising assembly for peak performance in HPC. In reality, many people do the minimum work needed to accomplish their task, sometimes to the detriment of the task at hand. I believe I'm not alone in this thinking, and you'd need quite a lot of evidence to convince others. Look at the development of languages over the years, with newer languages (Rust and Julia, for example) doing their best to leave less room for user errors and poor practices that impact both performance and security.
I'm mostly talking about academic discourse. Also, what a weird holier-than-thou attitude; are you implying LW is better? In what way?
You would be deceiving someone regarding the strength of your belief. You know your belief is far weaker than your statement conveys, and in our general understanding of language, a simple statement like 'X is happening tonight' is interpreted as carrying a strong degree of belief.
If you truly disagree with that, then it wouldn't be deception, it would be miscommunication; but then again, I don't think someone who has trouble inferring approximate Bayesian belief from simple statements would be able to function in society at all.