I wouldn't be so sure that e.g. Mark Kelly was implying that the President himself had given unlawful orders. (I am open to evidence that this is what was being implied, or that this actually occurred.) The boat double-tap incident in particular suggested that unlawful orders may have been given by someone in the chain of command. Setting aside any speculative or actual nth-order effects, I think it was a sensible time to remind service members not to follow unlawful orders.
And of course, the POTUS himself frequently declines to defer to laws that would constrain him, so the idea that he might give unlawful orders shouldn't be surprising to people in any given political camp.
This is very unlikely to matter. All of these companies are trying to create something that could kill everyone, and none of them is working on safety in any way that could actually prevent that from happening.
If none of these companies had any kind of safety teams, the world would be slightly safer, because nothing of value in preventing human extinction would be lost, and it would be even easier to lobby to get them all shut down.
Wow. The "Ryazan Miracle" incident is almost unbelievable. It's hard for me to imagine one person making decisions that are so egregiously short-sighted, let alone a whole committee.
Larionov ordered almost all cattle to be slaughtered ('the women, and the children too' at that), and then promised more beef next year. What was going on in his mind? Was there so much pressure that he stopped caring whether he would even live that long? Did the incentives just bring forward the same kind of impulse that causes someone to steal half a paycheck's worth of money from the register, rather than just coming into work and getting paid?
Even posthumously, he was not stripped of his title of Hero of Socialist Labour.
...Ah. Most depressingly, maybe Larionov was smart after all.
the point of my original post was that there's a limit to how good you can get doing only that, without going out and gathering new information.
That is true. Human forecasters mostly don't do this, though, so an AI forecaster that did maximize cost-effective information-gathering could still gain an advantage from it. The cost of AI doing the gathering could also presumably drop below the cost of humans doing it, which would create a strict advantage in both the effective gathering of information and its effective use.
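To make "cost-effective information-gathering" concrete, here is a minimal sketch with made-up numbers. Under a log scoring rule, the expected score improvement from observing a signal equals the mutual information between that signal and the outcome, so a forecaster (human or AI) should pay to gather the signal only when that gain, priced via whatever payout or subsidy is on offer, exceeds the gathering cost. All names and figures below are illustrative.

```python
import math

def entropy(p: float) -> float:
    """Binary entropy in nats."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def info_value(prior: float, p_sig_yes: float, p_sig_no: float) -> float:
    """Expected log-score gain (mutual information, in nats) from observing
    one binary signal before forecasting a binary outcome.
    p_sig_yes / p_sig_no: chance the signal fires given the outcome is yes / no."""
    p_sig = prior * p_sig_yes + (1 - prior) * p_sig_no
    post_fire = prior * p_sig_yes / p_sig                # Bayes update if it fires
    post_quiet = prior * (1 - p_sig_yes) / (1 - p_sig)   # ...and if it stays quiet
    residual = p_sig * entropy(post_fire) + (1 - p_sig) * entropy(post_quiet)
    return entropy(prior) - residual

# Gather the signal only when this expected gain, converted to dollars by the
# question's subsidy, exceeds the cost of gathering it.
print(f"{info_value(0.5, 0.8, 0.3):.3f} nats")  # ≈ 0.133
```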
Bots are already outperforming humans on some markets because of speed.
Markets, yes. Reactivity faster than a few hours is usually not relevant to...
It's an interesting point that some information isn't worth trying to gain. AI could still Pareto-dominate human pros, though, myself readily included.
I don't see why AI would need to participate in a real-money prediction market, or even a market at all. AI systems aren't motivated by money, and non-market prediction aggregators have fewer failure modes. The only cost would be the cost to run the models, which would eventually be extremely cheap per question compared to human pros. I think it would suffice to create an AI-only version of basically Metaculus, subsidized by businesses and governments who benefit from well-calibrated forecasts on a wide variety of topics (sans the degenerate examples...
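As a sketch of how cheap the aggregation side could be: one standard non-market pooling rule is to average the forecasters' log-odds, i.e. take a geometric mean of their odds. Nothing below is specific to Metaculus, and the input probabilities are made up.

```python
import math

def pool_forecasts(probs: list[float]) -> float:
    """Pool binary-event probabilities from several forecasters by
    averaging their log-odds (a geometric mean of odds)."""
    logits = [math.log(p / (1.0 - p)) for p in probs]
    return 1.0 / (1.0 + math.exp(-sum(logits) / len(logits)))

# Three hypothetical AI forecasters answering one binary question:
print(round(pool_forecasts([0.70, 0.55, 0.80]), 3))  # ≈ 0.692
```

In practice you would weight each model by its track record, but even the unweighted pool shows that the marginal cost per question is just inference, with no market-making or liquidity required.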
Why would anyone want a galaxy? I don't even want a very big house.
If all your friends have galaxies, do you all still get to live in the same city and play games and make each other laugh? If so, what are the galaxies for? If not... what are the galaxies for?
Hm. I found a Twitter thread on the topic, with some leads: https://x.com/GrantSlatton/status/1830302697125478630
I have undergone the exact same move, but I think my political beliefs are not sophisticated enough for me to identify a solid target to "believe already." My time on the right gave me some pieces of information that strongly falsified a few beliefs often bucketed with the left, even as I moved leftward. That has helped me moderate my trust that continuing leftward would capture the things I expect to believe in the future.
Put another way, politics is multivariate and high-dimensional. A clear trend in one specific dimension isn't meaningless, but it is so lossy that I wouldn't be surprised if it stopped or apparently reversed slightly.
Adding some descriptors I have frequently used:
General-purpose AI Systems -- Unwieldy. Possibly overemphasizes their tool nature.
Digital Minds / Digital Brains -- Very accurate in some important ways, allergically disputed in others. Not technical.
Some further shots from the hip:
Broad AI -- Not narrow, without claiming full generality. Highly unspecific.
Digital Cognition Engines -- Anything with "engine" in it fails to acknowledge the system as being whole unto itself. Also, this is sci-fi name territory.
Cognition Manifolds -- Also sci-fi, but scratches an itch in my brain. I like this one a little too much and I am a little sad now that this isn't the accepted term.
My personal strategy has been to not think about it very hard.
I am sufficiently fortunate that I can put a normal amount of funds into retirement, and I have continued to do so on the off chance that my colleagues and I succeed at preventing the emergence of AGI/ASI and the world remains mostly normal. I also don't want to frighten my partner with my financial choices, and giving her peace of mind is worth quite a lot to me.
If superintelligence emerges and doesn't kill everyone (or worse), then I don't have any strong preferences about my role in the new social order, since I expect to be at least as well-off as I am now.