Has anyone looked into the recent Chinese paper claiming to have reversed aging in monkeys?
Is it real or BS?
Thanks for clarifying. Please keep in mind LessWrong's policy on AI-generated content: https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
This seems to me like it was partly generated or rewritten by an LLM. Is that correct?
You're complaining about how the graph is drawn, and hoping to fix that by drawing a graph that is almost certainly wrong? At least the graph they drew relies only on actual past data.
I agree they would do better to acknowledge that whilst the growth is currently exponential, it will have to stop at some point, we just have no idea when. That does get a bit tiring after a while, though.
Because if they draw an S-curve with us at the halfway point (which seems to be the natural failure mode) when we're actually at the 1 percent mark, it has the wrong shape in every way that matters. The S-curve isn't any more illuminating than simply saying this exponential will stop at some point but we don't know when, and it unfortunately tends to lend itself to overconfident predictions.
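To make that concrete, here's a minimal sketch (Python, with made-up values for the ceiling L, growth rate k, and midpoint t0) of two logistic curves that are nearly identical while the data still looks exponential, but whose ceilings differ by 100x. Early data simply can't tell you where the midpoint is:

```python
# Two logistic curves tuned to share the same early-time behaviour
# ~ A * e^(k*t), but with ceilings 100x apart. All numbers made up.
import math

def logistic(t, L, k, t0):
    """Standard logistic: ceiling L, growth rate k, midpoint t0."""
    return L / (1 + math.exp(-k * (t - t0)))

k = 1.0
A = 1.0                                # shared early-time prefactor
L_small, L_big = 1e3, 1e5              # ceilings 100x apart
t0_small = math.log(L_small / A) / k   # ~6.9: midpoint of the small curve
t0_big   = math.log(L_big / A) / k     # ~11.5: midpoint of the big curve

for t in [0, 2, 4, 6, 8, 12, 16]:
    y1 = logistic(t, L_small, k, t0_small)
    y2 = logistic(t, L_big, k, t0_big)
    print(f"t={t:>2}: small-ceiling={y1:10.1f}  big-ceiling={y2:10.1f}")
```

Up to around t=4 the two curves differ by a few percent (well within noise); by t=12 they differ by nearly two orders of magnitude. An observer at t=4 who confidently places the midpoint is just guessing.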
Whilst that's technically true, those who attempt to use this fact to predict future growth tend to end up just as wrong as those who don't.
All exponentials end, but if you don't know when, you need to prepare equally for the scenario where it ends tomorrow and the one where it ends with the universe in paperclips.
I think that it's good to think concretely about what multiverse trading actually looks like, but I think problem 1 is a red herring: Darwinian selective pressure is irrelevant where there's only one entity, and an ASI should ensure that, at least over a wide swathe of the universe, there is only one entity. At the boundaries between two ASIs, if defence is simpler than offence, there'll be plenty of slack for non-selective preferences.
My bigger problem is that multiverse acausal trade requires that agent A in universe 1 can simulate that universe 2 exists, containing agent B, which will in turn simulate agent A in universe 1. That's not theoretically impossible (if, for example, the amount of available compute increases without bound in both universes, or if it's possible to prove facts about the other universe without simulating the whole thing), but it does seem incredibly unlikely, and almost certainly not worth the cost required to search for such an agent.
Currently at 9%.
That's the level where Manifold can't really tell you the exact probability, because betting NO ties up a lot of capital for minimal upside.
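Rough arithmetic (this ignores Manifold's actual AMM price impact and any fees, so treat it as illustrative only):

```python
# Back-of-envelope: on a binary market priced at p=0.09, a NO share
# costs roughly (1 - p) and pays out 1 if the market resolves NO.
p = 0.09
cost_per_no_share = 1 - p        # ~0.91 mana locked per share
profit_if_no      = p            # ~0.09 mana profit per share
roi = profit_if_no / cost_per_no_share
print(f"ROI if NO resolves: {roi:.1%}")  # ~9.9%, and only at resolution
```

So pushing the price down from 9% means locking up ~91¢ on the dollar for a single-digit return of uncertain duration, which is why prices at the extremes tend to stay sticky.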
Also, by 2026 I'd expect GPT-4-level LLMs with 1/10th the parameter count just from algorithmic improvements (maybe I'm wildly wrong here; rough arithmetic below), so doing the same with a different architecture isn't necessarily as indicative as it seems.
More details here: https://x.com/cremieuxrecueil/status/1975079984776577153
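For what it's worth, the back-of-envelope behind the 1/10th guess, assuming (hypothetically) that the parameter count needed for a fixed capability level halves roughly every 12 months; that doubling time is an assumption, not a measured figure:

```python
# Hypothetical: parameters needed for GPT-4-level capability halve
# every `doubling_time` months. Both numbers are assumptions.
months_elapsed = (2026 - 2023) * 12   # GPT-4 shipped in 2023
doubling_time  = 12                   # assumed months per 2x efficiency
shrink = 2 ** (months_elapsed / doubling_time)
print(f"Same capability at ~1/{shrink:.0f} the parameters")  # ~1/8
```

That lands at roughly 1/8, in the same ballpark as 1/10; a faster assumed doubling time overshoots it easily.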